CH-47D Rotating System Fault Sensing for Condition Based Maintenance
2011-03-01
replacement. This research seeks to create an analytical model in the Rotorcraft Comprehensive Analysis System which will enable the identification of...
Requirements for Next Generation Comprehensive Analysis of Rotorcraft
NASA Technical Reports Server (NTRS)
Johnson, Wayne; Datta, Anubhav
2008-01-01
The unique demands of rotorcraft aeromechanics analysis have led to the development of software tools that are described as comprehensive analyses. The next generation of rotorcraft comprehensive analyses will be driven and enabled by the tremendous capabilities of high performance computing, particularly modular and scalable software executed on multiple cores. Development of a comprehensive analysis based on high performance computing both demands and permits a new analysis architecture. This paper describes a vision of the requirements for this next generation of comprehensive analyses of rotorcraft. The requirements are described and substantiated for what must be included, and justification is provided for what should be excluded. With this guide, a path to the next-generation code can be found.
Dai, Yilin; Guo, Ling; Li, Meng; Chen, Yi-Bu
2012-06-08
Microarray data analysis presents a significant challenge to researchers who are unable to use the powerful Bioconductor and its numerous tools due to their lack of knowledge of the R language. Among the few existing software programs that offer a graphical user interface to Bioconductor packages, none have implemented a comprehensive strategy to address the accuracy and reliability issues of microarray data analysis caused by the well-known probe design problems associated with many widely used microarray chips. There is also a lack of tools that would expedite the functional analysis of microarray results. We present Microarray Я US, an R-based graphical user interface that implements over a dozen popular Bioconductor packages to offer researchers a streamlined workflow for routine differential microarray expression data analysis without the need to learn the R language. In order to enable a more accurate analysis and interpretation of microarray data, we incorporated the latest custom probe re-definition and re-annotation for Affymetrix and Illumina chips. A versatile microarray results output utility tool was also implemented for easy and fast generation of input files for over 20 of the most widely used functional analysis software programs. Coupled with a well-designed user interface, Microarray Я US leverages cutting-edge Bioconductor packages for researchers with no knowledge of the R language. It also enables more reliable and accurate microarray data analysis and expedites downstream functional analysis of microarray results.
Salivo, Simona; Beccaria, Marco; Sullini, Giuseppe; Tranchida, Peter Q; Dugo, Paola; Mondello, Luigi
2015-01-01
The main focus of the present research is the analysis of the unsaponifiable lipid fraction of human plasma by using data derived from comprehensive two-dimensional gas chromatography with dual quadrupole mass spectrometry and flame ionization detection. This approach enabled us to attain both mass spectral information and analyte percentage data. Furthermore, gas chromatography coupled with high-resolution time-of-flight mass spectrometry was used to increase the reliability of identification of several unsaponifiable lipid constituents. The synergism between both the high-resolution gas chromatography and mass spectrometry processes enabled us to attain a more in-depth knowledge of the unsaponifiable fraction of human plasma. Additionally, information was attained on the fatty acid and triacylglycerol composition of the plasma samples, subjected to investigation by using comprehensive two-dimensional gas chromatography with dual quadrupole mass spectrometry and flame ionization detection and high-performance liquid chromatography with atmospheric pressure chemical ionization quadrupole mass spectrometry, respectively. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Systems Proteomics for Translational Network Medicine
Arrell, D. Kent; Terzic, Andre
2012-01-01
Universal principles underlying network science, and their ever-increasing applications in biomedicine, underscore the unprecedented capacity of systems biology based strategies to synthesize and resolve massive high throughput generated datasets. Enabling previously unattainable comprehension of biological complexity, systems approaches have accelerated progress in elucidating disease prediction, progression, and outcome. Applied to the spectrum of states spanning health and disease, network proteomics establishes a collation, integration, and prioritization algorithm to guide mapping and decoding of proteome landscapes from large-scale raw data. Providing unparalleled deconvolution of protein lists into global interactomes, integrative systems proteomics enables objective, multi-modal interpretation at molecular, pathway, and network scales, merging individual molecular components, their plurality of interactions, and functional contributions for systems comprehension. As such, network systems approaches are increasingly exploited for objective interpretation of cardiovascular proteomics studies. Here, we highlight network systems proteomic analysis pipelines for integration and biological interpretation through protein cartography, ontological categorization, pathway and functional enrichment and complex network analysis. PMID:22896016
ERIC Educational Resources Information Center
Wood, Sarah G.; Moxley, Jerad H.; Tighe, Elizabeth L.; Wagner, Richard K.
2018-01-01
Text-to-speech and related read-aloud tools are being widely implemented in an attempt to assist students' reading comprehension skills. Read-aloud software, including text-to-speech, is used to translate written text into spoken text, enabling one to listen to written text while reading along. It is not clear how effective text-to-speech is at…
Fan, Yannan; Siklenka, Keith; Arora, Simran K.; Ribeiro, Paula; Kimmins, Sarah; Xia, Jianguo
2016-01-01
MicroRNAs (miRNAs) can regulate nearly all biological processes and their dysregulation is implicated in various complex diseases and pathological conditions. Recent years have seen a growing number of functional studies of miRNAs using high-throughput experimental technologies, which have produced a large amount of high-quality data regarding miRNA target genes and their interactions with small molecules, long non-coding RNAs, epigenetic modifiers, disease associations, etc. These rich sets of information have enabled the creation of comprehensive networks linking miRNAs with various biologically important entities to shed light on their collective functions and regulatory mechanisms. Here, we introduce miRNet, an easy-to-use web-based tool that offers statistical, visual and network-based approaches to help researchers understand miRNAs functions and regulatory mechanisms. The key features of miRNet include: (i) a comprehensive knowledge base integrating high-quality miRNA-target interaction data from 11 databases; (ii) support for differential expression analysis of data from microarray, RNA-seq and quantitative PCR; (iii) implementation of a flexible interface for data filtering, refinement and customization during network creation; (iv) a powerful fully featured network visualization system coupled with enrichment analysis. miRNet offers a comprehensive tool suite to enable statistical analysis and functional interpretation of various data generated from current miRNA studies. miRNet is freely available at http://www.mirnet.ca. PMID:27105848
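As a rough illustration of the network-building step such a tool performs, the sketch below assembles a tiny miRNA-target graph and ranks hubs. The edge list is a hypothetical stand-in for miRNet's integrated knowledge base, and the use of networkx is an assumption for illustration, not the tool's actual implementation.

```python
# Sketch of a miRNA-target interaction network; edges are a tiny
# hypothetical sample, not miRNet's 11-database knowledge base.
import networkx as nx

edges = [("hsa-miR-21", "PTEN"), ("hsa-miR-21", "PDCD4"),
         ("hsa-miR-155", "SOCS1"), ("hsa-miR-155", "PTEN"),
         ("hsa-let-7a", "MYC"), ("hsa-let-7a", "RAS")]

g = nx.Graph()
g.add_edges_from(edges)

# Rank nodes by degree to spot hub genes targeted by multiple miRNAs
for node, deg in sorted(g.degree, key=lambda kv: -kv[1]):
    print(node, deg)
```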
Lima, Margarete Maria de; Reibnitz, Kenya Schmidt; Kloh, Daiana; Martini, Jussara Gue; Backes, Vania Marli Schubert
2017-11-27
To analyze how the indications of comprehensiveness translate into the teaching-learning process in a nursing undergraduate course. Qualitative case study carried out with professors of a nursing undergraduate course. Data collection occurred through documentary analysis, non-participant observation, and individual interviews. Data analysis was guided by an analytical matrix following the steps of the operative proposal. Eight professors participated in the study. Some indications of comprehensiveness, such as dialogue, listening, mutual respect, bonding, and welcoming, are present in the daily practice of some professors, who apply them in the pedagogical relationship. The results point to comprehensiveness in teaching-learning as a single- and double-loop model, in which professor and student assume an open posture toward new possibilities in the teaching-learning process. When recognized as a pedagogical principle, comprehensiveness allows a break with professor-centered teaching and advances collective learning, enabling professor and student to create their own design anchored in a reflective process about their practices and the reality found in the health services.
Báscolo, Ernesto Pablo; Yavich, Natalia; Denis, Jean-Louis
2016-01-01
Background: Primary health care (PHC)-based reforms have had different results in Latin America. Little attention has been paid to the enablers of the collective action capacities required to produce a comprehensive PHC approach. Objective: To analyse the enablers of collective action capacities to transform health systems towards a comprehensive PHC approach in Latin American PHC-based reforms. Methods: We conducted a longitudinal, retrospective case study of three municipal PHC-based reforms in Bolivia and Argentina. We used multiple data sources and methodologies: document review; interviews with policymakers, managers and practitioners; and household and services surveys. We used temporal bracketing to analyse how the dynamic of interaction between the institutional reform process and the characteristics of collective action enabled or hindered the development of the collective action capacities required to produce the envisioned changes. Results: The institutional structuring dynamics and collective action capacities were different in each case. In Cochabamba, an ‘interrupted’ structuring process achieved the establishment of a primary level with a selective PHC approach. In Vicente López, a ‘path-dependency’ structuring process permitted the consolidation of a ‘primary care’ approach, but with limited influence in hospitals. In Rosario, a ‘dialectic’ structuring process favoured the development of the capacities needed to consolidate a comprehensive PHC approach that permeates the entire system. Conclusion: The institutional change processes achieved the development of a primary health care level with different degrees of consolidation and system-wide influence, depending on how the characteristics of each collective action enabled or hindered the ‘structuring’ processes. PMID:27209640
2008-10-23
Analysis from DFSS axiomatic design methods indicate the...solutions. AA enables a comprehensive analysis across different force configurations and dynamic situations...analyzed by the Software Engineering Institute. Analysis results reviewed by the NDIA SE Effectiveness Committee. Reports: 1. Public NDIA/SEI report released
Davis, Rodeina; Geiger, Bradley; Gutierrez, Alfonso; Heaser, Julie; Veeramani, Dharmaraj
2009-07-01
Radio frequency identification (RFID) can be a key enabler for enhancing productivity and safety of the blood product supply chain. This article describes a systematic approach developed by the RFID Blood Consortium for a comprehensive feasibility and impact assessment of RFID application in blood centre operations. Our comprehensive assessment approach incorporates process-orientated and technological perspectives as well as impact analysis. Assessment of RFID-enabled process redesign is based on generic core processes derived from the three participating blood centres. The technological assessment includes RFID tag readability and performance evaluation, testing of temperature and biological effects of RF energy on blood products, and RFID system architecture design and standards. The scope of this article is limited to blood centre processes (from donation to manufacturing/distribution) for selected mainstream blood products (red blood cells and platelets). Radio frequency identification can help overcome a number of common challenges and process inefficiencies associated with identification and tracking of blood products. High frequency-based RFID technology performs adequately and safely for red blood cell and platelet products. Productivity and quality improvements in RFID-enabled blood centre processes can recoup investment cost in a 4-year payback period. Radio frequency identification application has significant process-orientated and technological implications. It is feasible and economically justifiable to incorporate RFID into blood centre processes.
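The 4-year payback figure above is, at its simplest, a ratio of up-front investment to annual gains. The sketch below shows that back-of-envelope arithmetic with hypothetical dollar amounts; the article's actual cost model is more detailed.

```python
# Simple payback-period arithmetic; all dollar figures are hypothetical
# illustrations, not the consortium's actual cost data.
initial_investment = 400_000   # tags, readers, integration (assumed)
annual_savings = 100_000       # productivity + quality gains (assumed)

payback_years = initial_investment / annual_savings
print(f"simple payback period: {payback_years:.1f} years")
```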
Measure the color distribution of a cotton sample using image analysis
USDA-ARS?s Scientific Manuscript database
The most commonly used measurement of cotton color is by the colorimeter principle, which reports the sample’s color grade. However, the color distribution and variation within the sample are not reported. Obtaining color distributions of cotton samples will enable a more comprehensive evaluation of...
Chi-Square Statistics, Tests of Hypothesis and Technology.
ERIC Educational Resources Information Center
Rochowicz, John A.
The use of technology such as computers and programmable calculators enables students to find p-values and conduct tests of hypotheses in many different ways. Comprehension and interpretation of a research problem become the focus for statistical analysis. This paper describes how to calculate chi-square statistics and p-values for statistical…
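As a present-day analogue of the calculator-based approach the paper describes, here is a minimal Python sketch computing a chi-square statistic and p-value with scipy; the observed counts are hypothetical.

```python
# Chi-square goodness-of-fit and p-value, as a technology-assisted
# alternative to table lookup. The counts below are hypothetical.
from scipy.stats import chisquare, chi2

observed = [18, 22, 29, 31]   # hypothetical category counts
expected = [25, 25, 25, 25]   # uniform null hypothesis

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.3f}, p-value = {p_value:.4f}")

# Same p-value computed directly from the chi-square survival
# function with k - 1 = 3 degrees of freedom:
print(f"p-value via sf: {chi2.sf(stat, df=3):.4f}")
```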
Enabling a Comprehensive Teaching Strategy: Video Lectures
ERIC Educational Resources Information Center
Brecht, H. David; Ogilby, Suzanne M.
2008-01-01
This study empirically tests the feasibility and effectiveness of video lectures as a form of video instruction that enables a comprehensive teaching strategy used throughout a traditional classroom course. It examines student use patterns and the videos' effects on student learning, using qualitative and nonparametric statistical analyses of…
Kazarian, Artaches A; Taylor, Mark R; Haddad, Paul R; Nesterenko, Pavel N; Paull, Brett
2013-12-01
The comprehensive separation and detection of hydrophobic and hydrophilic active pharmaceutical ingredients (APIs), their counter-ions (organic, inorganic) and excipients, using a single mixed-mode chromatographic column and a dual injection approach, is presented. Using a mixed-mode Thermo Fisher Acclaim Trinity P1 column, APIs, their counter-ions and possible degradants were first separated using a combination of anion-exchange, cation-exchange and hydrophobic interactions, with a mobile phase consisting of a dual organic modifier/salt concentration gradient. A complementary method was also developed using the same column for the separation of hydrophilic bulk excipients, using hydrophilic interaction liquid chromatography (HILIC) under high organic solvent mobile phase conditions. These two methods were then combined within a single gradient run using dual sample injection, with the first injection at the start of the applied gradient (mixed-mode retention of solutes), followed by a second sample injection at the end of the gradient (HILIC retention of solutes). Detection using both ultraviolet absorbance and refractive index enabled the sensitive detection of APIs and UV-absorbing counter-ions, together with quantitative determination of bulk excipients. The developed approach was applied successfully to the analysis of dry powder inhalers (Flixotide®, Spiriva®), enabling comprehensive quantification of all APIs and excipients in the sample. Copyright © 2013 Elsevier B.V. All rights reserved.
Ho, Lap; Cheng, Haoxiang; Wang, Jun; Simon, James E; Wu, Qingli; Zhao, Danyue; Carry, Eileen; Ferruzzi, Mario G; Faith, Jeremiah; Valcarcel, Breanna; Hao, Ke; Pasinetti, Giulio M
2018-03-05
The development of a given botanical preparation for eventual clinical application requires extensive, detailed characterization of the chemical composition, as well as the biological availability, biological activity, and safety profiles of the botanical. These issues are typically addressed using diverse experimental protocols and model systems. Based on this consideration, in this study we established a comprehensive database and analysis framework for the collection, collation, and integrative analysis of diverse, multiscale data sets. Using this framework, we conducted an integrative analysis of heterogeneous data from in vivo and in vitro investigation of a complex bioactive dietary polyphenol-rich preparation (BDPP) and built an integrated network linking data sets generated from this multitude of diverse experimental paradigms. We established a comprehensive database and analysis framework as well as a systematic and logical means to catalogue and collate the diverse array of information gathered, which is securely stored and added to in a standardized manner to enable fast query. We demonstrated the utility of the database in (1) a statistical ranking scheme to prioritize response to treatments and (2) in-depth reconstruction of functionality studies. By examination of these data sets, the system allows analytical querying of heterogeneous data and access to information related to interactions, mechanisms of action, functions, etc., which ultimately provides a global overview of complex biological responses. Collectively, we present an integrative analysis framework that leads to novel insights on the biological activities of a complex botanical such as BDPP, based on data-driven characterizations of interactions between BDPP-derived phenolic metabolites and their mechanisms of action, as well as synergism and/or potential cancellation of biological functions. Our integrative analytical approach provides novel means for a systematic integrative analysis of heterogeneous data types in the development of complex botanicals such as polyphenols for eventual clinical and translational applications.
Tighe, Elizabeth L.; Schatschneider, Christopher
2015-01-01
The purpose of this study was to investigate the joint and unique contributions of morphological awareness and vocabulary knowledge at five reading comprehension levels in Adult Basic Education (ABE) students. We introduce the statistical technique of multiple quantile regression, which enabled us to assess the predictive utility of morphological awareness and vocabulary knowledge at multiple points (quantiles) along the continuous distribution of reading comprehension. To demonstrate the efficacy of our multiple quantile regression analysis, we compared and contrasted our results with a traditional multiple regression analytic approach. Our results indicated that morphological awareness and vocabulary knowledge accounted for a large portion of the variance (82-95%) in reading comprehension skills across all quantiles. Morphological awareness exhibited the greatest unique predictive ability at lower levels of reading comprehension whereas vocabulary knowledge exhibited the greatest unique predictive ability at higher levels of reading comprehension. These results indicate the utility of using multiple quantile regression to assess trajectories of component skills across multiple levels of reading comprehension. The implications of our findings for ABE programs are discussed. PMID:25351773
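A minimal sketch of the multiple quantile regression technique the study introduces, using statsmodels on simulated data; the variable names, sample size, and effect sizes are hypothetical placeholders, not the ABE dataset.

```python
# Quantile regression of a comprehension score on two predictors,
# fit at several quantiles. All data below are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "morph": rng.normal(size=n),   # morphological awareness (simulated)
    "vocab": rng.normal(size=n),   # vocabulary knowledge (simulated)
})
df["comprehension"] = 0.5 * df["morph"] + 0.8 * df["vocab"] + rng.normal(scale=0.4, size=n)

model = smf.quantreg("comprehension ~ morph + vocab", df)
for q in (0.10, 0.25, 0.50, 0.75, 0.90):
    res = model.fit(q=q)   # coefficients at this point of the distribution
    print(f"q={q:.2f}  morph={res.params['morph']:+.3f}  vocab={res.params['vocab']:+.3f}")
```

Comparing the coefficients across quantiles is what lets such a study say a predictor matters more at the low or high end of the comprehension distribution.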
Nano-enabled drug delivery: a research profile.
Zhou, Xiao; Porter, Alan L; Robinson, Douglas K R; Shim, Min Suk; Guo, Ying
2014-07-01
Nano-enabled drug delivery (NEDD) systems are rapidly emerging as a key area for nanotechnology application. Understanding the status and developmental prospects of this area around the world is important to determine research priorities, and to evaluate and direct progress. Global research publication and patent databases provide a reservoir of information that can be tapped to provide intelligence for such needs. Here, we present a process for extracting NEDD-related information from these databases with the involvement of topical experts. This process incorporates in-depth analysis of NEDD literature review papers to identify key subsystems and major topics. We then use these to structure a global analysis of NEDD research topical trends and collaborative patterns, and to inform future innovation directions. This paper describes how to derive nano-enabled drug delivery-related information from global research and patent databases in an effort to perform comprehensive global analysis of research trends and directions, along with collaborative patterns. Copyright © 2014 Elsevier Inc. All rights reserved.
Popp, Oliver; Müller, Dirk; Didzus, Katharina; Paul, Wolfgang; Lipsmeier, Florian; Kirchner, Florian; Niklas, Jens; Mauch, Klaus; Beaucamp, Nicola
2016-09-01
In-depth characterization of high-producer cell lines and bioprocesses is vital to ensure robust and consistent production of recombinant therapeutic proteins in high quantity and quality for clinical applications. This requires applying appropriate methods during bioprocess development to enable meaningful characterization of CHO clones and processes. Here, we present a novel hybrid approach for supporting comprehensive characterization of metabolic clone performance. The approach combines metabolite profiling with multivariate data analysis and fluxomics to enable a data-driven mechanistic analysis of key metabolic traits associated with desired cell phenotypes. We applied the methodology to quantify and compare metabolic performance in a set of 10 recombinant CHO-K1 producer clones and a host cell line. The comprehensive characterization enabled us to derive an extended set of clone performance criteria that not only captured growth and product formation, but also incorporated information on intracellular clone physiology and on metabolic changes during the process. These criteria served to establish a quantitative clone ranking and allowed us to identify metabolic differences between high-producing CHO-K1 clones yielding comparably high product titers. Through multivariate data analysis of the combined metabolite and flux data we uncovered common metabolic traits characteristic of high-producer clones in the screening setup. This included high intracellular rates of glutamine synthesis, low cysteine uptake, reduced excretion of aspartate and glutamate, and low intracellular degradation rates of branched-chain amino acids and of histidine. Finally, the above approach was integrated into a workflow that enables standardized high-content selection of CHO producer clones in a high-throughput fashion. In conclusion, the combination of quantitative metabolite profiling, multivariate data analysis, and mechanistic network model simulations can identify metabolic traits characteristic of high-performance clones and enables informed decisions on which clones provide a good match for a particular process platform. The proposed approach also provides a mechanistic link between observed clone phenotype, process setup, and feeding regimes, and thereby offers concrete starting points for subsequent process optimization. Biotechnol. Bioeng. 2016;113: 2005-2019. © 2016 Wiley Periodicals, Inc.
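One building block of such multivariate profiling can be sketched as a PCA over a per-clone metabolite matrix, which places clones in a low-dimensional space for comparison. The matrix and clone names below are simulated placeholders; this illustrates the idea, not the authors' actual pipeline, which also integrates flux simulations.

```python
# PCA of per-clone metabolite profiles (simulated data) to compare
# clones in two dimensions before ranking or clustering them.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
clones = [f"CHO-K1_{i:02d}" for i in range(1, 11)]
profiles = rng.normal(size=(10, 25))   # 10 clones x 25 metabolite features (simulated)

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(profiles))
for name, (pc1, pc2) in zip(clones, scores):
    print(f"{name}: PC1={pc1:+.2f}, PC2={pc2:+.2f}")
```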
Asymmetric transmission and reflection spectra of FBG in single-multi-single mode fiber structure.
Chai, Quan; Liu, Yanlei; Zhang, Jianzhong; Yang, Jun; Chen, Yujin; Yuan, Libo; Peng, Gang-Ding
2015-05-04
We give a comprehensive theoretical analysis and simulation of an FBG in a single-multi-single mode fiber structure (FBG-in-SMS), based on coupled-mode analysis and mode-interference analysis. This enables us to explain the experimental observations: asymmetric transmission and reflection spectra with similar temperature responses near the spectral range of the Bragg wavelengths. The transmission spectrum shift during the FBG writing process is observed and discussed. The analysis results are useful in the design of SMS-structure-based sensors and filters.
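For context, the closed-form coupled-mode-theory reflectivity of a plain uniform FBG (without the multimode SMS interference the paper models) can be computed as below; all parameter values are illustrative assumptions, not the paper's device.

```python
# Reflectivity of a uniform FBG from the standard coupled-mode-theory
# closed form. Complex arithmetic handles both inside- and outside-band
# detuning. Parameters are assumed for illustration.
import numpy as np

L = 0.01            # grating length, m (assumed)
kappa = 200.0       # coupling coefficient, 1/m (assumed)
lambda_B = 1550e-9  # design Bragg wavelength, m (assumed)
n_eff = 1.447       # effective index (assumed)

wl = np.linspace(1549e-9, 1551e-9, 2001)
delta = 2 * np.pi * n_eff * (1 / wl - 1 / lambda_B)   # detuning from Bragg

s = np.sqrt(np.asarray(kappa**2 - delta**2, dtype=complex))
R = np.abs(kappa * np.sinh(s * L))**2 / (
    np.abs(delta * np.sinh(s * L))**2 + np.abs(s * np.cosh(s * L))**2)

print(f"peak reflectivity: {R.max():.3f}")   # tanh^2(kappa*L) at the Bragg wavelength
```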
NASA Astrophysics Data System (ADS)
Altan, O.; Kemper, G.
2012-07-01
The GIS-based analysis of land use change in Istanbul delivers a huge and comprehensive database that can be used for further analysis. Trend analysis and scenarios enable a view into the future that highlights the needs for proper planning. Understanding gained by comparison with other cities also helps to avoid copying their errors. GIS in combination with ancillary data opens a wide field for managing the future of Istanbul.
Necklace: combining reference and assembled transcriptomes for more comprehensive RNA-Seq analysis.
Davidson, Nadia M; Oshlack, Alicia
2018-05-01
RNA sequencing (RNA-seq) analyses can benefit from performing a genome-guided and de novo assembly, in particular for species where the reference genome or the annotation is incomplete. However, tools for integrating an assembled transcriptome with reference annotation are lacking. Necklace is a software pipeline that runs genome-guided and de novo assembly and combines the resulting transcriptomes with reference genome annotations. Necklace constructs a compact but comprehensive superTranscriptome out of the assembled and reference data. Reads are subsequently aligned and counted in preparation for differential expression testing. Necklace allows a comprehensive transcriptome to be built from a combination of assembled and annotated transcripts, which results in a more comprehensive transcriptome for the majority of organisms. In addition, RNA-seq data are mapped back to this newly created superTranscript reference to enable differential expression testing with standard methods.
Kadić, Elma; Moniz, Raymond J; Huo, Ying; Chi, An; Kariv, Ilona
2017-02-02
Comprehensive understanding of the cellular immune subsets involved in regulation of tumor progression is central to the development of cancer immunotherapies. Single-cell immunophenotyping has historically been accomplished by flow cytometry (FC) analysis, enabling the analysis of up to 18 markers. Recent advancements in mass cytometry (MC) have facilitated detection of over 50 markers, utilizing the high resolving power of mass spectrometry (MS). This study examined the analytical and operational feasibility of MC for an in-depth immunophenotyping analysis of the tumor microenvironment, using the commercial CyTOF™ instrument, and further interrogated challenges in managing the integrity of tumor specimens. Initial longitudinal studies with frozen peripheral blood mononuclear cells (PBMCs) showed minimal MC inter-assay variability over nine independent runs. In addition, detection of common leukocyte lineage markers using MC and FC detection confirmed that these methodologies are comparable in cell subset identification. An advanced multiparametric MC analysis of 39 total markers enabled a comprehensive evaluation of cell surface marker expression in fresh and cryopreserved tumor samples. This comparative analysis revealed significant reduction of expression levels of multiple markers upon cryopreservation. Most notably, myeloid-derived suppressor cells (MDSC), defined by a CD66b+ CD15+ HLA-DRdim CD14− phenotype, were undetectable in frozen samples. These results suggest that optimization and evaluation of cryopreservation protocols is necessary for accurate biomarker discovery in frozen tumor specimens.
Wendt, Dorothea; Brand, Thomas; Kollmeier, Birger
2014-01-01
An eye-tracking paradigm was developed for use in audiology in order to enable online analysis of the speech comprehension process. This paradigm should be useful in assessing impediments in speech processing. In this paradigm, two scenes, a target picture and a competitor picture, were presented simultaneously with an aurally presented sentence that corresponded to the target picture. At the same time, eye fixations were recorded using an eye-tracking device. The effect of linguistic complexity on language processing time was assessed from eye fixation information by systematically varying linguistic complexity. This was achieved with a sentence corpus containing seven German sentence structures. A novel data analysis method computed the average tendency to fixate the target picture as a function of time during sentence processing. This allowed identification of the point in time at which the participant understood the sentence, referred to as the decision moment. Systematic differences in processing time were observed as a function of linguistic complexity. These differences in processing time may be used to assess the efficiency of cognitive processes involved in resolving linguistic complexity. Thus, the proposed method enables a temporal analysis of the speech comprehension process and has potential applications in speech audiology and psychoacoustics. PMID:24950184
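The core computation, averaging target fixations over trials and locating the moment the curve settles on the target, can be sketched as follows. The fixation data are simulated and the 0.8 criterion is an assumed placeholder, not the study's actual corpus or threshold.

```python
# Average tendency to fixate the target picture over time, and a
# simple "decision moment" readout. All data below are simulated.
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_samples = 40, 200          # e.g. 2 s sampled at 100 Hz
t = np.arange(n_samples) * 10          # time in ms

# 1 = fixating target, 0 = competitor; simulate a growing target bias
p_target = 0.5 + 0.45 / (1 + np.exp(-(t - 900) / 120))
fix = rng.random((n_trials, n_samples)) < p_target

tendency = fix.mean(axis=0)            # average target-fixation tendency
threshold = 0.8                        # assumed decision criterion
above = np.nonzero(tendency >= threshold)[0]
if above.size:
    print(f"decision moment ~ {t[above[0]]} ms after sentence onset")
```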
NASA Astrophysics Data System (ADS)
Vidovič, Luka; Milanič, Matija; Majaron, Boris
2015-07-01
We combine pulsed photothermal radiometry (PPTR) depth profiling with diffuse reflectance spectroscopy (DRS) measurements for a comprehensive analysis of bruise evolution in vivo. While PPTR enables extraction of detailed depth distribution and concentration profiles of selected absorbers (e.g. melanin, hemoglobin), DRS provides information over a wide range of visible wavelengths and thus offers additional insight into the dynamics of the hemoglobin degradation products. Combining the two approaches enables us to quantitatively characterize bruise evolution dynamics. Our results indicate temporal variations of the bruise evolution parameters in the course of the bruise self-healing process. The obtained parameter values and trends represent a basis for the future development of an objective technique for bruise age determination.
Giacomelli, Michael G.; Yoshitake, Tadayuki; Cahill, Lucas C.; Vardeh, Hilde; Quintana, Liza M.; Faulkner-Jones, Beverly E.; Brooker, Jeff; Connolly, James L.; Fujimoto, James G.
2018-01-01
The ability to histologically assess surgical specimens in real time is a long-standing challenge in cancer surgery, including applications such as breast conserving therapy (BCT). Up to 40% of women treated with BCT for breast cancer require a repeat surgery due to postoperative histological findings of close or positive surgical margins using conventional formalin-fixed paraffin-embedded histology. Imaging technologies such as nonlinear microscopy (NLM), combined with exogenous fluorophores, can rapidly provide virtual H&E imaging of surgical specimens without requiring microtome sectioning, facilitating intraoperative assessment of margin status. However, the large volume of typical surgical excisions, combined with the need for rapid assessment, makes comprehensive cellular-resolution margin assessment during surgery challenging. To address this limitation, we developed a multiscale, real-time microscope with variable-magnification NLM and real-time, co-registered position display using a widefield white light imaging system. Margin assessment can be performed rapidly under operator guidance to image specific regions of interest located using widefield imaging. Using simulated surgical margins dissected from human breast excisions, we demonstrate that multi-centimeter margins can be comprehensively imaged at cellular resolution, enabling intraoperative margin assessment. These methods are consistent with pathology assessment performed using frozen section analysis (FSA); however, NLM enables faster and more comprehensive assessment of surgical specimens because imaging can be performed without freezing and cryo-sectioning. Therefore, NLM methods have the potential to be applied to a wide range of intra-operative applications. PMID:29761001
Jancey, Jonine; Howat, Peter; Ledger, Melissa; Lee, Andy H.
2013-01-01
Introduction Workplace health promotion programs to prevent overweight and obesity in office-based employees should be evidence-based and comprehensive and should consider behavioral, social, organizational, and environmental factors. The objective of this study was to identify barriers to and enablers of physical activity and nutrition as well as intervention strategies for health promotion in office-based workplaces in the Perth, Western Australia, metropolitan area in 2012. Methods We conducted an online survey of 111 employees from 55 organizations. The online survey investigated demographics, individual and workplace characteristics, barriers and enablers, intervention-strategy preferences, and physical activity and nutrition behaviors. We used χ2 and Mann–Whitney U statistics to test for differences between age and sex groups for barriers and enablers, intervention-strategy preferences, and physical activity and nutrition behaviors. Stepwise multiple regression analysis determined factors that affect physical activity and nutrition behaviors. Results We identified several factors that affected physical activity and nutrition behaviors, including the most common barriers (“too tired” and “access to unhealthy food”) and enablers (“enjoy physical activity” and “nutrition knowledge”). Intervention-strategy preferences demonstrated employee support for health promotion in the workplace. Conclusion The findings provide useful insights into employees’ preferences for interventions; they can be used to develop comprehensive programs for evidence-based workplace health promotion that consider environmental and policy influences as well as the individual. PMID:24028834
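For readers unfamiliar with the two tests named above, here is a minimal scipy sketch run on hypothetical survey-style data; the counts and activity minutes are invented, not the study's results.

```python
# Chi-square test of independence and Mann-Whitney U test, as used
# for group comparisons in survey data. All numbers are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

# Barrier endorsement ("too tired") by sex: 2x2 contingency table
table = np.array([[30, 25],    # group 1: yes / no
                  [35, 21]])   # group 2: yes / no
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square({dof}) = {chi2:.2f}, p = {p:.3f}")

# Weekly physical-activity minutes by age group (skewed, so rank-based)
younger = np.array([60, 90, 120, 45, 150, 30, 75, 200])
older = np.array([30, 45, 60, 20, 90, 40, 25, 70])
u, p = mannwhitneyu(younger, older, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.3f}")
```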
Tighe, Elizabeth L; Schatschneider, Christopher
2016-07-01
The purpose of this study was to investigate the joint and unique contributions of morphological awareness and vocabulary knowledge at five reading comprehension levels in adult basic education (ABE) students. We introduce the statistical technique of multiple quantile regression, which enabled us to assess the predictive utility of morphological awareness and vocabulary knowledge at multiple points (quantiles) along the continuous distribution of reading comprehension. To demonstrate the efficacy of our multiple quantile regression analysis, we compared and contrasted our results with a traditional multiple regression analytic approach. Our results indicated that morphological awareness and vocabulary knowledge accounted for a large portion of the variance (82%-95%) in reading comprehension skills across all quantiles. Morphological awareness exhibited the greatest unique predictive ability at lower levels of reading comprehension whereas vocabulary knowledge exhibited the greatest unique predictive ability at higher levels of reading comprehension. These results indicate the utility of using multiple quantile regression to assess trajectories of component skills across multiple levels of reading comprehension. The implications of our findings for ABE programs are discussed. © Hammill Institute on Disabilities 2014.
SECIMTools: a suite of metabolomics data analysis tools.
Kirpich, Alexander S; Ibarra, Miguel; Moskalenko, Oleksandr; Fear, Justin M; Gerken, Joseph; Mi, Xinlei; Ashrafi, Ali; Morse, Alison M; McIntyre, Lauren M
2018-04-20
Metabolomics has the promise to transform the area of personalized medicine with the rapid development of high-throughput technology for untargeted analysis of metabolites. Open-access, easy-to-use analytic tools that are broadly accessible to the biological community need to be developed. While the technology used in metabolomics varies, most metabolomics studies have a set of identified features. Galaxy is an open-access platform that enables scientists at all levels to interact with big data. Galaxy promotes reproducibility by saving histories and enabling the sharing of workflows among scientists. SECIMTools (SouthEast Center for Integrated Metabolomics) is a set of Python applications that are available both as standalone tools and wrapped for use in Galaxy. The suite includes a comprehensive set of quality control metrics (retention time window evaluation and various peak evaluation tools), visualization techniques (hierarchical cluster heatmap, principal component analysis, modulated modularity clustering), basic statistical analysis methods (partial least squares-discriminant analysis, analysis of variance, t-test, Kruskal-Wallis non-parametric test), advanced classification methods (random forest, support vector machines), and advanced variable selection tools (least absolute shrinkage and selection operator (LASSO) and Elastic Net). SECIMTools leverages the Galaxy platform and enables integrated workflows for metabolomics data analysis made from building blocks designed for ease of use and interpretability. Standard data formats and a set of utilities allow arbitrary linkages between tools to encourage novel workflow designs. The Galaxy framework enables future data integration for metabolomics studies with other omics data.
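As an illustration of one variable-selection step of the kind such a suite wraps, here is a LASSO sketch on simulated metabolomics-style data. It uses scikit-learn rather than the suite's own Galaxy tools, and all data are invented.

```python
# LASSO-based variable selection on a simulated samples-by-metabolites
# matrix: only a few features truly carry signal, and the fitted model
# should recover (most of) them.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n_samples, n_features = 60, 200
X = rng.normal(size=(n_samples, n_features))
beta = np.zeros(n_features)
beta[:5] = [2.0, -1.5, 1.0, 0.8, -0.6]   # 5 informative features (simulated)
y = X @ beta + rng.normal(scale=0.5, size=n_samples)

lasso = LassoCV(cv=5).fit(StandardScaler().fit_transform(X), y)
selected = np.nonzero(lasso.coef_)[0]
print(f"selected {selected.size} features, first few: {selected[:10]}")
```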
Metabolomic Analysis in Heart Failure.
Ikegami, Ryutaro; Shimizu, Ippei; Yoshida, Yohko; Minamino, Tohru
2017-12-25
It is thought that at least 6,500 low-molecular-weight metabolites exist in humans, and these metabolites have various important roles in biological systems in addition to proteins and genes. Comprehensive assessment of endogenous metabolites is called metabolomics, and recent advances in this field have enabled us to understand the critical role of previously unknown metabolites or metabolic pathways in the cardiovascular system. In this review, we will focus on heart failure and how metabolomic analysis has contributed to improving our understanding of the pathogenesis of this critical condition.
Orchestrating high-throughput genomic analysis with Bioconductor
Huber, Wolfgang; Carey, Vincent J.; Gentleman, Robert; Anders, Simon; Carlson, Marc; Carvalho, Benilton S.; Bravo, Hector Corrada; Davis, Sean; Gatto, Laurent; Girke, Thomas; Gottardo, Raphael; Hahne, Florian; Hansen, Kasper D.; Irizarry, Rafael A.; Lawrence, Michael; Love, Michael I.; MacDonald, James; Obenchain, Valerie; Oleś, Andrzej K.; Pagès, Hervé; Reyes, Alejandro; Shannon, Paul; Smyth, Gordon K.; Tenenbaum, Dan; Waldron, Levi; Morgan, Martin
2015-01-01
Bioconductor is an open-source, open-development software project for the analysis and comprehension of high-throughput data in genomics and molecular biology. The project aims to enable interdisciplinary research, collaboration and rapid development of scientific software. Based on the statistical programming language R, Bioconductor comprises 934 interoperable packages contributed by a large, diverse community of scientists. Packages cover a range of bioinformatic and statistical applications. They undergo formal initial review and continuous automated testing. We present an overview for prospective users and contributors. PMID:25633503
Comprehensive lipid analysis: a powerful metanomic tool for predictive and diagnostic medicine.
Watkins, S M
2000-09-01
The power and accuracy of predictive diagnostics stand to improve dramatically as a result of lipid metanomics. The high definition of data obtained with this approach allows multiple rather than single metabolites to be used in markers for a group. Since as many as 40 fatty acids are quantified from each lipid class, and up to 15 lipid classes can be quantified easily, more than 600 individual lipid metabolites can be measured routinely for each sample. Because these analyses are comprehensive, only the most appropriate and unique metabolites are selected for their predictive value. Thus, comprehensive lipid analysis promises to greatly improve predictive diagnostics for phenotypes that directly or peripherally involve lipids. A broader and possibly more exciting aspect of this technology is the generation of metabolic profiles that are not simply markers for disease, but metabolic maps that can be used to identify specific genes or activities that cause or influence the disease state. Metanomics is, in essence, functional genomics from metabolite analysis. By defining the metabolic basis for phenotype, researchers and clinicians will have an extraordinary opportunity to understand and treat disease. Much in the same way that gene chips allow researchers to observe the complex expression response to a stimulus, metanomics will enable researchers to observe the complex metabolic interplay responsible for defining phenotype. By extending this approach beyond the observation of individual dysregulations, medicine will begin to profile not single diseases, but health. As health is the proper balance of all vital metabolic pathways, comprehensive or metanomic analysis lends itself very well to identifying the metabolite distributions necessary for optimum health. Comprehensive and quantitative analysis of lipids would provide this degree of diagnostic power to researchers and clinicians interested in mining metabolic profiles for biological meaning.
Linking Fuel Inventories With Atmospheric Data for Assessment of Fire Danger
Christopher W. Woodall; Joseph Charney; Greg Liknes; Brian Potter
2006-01-01
Combining forest fuel maps and real-time atmospheric data may enable creation of more dynamic and comprehensive fire danger assessments. The goal of this study was to combine fuel maps, based on data from the Forest Inventory and Analysis (FIA) program of the U.S. Department of Agriculture Forest Service, with real-time atmospheric data to create a more dynamic index...
What's the fire danger now? Linking fuel inventories with atmospheric data
Christopher W. Woodall; Joseph J. Charney; Greg C. Liknes; Brian E. Potter
2005-01-01
The combination of forest fuel maps with real-time atmospheric data may enable the creation of more dynamic and comprehensive assessments of fire danger. The goal of this study was to combine fuel maps, based on data from the Forest Inventory and Analysis (FIA) program of the USDA Forest Service, with real-time atmospheric data for the creation of a more dynamic index...
Browsing Space Weather Data and Models with the Integrated Space Weather Analysis (iSWA) System
NASA Technical Reports Server (NTRS)
Maddox, Marlo M.; Mullinix, Richard E.; Berrios, David H.; Hesse, Michael; Rastaetter, Lutz; Pulkkinen, Antti; Hourcle, Joseph A.; Thompson, Barbara J.
2011-01-01
The Integrated Space Weather Analysis (iSWA) System is a comprehensive web-based platform for space weather information that combines data from solar, heliospheric and geospace observatories with forecasts based on the most advanced space weather models. The iSWA system collects, generates, and presents a wide array of space weather resources in an intuitive, user-configurable, and adaptable format - thus enabling users to respond to current and future space weather impacts as well as enabling post-impact analysis. iSWA currently provides over 200 data and modeling products, and features a variety of tools that allow the user to browse, combine, and examine data and models from various sources. This presentation will consist of a summary of the iSWA products and an overview of the customizable user interfaces, and will feature several tutorial demonstrations highlighting the interactive tools and advanced capabilities.
A Comprehensive Web-Based Patient Information Environment
2001-10-25
The paper describes a new type of medical information environment which is fully web-enabled. The system can handle any type of medical...This paper describes a comprehensive, web-enabled, patient-centric medical information system called PiRiLiS...clinically focused. The system was found to reduce time for medical administration. The ability to view the entire patient record at any time, anywhere in...
Tebani, Abdellah; Afonso, Carlos; Marret, Stéphane; Bekri, Soumeya
2016-01-01
The rise of technologies that simultaneously measure thousands of data points represents the heart of systems biology. These technologies have had a huge impact on the discovery of next-generation diagnostics, biomarkers, and drugs in the precision medicine era. Systems biology aims to achieve systemic exploration of complex interactions in biological systems. Driven by high-throughput omics technologies and the computational surge, it enables multi-scale and insightful overviews of cells, organisms, and populations. Precision medicine capitalizes on these conceptual and technological advancements and stands on two main pillars: data generation and data modeling. High-throughput omics technologies allow the retrieval of comprehensive and holistic biological information, whereas computational capabilities enable high-dimensional data modeling and, therefore, accessible and user-friendly visualization. Furthermore, bioinformatics has enabled comprehensive multi-omics and clinical data integration for insightful interpretation. Despite their promise, the translation of these technologies into clinically actionable tools has been slow. In this review, we present state-of-the-art multi-omics data analysis strategies in a clinical context. The challenges of omics-based biomarker translation are discussed. Perspectives regarding the use of multi-omics approaches for inborn errors of metabolism (IEM) are presented by introducing a new paradigm shift in addressing IEM investigations in the post-genomic era. PMID:27649151
NASA Astrophysics Data System (ADS)
Ney, Michael; Abdulhalim, Ibrahim
2016-03-01
Skin cancer detection at its early stages has been the focus of a large number of experimental and theoretical studies during the past decades. Among these studies, two prominent approaches with high potential are reflectometric sensing in the THz wavelength region and polarimetric imaging techniques at visible wavelengths. While the contrast agent and source of sensitivity of THz radiation to cancer-related tissue alterations has been considered to be mainly the elevated water content of the cancerous tissue, the polarimetric approach has been verified to enable cancerous tissue differentiation based on cancer-induced structural alterations to the tissue. This study examines combining THz sensing with the polarimetric approach in order to achieve higher detection sensitivity than purely reflectometric THz measurements. For this, a comprehensive MC simulation of radiative transfer in a complex skin tissue model fitted to the THz domain has been developed, which considers the skin's stratified structure, tissue material optical dispersion, surface roughness, scatterers, and substructure organelles. Additionally, a narrow-beam Mueller matrix differential analysis technique is suggested for assessing skin cancer induced changes in the polarimetric image, enabling the tissue model and MC simulation to be used to determine the imaging parameters that yield maximal detection sensitivity.
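The differential analysis idea can be illustrated with a toy numpy computation: normalize and subtract the Mueller matrices of a suspect region and a healthy reference. Both matrices below are hypothetical depolarizing examples, not output of the MC simulation.

```python
# Element-wise Mueller matrix difference between lesion and reference
# tissue. Both 4x4 matrices are hypothetical diagonal (depolarizing)
# examples chosen only to make the arithmetic concrete.
import numpy as np

M_healthy = np.diag([1.0, 0.9, 0.9, 0.8])   # assumed reference response
M_lesion = np.diag([1.0, 0.7, 0.7, 0.5])    # assumed cancer-altered response

# Normalize by M00 so intensity differences do not mask polarimetric ones
dM = M_lesion / M_lesion[0, 0] - M_healthy / M_healthy[0, 0]
print("largest polarimetric change:", np.abs(dM).max())
```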
Comprehensive curation and analysis of global interaction networks in Saccharomyces cerevisiae
Reguly, Teresa; Breitkreutz, Ashton; Boucher, Lorrie; Breitkreutz, Bobby-Joe; Hon, Gary C; Myers, Chad L; Parsons, Ainslie; Friesen, Helena; Oughtred, Rose; Tong, Amy; Stark, Chris; Ho, Yuen; Botstein, David; Andrews, Brenda; Boone, Charles; Troyanskaya, Olga G; Ideker, Trey; Dolinski, Kara; Batada, Nizar N; Tyers, Mike
2006-01-01
Background The study of complex biological networks and prediction of gene function has been enabled by high-throughput (HTP) methods for detection of genetic and protein interactions. Sparse coverage in HTP datasets may, however, distort network properties and confound predictions. Although a vast number of well substantiated interactions are recorded in the scientific literature, these data have not yet been distilled into networks that enable system-level inference. Results We describe here a comprehensive database of genetic and protein interactions, and associated experimental evidence, for the budding yeast Saccharomyces cerevisiae, as manually curated from 31,793 abstracts and online publications. This literature-curated (LC) dataset contains 33,311 interactions, on the order of all extant HTP datasets combined. Surprisingly, HTP protein-interaction datasets currently achieve only around 14% coverage of the interactions in the literature. The LC network nevertheless shares attributes with HTP networks, including scale-free connectivity and correlations between interactions, abundance, localization, and expression. We find that essential genes or proteins are enriched for interactions with other essential genes or proteins, suggesting that the global network may be functionally unified. This interconnectivity is supported by a substantial overlap of protein and genetic interactions in the LC dataset. We show that the LC dataset considerably improves the predictive power of network-analysis approaches. The full LC dataset is available at the BioGRID and SGD databases. Conclusion Comprehensive datasets of biological interactions derived from the primary literature provide critical benchmarks for HTP methods, augment functional prediction, and reveal system-level attributes of biological networks. PMID:16762047
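The "scale-free connectivity" attribute mentioned above is commonly checked by fitting a power law to the network's degree distribution. The sketch below does a naive log-log fit on a synthetic Barabási-Albert graph standing in for the LC network; real analyses prefer maximum-likelihood estimators over this simple fit.

```python
# Naive scale-free check: fit P(k) ~ k^-gamma on log-log axes.
# A synthetic preferential-attachment graph stands in for real data.
import numpy as np
import networkx as nx

g = nx.barabasi_albert_graph(n=3000, m=2, seed=0)   # stand-in network
degrees = np.array([d for _, d in g.degree])

ks, counts = np.unique(degrees, return_counts=True)
pk = counts / counts.sum()
slope, intercept = np.polyfit(np.log(ks), np.log(pk), 1)
print(f"estimated exponent gamma ~ {-slope:.2f}")   # rough estimate only
```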
Artificial Intelligence and Language Comprehension.
ERIC Educational Resources Information Center
National Inst. of Education (DHEW), Washington, DC. Basic Skills Group. Learning Div.
The three papers in this volume concerning artificial intelligence and language comprehension were commissioned by the National Institute of Education to further the understanding of the cognitive processes that enable people to comprehend what they read. The first paper, "Artificial Intelligence and Language Comprehension," by Terry Winograd,…
ERIC Educational Resources Information Center
James, Susanne M.
2011-01-01
Special educators' knowledge of reading concepts are not only influenced by their understanding of the subject matter, but also by an amalgam of content and pedagogy that enables teachers to integrate this information to meet the diverse needs of students with disabilities. This study documented the conceptual knowledge that special education…
Systems-Level Analysis of Innate Immunity
Zak, Daniel E.; Tam, Vincent C.; Aderem, Alan
2014-01-01
Systems-level analysis of biological processes strives to comprehensively and quantitatively evaluate the interactions between the relevant molecular components over time, thereby enabling development of models that can be employed to ultimately predict behavior. Rapid development in measurement technologies (omics), when combined with the accessible nature of the cellular constituents themselves, is allowing the field of innate immunity to take significant strides toward this lofty goal. In this review, we survey exciting results derived from systems biology analyses of the immune system, ranging from gene regulatory networks to influenza pathogenesis and systems vaccinology. PMID:24655298
Comprehensive Analysis of Immunological Synapse Phenotypes Using Supported Lipid Bilayers.
Valvo, Salvatore; Mayya, Viveka; Seraia, Elena; Afrose, Jehan; Novak-Kotzer, Hila; Ebner, Daniel; Dustin, Michael L
2017-01-01
Supported lipid bilayers (SLB) formed on glass substrates have been a useful tool for study of immune cell signaling since the early 1980s. The mobility of lipid-anchored proteins in the system, first described for antibodies binding to synthetic phospholipid head groups, allows for the measurement of two-dimensional binding reactions and signaling processes in a single imaging plane over time or for fixed samples. The fragility of SLB and the challenges of building and validating individual substrates limit most experimenters to ~10 samples per day, perhaps increasing this few-fold when examining fixed samples. Successful experiments might then require further days to fully analyze. We present methods for automation of many steps in SLB formation, imaging in 96-well glass bottom plates, and analysis that enables >100-fold increase in throughput for fixed samples and wide-field fluorescence. This increased throughput will allow better coverage of relevant parameters and more comprehensive analysis of aspects of the immunological synapse that are well reconstituted by SLB.
IDEOM: an Excel interface for analysis of LC-MS-based metabolomics data.
Creek, Darren J; Jankevics, Andris; Burgess, Karl E V; Breitling, Rainer; Barrett, Michael P
2012-04-01
The application of emerging metabolomics technologies to the comprehensive investigation of cellular biochemistry has been limited by bottlenecks in data processing, particularly noise filtering and metabolite identification. IDEOM provides a user-friendly data processing application that automates filtering and identification of metabolite peaks, paying particular attention to common sources of noise and false identifications generated by liquid chromatography-mass spectrometry (LC-MS) platforms. Building on advanced processing tools such as mzMatch and XCMS, it allows users to run a comprehensive pipeline for data analysis and visualization from a graphical user interface within Microsoft Excel, a familiar program for most biological scientists. IDEOM is provided free of charge at http://mzmatch.sourceforge.net/ideom.html, as a macro-enabled spreadsheet (.xlsb). Implementation requires Microsoft Excel (2007 or later); R is also required for full functionality. Contact: michael.barrett@glasgow.ac.uk. Supplementary data are available at Bioinformatics online.
Improving Student Comprehension Skills through Instructional Strategies.
ERIC Educational Resources Information Center
Sharp, Patricia; Ashby, Doris
This report describes a program designed to enhance reading comprehension. Reading comprehension relies on skills that enable students to remember facts, draw out main ideas, make inferences, and relate reading to personal experiences. The focus group consisted of middle and high school students in a metropolitan area in northern…
Measuring Speech Comprehensibility in Students with Down Syndrome
ERIC Educational Resources Information Center
Yoder, Paul J.; Woynaroski, Tiffany; Camarata, Stephen
2016-01-01
Purpose: There is an ongoing need to develop assessments of spontaneous speech that focus on whether the child's utterances are comprehensible to listeners. This study sought to identify the attributes of a stable ratings-based measure of speech comprehensibility, which enabled examining the criterion-related validity of an orthography-based…
Breen, Andrew J; Xie, Kelvin Y; Moody, Michael P; Gault, Baptiste; Yen, Hung-Wei; Wong, Christopher C; Cairney, Julie M; Ringer, Simon P
2014-08-01
Atom probe is a powerful technique for studying the composition of nano-precipitates, but their morphology within the reconstructed data is distorted due to the so-called local magnification effect. A new technique has been developed to mitigate this limitation by characterizing the distribution of the surrounding matrix atoms, rather than those contained within the nano-precipitates themselves. A comprehensive chemical analysis enables further information on size and chemistry to be obtained. The method enables new insight into the morphology and chemistry of niobium carbonitride nano-precipitates within ferrite for a series of Nb-microalloyed ultra-thin cast strip steels. The results are supported by complementary high-resolution transmission electron microscopy.
A strategic approach to workforce development for local public health.
Bryant, Beverley; Ward, Megan
2017-11-09
In 2009, Peel Public Health (PPH) set a vision to transform the work of public health from efficient delivery of public health services as defined by provincial mandate to the robust analysis of the health status of the local population and selection and implementation of programming to achieve best health outcomes. A strategic approach to the workforce was a key enabler. PPH is a public health unit in Ontario that serves 1.4 million people. An organization-wide strategic workforce development program was instituted. It is theory-based, evidence-informed and data-driven. A first step was a conceptual framework, followed by interventions in workforce planning, human resources management, and capacity development. The program was built on evidence reviews, theory, and public health core competencies. Interventions spread across the employee work-life span. Capacity development based on the public health core competencies is a main focus, particularly analytical capacity to support decision-making. Employees gain skill and knowledge in comprehensive population health. Leadership evolves as work shifts to the analysis of health status and development of interventions. Effective human resource processes ensure appropriate job design, recruitment and orientation. Analysis of the workforce leads to vigorous employee development to ensure a strong pool of potential leadership successors. Theory, research evidence, and data provide a robust foundation for workforce development. Competencies are important inputs to job descriptions, recruitment, training, and human resource processes. A comprehensive workforce development strategy enables the development of a skilled workforce capable of responding to the needs of the population it serves.
Novotná, H; Kmiecik, O; Gałązka, M; Krtková, V; Hurajová, A; Schulzová, V; Hallmann, E; Rembiałkowska, E; Hajšlová, J
2012-01-01
The rapidly growing demand for organic food requires the availability of analytical tools enabling their authentication. Recently, metabolomic fingerprinting/profiling has been shown to be a promising option for the comprehensive characterisation of small molecules occurring in plants, since their pattern may reflect the impact of various external factors. In a two-year pilot study, concerned with the classification of organic versus conventional crops, ambient mass spectrometry consisting of a direct analysis in real time (DART) ion source and a time-of-flight mass spectrometer (TOFMS) was employed. This novel methodology was tested on 40 tomato and 24 pepper samples grown under specified conditions. To calculate statistical models, the obtained data (mass spectra) were processed by principal component analysis (PCA) followed by linear discriminant analysis (LDA). The results from the positive ionisation mode enabled better differentiation between organic and conventional samples than the results from the negative mode. In this case, the recognition ability obtained by LDA was 97.5% for tomato and 100% for pepper samples, and the prediction abilities were above 80% for both sample sets. The results suggest that the year of production had a stronger influence on the metabolomic fingerprints than the type of farming (organic versus conventional). In any case, DART-TOFMS is a promising tool for rapid screening of samples. Establishing comprehensive (multi-sample) long-term databases may further help to improve the quality of statistical classification models.
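To make the chemometric chain concrete, the following minimal sketch reproduces a PCA-then-LDA workflow with cross-validation on simulated, binned DART-TOFMS fingerprints; the matrix shape, class labels, component count, and fold count are illustrative assumptions rather than the study's actual settings.

```python
# Sketch of the PCA -> LDA classification chain, applied to hypothetical
# DART-TOFMS fingerprints. Data shapes and labels are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.random((40, 500))          # 40 tomato samples x 500 binned m/z intensities
y = np.repeat(["organic", "conventional"], 20)

# PCA compresses the spectra; LDA then separates the two farming types.
model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
scores = cross_val_score(model, X, y, cv=5)   # estimate of prediction ability
print(f"cross-validated recognition: {scores.mean():.1%}")
```

With real spectra, the cross-validated score plays the role of the "prediction ability" quoted above.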
NASA Astrophysics Data System (ADS)
Wang, Ximing; Documet, Jorge; Garrison, Kathleen A.; Winstein, Carolee J.; Liu, Brent
2012-02-01
Stroke is a major cause of adult disability. The Interdisciplinary Comprehensive Arm Rehabilitation Evaluation (I-CARE) clinical trial aims to evaluate a therapy for arm rehabilitation after stroke. A primary outcome measure is correlative analysis between stroke lesion characteristics and standard measures of rehabilitation progress, from data collected at seven research facilities across the country. Sharing and communication of brain imaging and behavioral data is thus a challenge for collaboration. A solution is proposed as a web-based system with tools supporting imaging and informatics related data. In this system, users may upload anonymized brain images through a secure internet connection and the system will sort the imaging data for storage in a centralized database. Users may utilize an annotation tool to mark up images. In addition to imaging informatics, electronic data forms (for example, clinical data forms) are also integrated. Clinical information is processed and stored in the database to enable future data-mining related development. Tele-consultation is facilitated through the development of a thin-client image viewing application. For convenience, the system supports access through desktop PCs, laptops, and iPads. Thus, clinicians may enter data directly into the system via iPad while working with participants in the study. Overall, this comprehensive imaging informatics system enables users to collect, organize and analyze stroke cases efficiently.
Evaluation of Eleventh Grade Turkish Pupils' Comprehension of General Chemistry Concepts
ERIC Educational Resources Information Center
Belge Can, Hatice; Boz, Yezdan
2011-01-01
The main purpose of this study is to evaluate eleventh grade Turkish pupils' comprehension of various general chemistry concepts, which in turn enables investigation of the chemistry concepts that are easier and harder for students to comprehend. Examining the effect of gender and last-semester chemistry course grades on pupils' comprehension of general…
The Effects of Self-Questioning on Reading Comprehension: A Literature Review
ERIC Educational Resources Information Center
Joseph, Laurice M.; Alber-Morgan, Sheila; Cullen, Jennifer; Rouse, Christina
2016-01-01
The ability to monitor one's own reading comprehension is a critical skill for deriving meaning from text. Self-questioning during reading is a strategy that enables students to monitor their reading comprehension and increases their ability to learn independently. The purpose of this article was to review experimental research studies that…
Moore, Andrew; Crossley, Anne; Ng, Bernard; Phillips, Lawrence; Sancak, Özgür; Rainsford, K D
2017-10-01
To test the ability of a multicriteria decision analysis (MCDA) model to incorporate disparate data sources of varying quality along with clinical judgement in a benefit-risk assessment of six well-known pain-relief drugs. Six over-the-counter (OTC) analgesics were evaluated against three favourable effects and eight unfavourable effects by seven experts who specialise in the relief of pain in a 2-day facilitated workshop; their input data and judgements were later peer-reviewed by five additional experts. Ibuprofen salts and solubilised formulations emerged with the best benefit-risk profile, followed by naproxen, ibuprofen acid, diclofenac, paracetamol and aspirin. Multicriteria decision analysis enabled participants to evaluate the OTC analgesics against a range of favourable and unfavourable effects in a group setting that enabled all issues to be openly aired and debated. The model was easily communicated and understood by the peer reviewers, so the model should be comprehensible to physicians, pharmacists and other health professionals. © 2017 Royal Pharmaceutical Society.
Plancoulaine, Benoit; Laurinaviciene, Aida; Herlin, Paulette; Besusparis, Justinas; Meskauskas, Raimundas; Baltrusaityte, Indra; Iqbal, Yasir; Laurinavicius, Arvydas
2015-10-19
Digital image analysis (DIA) enables higher accuracy, reproducibility, and capacity to enumerate cell populations by immunohistochemistry; however, the most unique benefits may be obtained by evaluating the spatial distribution and intra-tissue variance of markers. The proliferative activity of breast cancer tissue, estimated by the Ki67 labeling index (Ki67 LI), is a prognostic and predictive biomarker requiring robust measurement methodologies. We performed DIA on whole-slide images (WSI) of 302 surgically removed Ki67-stained breast cancer specimens; the tumour classifier algorithm was used to automatically detect tumour tissue but was not trained to distinguish between invasive and non-invasive carcinoma cells. The WSI DIA-generated data were subsampled by hexagonal tiling (HexT). Distribution and texture parameters were compared to conventional WSI DIA and pathology report data. Factor analysis of the data set, including total numbers of tumor cells, the Ki67 LI and Ki67 distribution, and texture indicators, extracted 4 factors, identified as entropy, proliferation, bimodality, and cellularity. The factor scores were further utilized in cluster analysis, outlining subcategories of heterogeneous tumors with predominant entropy, bimodality, or both at different levels of proliferative activity. The methodology also allowed the visualization of Ki67 LI heterogeneity in tumors and the automated detection and quantitative evaluation of Ki67 hotspots, based on the upper quintile of the HexT data, conceptualized as the "Pareto hotspot". We conclude that systematic subsampling of DIA-generated data into HexT enables comprehensive Ki67 LI analysis that reflects aspects of intra-tumor heterogeneity and may serve as a methodology to improve digital immunohistochemistry in general.
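As a concrete illustration of the subsampling idea, this sketch computes a per-tile Ki67 labeling index and flags hotspot tiles as the upper quintile of tile values (the "Pareto hotspot" rule from the abstract); square tiles stand in for the paper's hexagons, and cell coordinates and positivity flags are simulated assumptions.

```python
# Tile-based Ki67 analysis sketch: square tiles approximate the paper's
# hexagonal tiling; cell data are simulated.
import numpy as np

rng = np.random.default_rng(1)
n_cells = 50_000
xy = rng.random((n_cells, 2)) * 10_000        # cell centroids (micrometres)
ki67_pos = rng.random(n_cells) < 0.15         # positivity calls from image analysis

tile = 500.0                                  # tile edge length, an assumption
tile_id = (xy[:, 0] // tile).astype(int) * 1000 + (xy[:, 1] // tile).astype(int)

li = {}                                       # per-tile Ki67 labeling index
for t in np.unique(tile_id):
    in_tile = tile_id == t
    li[t] = ki67_pos[in_tile].mean()

values = np.array(list(li.values()))
cutoff = np.quantile(values, 0.8)             # upper quintile of tile LIs
hotspots = [t for t, v in li.items() if v >= cutoff]
print(f"{len(hotspots)} hotspot tiles of {len(li)}, LI cutoff {cutoff:.2f}")
```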
Mast, Fred D.; Ratushny, Alexander V.
2014-01-01
Systems cell biology melds high-throughput experimentation with quantitative analysis and modeling to understand many critical processes that contribute to cellular organization and dynamics. Recently, there have been several advances in technology and in the application of modeling approaches that enable the exploration of the dynamic properties of cells. Merging technology and computation offers an opportunity to objectively address unsolved cellular mechanisms, and has revealed emergent properties and helped to gain a more comprehensive and fundamental understanding of cell biology. PMID:25225336
Campodonico, Miguel A; Vaisman, Daniela; Castro, Jean F; Razmilic, Valeria; Mercado, Francesca; Andrews, Barbara A; Feist, Adam M; Asenjo, Juan A
2016-12-01
Acidithiobacillus ferrooxidans is a gram-negative chemolithoautotrophic γ-proteobacterium. It typically grows at an external pH of 2 using the oxidation of ferrous ions by oxygen, producing ferric ions and water, while fixing carbon dioxide from the environment. A. ferrooxidans is of great interest for biomining and environmental applications, as it can process mineral ores and alleviate the negative environmental consequences derived from the mining processes. In this study, the first genome-scale metabolic reconstruction of A. ferrooxidans ATCC 23270 was generated (iMC507). A total of 587 metabolic and transport/exchange reactions, 507 genes and 573 metabolites organized in over 42 subsystems were incorporated into the model. Based on a new genetic algorithm approach that integrates flux balance analysis, chemiosmotic theory, and physiological data, the proton translocation stoichiometry for a number of enzymes and maintenance parameters under aerobic chemolithoautotrophic conditions using three different electron donors were estimated. Furthermore, detailed electron transfer and carbon flux distributions during chemolithoautotrophic growth using ferrous ion, tetrathionate and thiosulfate were determined and reported. Finally, 134 growth-coupled designs that enable extracellular polysaccharide production were calculated. iMC507 serves as a knowledgebase for summarizing and categorizing the information currently available for A. ferrooxidans and enables the understanding and engineering of Acidithiobacillus and similar species from a comprehensive model-driven perspective for biomining applications.
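For readers unfamiliar with how such a reconstruction is interrogated, here is a minimal flux balance analysis with COBRApy on a three-reaction toy network; the network is purely illustrative, and the published model would instead be loaded from its own file (e.g., with cobra.io.read_sbml_model).

```python
# Minimal FBA sketch with COBRApy; the toy network is an assumption used
# only to show the mechanics a model like iMC507 supports.
from cobra import Model, Metabolite, Reaction

model = Model("toy")
a = Metabolite("A_c", compartment="c")
b = Metabolite("B_c", compartment="c")

uptake = Reaction("EX_A")                 # substrate uptake, bounded like a medium
uptake.add_metabolites({a: 1.0})
uptake.bounds = (0, 10)

convert = Reaction("A_to_B")              # internal conversion A -> B
convert.add_metabolites({a: -1.0, b: 1.0})
convert.bounds = (0, 1000)

growth = Reaction("BIOMASS")              # pseudo-reaction draining B
growth.add_metabolites({b: -1.0})
growth.bounds = (0, 1000)

model.add_reactions([uptake, convert, growth])
model.objective = "BIOMASS"

solution = model.optimize()               # maximize biomass at steady state
print("growth flux:", solution.objective_value)   # 10, limited by the uptake bound
```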
CASTIN: a system for comprehensive analysis of cancer-stromal interactome.
Komura, Daisuke; Isagawa, Takayuki; Kishi, Kazuki; Suzuki, Ryohei; Sato, Reiko; Tanaka, Mariko; Katoh, Hiroto; Yamamoto, Shogo; Tatsuno, Kenji; Fukayama, Masashi; Aburatani, Hiroyuki; Ishikawa, Shumpei
2016-11-09
Cancer microenvironment plays a vital role in cancer development and progression, and cancer-stromal interactions have been recognized as important targets for cancer therapy. However, identifying relevant and druggable cancer-stromal interactions is challenging due to the lack of quantitative methods to analyze the whole cancer-stromal interactome. We present CASTIN (CAncer-STromal INteractome analysis), a novel framework for the evaluation of the cancer-stromal interactome from RNA-Seq data using cancer xenograft models. For each ligand-receptor interaction derived from a curated protein-protein interaction database, CASTIN summarizes gene expression profiles of cancer and stroma into three evaluation indices. These indices provide quantitative evaluation and comprehensive visualization of the interactome, and thus enable the identification of critical cancer-microenvironment interactions, which would be potential drug targets. We applied CASTIN to a pancreatic ductal adenocarcinoma dataset, successfully characterized the individual cancer in terms of cancer-stromal relationships, and identified both well-known and less-characterized druggable interactions. CASTIN provides a comprehensive view of the cancer-stromal interactome and is useful for identifying critical interactions which may serve as potential drug targets in the cancer microenvironment. CASTIN is available at: http://github.com/tmd-gpat/CASTIN.
2012-01-01
Background Denmark has implemented a comprehensive, nationwide pharmaceutical information system, and this system has been evaluated by the Danish Council of Ethics. The system can be seen as an exemplar of a comprehensive health information system for clinical use. Analysis The paper analyses 1) how informed consent can be implemented in the system and how different implementations create different impacts on autonomy and control of information, and 2) arguments directed towards justifying not seeking informed consent in this context. Results and Conclusion Based on the analysis, a heuristic is provided which enables a ranking and estimation of the impact on autonomy and control of information of different options for consent to entry of data into the system and use of data from the system. The danger of routinisation of consent is identified. The Danish pharmaceutical information system raises issues in relation to autonomy and control of information, issues that will also occur in relation to other similar comprehensive health information systems. Some of these issues are well understood and their impact can be judged using the heuristic provided. More research is, however, needed in relation to routinisation of consent. PMID:23157854
Earth-Science Data Co-Locating Tool
NASA Technical Reports Server (NTRS)
Lee, Seungwon; Pan, Lei; Block, Gary L.
2012-01-01
This software is used to locate Earth-science satellite data and climate-model analysis outputs in space and time, enabling the direct comparison of any set of data with different spatial and temporal resolutions. It is written as three modules with clearly separated functionality and interfaces, which enables rapid development of support for any new data set. In this updated version of the tool, several new front ends were developed for new products. This software finds co-locatable data pairs for given sets of data products and creates new data products that share the same spatial and temporal coordinates. This facilitates the direct comparison between two heterogeneous datasets and the comprehensive and synergistic use of the datasets.
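A minimal sketch of the co-location step, assuming point observations with longitude, latitude, and time: build a KD-tree over one dataset in scaled space-time coordinates and query it with the other. The space/time scaling factor and tolerance are assumptions of the sketch; a production tool would also handle map projections, longitude wraparound, and gridded products.

```python
# Space-time co-location sketch: nearest-neighbour matching between two
# simulated point datasets with different sampling.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
# columns: lon (deg), lat (deg), time (hours since epoch)
a = rng.random((1000, 3)) * [360, 180, 240]
b = rng.random((5000, 3)) * [360, 180, 240]

# Scale time so that 1 hour "costs" the same as 0.25 degrees; this exchange
# rate between space and time is an assumption of the sketch.
scale = np.array([1.0, 1.0, 0.25])
tree = cKDTree(b * scale)
dist, idx = tree.query(a * scale, k=1)

matched = dist < 1.0                # accept pairs closer than the tolerance
pairs = np.column_stack([np.where(matched)[0], idx[matched]])
print(f"{len(pairs)} co-located pairs")
```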
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCorkle, D.; Yang, C.; Jordan, T.
2007-06-01
Modeling and simulation tools are becoming pervasive in the process engineering practice of designing advanced power generation facilities. These tools enable engineers to explore many what-if scenarios before cutting metal or constructing a pilot scale facility. While such tools enable investigation of crucial plant design aspects, typical commercial process simulation tools such as Aspen Plus®, gPROMS®, and HYSYS® still do not explore some plant design information, including computational fluid dynamics (CFD) models for complex thermal and fluid flow phenomena, economics models for policy decisions, operational data after the plant is constructed, and as-built information for use in as-designed models. Software tools must be created that allow disparate sources of information to be integrated if environments are to be constructed where process simulation information can be accessed. At the Department of Energy's (DOE) National Energy Technology Laboratory (NETL), the Advanced Process Engineering Co-Simulator (APECS) has been developed as an integrated software suite that combines process simulation (e.g., Aspen Plus) and high-fidelity equipment simulation (e.g., Fluent® CFD), together with advanced analysis capabilities including case studies, sensitivity analysis, stochastic simulation for risk/uncertainty analysis, and multi-objective optimization. In this paper, we discuss the initial phases of integrating APECS with the immersive and interactive virtual engineering software, VE-Suite, developed at Iowa State University and Ames Laboratory. VE-Suite utilizes the ActiveX (OLE Automation) controls in Aspen Plus wrapped by the CASI library developed by Reaction Engineering International to run the process simulation and query for unit operation results. This integration permits any application that uses the VE-Open interface to integrate with APECS co-simulations, enabling construction of the comprehensive virtual engineering environment needed for the rapid engineering of advanced power generation facilities.
In-Depth Understanding. A Computer Model of Integrated Processing for Narrative Comprehension.
1982-05-01
Mr. Smith at the First National Bank, which he had never returned. Although both SY-I and DIVORCE-I involve the lending and returning of money…CLOTHES0. "Clothes are also changed as an enablement for a social activity. For example, pajamas enable sleeping and football uniforms enable playing
Oba, Mami; Tsuchiaka, Shinobu; Omatsu, Tsutomu; Katayama, Yukie; Otomaru, Konosuke; Hirata, Teppei; Aoki, Hiroshi; Murata, Yoshiteru; Makino, Shinji; Nagai, Makoto; Mizutani, Tetsuya
2018-01-08
We tested the usefulness of the SureSelect target enrichment system, a comprehensive viral nucleic acid detection method, for rapid identification of viral pathogens in feces samples of cattle, pigs and goats. This system enriches nucleic acids of target viruses in clinical/field samples by using a library of biotinylated RNAs with sequences complementary to the target viruses. The enriched nucleic acids are amplified by PCR and subjected to next generation sequencing to identify the target viruses. In many samples, the SureSelect target enrichment method increased detection efficiency for the viruses listed in the biotinylated RNA library. Furthermore, this method enabled us to determine a nearly full-length genome sequence of porcine parainfluenza virus 1 and greatly increased Breadth, a value indicating the ratio of the mapped consensus length to the reference genome length, in pig samples. Our data show the usefulness of the SureSelect target enrichment system for comprehensive analysis of the genomic information of various viruses in field samples. Copyright © 2017 Elsevier Inc. All rights reserved.
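The Breadth statistic mentioned above reduces to a simple coverage calculation; this sketch computes it from illustrative alignment intervals (real data would come from mapped reads, e.g., a BAM file).

```python
# Sketch of the "Breadth" statistic: fraction of a reference genome covered
# by the mapped consensus. Genome size and intervals are assumptions.
import numpy as np

genome_length = 15_600                     # e.g., a paramyxovirus-sized genome
alignments = [(0, 5200), (5100, 9800), (11000, 15600)]   # (start, end) pairs

covered = np.zeros(genome_length, dtype=bool)
for start, end in alignments:
    covered[start:end] = True              # mark mapped reference positions

breadth = covered.mean()
print(f"Breadth = {breadth:.1%}")          # fraction of the reference recovered
```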
ERIC Educational Resources Information Center
Cole, Charles; Mandelblatt, Bertie
2000-01-01
Uses Kintsch's proposition-based construction-integration theory of discourse comprehension to detail the user coding operations that occur in each of the three subsystems (Perception, Comprehension, Application) in which users process an information retrieval system's (IRS) message. Describes an IRS device made up of two separate parts that enable…
Schott, Ann-Sophie; Behr, Jürgen; Quinn, Jennifer; Vogel, Rudi F.
2016-01-01
Lactic acid bacteria (LAB) are widely used as starter cultures in the manufacture of foods. Upon preparation, these cultures undergo various stresses resulting in losses of survival and fitness. In order to find conditions for the subsequent identification of proteomic biomarkers and their exploitation for preconditioning of strains, we subjected Lactobacillus (Lb.) paracasei subsp. paracasei TMW 1.1434 (F19) to different stress qualities (osmotic stress, oxidative stress, temperature stress, pH stress and starvation stress). We analysed the dynamics of its stress responses based on the expression of stress proteins using MALDI-TOF mass spectrometry (MS), which has so far been used for species identification. Exploiting the methodology of accumulating protein expression profiles by MALDI-TOF MS followed by statistical evaluation with cluster analysis and discriminant analysis of principal components (DAPC), it was possible to monitor the expression of low molecular weight stress proteins, identify a specific time point when the expression of stress proteins reached its maximum, and statistically differentiate types of adaptive responses into groups. Above the specific result for F19 and its stress response, these results demonstrate the discriminatory power of MALDI-TOF MS to characterize even dynamics of stress responses of bacteria and enable a knowledge-based focus on the laborious identification of biomarkers and stress proteins. To our knowledge, the implementation of MALDI-TOF MS protein profiling for the fast and comprehensive analysis of various stress responses is new to the field of bacterial stress responses. Consequently, we generally propose MALDI-TOF MS as an easy and quick method to characterize responses of microbes to different environmental conditions, and to focus efforts of more elaborate approaches on time points and dynamics of stress responses. PMID:27783652
Beckers, Matthew; Mohorianu, Irina; Stocks, Matthew; Applegate, Christopher; Dalmay, Tamas; Moulton, Vincent
2017-01-01
Recently, high-throughput sequencing (HTS) has revealed compelling details about the small RNA (sRNA) population in eukaryotes. These 20 to 25 nt noncoding RNAs can influence gene expression by acting as guides for the sequence-specific regulatory mechanism known as RNA silencing. The increase in sequencing depth and number of samples per project enables a better understanding of the role sRNAs play by facilitating the study of expression patterns. However, the intricacy of the biological hypotheses coupled with a lack of appropriate tools often leads to inadequate mining of the available data and thus, an incomplete description of the biological mechanisms involved. To enable a comprehensive study of differential expression in sRNA data sets, we present a new interactive pipeline that guides researchers through the various stages of data preprocessing and analysis. This includes various tools, some of which we specifically developed for sRNA analysis, for quality checking and normalization of sRNA samples as well as tools for the detection of differentially expressed sRNAs and identification of the resulting expression patterns. The pipeline is available within the UEA sRNA Workbench, a user-friendly software package for the processing of sRNA data sets. We demonstrate the use of the pipeline on a H. sapiens data set; additional examples on a B. terrestris data set and on an A. thaliana data set are described in the Supplemental Information. A comparison with existing approaches is also included, which exemplifies some of the issues that need to be addressed for sRNA analysis and how the new pipeline may be used to do this. PMID:28289155
VARS-TOOL: A Comprehensive, Efficient, and Robust Sensitivity Analysis Toolbox
NASA Astrophysics Data System (ADS)
Razavi, S.; Sheikholeslami, R.; Haghnegahdar, A.; Esfahbod, B.
2016-12-01
VARS-TOOL is an advanced sensitivity and uncertainty analysis toolbox, applicable to the full range of computer simulation models, including Earth and Environmental Systems Models (EESMs). The toolbox was developed originally around VARS (Variogram Analysis of Response Surfaces), which is a general framework for Global Sensitivity Analysis (GSA) that utilizes the variogram/covariogram concept to characterize the full spectrum of sensitivity-related information, thereby providing a comprehensive set of "global" sensitivity metrics with minimal computational cost. VARS-TOOL is unique in that, with a single sample set (set of simulation model runs), it generates simultaneously three philosophically different families of global sensitivity metrics: (1) variogram-based metrics called IVARS (Integrated Variogram Across a Range of Scales - the VARS approach), (2) variance-based total-order effects (the Sobol approach), and (3) derivative-based elementary effects (the Morris approach). VARS-TOOL also offers two novel features. The first is a sequential sampling algorithm, called Progressive Latin Hypercube Sampling (PLHS), which allows the sample size for GSA to be increased progressively while maintaining the required sample distributional properties. The second is a "grouping strategy" that adaptively groups the model parameters based on their sensitivity or function to maximize the reliability of GSA results. These features, in conjunction with bootstrapping, enable the user to monitor the stability, robustness, and convergence of GSA as the sample size increases for any given case study. VARS-TOOL has been shown to achieve robust and stable results within 1-2 orders of magnitude smaller sample sizes (fewer model runs) than alternative tools. VARS-TOOL, available in MATLAB and Python, is under continuous development, and new capabilities and features are forthcoming.
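A toy version of the variogram idea, not the toolbox's implementation: estimate the directional variogram gamma_i(h) = 0.5 E[(f(x + h e_i) - f(x))^2] for each parameter from star-like perturbations around random centres. The test function, perturbation scale, and sample size are arbitrary assumptions.

```python
# Directional-variogram sensitivity sketch in the spirit of VARS.
import numpy as np

def f(x):                                   # toy 3-parameter model (assumption)
    return np.sin(x[..., 0]) + 5 * x[..., 1] ** 2 + 0.1 * x[..., 2]

rng = np.random.default_rng(3)
centres = rng.random((200, 3))              # star centres in the unit cube
h = 0.1                                     # perturbation scale (resolution)

gamma = np.zeros(3)
for i in range(3):
    stepped = centres.copy()
    stepped[:, i] = np.clip(stepped[:, i] + h, 0, 1)
    gamma[i] = 0.5 * np.mean((f(stepped) - f(centres)) ** 2)

print("directional variogram at h=0.1:", gamma)   # larger => more sensitive
```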
2010-01-01
Background An important focus of genomic science is the discovery and characterization of all functional elements within genomes. In silico methods are used in genome studies to discover putative regulatory genomic elements (called words or motifs). Although a number of methods have been developed for motif discovery, most of them lack the scalability needed to analyze large genomic data sets. Methods This manuscript presents WordSeeker, an enumerative motif discovery toolkit that utilizes multi-core and distributed computational platforms to enable scalable analysis of genomic data. A controller task coordinates activities of worker nodes, each of which (1) enumerates a subset of the DNA word space and (2) scores words with a distributed Markov chain model. Results A comprehensive suite of performance tests was conducted to demonstrate the performance, speedup and efficiency of WordSeeker. The scalability of the toolkit enabled the analysis of the entire genome of Arabidopsis thaliana; the results of the analysis were integrated into The Arabidopsis Gene Regulatory Information Server (AGRIS). A public version of WordSeeker was deployed on the Glenn cluster at the Ohio Supercomputer Center. Conclusion WordSeeker effectively utilizes concurrent computing platforms to enable the identification of putative functional elements in genomic data sets. This capability facilitates the analysis of the large quantity of sequenced genomic data. PMID:21210985
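A single-process sketch of the word-scoring idea (the toolkit distributes this across worker nodes): enumerate all DNA words of length k and score over-representation against a maximal-order Markov background estimated from the same sequences. The sequences and k are illustrative assumptions.

```python
# Enumerative k-mer scoring against a Markov background model.
from collections import Counter
from itertools import product

seqs = ["ACGTACGTGACG", "TTACGTGCACGT", "ACGTGGGTACGT"]   # toy input
k = 4

def count_words(n):
    """Observed counts of all length-n words across the sequence set."""
    c = Counter()
    for s in seqs:
        for i in range(len(s) - n + 1):
            c[s[i:i + n]] += 1
    return c

obs_k, obs_k1, obs_k2 = count_words(k), count_words(k - 1), count_words(k - 2)

# Maximal-order Markov expectation: E[w] ~ N(prefix) * N(suffix) / N(overlap).
scores = {}
for word in map("".join, product("ACGT", repeat=k)):
    overlap = obs_k2.get(word[1:-1], 0)
    if overlap:
        expected = obs_k1.get(word[:-1], 0) * obs_k1.get(word[1:], 0) / overlap
        if expected:
            scores[word] = obs_k.get(word, 0) / expected   # over-representation

print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:5])
```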
Grégory, Dubourg; Chaudet, Hervé; Lagier, Jean-Christophe; Raoult, Didier
2018-03-01
Describing the human gut microbiota is one of the most exciting challenges of the 21st century. Currently, high-throughput sequencing methods are considered the gold standard for this purpose; however, they suffer from several drawbacks, including their inability to detect minority populations. The advent of mass-spectrometric (MS) approaches to identify cultured bacteria in clinical microbiology enabled the creation of the culturomics approach, which aims to establish a comprehensive repertoire of cultured prokaryotes from human specimens using extensive culture conditions. Areas covered: This review first underlines how mass spectrometric approaches have revolutionized clinical microbiology. It then highlights the contribution of MS-based methods to culturomics studies, paying particular attention to the extension of the human gut microbiota repertoire through the discovery of new bacterial species. Expert commentary: MS-based approaches have enabled cultivation methods to be resuscitated to study the human gut microbiota and thus to fill in the blanks left by high-throughput sequencing methods in terms of culturing minority populations. Continued efforts to recover new taxa using culture methods, combined with their rapid implementation in genomic databases, would allow for an exhaustive analysis of the gut microbiota through the use of a comprehensive approach.
Analysis of the mixing processes in the subtropical Advancetown Lake, Australia
NASA Astrophysics Data System (ADS)
Bertone, Edoardo; Stewart, Rodney A.; Zhang, Hong; O'Halloran, Kelvin
2015-03-01
This paper presents an extensive investigation of the mixing processes occurring in the subtropical monomictic Advancetown Lake, which is the main water body supplying the Gold Coast City in Australia. Meteorological, chemical and physical data were collected from weather stations, laboratory analysis of grab samples and an in-situ Vertical Profiling System (VPS), for the period 2008-2012. This comprehensive, high frequency dataset was utilised to develop a one-dimensional model of the vertical transport and mixing processes occurring along the water column. Multivariate analysis revealed that air temperature and rain forecasts enabled a reliable prediction of the strength of the lake stratification. Vertical diffusion is the main process driving vertical mixing, particularly during winter circulation. However, a high reservoir volume and warm winters can limit the degree of winter mixing, causing only partial circulation to occur, as was the case in 2013. This research study provides a comprehensive approach for understanding and predicting mixing processes for similar lakes, whenever high-frequency data are available from VPS or other autonomous water monitoring systems.
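A minimal sketch of the kind of one-dimensional vertical mixing model described above: explicit finite-difference diffusion of a temperature profile through a water column. The grid, diffusivity, initial profile, and integration period are illustrative assumptions, not the study's calibrated values.

```python
# 1-D vertical diffusion sketch for a stratified water column.
import numpy as np

nz, dz, dt = 40, 1.0, 600.0                 # 40 m column, 1 m cells, 10 min step
T = 28.0 - 10.0 / (1 + np.exp(-(np.arange(nz) - 15)))   # warm top, cold bottom (degC)
Kz = np.full(nz - 1, 1e-5)                  # interface diffusivities (m^2/s)

# Explicit scheme: dT_j = dt/dz^2 * [K(T_{j+1}-T_j) - K(T_j-T_{j-1})],
# stable while dt * max(Kz) / dz^2 <= 0.5 (here 0.006).
for step in range(int(30 * 86400 / dt)):    # integrate 30 days
    flux = Kz * np.diff(T)                  # diffusive exchange between cells
    T[:-1] += dt / dz**2 * flux
    T[1:] -= dt / dz**2 * flux

print("surface-bottom difference after 30 days:", T[0] - T[-1])
```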
Shohaimi, Shamarina; Wei, Wong Yoke; Shariff, Zalilah Mohd
2014-01-01
Comprehensive feeding practices questionnaire (CFPQ) is an instrument specifically developed to evaluate parental feeding practices. It has been confirmed among children in America and applied to populations in France, Norway, and New Zealand. In order to extend the application of CFPQ, we conducted a factor structure validation of the translated version of CFPQ (CFPQ-M) using confirmatory factor analysis among mothers of primary school children (N = 397) in Malaysia. Several items were modified for cultural adaptation. Of 49 items, 39 items with loading factors >0.40 were retained in the final model. The confirmatory factor analysis revealed that the final model (twelve-factor model with 39 items and 2 error covariances) displayed the best fit for our sample (Chi-square = 1147; df = 634; P < 0.05; CFI = 0.900; RMSEA = 0.045; SRMR = 0.0058). The instrument with some modifications was confirmed among mothers of school children in Malaysia. The present study extends the usability of the CFPQ and enables researchers and parents to better understand the relationships between parental feeding practices and related problems such as childhood obesity.
Schilling, Birgit; Gibson, Bradford W.; Hunter, Christie L.
2017-01-01
Data-independent acquisition is a powerful mass spectrometry technique that enables comprehensive MS and MS/MS analysis of all detectable species, providing an information rich data file that can be mined deeply. Here, we describe how to acquire high-quality SWATH® Acquisition data to be used for large quantitative proteomic studies. We specifically focus on using variable sized Q1 windows for acquisition of MS/MS data for generating higher specificity quantitative data. PMID:28188533
2010-12-01
computers in 1953. HIL motion simulators were also built for the dynamic testing of vehicle components (e.g. suspensions, bodies) with hydraulic or…complex, comprehensive mechanical systems can be simulated in real-time by parallel computers; examples include multibody systems, brake systems…hard constraints in a multivariable control framework. And the third aspect is the ability to perform online optimization. These aspects result in
Identifying influential factors of business process performance using dependency analysis
NASA Astrophysics Data System (ADS)
Wetzstein, Branimir; Leitner, Philipp; Rosenberg, Florian; Dustdar, Schahram; Leymann, Frank
2011-02-01
We present a comprehensive framework for identifying influential factors of business process performance. In particular, our approach combines monitoring of process events and Quality of Service (QoS) measurements with dependency analysis to effectively identify influential factors. The framework uses data mining techniques to construct tree structures to represent dependencies of a key performance indicator (KPI) on process and QoS metrics. These dependency trees allow business analysts to determine how process KPIs depend on lower-level process metrics and QoS characteristics of the IT infrastructure. The structure of the dependencies enables a drill-down analysis of single factors of influence to gain a deeper knowledge why certain KPI targets are not met.
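To illustrate the dependency-tree idea, this sketch fits a shallow regression tree relating a KPI to lower-level process and QoS metrics and prints the learned splits; metric names and the synthetic relationship are assumptions for illustration.

```python
# Dependency-tree sketch: a decision tree exposes which process/QoS metrics
# drive a KPI, mirroring the drill-down analysis described above.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(4)
n = 2000
service_latency = rng.exponential(200, n)        # ms, QoS metric (assumption)
approval_loops = rng.integers(0, 4, n)           # process metric (assumption)
payload_kb = rng.uniform(1, 500, n)              # process metric (assumption)

# Synthetic KPI, e.g., order fulfilment time in days.
kpi = 1.0 + 0.004 * service_latency + 0.5 * approval_loops + rng.normal(0, 0.2, n)

X = np.column_stack([service_latency, approval_loops, payload_kb])
tree = DecisionTreeRegressor(max_depth=3).fit(X, kpi)

print(export_text(tree, feature_names=["service_latency", "approval_loops",
                                       "payload_kb"]))
```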
Detecting Disease Specific Pathway Substructures through an Integrated Systems Biology Approach
Alaimo, Salvatore; Marceca, Gioacchino Paolo; Ferro, Alfredo; Pulvirenti, Alfredo
2017-01-01
In the era of network medicine, pathway analysis methods play a central role in the prediction of phenotype from high throughput experiments. In this paper, we present a network-based systems biology approach capable of extracting disease-perturbed subpathways within pathway networks in connection with expression data taken from The Cancer Genome Atlas (TCGA). Our system extends pathways with missing regulatory elements, such as microRNAs, and their interactions with genes. The framework enables the extraction, visualization, and analysis of statistically significant disease-specific subpathways through an easy to use web interface. Our analysis shows that the methodology is able to fill the gap in current techniques, allowing a more comprehensive analysis of the phenomena underlying disease states. PMID:29657291
Biffi, E; Menegon, A; Regalia, G; Maida, S; Ferrigno, G; Pedrocchi, A
2011-08-15
Modern drug discovery for Central Nervous System pathologies has recently focused its attention on in vitro neuronal networks as models for the study of neuronal activities. Micro Electrode Arrays (MEAs), a widely recognized tool for pharmacological investigations, enable the simultaneous study of the spiking activity of discrete regions of a neuronal culture, providing an insight into the dynamics of networks. Taking advantage of MEA features and making the most of cross-correlation analysis to assess internal parameters of a neuronal system, we provide an efficient method for the evaluation of comprehensive neuronal network activity. We developed an intra-network burst correlation algorithm, we evaluated its sensitivity and we explored its potential use in pharmacological studies. Our results demonstrate the high sensitivity of this algorithm and the efficacy of this methodology in pharmacological dose-response studies, with the advantage of analyzing the effect of drugs on the comprehensive correlative properties of integrated neuronal networks. Copyright © 2011 Elsevier B.V. All rights reserved.
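As a sketch of the cross-correlation primitive underlying such burst-correlation analysis, the following bins two simulated spike trains and locates the lag of maximum correlation; the bin width, window, and the injected 5 ms lag are assumptions.

```python
# Pairwise spike-train cross-correlation between two simulated electrodes.
import numpy as np

rng = np.random.default_rng(5)
spikes_a = np.sort(rng.uniform(0, 10.0, 400))                      # seconds
spikes_b = np.sort(spikes_a + 0.005 + rng.normal(0, 0.001, 400))   # lagged copy

bin_w = 0.001                                        # 1 ms bins
edges = np.arange(0, 10.0 + bin_w, bin_w)
a, _ = np.histogram(spikes_a, edges)
b, _ = np.histogram(spikes_b, edges)

max_lag = 50                                         # +/- 50 ms window
lags = np.arange(-max_lag, max_lag + 1)
cc = np.array([np.dot(a[max(0, -l):len(a) - max(0, l)],
                      b[max(0, l):len(b) - max(0, -l)]) for l in lags])

print("peak correlation at lag (ms):", lags[np.argmax(cc)] * bin_w * 1000)
```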
Proteomic Analysis of the Mediator Complex Interactome in Saccharomyces cerevisiae.
Uthe, Henriette; Vanselow, Jens T; Schlosser, Andreas
2017-02-27
Here we present the most comprehensive analysis of the yeast Mediator complex interactome to date. Particularly gentle cell lysis and co-immunopurification conditions allowed us to preserve even transient protein-protein interactions and to comprehensively probe the molecular environment of the Mediator complex in the cell. Metabolic 15N-labeling thereby enabled stringent discrimination between bona fide interaction partners and nonspecifically captured proteins. Our data indicates a functional role for Mediator beyond transcription initiation. We identified a large number of Mediator-interacting proteins and protein complexes, such as RNA polymerase II, general transcription factors, a large number of transcriptional activators, the SAGA complex, chromatin remodeling complexes, histone chaperones, highly acetylated histones, as well as proteins playing a role in co-transcriptional processes, such as splicing, mRNA decapping and mRNA decay. Moreover, our data provides clear evidence, that the Mediator complex interacts not only with RNA polymerase II, but also with RNA polymerases I and III, and indicates a functional role of the Mediator complex in rRNA processing and ribosome biogenesis.
NASA Astrophysics Data System (ADS)
Nadolny, K.; Kapłonek, W.
2014-08-01
The following work is an analysis of flatness deviations of a workpiece made of X2CrNiMo17-12-2 austenitic stainless steel. The workpiece surface was shaped using efficient machining techniques (milling, grinding, and smoothing). After the machining was completed, all surfaces underwent stylus measurements in order to obtain surface flatness and roughness parameters. For this purpose the stylus profilometer Hommel-Tester T8000 by Hommelwerke with HommelMap software was used. The research results are presented in the form of 2D surface maps, 3D surface topographies with extracted single profiles, Abbott-Firestone curves, and graphical studies of the Sk parameters. The results of these experimental tests demonstrated a possible correlation between flatness and roughness parameters, and enabled an analysis of changes in these parameters from shaping and rough grinding to finish machining. The main novelty of this paper is a comprehensive analysis of measurement results obtained during a three-step machining process of austenitic stainless steel. Simultaneous analysis of the individual machining steps (milling, grinding, and smoothing) enabled a complementary assessment of the process of shaping the workpiece surface macro- and micro-geometry, giving special consideration to minimizing flatness deviations.
Towards semantic interoperability for electronic health records.
Garde, Sebastian; Knaup, Petra; Hovenga, Evelyn; Heard, Sam
2007-01-01
In the field of open electronic health records (EHRs), openEHR as an archetype-based approach is being increasingly recognised. It is the objective of this paper to briefly describe this approach, and to analyse how openEHR archetypes impact on health professionals and semantic interoperability. Analysis of current approaches to EHR systems, terminology and standards developments. In addition to literature reviews, we organised face-to-face and additional telephone interviews and tele-conferences with members of relevant organisations and committees. The openEHR archetypes approach enables syntactic interoperability and semantic interpretability -- both important prerequisites for semantic interoperability. Archetypes enable the formal definition of clinical content by clinicians. To enable comprehensive semantic interoperability, the development and maintenance of archetypes needs to be coordinated internationally and across health professions. Domain knowledge governance comprises a set of processes that enable the creation, development, organisation, sharing, dissemination, use and continuous maintenance of archetypes. It needs to be supported by information technology. To enable EHRs, semantic interoperability is essential. The openEHR archetypes approach enables syntactic interoperability and semantic interpretability. However, without coordinated archetype development and maintenance, 'rank growth' of archetypes would jeopardize semantic interoperability. We therefore believe that openEHR archetypes and domain knowledge governance together create the knowledge environment required to adopt EHRs.
Grimm, Ryan P; Solari, Emily J; McIntyre, Nancy S; Zajic, Matthew; Mundy, Peter C
2018-04-01
Many children with autism spectrum disorders (ASD) struggle with reading comprehension. Linguistic comprehension is an important predictor of reading comprehension, especially as children progress through elementary school and later grades. Yet, there is a dearth of research examining longitudinal relations between linguistic and reading comprehension in school-age children with ASD compared to typically developing peers (TD). This study compared the developmental trajectories of linguistic and reading comprehension in samples of children with ASD and age-matched TD peers. Both groups were administered measures of linguistic and reading comprehension multiple times over a 30-month period. Latent growth curve modeling demonstrated that children with ASD performed at significantly lower levels on both measures at the first timepoint and these deficits persisted across time. Children with ASD exhibited growth in both skills comparable to their TD peers, but this was not sufficient to enable them to eventually achieve at a level similar to the TD group. Due to the wide age range of the sample, age was controlled and displayed significant effects. Findings suggest linguistic comprehension skills are related to reading comprehension in children with ASD, similar to TD peers. Further, intervention in linguistic comprehension skills for children with ASD should begin early, and there may be a finite window in which these skills are malleable in terms of improving reading comprehension skills. Autism Res 2018, 11: 624-635. There is relatively little research concerning reading comprehension development in children with ASD and how they compare to TD peers. This study found children with ASD began at lower achievement levels of linguistic comprehension and reading comprehension than TD peers, but the skills developed at a similar rate. Intervening early and raising initial levels of linguistic and reading comprehension may enable children with ASD to perform similarly to TD peers over time. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.
Goldberg, Lynette R; Brown, Gina R; Mosack, Victoria A; Fletcher, Phyllis A
2015-01-01
This study analyzed students' written reflections following their initial exposure to interprofessional teamwork in case-based problem-solving. A three-hour seminar featuring three sequenced scenarios was developed and offered 12 times over two semesters. A total of 305 students from a variety of healthcare programs worked together with standardized patients in an on-campus laboratory simulating hospital ward and rehabilitation settings. A thematic analysis of students' reflections showed that they valued the shared learning and realistic case study. However, they felt the experience would be strengthened by working in smaller, more representative teams that included students from medicine, psychology, and social work to enable more effective communication and comprehensive case discussion. While useful for future planning, the identified themes did not enable a comparative statistical analysis of what students found helpful and difficult, and a re-coding of students' responses is now underway. Implications for measuring the effectiveness of future interprofessional case-based learning center on addressing the identified weaknesses and establishing a research design that enables comparison of pre- and post-seminar data and of the effectiveness of the IPE experience relative to profession-specific experiences.
Transitioning from Targeted to Comprehensive Mass Spectrometry Using Genetic Algorithms.
Jaffe, Jacob D; Feeney, Caitlin M; Patel, Jinal; Lu, Xiaodong; Mani, D R
2016-11-01
Targeted proteomic assays are becoming increasingly popular because of their robust quantitative applications enabled by internal standardization, and they can be routinely executed on high performance mass spectrometry instrumentation. However, these assays are typically limited to 100s of analytes per experiment. Considerable time and effort are often expended in obtaining and preparing samples prior to targeted analyses. It would be highly desirable to detect and quantify 1000s of analytes in such samples using comprehensive mass spectrometry techniques (e.g., SWATH and DIA) while retaining a high degree of quantitative rigor for analytes with matched internal standards. Experimentally, it is facile to port a targeted assay to a comprehensive data acquisition technique. However, data analysis challenges arise from this strategy concerning agreement of results from the targeted and comprehensive approaches. Here, we present the use of genetic algorithms to overcome these challenges in order to configure hybrid targeted/comprehensive MS assays. The genetic algorithms are used to select precursor-to-fragment transitions that maximize the agreement in quantification between the targeted and the comprehensive methods. We find that the algorithm we used provided across-the-board improvement in the quantitative agreement between the targeted assay data and the hybrid comprehensive/targeted assay that we developed, as measured by parameters of linear models fitted to the results. We also found that the algorithm could perform at least as well as an independently-trained mass spectrometrist in accomplishing this task. We hope that this approach will be a useful tool in the development of quantitative approaches for comprehensive proteomics techniques.
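A toy genetic algorithm in the spirit of the approach described (not the authors' implementation): binary chromosomes select a subset of fragment-ion transitions, and fitness is the R² agreement between summed DIA signal and targeted quantities. All data, population settings, and rates are simulated assumptions.

```python
# GA sketch: choose transitions whose summed DIA signal best tracks the
# targeted-assay quantities.
import numpy as np

rng = np.random.default_rng(6)
n_samples, n_trans = 24, 10
targeted = rng.uniform(1, 100, n_samples)                      # reference quantities
dia = targeted[:, None] * rng.uniform(0.5, 1.5, n_trans)      # informative transitions
dia[:, 6:] = rng.uniform(1, 100, (n_samples, 4))              # 4 interfered transitions

def fitness(mask):
    if not mask.any():
        return -np.inf
    y = dia[:, mask].sum(axis=1)
    return np.corrcoef(targeted, y)[0, 1] ** 2                # R^2 agreement

pop = rng.random((30, n_trans)) < 0.5                          # random subsets
for gen in range(40):
    fit = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(fit)[::-1][:10]]                  # truncation selection
    children = []
    for _ in range(len(pop)):
        p1, p2 = parents[rng.integers(0, 10, 2)]
        cut = rng.integers(1, n_trans)                         # one-point crossover
        child = np.concatenate([p1[:cut], p2[cut:]])
        child ^= rng.random(n_trans) < 0.05                    # bit-flip mutation
        children.append(child)
    pop = np.array(children)

best = max(pop, key=fitness)
print("selected transitions:", np.where(best)[0], "R^2 =", round(fitness(best), 3))
```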
Comprehensive Truck Size and Weight Study : volume 2 : issues and background
DOT National Transportation Integrated Search
2014-01-01
Connected vehicle wireless data communications can enable safety applications that may reduce injuries and fatalities suffered on our roads and highways, as well as enabling reductions in traffic congestion and impacts on the environment. As a critic...
Molecular inversion probe assay.
Absalan, Farnaz; Ronaghi, Mostafa
2007-01-01
We have described molecular inversion probe technologies for large-scale genetic analyses. This technique provides a comprehensive and powerful tool for the analysis of genetic variation and enables affordable, large-scale studies that will help uncover the genetic basis of complex disease and explain the individual variation in response to therapeutics. Major applications of the molecular inversion probes (MIP) technologies include targeted genotyping from focused regions to whole-genome studies, and allele quantification of genomic rearrangements. The MIP technology (used in the HapMap project) provides an efficient, scalable, and affordable way to score polymorphisms in case/control populations for genetic studies. The MIP technology provides the highest commercially available multiplexing levels and assay conversion rates for targeted genotyping. This enables more informative, genome-wide studies with either the functional (direct detection) approach or the indirect detection approach.
Genomics and metagenomics in medical microbiology.
Padmanabhan, Roshan; Mishra, Ajay Kumar; Raoult, Didier; Fournier, Pierre-Edouard
2013-12-01
Over the last two decades, sequencing tools have evolved from laborious time-consuming methodologies to real-time detection and deciphering of genomic DNA. Genome sequencing, especially using next generation sequencing (NGS), has revolutionized the landscape of microbiology and infectious disease. This deluge of sequencing data has not only enabled advances in fundamental biology but also helped improve diagnosis, pathogen typing, detection of virulence and antibiotic resistance, and the development of new vaccines and culture media. In addition, NGS has enabled efficient analysis of complex human micro-floras, both commensal and pathological, through metagenomic methods, thus helping the comprehension and management of human diseases such as obesity. This review summarizes technological advances in genomics and metagenomics relevant to the field of medical microbiology. Copyright © 2013 Elsevier B.V. All rights reserved.
Graphical Modeling Meets Systems Pharmacology.
Lombardo, Rosario; Priami, Corrado
2017-01-01
A main source of failures in systems projects (including systems pharmacology) is a poor level of communication and differing expectations among the stakeholders. A common and unambiguous language that is naturally comprehensible by all the involved players is a boost to success. We present bStyle, a modeling tool that adopts a graphical language close enough to cartoons to be a common medium for exchanging ideas and data, and at the same time formal enough to enable modeling, analysis, and dynamic simulations of a system. Data analysis and simulation integrated in the same application are fundamental to understanding the mechanisms of action of drugs: a core aspect of systems pharmacology.
Tissue classification for laparoscopic image understanding based on multispectral texture analysis
NASA Astrophysics Data System (ADS)
Zhang, Yan; Wirkert, Sebastian J.; Iszatt, Justin; Kenngott, Hannes; Wagner, Martin; Mayer, Benjamin; Stock, Christian; Clancy, Neil T.; Elson, Daniel S.; Maier-Hein, Lena
2016-03-01
Intra-operative tissue classification is one of the prerequisites for providing context-aware visualization in computer-assisted minimally invasive surgeries. As many anatomical structures are difficult to differentiate in conventional RGB medical images, we propose a classification method based on multispectral image patches. In a comprehensive ex vivo study we show (1) that multispectral imaging data is superior to RGB data for organ tissue classification when used in conjunction with widely applied feature descriptors and (2) that combining the tissue texture with the reflectance spectrum improves the classification performance. Multispectral tissue analysis could thus evolve as a key enabling technique in computer-assisted laparoscopy.
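A sketch of the "spectrum plus texture" feature combination, under the assumptions that patches are 8-band stacks, texture is summarized by a local binary pattern histogram (scikit-image), and a random forest serves as the classifier; these are stand-in choices, not necessarily the paper's exact descriptors.

```python
# Multispectral patch classification: mean reflectance spectrum + LBP texture.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)

def features(patch):                       # patch: (32, 32, 8) multispectral stack
    spectrum = patch.mean(axis=(0, 1))     # mean reflectance per band
    lbp = local_binary_pattern(patch[:, :, 0], P=8, R=1, method="uniform")
    texture, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([spectrum, texture])

# Two fake tissue classes differing in spectrum and spatial noise level.
patches, labels = [], []
for c in range(2):
    base = rng.random(8) + c               # class-specific spectrum (assumption)
    for _ in range(60):
        noise = rng.normal(0, 0.1 + 0.2 * c, (32, 32, 8))
        patches.append(features(base + noise))
        labels.append(c)

X, y = np.array(patches), np.array(labels)
print(cross_val_score(RandomForestClassifier(200), X, y, cv=5).mean())
```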
Developing Comprehension through Author Awareness.
ERIC Educational Resources Information Center
Krieger, Evelyn
1990-01-01
Discusses how teachers can help students learn to read with a sense of the author--who wrote the book, how, and why. Argues that building a schema for various genres and more sophisticated writing techniques strengthens comprehension and enables students to enjoy books on their own. (RS)
Evaluation of variability in high-resolution protein structures by global distance scoring.
Anzai, Risa; Asami, Yoshiki; Inoue, Waka; Ueno, Hina; Yamada, Koya; Okada, Tetsuji
2018-01-01
Systematic analysis of the statistical and dynamical properties of proteins is critical to understanding cellular events. Extraction of biologically relevant information from a set of high-resolution structures is important because it can provide mechanistic details behind the functional properties of protein families, enabling rational comparison between families. Most current structural comparisons are pairwise-based, which hampers global analysis of the growing content of the Protein Data Bank. Additionally, pairing of protein structures introduces uncertainty with respect to reproducibility because it frequently requires additional settings for superimposition. This study introduces intramolecular distance scoring for the global analysis of proteins, for each of which at least several high-resolution structures are available. As a pilot study, we tested 300 human proteins and showed that the method can be used to comprehensively overview structural variation in each protein and protein family at the atomic level. This method, together with the interpretation of the model calculations, provides new criteria for understanding specific structural variation in a protein, enabling global comparison of the variability in proteins from different species.
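A minimal sketch of superposition-free distance scoring: represent each structure by its intramolecular Cα distance matrix and score variability as the mean absolute difference between matrices. Coordinates are simulated stand-ins for PDB-derived ones, and the scoring formula is a simplified assumption.

```python
# Intramolecular distance scoring sketch: no superposition required.
import numpy as np

rng = np.random.default_rng(8)
n_res, n_structs = 120, 6
ref = rng.random((n_res, 3)) * 30                       # fake reference coordinates
structures = [ref + rng.normal(0, 0.3, (n_res, 3)) for _ in range(n_structs)]

def dist_matrix(xyz):
    diff = xyz[:, None, :] - xyz[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))    # all-vs-all intramolecular distances

mats = np.array([dist_matrix(s) for s in structures])

# Pairwise variability between structures i and j.
scores = [np.abs(mats[i] - mats[j]).mean()
          for i in range(n_structs) for j in range(i + 1, n_structs)]
print("mean intramolecular-distance deviation:", np.mean(scores))
```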
Definitive screening design enables optimization of LC-ESI-MS/MS parameters in proteomics.
Aburaya, Shunsuke; Aoki, Wataru; Minakuchi, Hiroyoshi; Ueda, Mitsuyoshi
2017-12-01
In proteomics, more than 100,000 peptides are generated from the digestion of human cell lysates. Proteome samples have a broad dynamic range in protein abundance; therefore, it is critical to optimize the various parameters of LC-ESI-MS/MS to comprehensively identify these peptides. However, there are many parameters in LC-ESI-MS/MS analysis. In this study, we applied a definitive screening design to simultaneously optimize 14 parameters in the operation of monolithic capillary LC-ESI-MS/MS to increase the number of identified proteins and/or the average peak area of MS1. The simultaneous optimization enabled the determination of two-factor interactions between LC and MS. Finally, we found two parameter sets of monolithic capillary LC-ESI-MS/MS that increased the number of identified proteins by 8.1% or the average peak area of MS1 by 67%. Definitive screening designs should be highly useful for high-throughput determination of the best parameter set in LC-ESI-MS/MS systems.
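For orientation, the sketch below fits the main-effect and pure-quadratic model that three-level screening designs support; the design matrix, responses, and effect sizes are synthetic stand-ins, not the study's actual LC-ESI-MS/MS settings.

```python
# Illustrative analysis of definitive-screening-design data: three-level
# (-1, 0, +1) factor settings let main effects and pure quadratic effects be
# fit together. Design matrix and responses below are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_runs, n_factors = 40, 14        # a true DSD would use 2*14 + 1 = 29 structured runs
X = rng.choice([-1.0, 0.0, 1.0], size=(n_runs, n_factors))
y = 3000 + 120 * X[:, 0] - 80 * X[:, 3] + 40 * X[:, 3] ** 2 \
    + rng.normal(0, 20, n_runs)   # e.g. number of identified proteins

# Fit intercept + linear + quadratic terms by least squares.
design = np.hstack([np.ones((n_runs, 1)), X, X ** 2])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
for i in np.argsort(-np.abs(coef[1:1 + n_factors]))[:3]:
    print(f"factor {i}: linear effect {coef[1 + i]:+.1f}")
```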
An interactive environment for agile analysis and visualization of ChIP-sequencing data.
Lerdrup, Mads; Johansen, Jens Vilstrup; Agrawal-Singh, Shuchi; Hansen, Klaus
2016-04-01
To empower experimentalists with a means for fast and comprehensive chromatin immunoprecipitation sequencing (ChIP-seq) data analyses, we introduce an integrated computational environment, EaSeq. The software combines the exploratory power of genome browsers with an extensive set of interactive and user-friendly tools for genome-wide abstraction and visualization. It enables experimentalists to easily extract information and generate hypotheses from their own data and public genome-wide datasets. For demonstration purposes, we performed meta-analyses of public Polycomb ChIP-seq data and established a new screening approach to analyze more than 900 datasets from mouse embryonic stem cells for factors potentially associated with Polycomb recruitment. EaSeq, which is freely available and works on a standard personal computer, can substantially increase the throughput of many analysis workflows, facilitate transparency and reproducibility by automatically documenting and organizing analyses, and enable a broader group of scientists to gain insights from ChIP-seq data.
Multiplexed mass cytometry profiling of cellular states perturbed by small-molecule regulators
Bodenmiller, Bernd; Zunder, Eli R.; Finck, Rachel; Chen, Tiffany J.; Savig, Erica S.; Bruggner, Robert V.; Simonds, Erin F.; Bendall, Sean C.; Sachs, Karen; Krutzik, Peter O.; Nolan, Garry P.
2013-01-01
The ability to comprehensively explore the impact of bio-active molecules on human samples at the single-cell level can provide great insight for biomedical research. Mass cytometry enables quantitative single-cell analysis with deep dimensionality, but currently lacks high-throughput capability. Here we report a method termed mass-tag cellular barcoding (MCB) that increases mass cytometry throughput by sample multiplexing. 96-well format MCB was used to characterize human peripheral blood mononuclear cell (PBMC) signaling dynamics, cell-to-cell communication, the signaling variability between 8 donors, and to define the impact of 27 inhibitors on this system. For each compound, 14 phosphorylation sites were measured in 14 PBMC types, resulting in 18,816 quantified phosphorylation levels from each multiplexed sample. This high-dimensional systems-level inquiry allowed analysis across cell-type and signaling space, reclassified inhibitors, and revealed off-target effects. MCB enables high-content, high-throughput screening, with potential applications for drug discovery, pre-clinical testing, and mechanistic investigation of human disease. PMID:22902532
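For scale, the 18,816 figure follows directly from the design described in the abstract:

$$14 \text{ phosphorylation sites} \times 14 \text{ cell types} \times 96 \text{ wells} = 196 \times 96 = 18{,}816.$$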
ERIC Educational Resources Information Center
Chiang, Hanley; Walsh, Elias; Shanahan, Timothy; Gentile, Claudia; Maccarone, Alyssa; Waits, Tiffany; Carlson, Barbara; Rikoon, Samuel
2017-01-01
Reading comprehension--the ability to understand the meaning of text--is a foundational ability that enables children to learn in school and throughout life. Children who struggle with reading comprehension in the third or fourth grade are at high risk for dropping out of school, with detrimental effects on their future employment, income, and…
Developing Reading Comprehension with Moving Image Narratives
ERIC Educational Resources Information Center
Maine, Fiona; Shields, Robin
2015-01-01
This paper reports the findings from a small-scale exploratory study that investigated how moving-image narratives might enable children to develop transferable reading comprehension strategies. Using short, animated, narrative films, 28 primary-aged children engaged in a 10-week programme that included the explicit instruction of comprehension…
COMPREHENSIVE PBPK MODELING APPROACH USING THE EXPOSURE RELATED DOSE ESTIMATING MODEL (ERDEM)
ERDEM, a complex PBPK modeling system, is the result of the implementation of a comprehensive PBPK modeling approach. ERDEM provides a scalable and user-friendly environment that enables researchers to focus on data input values rather than writing program code. It efficiently ...
Cajka, Tomás; Hajslová, Jana; Cochran, Jack; Holadová, Katerina; Klimánková, Eva
2007-03-01
Headspace solid-phase microextraction (SPME), followed by comprehensive two-dimensional gas chromatography-time-of-flight mass spectrometry (GCxGC-TOFMS), has been implemented for the analysis of honey volatiles, with emphasis on the optimal selection of SPME fibre and the first- and second-dimension GC capillaries. From seven SPME fibres investigated, a divinylbenzene/Carboxen/polydimethylsiloxane (DVB/CAR/PDMS) 50/30 microm fibre provided the best sorption capacity and the broadest range of volatiles extracted from the headspace of a mixed honey sample. A combination of DB-5ms x SUPELCOWAX 10 columns enabled the best resolution of sample components compared to the other two tested column configurations. Employing this powerful analytical strategy led to the identification of 164 volatile compounds present in a honey mixture during a 19-min GC run. Combination of this simple and inexpensive SPME-based sampling/concentration technique with the advanced separation/identification approach represented by GCxGC-TOFMS allows a rapid and comprehensive examination of the honey volatiles profile. In this way, the laboratory sample throughput can be increased significantly and, at the same time, the risk of erroneous identification, which cannot be avoided in one-dimensional GC separation, is minimised.
MIPS: analysis and annotation of proteins from whole genomes in 2005
Mewes, H. W.; Frishman, D.; Mayer, K. F. X.; Münsterkötter, M.; Noubibou, O.; Pagel, P.; Rattei, T.; Oesterheld, M.; Ruepp, A.; Stümpflen, V.
2006-01-01
The Munich Information Center for Protein Sequences (MIPS at the GSF), Neuherberg, Germany, provides resources related to genome information. Manually curated databases for several reference organisms are maintained. Several of these databases are described elsewhere in this and other recent NAR database issues. In a complementary effort, a comprehensive set of >400 genomes automatically annotated with the PEDANT system is maintained. The main goal of our current work on creating and maintaining genome databases is to extend gene-centered information to information on interactions within a generic comprehensive framework. We have concentrated our efforts along three lines: (i) the development of suitable comprehensive data structures and database technology, communication and query tools to include a wide range of different types of information, enabling the representation of complex information such as functional modules or networks (Genome Research Environment System); (ii) the development of databases covering computable information such as the basic evolutionary relations among all genes, namely SIMAP, the sequence similarity matrix, and the CABiNet network analysis framework; and (iii) the compilation and manual annotation of information related to interactions such as protein–protein interactions or other types of relations (e.g. MPCDB, MPPI, CYGD). All databases described and the detailed descriptions of our projects can be accessed through the MIPS WWW server (http://mips.gsf.de). PMID:16381839
NuGO contributions to GenePattern.
De Groot, P J; Reiff, C; Mayer, C; Müller, M
2008-12-01
NuGO, the European Nutrigenomics Organization, utilizes 31 powerful computers for, e.g., data storage and analysis. These so-called black boxes (NBXses) are located at the sites of different partners. NuGO decided to use GenePattern as the preferred genomic analysis tool on each NBX. To handle the custom made Affymetrix NuGO arrays, new NuGO modules are added to GenePattern. These NuGO modules execute the latest Bioconductor version ensuring up-to-date annotations and access to the latest scientific developments. The following GenePattern modules are provided by NuGO: NuGOArrayQualityAnalysis for comprehensive quality control, NuGOExpressionFileCreator for import and normalization of data, LimmaAnalysis for identification of differentially expressed genes, TopGoAnalysis for calculation of GO enrichment, and GetResultForGo for retrieval of information on genes associated with specific GO terms. All together, these NuGO modules allow comprehensive, up-to-date, and user friendly analysis of Affymetrix data. A special feature of the NuGO modules is that for analysis they allow the use of either the standard Affymetrix or the MBNI custom CDF-files, which remap probes based on current knowledge. In both cases a .chip-file is created to enable GSEA analysis. The NuGO GenePattern installations are distributed as binary Ubuntu (.deb) packages via the NuGO repository.
DOT National Transportation Integrated Search
2014-03-01
An interdisciplinary team of six faculty members and six students at the University of Detroit Mercy (UDM) conducted a comprehensive study of the factors enabling or inhibiting development of effective regional transit. Focusing on Metro Detroit an...
Enable: Developing Instructional Language Skills.
ERIC Educational Resources Information Center
Witt, Beth
The program presented in this manual provides a structure and activities for systematic development of effective listening comprehension in typical and atypical children. The complete ENABLE kit comes with pictures, cut-outs, and puppets to illustrate the directives, questions, and narrative activities. The manual includes an organizational and…
Information transfer satellite concept study. Volume 4: computer manual
NASA Technical Reports Server (NTRS)
Bergin, P.; Kincade, C.; Kurpiewski, D.; Leinhaupel, F.; Millican, F.; Onstad, R.
1971-01-01
The Satellite Telecommunications Analysis and Modeling Program (STAMP) provides the user with a flexible and comprehensive tool for the analysis of ITS system requirements. While obtaining minimum cost design points, the program enables the user to perform studies over a wide range of user requirements and parametric demands. The program utilizes a total system approach wherein the ground uplink and downlink, the spacecraft, and the launch vehicle are simultaneously synthesized. A steepest descent algorithm is employed to determine the minimum total system cost design subject to the fixed user requirements and imposed constraints. In the process of converging to the solution, the pertinent subsystem tradeoffs are resolved. This report documents STAMP through a technical analysis and a description of the principal techniques employed in the program.
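As a schematic of the optimization step, the following is a generic steepest-descent sketch with a penalty term standing in for fixed user requirements; the cost model and variables are invented for illustration and are not STAMP's actual system equations.

```python
# Generic steepest-descent sketch of the kind of constrained cost minimization
# STAMP performs; the quadratic cost model, penalty, and variables are
# stand-ins, not STAMP's actual ground/spacecraft/launch-vehicle model.
import numpy as np

def cost(x):
    # x = [ground segment size, spacecraft size] in arbitrary units (made up)
    base = 2.0 * x[0] ** 2 + 3.0 * x[1] ** 2 + x[0] * x[1]
    penalty = 10.0 * max(0.0, 10.0 - (x[0] + x[1])) ** 2  # requirement floor
    return base + penalty

def grad(x, h=1e-6):
    # central-difference numerical gradient
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (cost(x + e) - cost(x - e)) / (2 * h)
    return g

x = np.array([8.0, 8.0])
for _ in range(2000):             # fixed small step; production codes use line search
    x = x - 0.02 * grad(x)
print("design point:", x.round(3), "cost:", round(cost(x), 2))
```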
ERIC Educational Resources Information Center
Fischer, Robert
The report details development, at Southwest Texas State University and later at Pennsylvania State University, of a computer authoring system ("Libra") enabling foreign language faculty to develop multimedia lessons focusing on listening comprehension. Staff at Southwest Texas State University first developed a Macintosh version of the…
Combining data from multiple sources using the CUAHSI Hydrologic Information System
NASA Astrophysics Data System (ADS)
Tarboton, D. G.; Ames, D. P.; Horsburgh, J. S.; Goodall, J. L.
2012-12-01
The Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) has developed a Hydrologic Information System (HIS) to provide better access to data by enabling the publication, cataloging, discovery, retrieval, and analysis of hydrologic data using web services. The CUAHSI HIS is an Internet based system comprised of hydrologic databases and servers connected through web services as well as software for data publication, discovery and access. The HIS metadata catalog lists close to 100 web services registered to provide data through this system, ranging from large federal agency data sets to experimental watersheds managed by University investigators. The system's flexibility in storing and enabling public access to similarly formatted data and metadata has created a community data resource from governmental and academic data that might otherwise remain private or analyzed only in isolation. Comprehensive understanding of hydrology requires integration of this information from multiple sources. HydroDesktop is the client application developed as part of HIS to support data discovery and access through this system. HydroDesktop is founded on an open source GIS client and has a plug-in architecture that has enabled the integration of modeling and analysis capability with the functionality for data discovery and access. Model integration is possible through a plug-in built on the OpenMI standard and data visualization and analysis is supported by an R plug-in. This presentation will demonstrate HydroDesktop, showing how it provides an analysis environment within which data from multiple sources can be discovered, accessed and integrated.
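For flavor, a hypothetical client call of the kind HydroDesktop issues against HIS web services is sketched below; the endpoint, query parameters, and response layout are placeholders, not the actual WaterOneFlow interface.

```python
# Hypothetical sketch of consuming a time-series web service of the kind HIS
# publishes; endpoint, parameters, and response shape are placeholders only.
import requests

BASE = "https://example.org/his"               # placeholder service URL
resp = requests.get(f"{BASE}/values",
                    params={"site": "USGS:10109000",
                            "variable": "discharge",
                            "start": "2012-01-01", "end": "2012-12-31"},
                    timeout=30)
resp.raise_for_status()
for record in resp.json()["values"]:           # assumed response layout
    print(record["time"], record["value"])
```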
A DNA 'barcode blitz': rapid digitization and sequencing of a natural history collection.
Hebert, Paul D N; Dewaard, Jeremy R; Zakharov, Evgeny V; Prosser, Sean W J; Sones, Jayme E; McKeown, Jaclyn T A; Mantle, Beth; La Salle, John
2013-01-01
DNA barcoding protocols require the linkage of each sequence record to a voucher specimen that has, whenever possible, been authoritatively identified. Natural history collections would seem an ideal resource for barcode library construction, but they have never seen large-scale analysis because of concerns linked to DNA degradation. The present study examines the strength of this barrier, carrying out a comprehensive analysis of moth and butterfly (Lepidoptera) species in the Australian National Insect Collection. Protocols were developed that enabled tissue samples, specimen data, and images to be assembled rapidly. Using these methods, a five-person team processed 41,650 specimens representing 12,699 species in 14 weeks. Subsequent molecular analysis took about six months, reflecting the need for multiple rounds of PCR as sequence recovery was impacted by age, body size, and collection protocols. Despite these variables and the fact that specimens averaged 30.4 years old, barcode records were obtained from 86% of the species. In fact, one or more barcode compliant sequences (>487 bp) were recovered from virtually all species represented by five or more individuals, even when the youngest was 50 years old. By assembling specimen images, distributional data, and DNA barcode sequences on a web-accessible informatics platform, this study has greatly advanced accessibility to information on thousands of species. Moreover, much of the specimen data became publically accessible within days of its acquisition, while most sequence results saw release within three months. As such, this study reveals the speed with which DNA barcode workflows can mobilize biodiversity data, often providing the first web-accessible information for a species. These results further suggest that existing collections can enable the rapid development of a comprehensive DNA barcode library for the most diverse compartment of terrestrial biodiversity - insects.
Towards a comprehensive framework for reuse: A reuse-enabling software evolution environment
NASA Technical Reports Server (NTRS)
Basili, V. R.; Rombach, H. D.
1988-01-01
Reuse of products, processes and knowledge will be the key to enable the software industry to achieve the dramatic improvement in productivity and quality required to satisfy the anticipated growing demand. Although experience shows that certain kinds of reuse can be successful, general success has been elusive. A software life-cycle technology which allows broad and extensive reuse could provide the means to achieving the desired order-of-magnitude improvements. The scope of a comprehensive framework for understanding, planning, evaluating and motivating reuse practices and the necessary research activities is outlined. As a first step towards such a framework, a reuse-enabling software evolution environment model is introduced which provides a basis for the effective recording of experience, the generalization and tailoring of experience, the formalization of experience, and the (re-)use of experience.
Fire fit: assessing comprehensive fitness and injury risk in the fire service.
Poplin, Gerald S; Roe, Denise J; Burgess, Jefferey L; Peate, Wayne F; Harris, Robin B
2016-02-01
This study sought to develop a comprehensive measure of fitness that is predictive of injury risk and can be used in the fire service to assess individual-level health and fit-for-duty status. A retrospective occupational cohort of 799 career fire service employees was observed over the years 2005-2009. An equally weighted score for comprehensive fitness was calculated based on cardiovascular fitness, muscular strength, endurance, flexibility, and body composition. Repeated measures survival analyses were used to estimate the risk of any injury, sprain or strain, and exercise-related injuries in relation to comprehensive fitness. A well-distributed comprehensive fitness score was developed to distinguish three tiers of overall fitness status. Intraclass correlations identified flexibility, total grip strength, percent body fat, and resting heart rate as the most reliable fitness metrics, while push-ups, sit-ups, and aerobic capacity demonstrated poor reliability. In general, individuals with a lower comprehensive fitness status had an increased risk of injury as compared to the most fit individuals. The risk of any injury was 1.82 (95% CI 1.06-3.11) times as likely for the least fit individuals, as compared to individuals in the top fire fitness category, increasing to 2.90 (95% CI 1.48-5.66) when restricted to sprains and strains. This 5-year analysis of clinical occupational health assessments enabled the development of a relevant metric for relating comprehensive fitness with the risk of injury. Results were consistent with previous studies focused on cardiorespiratory fitness, but also less susceptible to inter-individual variability of discrete measurements.
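A hedged sketch of the analysis pattern described, an equally weighted composite score split into tiers and related to time-to-injury with a Cox model, is shown below on synthetic data; variable names and the simple (non-repeated-measures) Cox fit are assumptions.

```python
# Sketch (synthetic data): build an equally weighted composite fitness score,
# split it into tertiles, and relate the tiers to time-to-injury with a Cox
# model via the lifelines package. Variable names are illustrative only.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 799
parts = pd.DataFrame({                        # standardized fitness components
    c: rng.normal(size=n)
    for c in ["cardio", "strength", "endurance", "flexibility", "body_comp"]
})
score = parts.mean(axis=1)                    # equal weighting of components
tier = pd.qcut(score, 3, labels=[0, 1, 2]).astype(int)

df = pd.DataFrame({
    "low_fit": (tier == 0).astype(int),
    "mid_fit": (tier == 1).astype(int),       # top tier is the reference
    "time": rng.exponential(4.0, n).clip(0.1, 5.0),
    "injured": rng.integers(0, 2, n),
})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="injured")
print(cph.summary[["exp(coef)"]])             # hazard ratios vs. most-fit tier
```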
NASA Technical Reports Server (NTRS)
1973-01-01
An improved method for estimating aircraft weight and cost using a unique and fundamental approach was developed. The results of this study were integrated into a comprehensive digital computer program, which is intended for use at the preliminary design stage of aircraft development. The program provides a means of computing absolute values for weight and cost, and enables the user to perform trade studies with a sensitivity to detail design and overall structural arrangement. Both batch and interactive graphics modes of program operation are available.
MetaGenyo: a web tool for meta-analysis of genetic association studies.
Martorell-Marugan, Jordi; Toro-Dominguez, Daniel; Alarcon-Riquelme, Marta E; Carmona-Saez, Pedro
2017-12-16
Genetic association studies (GAS) aim to evaluate the association between genetic variants and phenotypes. In the last few years, the number of such studies has increased exponentially, but the results are not always reproducible due to experimental design flaws, low sample sizes and other methodological errors. In this field, meta-analysis techniques are becoming very popular tools to combine results across studies, increase statistical power and resolve discrepancies in genetic association studies. A meta-analysis summarizes research findings, increases statistical power and enables the identification of genuine associations between genotypes and phenotypes. Meta-analysis techniques are increasingly used in GAS, but the number of published meta-analyses containing errors is also increasing. Although there are several software packages that implement meta-analysis, none of them are specifically designed for genetic association studies, and in most cases their use requires advanced programming or scripting expertise. We have developed MetaGenyo, a web tool for meta-analysis in GAS. MetaGenyo implements a complete and comprehensive workflow that can be executed in an easy-to-use environment without programming knowledge. MetaGenyo has been developed to guide users through the main steps of a GAS meta-analysis, covering Hardy-Weinberg testing, statistical association for different genetic models, analysis of heterogeneity, testing for publication bias, subgroup analysis and robustness testing of the results. MetaGenyo is a useful tool for conducting comprehensive genetic association meta-analyses. The application is freely available at http://bioinfo.genyo.es/metagenyo/.
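The core computation such a workflow automates can be illustrated with a minimal fixed-effect meta-analysis of log odds ratios plus Cochran's Q heterogeneity test; the study counts below are invented.

```python
# Minimal fixed-effect meta-analysis of a genetic association, of the kind
# MetaGenyo automates: inverse-variance pooling of log odds ratios plus
# Cochran's Q test for heterogeneity. The three studies below are invented.
import numpy as np
from scipy.stats import chi2, norm

# Per study: 2x2 counts (case carrier, case non-carrier, control carrier, control non-carrier)
studies = np.array([[120, 380,  90, 410],
                    [ 60, 140,  45, 155],
                    [200, 300, 170, 330]], dtype=float)

a, b, c, d = studies.T
log_or = np.log((a * d) / (b * c))
var = 1 / a + 1 / b + 1 / c + 1 / d           # variance of each log OR

w = 1 / var                                    # inverse-variance weights
pooled = (w * log_or).sum() / w.sum()
se = np.sqrt(1 / w.sum())
z = pooled / se
Q = (w * (log_or - pooled) ** 2).sum()        # Cochran's Q, df = k - 1

print(f"pooled OR = {np.exp(pooled):.2f}, p = {2 * norm.sf(abs(z)):.3g}")
print(f"heterogeneity Q = {Q:.2f}, p = {chi2.sf(Q, len(w) - 1):.3g}")
```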
Application of Ontology Technology in Health Statistic Data Analysis.
Guo, Minjiang; Hu, Hongpu; Lei, Xingyun
2017-01-01
Research purpose: to establish a health management ontology for the analysis of health statistics data. Proposed methods: this paper established a health management ontology based on an analysis of the concepts in the China Health Statistics Yearbook and used Protégé to define the syntactic and semantic structure of health statistical data. Six classes of top-level ontology concepts and their subclasses were extracted, and object properties and data properties were defined to establish the construction of these classes. By ontology instantiation, we can integrate multi-source heterogeneous data and enable administrators to have an overall understanding and analysis of the health statistics data. Ontology technology provides a comprehensive and unified information integration structure for the health management domain and lays a foundation for the efficient analysis of multi-source and heterogeneous health system management data and the enhancement of management efficiency.
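A minimal sketch of how such an ontology might be expressed in code is given below using owlready2; the class and property names are illustrative, not the paper's actual model.

```python
# Minimal sketch of defining a small health-management ontology in Python with
# owlready2; class and property names are illustrative, not the paper's model.
from owlready2 import get_ontology, Thing, DataProperty, ObjectProperty

onto = get_ontology("http://example.org/health_stats.owl")

with onto:
    class HealthInstitution(Thing): pass
    class Region(Thing): pass
    class located_in(ObjectProperty):
        domain = [HealthInstitution]
        range = [Region]
    class bed_count(DataProperty):
        domain = [HealthInstitution]
        range = [int]

# Instantiation step: individuals carry the multi-source statistical data.
h = HealthInstitution("county_hospital_01")
h.located_in = [Region("region_north")]
h.bed_count = [250]
onto.save(file="health_stats.owl")             # serialize for later analysis
```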
Ploug, Thomas; Holm, Søren
2012-11-16
Denmark has implemented a comprehensive, nationwide pharmaceutical information system, and this system has been evaluated by the Danish Council of Ethics. The system can be seen as an exemplar of a comprehensive health information system for clinical use. The paper analyses 1) how informed consent can be implemented in the system and how different implementations create different impacts on autonomy and control of information, and 2) arguments directed towards justifying not seeking informed consent in this context. Based on the analysis, a heuristic is provided which enables a ranking and estimation of the impact on autonomy and control of information of different options for consent to entry of data into the system and use of data from the system. The danger of routinisation of consent is identified. The Danish pharmaceutical information system raises issues in relation to autonomy and control of information, issues that will also occur in relation to other similar comprehensive health information systems. Some of these issues are well understood and their impact can be judged using the heuristic which is provided. More research is, however, needed in relation to routinisation of consent.
Shohaimi, Shamarina; Yoke Wei, Wong; Mohd Shariff, Zalilah
2014-01-01
The Comprehensive Feeding Practices Questionnaire (CFPQ) is an instrument specifically developed to evaluate parental feeding practices. It has been confirmed among children in America and applied to populations in France, Norway, and New Zealand. In order to extend the application of the CFPQ, we conducted a factor structure validation of the translated version of the CFPQ (CFPQ-M) using confirmatory factor analysis among mothers of primary school children (N = 397) in Malaysia. Several items were modified for cultural adaptation. Of 49 items, 39 items with factor loadings >0.40 were retained in the final model. The confirmatory factor analysis revealed that the final model (twelve-factor model with 39 items and 2 error covariances) displayed the best fit for our sample (Chi-square = 1147; df = 634; P < 0.05; CFI = 0.900; RMSEA = 0.045; SRMR = 0.0058). The instrument with some modifications was confirmed among mothers of school children in Malaysia. The present study extends the usability of the CFPQ and enables researchers and parents to better understand the relationships between parental feeding practices and related problems such as childhood obesity. PMID:25538958
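For readers who wish to reproduce this style of analysis, a hedged sketch of a confirmatory factor analysis in Python with semopy follows; the two-factor toy model and data are illustrative, not the twelve-factor CFPQ-M.

```python
# Hedged sketch of a confirmatory factor analysis in Python with semopy;
# the two-factor toy model below is illustrative, not the 12-factor CFPQ-M.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(4)
n = 397
f1, f2 = rng.normal(size=n), rng.normal(size=n)
data = pd.DataFrame({
    "x1": f1 + rng.normal(0, .5, n), "x2": f1 + rng.normal(0, .5, n),
    "x3": f1 + rng.normal(0, .5, n), "y1": f2 + rng.normal(0, .5, n),
    "y2": f2 + rng.normal(0, .5, n), "y3": f2 + rng.normal(0, .5, n),
})
desc = """
Monitoring =~ x1 + x2 + x3
Modeling   =~ y1 + y2 + y3
"""
model = semopy.Model(desc)
model.fit(data)
print(semopy.calc_stats(model)[["chi2", "CFI", "RMSEA"]])  # fit indices
```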
Centrifugal microfluidic platforms: advanced unit operations and applications.
Strohmeier, O; Keller, M; Schwemmer, F; Zehnle, S; Mark, D; von Stetten, F; Zengerle, R; Paust, N
2015-10-07
Centrifugal microfluidics has evolved into a mature technology. Several major diagnostic companies either have products on the market or are currently evaluating centrifugal microfluidics for product development. The fields of application are widespread and include clinical chemistry, immunodiagnostics and protein analysis, cell handling, molecular diagnostics, as well as food, water, and soil analysis. Nevertheless, new fluidic functions and applications that expand the possibilities of centrifugal microfluidics are being introduced at a high pace. In this review, we first present an up-to-date comprehensive overview of centrifugal microfluidic unit operations. Then, we introduce the term "process chain" to review how these unit operations can be combined for the automation of laboratory workflows. Such aggregation of basic functionalities enables efficient fluidic design at a higher level of integration. Furthermore, we analyze how novel, ground-breaking unit operations may foster the integration of more complex applications. Among these are the storage of pneumatic energy to realize complex switching sequences or to pump liquids radially inward, as well as the complete pre-storage and release of reagents. In this context, centrifugal microfluidics provides major advantages over other microfluidic actuation principles: the pulse-free inertial liquid propulsion provided by centrifugal microfluidics allows for closed fluidic systems that are free of any interfaces to external pumps. Processed volumes are easily scalable from nanoliters to milliliters. Volume forces can be adjusted by rotation and thus, even for very small volumes, surface forces may easily be overcome in the centrifugal gravity field which enables the efficient separation of nanoliter volumes from channels, chambers or sensor matrixes as well as the removal of any disturbing bubbles. In summary, centrifugal microfluidics takes advantage of a comprehensive set of fluidic unit operations such as liquid transport, metering, mixing and valving. The available unit operations cover the entire range of automated liquid handling requirements and enable efficient miniaturization, parallelization, and integration of assays.
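As background on how volume forces scale with rotation, the pressure generated by a liquid column spinning between radial positions r1 and r2 is given by the standard relation (a textbook result added here for clarity):

$$\Delta p = \tfrac{1}{2}\,\rho\,\omega^{2}\left(r_{2}^{2} - r_{1}^{2}\right),$$

where ρ is the liquid density and ω the angular frequency of rotation.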
Making the Grade: The Importance of Academic Enablers in the Elementary School Counseling Program
ERIC Educational Resources Information Center
Barna, Jennifer S.; Brott, Pamelia E.
2014-01-01
Elementary school counselors can support academic achievement by connecting their comprehensive programs to increasing academic competence. One valuable framework focuses on academic enablers, which are identified as interpersonal skills, motivation, engagement, and study skills (DiPerna, 2004). In this article, the authors (a) discuss the…
ComprehensiveBench: a Benchmark for the Extensive Evaluation of Global Scheduling Algorithms
NASA Astrophysics Data System (ADS)
Pilla, Laércio L.; Bozzetti, Tiago C.; Castro, Márcio; Navaux, Philippe O. A.; Méhaut, Jean-François
2015-10-01
Parallel applications that present tasks with imbalanced loads or complex communication behavior usually do not exploit the underlying resources of parallel platforms to their full potential. In order to mitigate this issue, global scheduling algorithms are employed. As finding the optimal task distribution is an NP-hard problem, identifying the most suitable algorithm for a specific scenario and comparing algorithms are not trivial tasks. In this context, this paper presents ComprehensiveBench, a benchmark for global scheduling algorithms that enables the variation of a vast range of parameters that affect performance. ComprehensiveBench can be used to assist in the development and evaluation of new scheduling algorithms, to help choose a specific algorithm for an arbitrary application, to emulate other applications, and to enable statistical tests. We illustrate its use in this paper with an evaluation of Charm++ periodic load balancers that stresses their characteristics.
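For context, a classic heuristic from the family such benchmarks evaluate is greedy longest-processing-time (LPT) assignment of imbalanced task loads onto processors; the sketch below is illustrative and unrelated to ComprehensiveBench's own code.

```python
# Classic global scheduling heuristic: greedy longest-processing-time (LPT)
# assignment of imbalanced task loads onto processors, using a min-heap of
# processor loads. Task loads below are invented.
import heapq

def lpt_schedule(task_loads, n_procs):
    """Return sorted per-processor load after greedy LPT global scheduling."""
    heap = [(0.0, p) for p in range(n_procs)]  # (current load, processor id)
    heapq.heapify(heap)
    for load in sorted(task_loads, reverse=True):
        total, p = heapq.heappop(heap)         # least-loaded processor first
        heapq.heappush(heap, (total + load, p))
    return sorted(total for total, _ in heap)

tasks = [9, 7, 6, 5, 5, 4, 3, 3, 2, 1]         # imbalanced task loads
print(lpt_schedule(tasks, 3))                   # -> [15.0, 15.0, 15.0]
```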
SIMPLEX: Cloud-Enabled Pipeline for the Comprehensive Analysis of Exome Sequencing Data
Fischer, Maria; Snajder, Rene; Pabinger, Stephan; Dander, Andreas; Schossig, Anna; Zschocke, Johannes; Trajanoski, Zlatko; Stocker, Gernot
2012-01-01
In recent studies, exome sequencing has proven to be a successful screening tool for the identification of candidate genes causing rare genetic diseases. Although underlying targeted sequencing methods are well established, necessary data handling and focused, structured analysis still remain demanding tasks. Here, we present a cloud-enabled autonomous analysis pipeline, which comprises the complete exome analysis workflow. The pipeline combines several in-house developed and published applications to perform the following steps: (a) initial quality control, (b) intelligent data filtering and pre-processing, (c) sequence alignment to a reference genome, (d) SNP and DIP detection, (e) functional annotation of variants using different approaches, and (f) detailed report generation during various stages of the workflow. The pipeline connects the selected analysis steps, exposes all available parameters for customized usage, performs required data handling, and distributes computationally expensive tasks either on a dedicated high-performance computing infrastructure or on the Amazon cloud environment (EC2). The presented application has already been used in several research projects including studies to elucidate the role of rare genetic diseases. The pipeline is continuously tested and is publicly available under the GPL as a VirtualBox or Cloud image at http://simplex.i-med.ac.at; additional supplementary data is provided at http://www.icbi.at/exome. PMID:22870267
DNA "nano-claw": logic-based autonomous cancer targeting and therapy.
You, Mingxu; Peng, Lu; Shao, Na; Zhang, Liqin; Qiu, Liping; Cui, Cheng; Tan, Weihong
2014-01-29
Cell types, both healthy and diseased, can be classified by inventories of their cell-surface markers. Programmable analysis of multiple markers would enable clinicians to develop a comprehensive disease profile, leading to more accurate diagnosis and intervention. As a first step to accomplish this, we have designed a DNA-based device, called "Nano-Claw". Combining the special structure-switching properties of DNA aptamers with toehold-mediated strand displacement reactions, this claw is capable of performing autonomous logic-based analysis of multiple cancer cell-surface markers and, in response, producing a diagnostic signal and/or targeted photodynamic therapy. We anticipate that this design can be widely applied in facilitating basic biomedical research, accurate disease diagnosis, and effective therapy.
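The logic layer can be pictured as an AND gate over surface-marker presence, as in this toy sketch; the marker names are invented, not the aptamer targets used in the paper.

```python
# Toy illustration of the logic the "Nano-Claw" performs on cell-surface
# markers: a diagnostic/therapeutic output only when all targeted markers
# are present (an AND gate). Marker names are invented.
def claw_output(markers: dict) -> bool:
    """AND-gate over three hypothetical surface markers."""
    return markers["markerA"] and markers["markerB"] and markers["markerC"]

cells = [
    {"markerA": True, "markerB": True,  "markerC": True},   # target profile
    {"markerA": True, "markerB": False, "markerC": True},   # healthy-like profile
]
for cell in cells:
    print(cell, "->", "signal/therapy" if claw_output(cell) else "no response")
```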
ERIC Educational Resources Information Center
LaPointe, Michelle; Stullich, Stephanie
2004-01-01
The Comprehensive School Reform (CSR) program provides financial assistance to help schools develop and implement systematic approaches to schoolwide improvement that are grounded in scientifically based research and effective practices. The goal of the program is to enable all children to meet challenging state academic content and achievement…
ERIC Educational Resources Information Center
Evmenova, Anya S.; Behrmann, Michael M.
2014-01-01
There is a great need for new innovative tools to integrate individuals with intellectual disability into educational experiences. This multiple baseline study examined the effects of various adaptations for improving factual and inferential comprehension of non-fiction videos by six postsecondary students with intellectual disability. Video…
Using a Tablet PC in the German Classroom to Enliven Teacher Input
ERIC Educational Resources Information Center
Van Orden, Stephen
2006-01-01
Providing students with lively, authentic comprehensible input is one of the most important tasks of introductory German teachers. Using a Tablet PC can enable teachers to improve the quality of the comprehensible input they provide their students. This article describes how integrating a Tablet PC into daily teaching processes allows classroom…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Leon E.; Haas, Derek A.; Gavron, Victor A.
2009-09-25
Under funding from the Department of Energy Office of Nuclear Energy's Materials, Protection, Accounting, and Control for Transmutation (MPACT) program (formerly the Advanced Fuel Cycle Initiative Safeguards Campaign), Pacific Northwest National Laboratory (PNNL) and Los Alamos National Laboratory (LANL) are collaborating to study the viability of lead slowing-down spectroscopy (LSDS) for spent-fuel assay. Based on the results of previous simulation studies conducted by PNNL and LANL to estimate potential LSDS performance, a more comprehensive study of LSDS viability has been defined. That study includes benchmarking measurements, development and testing of key enabling instrumentation, and continued study of time-spectra analysis methods. This report satisfies the requirements for a PNNL/LANL deliverable that describes the objectives, plans and contributing organizations for a comprehensive three-year study of LSDS for spent-fuel assay. This deliverable was generated largely during the LSDS workshop held on August 25-26, 2009 at Rensselaer Polytechnic Institute (RPI). The workshop itself was a prominent milestone in the FY09 MPACT project and is also described within this report.
Proteomic Analysis of the Mediator Complex Interactome in Saccharomyces cerevisiae
Uthe, Henriette; Vanselow, Jens T.; Schlosser, Andreas
2017-01-01
Here we present the most comprehensive analysis of the yeast Mediator complex interactome to date. Particularly gentle cell lysis and co-immunopurification conditions allowed us to preserve even transient protein-protein interactions and to comprehensively probe the molecular environment of the Mediator complex in the cell. Metabolic 15N-labeling thereby enabled stringent discrimination between bona fide interaction partners and nonspecifically captured proteins. Our data indicates a functional role for Mediator beyond transcription initiation. We identified a large number of Mediator-interacting proteins and protein complexes, such as RNA polymerase II, general transcription factors, a large number of transcriptional activators, the SAGA complex, chromatin remodeling complexes, histone chaperones, highly acetylated histones, as well as proteins playing a role in co-transcriptional processes, such as splicing, mRNA decapping and mRNA decay. Moreover, our data provides clear evidence, that the Mediator complex interacts not only with RNA polymerase II, but also with RNA polymerases I and III, and indicates a functional role of the Mediator complex in rRNA processing and ribosome biogenesis. PMID:28240253
Comprehensive pulsed electric field (PEF) system analysis for microalgae processing.
Buchmann, Leandro; Bloch, Robin; Mathys, Alexander
2018-06-07
Pulsed electric field (PEF) is an emerging nonthermal technique with promising applications in microalgae biorefinery concepts. In this work, the flow field in continuous PEF processing and its influencing factors were analyzed and energy input distributions in PEF treatment chambers were investigated. The results were obtained using an interdisciplinary approach that combined multiphysics simulations with ultrasonic Doppler velocity profiling (UVP) and rheological measurements of Arthrospira platensis suspensions as a case study for applications in the biobased industry. UVP enabled non-invasive validation of multiphysics simulations. A. platensis suspensions follow a non-Newtonian, shear-thinning behavior, and measurement data could be fitted with rheological functions, which were used as an input for fluid dynamics simulations. Within the present work, a comprehensive system characterization was achieved that will facilitate research in the field of PEF processing. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.
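The reported shear-thinning behavior is commonly captured by the Ostwald-de Waele power law, shown here as a typical modeling choice rather than the paper's exact fit:

$$\eta(\dot{\gamma}) = K\,\dot{\gamma}^{\,n-1}, \qquad n < 1 \text{ for shear-thinning fluids},$$

where K is the consistency index and n the flow behavior index; η(γ̇) then enters the fluid dynamics simulation as the shear-rate-dependent viscosity.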
A comprehensive pathway map of epidermal growth factor receptor signaling
Oda, Kanae; Matsuoka, Yukiko; Funahashi, Akira; Kitano, Hiroaki
2005-01-01
The epidermal growth factor receptor (EGFR) signaling pathway is one of the most important pathways that regulate growth, survival, proliferation, and differentiation in mammalian cells. Reflecting this importance, it is one of the best-investigated signaling systems, both experimentally and computationally, and several computational models have been developed for dynamic analysis. A map of molecular interactions of the EGFR signaling system is a valuable resource for research in this area. In this paper, we present a comprehensive pathway map of EGFR signaling and other related pathways. The map reveals that the overall architecture of the pathway is a bow-tie (or hourglass) structure with several feedback loops. The map is created using CellDesigner software that enables us to graphically represent interactions using a well-defined and consistent graphical notation, and to store it in Systems Biology Markup Language (SBML). PMID:16729045
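Since the map is stored in SBML, it can be consumed programmatically; the sketch below uses python-libsbml to enumerate the species and reactions of such a file (the file name is a placeholder).

```python
# Sketch: loading an SBML pathway map (such as a CellDesigner export) with
# python-libsbml and enumerating its species and reactions. The file name is
# a placeholder.
import libsbml

doc = libsbml.readSBML("egfr_pathway.xml")     # placeholder path
if doc.getNumErrors() > 0:
    doc.printErrors()
model = doc.getModel()
if model is None:
    raise SystemExit("no model could be read")
print(model.getNumSpecies(), "species,", model.getNumReactions(), "reactions")
for reaction in model.getListOfReactions():
    reactants = [r.getSpecies() for r in reaction.getListOfReactants()]
    products = [p.getSpecies() for p in reaction.getListOfProducts()]
    print(reaction.getId(), ":", " + ".join(reactants), "->", " + ".join(products))
```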
Competencies to enable learning-focused clinical supervision: a thematic analysis of the literature.
Pront, Leeanne; Gillham, David; Schuwirth, Lambert W T
2016-04-01
Clinical supervision is essential for development of health professional students and widely recognised as a significant factor influencing student learning. Although considered important, delivery is often founded on personal experience or a series of predetermined steps that offer standardised behavioural approaches. Such a view may limit the capacity to promote individualised student learning in complex clinical environments. The objective of this review was to develop a comprehensive understanding of what is considered 'good' clinical supervision, within health student education. The literature provides many perspectives, so collation and interpretation were needed to aid development and understanding for all clinicians required to perform clinical supervision within their daily practice. A comprehensive thematic literature review was carried out, which included a variety of health disciplines and geographical environments. Literature addressing 'good' clinical supervision consists primarily of descriptive qualitative research comprising mostly small studies that repeated descriptions of student and supervisor opinions of 'good' supervision. Synthesis and thematic analysis of the literature resulted in four 'competency' domains perceived to inform delivery of learning-focused or 'good' clinical supervision. Domains understood to promote student learning are co-dependent and include 'to partner', 'to nurture', 'to engage' and 'to facilitate meaning'. Clinical supervision is a complex phenomenon and establishing a comprehensive understanding across health disciplines can influence the future health workforce. The learning-focused clinical supervision domains presented here provide an alternative perspective of clinical supervision of health students. This paper is the first step in establishing a more comprehensive understanding of learning-focused clinical supervision, which may lead to development of competencies for clinical supervision. © 2016 John Wiley & Sons Ltd.
Processing of Space Resources to Enable the Vision for Space Exploration
NASA Technical Reports Server (NTRS)
Curreri, Peter A.
2006-01-01
The NASA human exploration program as directed by the Vision for Exploration (G.W. Bush, Jan. 14, 2004) includes developing methods to process materials on the Moon and beyond to enable safe and affordable human exploration. Processing space resources was first popularized (O'Neill 1976) as a technically viable, economically feasible means to build city-sized habitats and multi-GWatt solar power satellites in Earth/Moon space. Although NASA studies found the concepts to be technically reasonable in the post-Apollo era (AMES 1979), the front-end costs exceeded the limits of national or corporate investment. In the last decade, analysis of space resource utilization has shown it to be economically justifiable even on a relatively small mission or commercial scenario basis. The Mars Reference Mission analysis (JSC 1997) demonstrated that production of return propellant on Mars can enable an order of magnitude decrease in the costs of human Mars missions. Analysis (by M. Duke 2003) shows that production of propellant on the Moon for the Earth-based satellite industries can be commercially viable after a human lunar base is established. Similar economic analysis (Rapp 2005) also shows large cost benefits for lunar propellant production for Mars missions and for the use of lunar materials for the production of photovoltaic power (Freundlich 2005). Recent technologies could enable much smaller initial costs for achieving mass, energy, and life support self-sufficiency than were achievable in the 1970s. If the Exploration Vision program is executed with a front-end emphasis on space resources, it could provide a path for human self-reliance beyond Earth orbit. This path can lead to an open, non-zero-sum future for humanity, with safer human competition and limitless growth potential. This paper discusses extension of the analysis of space resource utilization to determine the minimum systems necessary for human self-sufficiency and growth off Earth. Such an approach can provide a more compelling and comprehensive path to space resource utilization.
A Fully Non-Metallic Gas Turbine Engine Enabled by Additive Manufacturing
NASA Technical Reports Server (NTRS)
Grady, Joseph E.
2015-01-01
The Non-Metallic Gas Turbine Engine project, funded by the NASA Aeronautics Research Institute, represents the first comprehensive evaluation of emerging materials and manufacturing technologies that will enable fully nonmetallic gas turbine engines. This will be achieved by assessing the feasibility of using additive manufacturing technologies to fabricate polymer matrix composite and ceramic matrix composite turbine engine components. The benefits include: 50% weight reduction compared to metallic parts, reduced manufacturing costs, reduced part count and rapid design iterations. Two high-payoff metallic components have been identified for replacement with PMCs and will be fabricated using fused deposition modeling (FDM) with high temperature polymer filaments. The CMC effort uses a binder jet process to fabricate silicon carbide test coupons and demonstration articles. Microstructural analysis and mechanical testing will be conducted on the PMC and CMC materials. System studies will assess the benefits of a fully nonmetallic gas turbine engine in terms of fuel burn, emissions, reduction of part count, and cost. The research project includes a multidisciplinary, multiorganization NASA-industry team that includes experts in ceramic materials and CMCs, polymers and PMCs, structural engineering, additive manufacturing, engine design and analysis, and system analysis.
TopoMS: Comprehensive topological exploration for molecular and condensed-matter systems.
Bhatia, Harsh; Gyulassy, Attila G; Lordi, Vincenzo; Pask, John E; Pascucci, Valerio; Bremer, Peer-Timo
2018-06-15
We introduce TopoMS, a computational tool enabling detailed topological analysis of molecular and condensed-matter systems, including the computation of atomic volumes and charges through the quantum theory of atoms in molecules, as well as the complete molecular graph. With roots in techniques from computational topology, and using a shared-memory parallel approach, TopoMS provides scalable, numerically robust, and topologically consistent analysis. TopoMS can be used as a command-line tool or with a GUI (graphical user interface), where the latter also enables an interactive exploration of the molecular graph. This paper presents algorithmic details of TopoMS and compares it with state-of-the-art tools: Bader charge analysis v1.0 (Arnaldsson et al., 01/11/17) and molecular graph extraction using Critic2 (Otero-de-la-Roza et al., Comput. Phys. Commun. 2014, 185, 1007). TopoMS not only combines the functionality of these individual codes but also demonstrates up to 4× performance gain on a standard laptop, faster convergence to fine-grid solution, robustness against lattice bias, and topological consistency. TopoMS is released publicly under BSD License. © 2018 Wiley Periodicals, Inc.
Chipster: user-friendly analysis software for microarray and other high-throughput data.
Kallio, M Aleksi; Tuimala, Jarno T; Hupponen, Taavi; Klemelä, Petri; Gentile, Massimiliano; Scheinin, Ilari; Koski, Mikko; Käki, Janne; Korpelainen, Eija I
2011-10-14
The growth of high-throughput technologies such as microarrays and next generation sequencing has been accompanied by active research in data analysis methodology, producing new analysis methods at a rapid pace. While most of the newly developed methods are freely available, their use requires substantial computational skills. In order to enable non-programming biologists to benefit from the method development in a timely manner, we have created the Chipster software. Chipster (http://chipster.csc.fi/) brings a powerful collection of data analysis methods within the reach of bioscientists via its intuitive graphical user interface. Users can analyze and integrate different data types such as gene expression, miRNA and aCGH. The analysis functionality is complemented with rich interactive visualizations, allowing users to select datapoints and create new gene lists based on these selections. Importantly, users can save the performed analysis steps as reusable, automatic workflows, which can also be shared with other users. Being a versatile and easily extendable platform, Chipster can be used for microarray, proteomics and sequencing data. In this article we describe its comprehensive collection of analysis and visualization tools for microarray data using three case studies. Chipster is a user-friendly analysis software for high-throughput data. Its intuitive graphical user interface enables biologists to access a powerful collection of data analysis and integration tools, and to visualize data interactively. Users can collaborate by sharing analysis sessions and workflows. Chipster is open source, and the server installation package is freely available.
Santamaria, Monica; Fosso, Bruno; Licciulli, Flavio; Balech, Bachir; Larini, Ilaria; Grillo, Giorgio; De Caro, Giorgio; Liuni, Sabino
2018-01-01
A holistic understanding of environmental communities is the new challenge of metagenomics. Accordingly, the amplicon-based or metabarcoding approach, largely applied to investigate bacterial microbiomes, is moving to the eukaryotic world too. Indeed, the analysis of metabarcoding data may provide a comprehensive assessment of both bacterial and eukaryotic composition in a variety of environments, including the human body. In this respect, whereas hypervariable regions of the 16S rRNA are the de facto standard barcode for bacteria, the Internal Transcribed Spacer 1 (ITS1) of the ribosomal RNA gene cluster has shown a high potential in discriminating eukaryotes at deep taxonomic levels. As metabarcoding data analysis relies on the availability of a well-curated barcode reference resource, a comprehensive collection of ITS1 sequences supplied with robust taxonomies is highly needed. To address this issue, we created ITSoneDB (available at http://itsonedb.cloud.ba.infn.it/) which in its current version hosts 985,240 ITS1 sequences spanning over 134,000 eukaryotic species. Each ITS1 is mapped on the NCBI reference taxonomy with its start and end positions precisely annotated. ITSoneDB has been developed in agreement with the FAIR guidelines, enabling users to query and download its content through a simple web interface and to access relevant metadata by cross-linking to the European Nucleotide Archive. PMID:29036529
Hensel, Goetz; Oleszczuk, Sylwia; Daghma, Diaa Eldin S; Zimny, Janusz; Melzer, Michael; Kumlehn, Jochen
2012-09-25
While the genetic transformation of the major cereal crops has become relatively routine, to date only a few reports were published on transgenic triticale, and robust data on T-DNA integration and segregation have not been available in this species. Here, we present a comprehensive analysis of stable transgenic winter triticale cv. Bogo carrying the selectable marker gene HYGROMYCIN PHOSPHOTRANSFERASE (HPT) and a synthetic green fluorescent protein gene (gfp). Progeny of four independent transgenic plants were comprehensively investigated with regard to the number of integrated T-DNA copies, the number of plant genomic integration loci, the integrity and functionality of individual T-DNA copies, as well as the segregation of transgenes in T1 and T2 generations, which also enabled us to identify homozygous transgenic lines. The truncation of some integrated T-DNAs at their left end along with the occurrence of independent segregation of multiple T-DNAs unintendedly resulted in a single-copy segregant that is selectable marker-free and homozygous for the gfp gene. The heritable expression of gfp driven by the maize UBI-1 promoter was demonstrated by confocal laser scanning microscopy. The used transformation method is a valuable tool for the genetic engineering of triticale. Here we show that comprehensive molecular analyses are required for the correct interpretation of phenotypic data collected from the transgenic plants.
ERIC Educational Resources Information Center
Easter, John; And Others
A description is provided of Comprehensive Achievement Monitoring (CAM), a tool which enables classroom teachers to function as researchers and evaluators. Part I reviews the CAM philosophy, and the following section discusses computerized feedback in CAM operations. The final two portions of the report describe the use of CAM in mathematics…
The Soldier Fitness Tracker: Global Delivery of Comprehensive Soldier Fitness
ERIC Educational Resources Information Center
Fravell, Mike; Nasser, Katherine; Cornum, Rhonda
2011-01-01
Carefully implemented technology strategies are vital to the success of large-scale initiatives such as the U.S. Army's Comprehensive Soldier Fitness (CSF) program. Achieving the U.S. Army's vision for CSF required a robust information technology platform that was scaled to millions of users and that leveraged the Internet to enable global reach.…
Learning to Read Spectra: Teaching Decomposition with Excel in a Scientific Writing Course
ERIC Educational Resources Information Center
Muelleman, Andrew W.; Glaser, Rainer E.
2018-01-01
Literacy requires reading comprehension, and fostering reading skills is an essential prerequisite to and a synergistic enabler of the development of writing skills. Reading comprehension in the chemical sciences not only consists of the understanding of text but also includes the reading and processing of data tables, schemes, and graphs. Thus,…
ERIC Educational Resources Information Center
Engler, Karen S.; MacGregor, Cynthia J.
2018-01-01
At a time when deaf education teacher preparation programs are declining in number, little is known about their actual effectiveness. A phenomenological case study of a graduate-level comprehensive deaf education teacher preparation program at a midwestern university explored empowered and enabled learning of teacher candidates using the Missouri…
ERIC Educational Resources Information Center
Vollands, Stacy R.; And Others
A study evaluated the effect software for self-assessment and management of reading practice had on reading achievement and motivation in two primary schools in Aberdeen, Scotland. The program utilized was The Accelerated Reader (AR), which was designed to enable curriculum-based assessment of reading comprehension within the classroom. Students…
Faull, Katherine J; Williams, Craig R
2016-05-01
Aedes notoscriptus and Aedes aegypti are both peri-domestic, invasive container-breeding mosquitoes. While the two potential arboviral vectors are bionomically similar, their sympatric distribution in Australia is limited. In this study, eggs of Ae. aegypti and Ae. notoscriptus were analyzed using scanning electron microscopy (SEM). Significant variations in egg length-to-width ratio and outer chorionic cell field morphology between Ae. aegypti and Ae. notoscriptus enabled distinction of the two species. Intraspecific variations in cell field morphology also enabled differentiation of the separate populations of both species, highlighting regional and global variation. Our study provides a comprehensive comparative analysis of inter- and intraspecific egg morphological and morphometric variation between two invasive container-breeding mosquitoes. The results indicate a high degree of intraspecific variation in Ae. notoscriptus egg morphology when compared to the eggs of Ae. aegypti. Comparative morphological analyses of Ae. aegypti and Ae. notoscriptus egg attributes using SEM allow differentiation of the species and may be helpful in understanding egg biology in relation to biotope of origin. Copyright © 2016 Elsevier Ltd. All rights reserved.
Deutsch, Eric W.; Mendoza, Luis; Shteynberg, David; Slagel, Joseph; Sun, Zhi; Moritz, Robert L.
2015-01-01
Democratization of genomics technologies has enabled the rapid determination of genotypes. More recently the democratization of comprehensive proteomics technologies is enabling the determination of the cellular phenotype and the molecular events that define its dynamic state. Core proteomic technologies include mass spectrometry to define protein sequence, protein:protein interactions, and protein post-translational modifications. Key enabling technologies for proteomics are bioinformatic pipelines to identify, quantitate, and summarize these events. The Trans-Proteomics Pipeline (TPP) is a robust open-source standardized data processing pipeline for large-scale reproducible quantitative mass spectrometry proteomics. It supports all major operating systems and instrument vendors via open data formats. Here we provide a review of the overall proteomics workflow supported by the TPP, its major tools, and how it can be used in its various modes from desktop to cloud computing. We describe new features for the TPP, including data visualization functionality. We conclude by describing some common perils that affect the analysis of tandem mass spectrometry datasets, as well as some major upcoming features. PMID:25631240
Deutsch, Eric W; Mendoza, Luis; Shteynberg, David; Slagel, Joseph; Sun, Zhi; Moritz, Robert L
2015-08-01
Democratization of genomics technologies has enabled the rapid determination of genotypes. More recently the democratization of comprehensive proteomics technologies is enabling the determination of the cellular phenotype and the molecular events that define its dynamic state. Core proteomic technologies include MS to define protein sequence, protein:protein interactions, and protein PTMs. Key enabling technologies for proteomics are bioinformatic pipelines to identify, quantitate, and summarize these events. The Trans-Proteomics Pipeline (TPP) is a robust open-source standardized data processing pipeline for large-scale reproducible quantitative MS proteomics. It supports all major operating systems and instrument vendors via open data formats. Here, we provide a review of the overall proteomics workflow supported by the TPP, its major tools, and how it can be used in its various modes from desktop to cloud computing. We describe new features for the TPP, including data visualization functionality. We conclude by describing some common perils that affect the analysis of MS/MS datasets, as well as some major upcoming features. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Dielectrophoresis-based microfluidic platforms for cancer diagnostics.
Chan, Jun Yuan; Ahmad Kayani, Aminuddin Bin; Md Ali, Mohd Anuar; Kok, Chee Kuang; Yeop Majlis, Burhanuddin; Hoe, Susan Ling Ling; Marzuki, Marini; Khoo, Alan Soo-Beng; Ostrikov, Kostya Ken; Ataur Rahman, Md; Sriram, Sharath
2018-01-01
The recent advancement of dielectrophoresis (DEP)-enabled microfluidic platforms is opening new opportunities for potential use in cancer disease diagnostics. DEP is advantageous because of its specificity, low cost, small sample volume requirement, and tuneable property for microfluidic platforms. These intrinsic advantages have made it especially suitable for developing microfluidic cancer diagnostic platforms. This review focuses on a comprehensive analysis of the recent developments of DEP-enabled microfluidic platforms, sorted according to the target cancer cell. Each study is critically analyzed, and the features of each platform, the performance, added functionality for clinical use, and the types of samples used are discussed. We address the novelty of the techniques, strategies, and design configurations used in improving on existing technologies or previous studies. A summary comparing the developmental extent of each study is provided, and we conclude with a treatment of future trends and a brief summary.
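For background to the sorting behavior such platforms exploit, the time-averaged DEP force on a spherical particle is F = 2*pi*eps_m*r^3*Re[K(w)]*grad(|E|^2), where K(w) is the Clausius-Mossotti factor; the sign of Re[K] decides whether a cell is pulled toward or pushed away from high-field regions. A minimal sketch of this standard relation (not drawn from the review; all parameter values are illustrative):

```python
import numpy as np

def clausius_mossotti(eps_p, sigma_p, eps_m, sigma_m, freq_hz):
    """Complex Clausius-Mossotti factor K(w) for a homogeneous sphere."""
    w = 2 * np.pi * freq_hz
    ep = eps_p - 1j * sigma_p / w  # complex permittivity of the particle
    em = eps_m - 1j * sigma_m / w  # complex permittivity of the medium
    return (ep - em) / (ep + 2 * em)

EPS0 = 8.854e-12
# Illustrative values: cell-like particle in a low-conductivity buffer.
K = clausius_mossotti(60 * EPS0, 0.5, 78 * EPS0, 0.1, 100e3)
r = 5e-6            # particle radius, m (assumed)
grad_E2 = 1e13      # |grad |E|^2|, V^2/m^3 (assumed)
F_dep = 2 * np.pi * 78 * EPS0 * r**3 * K.real * grad_E2
sign = "positive (toward high field)" if K.real > 0 else "negative (away from high field)"
print(f"Re[K] = {K.real:+.3f} -> {sign} DEP, |F| ~ {abs(F_dep):.2e} N")
```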
Syntactic Prediction in Language Comprehension: Evidence From Either…or
Staub, Adrian; Clifton, Charles
2006-01-01
Readers’ eye movements were monitored as they read sentences in which two noun phrases or two independent clauses were connected by the word or (NP-coordination and S-coordination, respectively). The word either could be present or absent earlier in the sentence. When either was present, the material immediately following or was read more quickly, across both sentence types. In addition, there was evidence that readers misanalyzed the S-coordination structure as an NP-coordination structure only when either was absent. The authors interpret the results as indicating that the word either enabled readers to predict the arrival of a coordination structure; this predictive activation facilitated processing of this structure when it ultimately arrived, and in the case of S-coordination sentences, enabled readers to avoid the incorrect NP-coordination analysis. The authors argue that these results support parsing theories according to which the parser can build predictable syntactic structure before encountering the corresponding lexical input. PMID:16569157
Sensitivity analysis for high-contrast missions with segmented telescopes
NASA Astrophysics Data System (ADS)
Leboulleux, Lucie; Sauvage, Jean-François; Pueyo, Laurent; Fusco, Thierry; Soummer, Rémi; N'Diaye, Mamadou; St. Laurent, Kathryn
2017-09-01
Segmented telescopes enable large-aperture space telescopes for the direct imaging and spectroscopy of habitable worlds. However, the increased complexity of their aperture geometry, due to their central obstruction, support structures, and segment gaps, makes high-contrast imaging very challenging. In this context, we present an analytical model that will enable us to establish a comprehensive error budget to evaluate the constraints on the segments and the influence of the error terms on the final image and contrast. Indeed, the target contrast of 10^10 needed to image Earth-like planets imposes drastic conditions, both in terms of segment alignment and telescope stability. Although space telescopes operate in a friendlier environment than ground-based telescopes, remaining vibrations and resonant modes on the segments can still deteriorate the contrast. In this communication, we develop and validate the analytical model, and compare its outputs to images issued from end-to-end simulations.
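The error-budget idea can be illustrated with a generic quadratic sensitivity model, in which the contrast degradation is a quadratic form dC = a^T M a in the vector a of per-segment aberration coefficients. The sketch below is a toy stand-in, not the paper's model: M here is a random symmetric positive-definite matrix, whereas a real sensitivity matrix would come from the optical propagation model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_seg = 36                      # number of segments (illustrative)

# Toy symmetric positive-definite sensitivity matrix M (units absorbed).
A = rng.standard_normal((n_seg, n_seg))
M = A @ A.T / n_seg + np.eye(n_seg)

# Monte Carlo over random per-segment pistons with an assumed RMS.
rms = 1e-2
a = rms * rng.standard_normal((10000, n_seg))
dC = np.einsum("ij,jk,ik->i", a, M, a)   # dC = a^T M a for each draw

print(f"mean contrast degradation ~ {dC.mean():.3e}, "
      f"95th percentile {np.percentile(dC, 95):.3e}")
```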
NASA Astrophysics Data System (ADS)
Han, Suyue; Chang, Gary Han; Schirmer, Clemens; Modarres-Sadeghi, Yahya
2016-11-01
We construct a reduced-order model (ROM) to study the Wall Shear Stress (WSS) distributions in image-based patient-specific aneurysm models. The magnitude of WSS has been shown to be a critical factor in the growth and rupture of human aneurysms. We start the process by running a training case using Computational Fluid Dynamics (CFD) simulation with time-varying flow parameters, such that these parameters cover the range of parameters of interest. The method of snapshot Proper Orthogonal Decomposition (POD) is utilized to construct the reduced-order bases from the training CFD simulation. The resulting ROM enables us to study the flow patterns and the WSS distributions over a range of system parameters computationally very efficiently with a relatively small number of modes. This enables comprehensive analysis of the model system across a range of physiological conditions without the need to re-compute the simulation for small changes in the system parameters.
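Snapshot POD of this kind reduces to a singular value decomposition of the mean-subtracted snapshot matrix. A minimal, self-contained sketch (random numbers stand in for the CFD/WSS snapshots; mode counts and the energy threshold are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_points, n_snaps = 4000, 150
X = rng.standard_normal((n_points, n_snaps))   # stand-in for CFD snapshots

x_mean = X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(X - x_mean, full_matrices=False)

# Retain the smallest number of modes capturing 99% of snapshot energy.
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1
Phi = U[:, :r]                                  # reduced-order basis

# Project a field onto the basis and reconstruct it.
x = X[:, [0]]
x_hat = x_mean + Phi @ (Phi.T @ (x - x_mean))
err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"{r} modes retained, relative reconstruction error {err:.3f}")
```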
Metabolite Profiling and Classification of DNA-Authenticated Licorice Botanicals
Simmler, Charlotte; Anderson, Jeffrey R.; Gauthier, Laura; Lankin, David C.; McAlpine, James B.; Chen, Shao-Nong; Pauli, Guido F.
2015-01-01
Raw licorice roots represent heterogeneous materials obtained from mainly three Glycyrrhiza species. G. glabra, G. uralensis, and G. inflata exhibit marked metabolite differences in terms of flavanones (Fs), chalcones (Cs), and other phenolic constituents. The principal objective of this work was to develop complementary chemometric models for the metabolite profiling, classification, and quality control of authenticated licorice. A total of 51 commercial and macroscopically verified samples were DNA authenticated. Principal component analysis and canonical discriminant analysis were performed on 1H NMR spectra and area under the curve values obtained from UHPLC-UV chromatograms, respectively. The developed chemometric models enable the identification and classification of Glycyrrhiza species according to their composition in major Fs, Cs, and species-specific phenolic compounds. Further key outcomes demonstrated that DNA authentication combined with chemometric analyses enabled the characterization of mixtures, hybrids, and species outliers. This study provides a new foundation for the botanical and chemical authentication, classification, and metabolomic characterization of crude licorice botanicals and derived materials. Collectively, the proposed methods offer a comprehensive approach for the quality control of licorice as one of the most widely used botanical dietary supplements. PMID:26244884
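The PCA step of such a chemometric workflow takes only a few lines; in this hedged sketch, random numbers stand in for binned 1H NMR intensities and the species labels are placeholders, so only the mechanics (scaling, projection, per-class inspection) carry over to real data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
spectra = rng.random((51, 300))   # 51 samples x 300 binned NMR intensities
species = rng.choice(["glabra", "uralensis", "inflata"], size=51)

scores = PCA(n_components=2).fit_transform(
    StandardScaler().fit_transform(spectra))

# With real data, samples of one species cluster in the score plot;
# here we simply report the per-species centroids.
for sp in np.unique(species):
    print(sp, np.round(scores[species == sp].mean(axis=0), 2))
```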
NASA Technical Reports Server (NTRS)
Ho, C. Y.; Li, H. H.
1989-01-01
A computerized comprehensive numerical database system on the mechanical, thermophysical, electronic, electrical, magnetic, optical, and other properties of various types of technologically important materials such as metals, alloys, composites, dielectrics, polymers, and ceramics has been established and is operational at the Center for Information and Numerical Data Analysis and Synthesis (CINDAS) of Purdue University. This is an on-line, interactive, menu-driven, user-friendly database system. Users can easily search, retrieve, and manipulate the data from the database system without learning a special query language, special commands, or standardized names of materials, properties, variables, etc. It enables both the direct mode of search/retrieval of data for specified materials, properties, independent variables, etc., and the inverted mode of search/retrieval of candidate materials that meet a set of specified requirements (which is computer-aided materials selection). It also enables tabular and graphical displays and on-line data manipulations such as units conversion, variables transformation, statistical analysis, etc., of the retrieved data. The development, content, accessibility, etc., of the database system are presented and discussed.
Shock Layer Radiation Measurements and Analysis for Mars Entry
NASA Technical Reports Server (NTRS)
Bose, Deepak; Grinstead, Jay Henderson; Bogdanoff, David W.; Wright, Michael J.
2009-01-01
NASA's In-Space Propulsion program is supporting the development of shock radiation transport models for aerocapture missions to Mars. A comprehensive test series in the NASA Ames Electric Arc Shock Tube facility at a representative flight condition was recently completed. The facility optical instrumentation enabled spectral measurements of shocked gas radiation from the vacuum ultraviolet to the near infrared. The instrumentation captured the nonequilibrium post-shock excitation and relaxation dynamics of dispersed spectral features. A description of the shock tube facility, optical instrumentation, and examples of the test data are presented. Comparisons of measured spectra with model predictions are also made.
Combustion of Nitramine Propellants
1983-03-01
through development of a comprehensive analytical model. The ultimate goals are to enable prediction of deflagration rate over a wide pressure range...superior in burn rate prediction, both simple models fail in correlating existing temperature-sensitivity data. (2) In the second part, a...auxiliary condition to enable independent burn rate prediction; improved melt phase model including decomposition-gas bubbles; model for far-field
Opportunities and challenges for the life sciences community.
Kolker, Eugene; Stewart, Elizabeth; Ozdemir, Vural
2012-03-01
Twenty-first century life sciences have transformed into data-enabled (also called data-intensive, data-driven, or big data) sciences. They principally depend on data-, computation-, and instrumentation-intensive approaches to seek comprehensive understanding of complex biological processes and systems (e.g., ecosystems, complex diseases, environmental, and health challenges). Federal agencies including the National Science Foundation (NSF) have played and continue to play an exceptional leadership role by innovatively addressing the challenges of data-enabled life sciences. Yet even more is required not only to keep up with the current developments, but also to pro-actively enable future research needs. Straightforward access to data, computing, and analysis resources will enable true democratization of research competitions; thus investigators will compete based on the merits and broader impact of their ideas and approaches rather than on the scale of their institutional resources. This is the Final Report for Data-Intensive Science Workshops DISW1 and DISW2. The first NSF-funded Data Intensive Science Workshop (DISW1, Seattle, WA, September 19-20, 2010) overviewed the status of the data-enabled life sciences and identified their challenges and opportunities. This served as a baseline for the second NSF-funded DIS workshop (DISW2, Washington, DC, May 16-17, 2011). Based on the findings of DISW2 the following overarching recommendation to the NSF was proposed: establish a community alliance to be the voice and framework of the data-enabled life sciences. After this Final Report was finished, Data-Enabled Life Sciences Alliance (DELSA, www.delsall.org ) was formed to become a Digital Commons for the life sciences community.
Opportunities and Challenges for the Life Sciences Community
Stewart, Elizabeth; Ozdemir, Vural
2012-01-01
Abstract Twenty-first century life sciences have transformed into data-enabled (also called data-intensive, data-driven, or big data) sciences. They principally depend on data-, computation-, and instrumentation-intensive approaches to seek comprehensive understanding of complex biological processes and systems (e.g., ecosystems, complex diseases, environmental, and health challenges). Federal agencies including the National Science Foundation (NSF) have played and continue to play an exceptional leadership role by innovatively addressing the challenges of data-enabled life sciences. Yet even more is required not only to keep up with the current developments, but also to pro-actively enable future research needs. Straightforward access to data, computing, and analysis resources will enable true democratization of research competitions; thus investigators will compete based on the merits and broader impact of their ideas and approaches rather than on the scale of their institutional resources. This is the Final Report for Data-Intensive Science Workshops DISW1 and DISW2. The first NSF-funded Data Intensive Science Workshop (DISW1, Seattle, WA, September 19–20, 2010) overviewed the status of the data-enabled life sciences and identified their challenges and opportunities. This served as a baseline for the second NSF-funded DIS workshop (DISW2, Washington, DC, May 16–17, 2011). Based on the findings of DISW2 the following overarching recommendation to the NSF was proposed: establish a community alliance to be the voice and framework of the data-enabled life sciences. After this Final Report was finished, Data-Enabled Life Sciences Alliance (DELSA, www.delsall.org) was formed to become a Digital Commons for the life sciences community. PMID:22401659
Tau-independent Phase Analysis: A Novel Method for Accurately Determining Phase Shifts.
Tackenberg, Michael C; Jones, Jeff R; Page, Terry L; Hughey, Jacob J
2018-06-01
Estimations of period and phase are essential in circadian biology. While many techniques exist for estimating period, comparatively few methods are available for estimating phase. Current approaches to analyzing phase often vary between studies and are sensitive to coincident changes in period and the stage of the circadian cycle at which the stimulus occurs. Here we propose a new technique, tau-independent phase analysis (TIPA), for quantifying phase shifts in multiple types of circadian time-course data. Through comprehensive simulations, we show that TIPA is both more accurate and more precise than the standard actogram approach. TIPA is computationally simple and therefore will enable accurate and reproducible quantification of phase shifts across multiple subfields of chronobiology.
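The central idea, estimating phase on each side of the stimulus with its own free-running period rather than assuming a shared tau, can be sketched as two independent sinusoid fits extrapolated to the stimulus time. This is a simplified stand-in for TIPA with synthetic activity data; the periods, noise level, and stimulus time are all invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def rhythm(t, amp, period, phase, offset):
    return amp * np.cos(2 * np.pi * t / period - phase) + offset

rng = np.random.default_rng(3)
t_pre = np.arange(0.0, 120.0, 0.5)     # hours before the stimulus
t_post = np.arange(130.0, 250.0, 0.5)  # hours after the stimulus
y_pre = rhythm(t_pre, 1.0, 24.0, 0.0, 0.0) + 0.1 * rng.standard_normal(t_pre.size)
y_post = rhythm(t_post, 1.0, 23.5, 1.2, 0.0) + 0.1 * rng.standard_normal(t_post.size)

# Fit each section with its own free period, then compare phases at t_stim.
p0 = (1.0, 24.0, 0.0, 0.0)
pre, _ = curve_fit(rhythm, t_pre, y_pre, p0=p0)
post, _ = curve_fit(rhythm, t_post, y_post, p0=p0)

t_stim = 125.0
phase = lambda p, t: (2 * np.pi * t / p[1] - p[2]) % (2 * np.pi)
dphi = (phase(post, t_stim) - phase(pre, t_stim) + np.pi) % (2 * np.pi) - np.pi
print(f"estimated phase shift ~ {dphi * post[1] / (2 * np.pi):+.2f} h at stimulus")
```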
Eke, Iris; Makinde, Adeola Y; Aryankalayil, Molykutty J; Ahmed, Mansoor M; Coleman, C Norman
2016-11-01
New technologies enabling the analysis of various molecules, including DNA, RNA, proteins and small metabolites, can aid in understanding the complex molecular processes in cancer cells. In particular, for the use of novel targeted therapeutics, elucidation of the mechanisms leading to cell death or survival is crucial to eliminate tumor resistance and optimize therapeutic efficacy. While some techniques, such as genomic analysis for identifying specific gene mutations or epigenetic testing of promoter methylation, are already in clinical use, other "omics-based" assays are still evolving. Here, we provide an overview of the current status of molecular profiling methods, including promising research strategies, as well as possible challenges, and their emerging role in radiation oncology. Published by Elsevier Ireland Ltd.
Lessons from single-cell transcriptome analysis of oxygen-sensing cells.
Zhou, Ting; Matsunami, Hiroaki
2018-05-01
The advent of single-cell RNA-sequencing (RNA-Seq) technology has enabled transcriptome profiling of individual cells. Comprehensive gene expression analysis at the single-cell level has proven to be effective in characterizing the most fundamental aspects of cellular function and identity. This unbiased approach is revolutionary for identifying key molecules in small and/or heterogeneous tissues such as oxygen-sensing cells. Here, we review the major methods of current single-cell RNA-Seq technology. We discuss how this technology has advanced the understanding of oxygen-sensing glomus cells in the carotid body and helped uncover novel oxygen-sensing cells and mechanisms in the mouse olfactory system. We conclude by providing our perspective on future single-cell RNA-Seq research directed at oxygen-sensing cells.
Measuring Speech Comprehensibility in Students with Down Syndrome
Woynaroski, Tiffany; Camarata, Stephen
2016-01-01
Purpose There is an ongoing need to develop assessments of spontaneous speech that focus on whether the child's utterances are comprehensible to listeners. This study sought to identify the attributes of a stable ratings-based measure of speech comprehensibility, which enabled examining the criterion-related validity of an orthography-based measure of the comprehensibility of conversational speech in students with Down syndrome. Method Participants were 10 elementary school students with Down syndrome and 4 unfamiliar adult raters. Averaged across-observer Likert ratings of speech comprehensibility were called a ratings-based measure of speech comprehensibility. The proportion of utterance attempts fully glossed constituted an orthography-based measure of speech comprehensibility. Results Averaging across 4 raters on four 5-min segments produced a reliable (G = .83) ratings-based measure of speech comprehensibility. The ratings-based measure was strongly (r > .80) correlated with the orthography-based measure for both the same and different conversational samples. Conclusion Reliable and valid measures of speech comprehensibility are achievable with the resources available to many researchers and some clinicians. PMID:27299989
The DNA Methylome of Human Peripheral Blood Mononuclear Cells
Ye, Mingzhi; Zheng, Hancheng; Yu, Jian; Wu, Honglong; Sun, Jihua; Zhang, Hongyu; Chen, Quan; Luo, Ruibang; Chen, Minfeng; He, Yinghua; Jin, Xin; Zhang, Qinghui; Yu, Chang; Zhou, Guangyu; Sun, Jinfeng; Huang, Yebo; Zheng, Huisong; Cao, Hongzhi; Zhou, Xiaoyu; Guo, Shicheng; Hu, Xueda; Li, Xin; Kristiansen, Karsten; Bolund, Lars; Xu, Jiujin; Wang, Wen; Yang, Huanming; Wang, Jian; Li, Ruiqiang; Beck, Stephan; Wang, Jun; Zhang, Xiuqing
2010-01-01
DNA methylation plays an important role in biological processes in human health and disease. Recent technological advances allow unbiased whole-genome DNA methylation (methylome) analysis to be carried out on human cells. Using whole-genome bisulfite sequencing at 24.7-fold coverage (12.3-fold per strand), we report a comprehensive (92.62%) methylome and analysis of the unique sequences in human peripheral blood mononuclear cells (PBMC) from the same Asian individual whose genome was deciphered in the YH project. PBMC constitute an important source for clinical blood tests world-wide. We found that 68.4% of CpG sites and <0.2% of non-CpG sites were methylated, demonstrating that non-CpG cytosine methylation is minor in human PBMC. Analysis of the PBMC methylome revealed a rich epigenomic landscape for 20 distinct genomic features, including regulatory, protein-coding, non-coding, RNA-coding, and repeat sequences. Integration of our methylome data with the YH genome sequence enabled a first comprehensive assessment of allele-specific methylation (ASM) between the two haploid methylomes of any individual and allowed the identification of 599 haploid differentially methylated regions (hDMRs) covering 287 genes. Of these, 76 genes had hDMRs within 2 kb of their transcriptional start sites of which >80% displayed allele-specific expression (ASE). These data demonstrate that ASM is a recurrent phenomenon and is highly correlated with ASE in human PBMCs. Together with recently reported similar studies, our study provides a comprehensive resource for future epigenomic research and confirms new sequencing technology as a paradigm for large-scale epigenomics studies. PMID:21085693
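Testing a single region for allele-specific methylation reduces to comparing methylated/unmethylated bisulfite read counts between the two haplotypes; a minimal sketch with invented counts (the study's own pipeline and thresholds are not reproduced here):

```python
from scipy.stats import fisher_exact

# Illustrative bisulfite read counts for one region, split by haplotype.
hap1_meth, hap1_unmeth = 46, 4
hap2_meth, hap2_unmeth = 7, 41

odds, p = fisher_exact([[hap1_meth, hap1_unmeth],
                        [hap2_meth, hap2_unmeth]])
lvl1 = hap1_meth / (hap1_meth + hap1_unmeth)
lvl2 = hap2_meth / (hap2_meth + hap2_unmeth)
print(f"haplotype methylation {lvl1:.2f} vs {lvl2:.2f}, p = {p:.2e}")
# A large, significant difference flags the region as a candidate hDMR.
```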
ERIC Educational Resources Information Center
American Youth Policy Forum, 2012
2012-01-01
If the U.S. is to increase the number of college graduates and boost our national competitiveness, we must redouble our efforts to ensure all students graduate from high school prepared for postsecondary learning and careers. This means creating comprehensive education systems that provide learning options that enable a range of pathways to…
Teaching French Vocabulary to English Speaking Students. A Comprehensive and Eclectic Approach.
ERIC Educational Resources Information Center
Howlett, Frederick G.
The greatest need of language teachers today is a workable approach to teaching vocabulary. This is essential if students are to be enabled to achieve communicative competence, that is, to make a transfer from the textbook to the real world of French, as reflected in the French media. An effective and comprehensive approach to teaching vocabulary…
ERIC Educational Resources Information Center
Tighe, Elizabeth L.; Schatschneider, Christopher
2016-01-01
The purpose of this study was to investigate the joint and unique contributions of morphological awareness and vocabulary knowledge at five reading comprehension levels in adult basic education (ABE) students. We introduce the statistical technique of multiple quantile regression, which enabled us to assess the predictive utility of morphological…
Yanashima, Ryoji; Kitagawa, Noriyuki; Matsubara, Yoshiya; Weatheritt, Robert; Oka, Kotaro; Kikuchi, Shinichi; Tomita, Masaru; Ishizaki, Shun
2009-01-01
The scale-free and small-world network models reflect the functional units of networks. However, when we investigated the network properties of a signaling pathway using these models, no significant differences were found between the original undirected graphs and the graphs in which inactive proteins were eliminated from the gene expression data. We analyzed signaling networks by focusing on those pathways that best reflected cellular function. Therefore, our analysis of pathways started from the ligands and progressed to transcription factors and cytoskeletal proteins. We employed the Python module to assess the target network. This involved comparing the original and restricted signaling cascades as a directed graph using microarray gene expression profiles of late-onset Alzheimer's disease. The most commonly used method of shortest-path analysis neglects to consider the influences of alternative pathways that can affect the activation of transcription factors or cytoskeletal proteins. We therefore included k-shortest paths and k-cycles in our network analysis using the Python modules, which allowed us to attain a reasonable computational time and identify k-shortest paths. This technique reflected results found in vivo and identified pathways not found when shortest-path or degree analysis was applied. Our module enabled us to comprehensively analyse the characteristics of biomolecular networks and also enabled analysis of the effects of diseases considering the feedback loop and feedforward loop control structures as an alternative path.
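The k-shortest-path and cycle enumeration described above is available off the shelf in the networkx Python module; a toy directed signaling graph (node names are hypothetical, not from the paper) makes the idea concrete:

```python
import itertools
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("ligand", "receptor"), ("receptor", "kinaseA"), ("receptor", "kinaseB"),
    ("kinaseA", "TF"), ("kinaseB", "adaptor"), ("adaptor", "TF"),
    ("TF", "kinaseA"),                       # feedback edge closing a cycle
])

# k shortest simple paths from ligand to transcription factor (k = 3).
for path in itertools.islice(nx.shortest_simple_paths(G, "ligand", "TF"), 3):
    print(" -> ".join(path))

# Cycles expose feedback/feedforward control structures.
print("cycles:", list(nx.simple_cycles(G)))
```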
The Chemistry of Shocked High-energy Materials: Connecting Atomistic Simulations to Experiments
NASA Astrophysics Data System (ADS)
Islam, Md Mahbubul; Strachan, Alejandro
2017-06-01
A comprehensive atomistic-level understanding of the physics and chemistry of shocked high-energy (HE) materials is crucial for designing safe and efficient explosives. Advances in ultrafast spectroscopy and laser shocks have enabled the study of shock-induced chemistry at extreme conditions occurring at picosecond timescales. Despite this progress, experiments are not without limitations and do not enable a direct characterization of chemical reactions. At the same time, large-scale reactive molecular dynamics (MD) simulations are capable of providing a description of the shock-induced chemistry, but the uncertainties resulting from the use of approximate descriptions of atomistic interactions remain poorly quantified. We use ReaxFF MD simulations to investigate the shock- and temperature-induced chemical decomposition mechanisms of polyvinyl nitrate, RDX, and nitromethane. The effect of various shock pressures on reaction initiation mechanisms is investigated for all three materials. We performed spectral analysis from atomistic velocities at different shock pressures to enable direct comparison with experiments. The simulations predict volume-increasing reactions at the shock-to-detonation transitions, and the shock vs. particle velocity data are in good agreement with available experimental data. The ReaxFF MD simulations validated against experiments enabled prediction of reaction kinetics of shocked materials, and interpretation of experimental spectroscopy data via assignment of the spectral peaks to various reaction pathways at extreme conditions.
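Spectral analysis from atomistic velocities is conventionally the Fourier transform of the velocity autocorrelation function (VACF), whose peaks line up with vibrational bands. A single-trajectory synthetic sketch (the embedded 20 fs period and time step are arbitrary, not values from the simulations):

```python
import numpy as np

rng = np.random.default_rng(4)
dt_fs = 0.25
n = 2**12
t = np.arange(n) * dt_fs
v = np.cos(2 * np.pi * t / 20.0) + 0.3 * rng.standard_normal(n)  # stand-in velocity

vacf = np.correlate(v, v, mode="full")[n - 1:]
vacf /= vacf[0]

spec = np.abs(np.fft.rfft(vacf * np.hanning(n)))
freq_thz = np.fft.rfftfreq(n, d=dt_fs * 1e-3)     # fs -> ps, so freq in THz
peak = 1 + spec[1:].argmax()                      # skip the DC bin
print(f"dominant band ~ {freq_thz[peak]:.1f} THz (true: 50.0 THz)")
```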
NASA Astrophysics Data System (ADS)
Imms, Ryan; Hu, Sijung; Azorin-Peris, Vicente; Trico, Michaël; Summers, Ron
2014-03-01
Non-contact imaging photoplethysmography (PPG) is a recent development in the field of physiological data acquisition, currently undergoing a large amount of research to characterize and define the range of its capabilities. Contact-based PPG techniques have been broadly used in clinical scenarios for a number of years to obtain direct information about the degree of oxygen saturation for patients. With the advent of imaging techniques, there is strong potential to enable access to additional information such as multi-dimensional blood perfusion and saturation mapping. The further development of effective opto-physiological monitoring techniques is dependent upon novel modelling techniques coupled with improved sensor design and effective signal processing methodologies. The biometric signal and imaging processing platform (bSIPP) provides a comprehensive set of features for extraction and analysis of recorded iPPG data, enabling direct comparison with other biomedical diagnostic tools such as ECG and EEG. Additionally, utilizing information about the nature of tissue structure has enabled the generation of an engineering model describing the behaviour of light during its travel through biological tissue. This enables the relative oxygen saturation and blood perfusion in different layers of the tissue to be estimated, which has the potential to be a useful diagnostic tool.
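The core signal path of an iPPG analysis, spatially averaging a skin region per frame and band-passing around plausible heart rates, can be sketched with synthetic frames (the frame rate, ROI size, and embedded 72 bpm pulse are all assumptions, not values from the paper):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 30.0                                    # camera frame rate, Hz
n_frames = 600
rng = np.random.default_rng(5)
t = np.arange(n_frames) / fs
roi = rng.random((n_frames, 40, 40))         # stand-in green-channel ROI
roi += 0.02 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]   # 72 bpm pulse

raw = roi.mean(axis=(1, 2))                  # one sample per frame
b, a = butter(3, [0.7 / (fs / 2), 4.0 / (fs / 2)], btype="band")
ppg = filtfilt(b, a, raw - raw.mean())       # keep 0.7-4 Hz (42-240 bpm)

freqs = np.fft.rfftfreq(n_frames, 1 / fs)
bpm = 60.0 * freqs[np.abs(np.fft.rfft(ppg)).argmax()]
print(f"estimated pulse rate ~ {bpm:.0f} bpm")
```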
Almasri, Nihad A; An, Mihee; Palisano, Robert J
2017-07-28
Understanding parent perceptions of family-centered care (FCC) is important to improve processes and outcomes of children's services. A systematic review and meta-analysis of research on the Measure of Processes of Care (MPOC-20) were performed to determine the extent to which parents of children with physical disabilities perceive they received FCC. A comprehensive literature search was conducted using four databases. A total of 129 studies were retrieved; 15 met the criteria for the synthesis. Meta-analysis involving 2,582 mothers and fathers of children with physical disabilities, mainly cerebral palsy, was conducted for the five scales of the MPOC-20. Aggregated mean ratings varied from 5.0 to 5.5 for Providing Specific Information about the Child; Coordinated and Comprehensive Care; and Respectful and Supportive Care (relational behaviors) and Enabling and Partnership (participatory behaviors), indicating that, on average, parents rated FCC as having been provided to "a fairly great extent." The aggregated mean rating was 4.1 for Providing General Information, indicating FCC was provided "to a moderate extent." Service providers are encouraged to focus on child and family needs for general information. Research is needed to better understand parent perspectives of service provider participatory behaviors, which are important for engaging families in intervention processes.
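Aggregating per-study mean scale ratings of this kind is typically done with inverse-variance weighting; a minimal sketch (the means and standard errors below are invented, not the review's data):

```python
import math

studies = [(5.2, 0.10), (5.6, 0.15), (5.1, 0.08), (5.4, 0.12)]  # (mean, SE)

w = [1.0 / se**2 for _, se in studies]          # inverse-variance weights
pooled = sum(wi * m for (m, _), wi in zip(studies, w)) / sum(w)
se_pooled = math.sqrt(1.0 / sum(w))
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"pooled mean {pooled:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```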
Analysis of the clonal repertoire of gene-corrected cells in gene therapy.
Paruzynski, Anna; Glimm, Hanno; Schmidt, Manfred; Kalle, Christof von
2012-01-01
Gene therapy-based clinical phase I/II studies using integrating retroviral vectors have successfully treated different monogenic inherited diseases. However, with the increased efficiency of this therapy, severe side effects occurred in various gene therapy trials. In all cases, integration of the vector close to or within a proto-oncogene contributed substantially to the development of the malignancies. Thus, the in-depth analysis of integration site patterns is of high importance to uncover potential clonal outgrowth and to assess the safety of gene transfer vectors and gene therapy protocols. Standard linear amplification-mediated PCR (LAM-PCR) and nonrestrictive LAM-PCR (nrLAM-PCR), in combination with high-throughput sequencing, are technologies that allow researchers to comprehensively analyze the clonal repertoire of gene-corrected cells and to assess the safety of the used vector system at an early stage on the molecular level. They enable clarifying the biological consequences of the vector system on the fate of the transduced cell. Furthermore, the downstream performance of real-time PCR allows a quantitative estimation of the clonality of individual cells and their clonal progeny. Here, we present a guideline that should allow researchers to perform comprehensive integration site analysis in preclinical and clinical studies. Copyright © 2012 Elsevier Inc. All rights reserved.
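Quantifying the clonal repertoire from integration-site sequencing often comes down to per-site read counts and a diversity index; a toy sketch with invented site names and counts (not data from any trial):

```python
import math

# Reads per unique integration site (illustrative sequencing output).
sites = {"chr2:112.4Mb": 5, "chr7:38.1Mb": 7, "chr11:0.2Mb": 180,
         "chr17:59.9Mb": 4, "chrX:48.5Mb": 6}

total = sum(sites.values())
shannon = -sum((c / total) * math.log(c / total) for c in sites.values())
top = max(sites, key=sites.get)
print(f"Shannon diversity {shannon:.2f}; "
      f"top clone {top} = {sites[top] / total:.0%} of reads")
# Falling diversity with one dominant site is the signature of clonal outgrowth.
```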
Gillespie, Joseph J.; Wattam, Alice R.; Cammer, Stephen A.; Gabbard, Joseph L.; Shukla, Maulik P.; Dalay, Oral; Driscoll, Timothy; Hix, Deborah; Mane, Shrinivasrao P.; Mao, Chunhong; Nordberg, Eric K.; Scott, Mark; Schulman, Julie R.; Snyder, Eric E.; Sullivan, Daniel E.; Wang, Chunxia; Warren, Andrew; Williams, Kelly P.; Xue, Tian; Seung Yoo, Hyun; Zhang, Chengdong; Zhang, Yan; Will, Rebecca; Kenyon, Ronald W.; Sobral, Bruno W.
2011-01-01
Funded by the National Institute of Allergy and Infectious Diseases, the Pathosystems Resource Integration Center (PATRIC) is a genomics-centric relational database and bioinformatics resource designed to assist scientists in infectious-disease research. Specifically, PATRIC provides scientists with (i) a comprehensive bacterial genomics database, (ii) a plethora of associated data relevant to genomic analysis, and (iii) an extensive suite of computational tools and platforms for bioinformatics analysis. While the primary aim of PATRIC is to advance the knowledge underlying the biology of human pathogens, all publicly available genome-scale data for bacteria are compiled and continually updated, thereby enabling comparative analyses to reveal the basis for differences between infectious, free-living, and commensal species. Herein we summarize the major features available at PATRIC, dividing the resources into two major categories: (i) organisms, genomes, and comparative genomics and (ii) recurrent integration of community-derived associated data. Additionally, we present two experimental designs typical of bacterial genomics research and report on the execution of both projects using only PATRIC data and tools. These applications encompass a broad range of the data and analysis tools available, illustrating practical uses of PATRIC for the biologist. Finally, a summary of PATRIC's outreach activities, collaborative endeavors, and future research directions is provided. PMID:21896772
Cognitive task analysis of network analysts and managers for network situational awareness
NASA Astrophysics Data System (ADS)
Erbacher, Robert F.; Frincke, Deborah A.; Wong, Pak Chung; Moody, Sarah; Fink, Glenn
2010-01-01
The goal of our project is to create a set of next-generation cyber situational-awareness capabilities with applications to other domains in the long term. The situational-awareness capabilities being developed focus on novel visualization techniques as well as data analysis techniques designed to improve the comprehensibility of the visualizations. The objective is to improve the decision-making process to enable decision makers to choose better actions. To this end, we put extensive effort into ensuring we had feedback from network analysts and managers and understanding what their needs truly are. This paper discusses the cognitive task analysis methodology we followed to acquire feedback from the analysts. This paper also provides the details we acquired from the analysts on their processes, goals, concerns, etc. A final result we describe is the generation of a task-flow diagram.
Calypso: a user-friendly web-server for mining and visualizing microbiome-environment interactions.
Zakrzewski, Martha; Proietti, Carla; Ellis, Jonathan J; Hasan, Shihab; Brion, Marie-Jo; Berger, Bernard; Krause, Lutz
2017-03-01
Calypso is an easy-to-use online software suite that allows non-expert users to mine, interpret and compare taxonomic information from metagenomic or 16S rDNA datasets. Calypso has a focus on multivariate statistical approaches that can identify complex environment-microbiome associations. The software enables quantitative visualizations, statistical testing, multivariate analysis, supervised learning, factor analysis, multivariable regression, network analysis and diversity estimates. Comprehensive help pages, tutorials and videos are provided via a wiki page. The web interface is accessible via http://cgenome.net/calypso/. The software is programmed in Java, PERL and R and the source code is available from Zenodo (https://zenodo.org/record/50931). The software is freely available for non-commercial users. l.krause@uq.edu.au. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
ERIC Educational Resources Information Center
Morrison, Sharon D.; Sudha, S.; Herrera, Samantha; Ruiz, Carolina; Thomas, Emma
2016-01-01
Objective: Comprehensive information on the facilitators of HIV testing in Latino women (Latinas) in the Southeastern USA is lacking. Efforts to rectify this should include Latina perspectives on the issue. This study aimed to (1) solicit Latina perspectives using qualitative methodology and (2) characterise enablers of HIV testing follow-through.…
Bobb, Morgan R.; Van Heukelom, Paul G.; Faine, Brett A.; Ahmed, Azeemuddin; Messerly, Jeffrey T.; Bell, Gregory; Harland, Karisa K.; Simon, Christian; Mohr, Nicholas M.
2016-01-01
Objective Telemedicine networks are beginning to provide an avenue for conducting emergency medicine research, but using telemedicine to recruit participants for clinical trials has not been validated. The goal of this consent study is to determine whether patient comprehension of telemedicine-enabled research informed consent is non-inferior to standard face-to-face research informed consent. Methods A prospective, open-label randomized controlled trial was performed in a 60,000-visit Midwestern academic Emergency Department (ED) to test whether telemedicine-enabled research informed consent provided non-inferior comprehension compared with standard consent. This study was conducted as part of a parent clinical trial evaluating the effectiveness of oral chlorhexidine gluconate 0.12% in preventing hospital-acquired pneumonia among adult ED patients with expected hospital admission. Prior to being recruited into the study, potential participants were randomized in a 1:1 allocation ratio to consent by telemedicine versus standard face-to-face consent. Telemedicine connectivity was provided using a commercially available interface (REACH platform, Vidyo Inc., Hackensack, NJ) to an emergency physician located in another part of the ED. Comprehension of research consent (primary outcome) was measured using the modified Quality of Informed Consent (QuIC) instrument, a validated tool for measuring research informed consent comprehension. Parent trial accrual rate and qualitative survey data were secondary outcomes. Results One hundred thirty-one patients were randomized (n = 64, telemedicine), and 101 QuIC surveys were completed. Comprehension of research informed consent using telemedicine was not inferior to face-to-face consent (QuIC scores 74.4 ± 8.1 vs. 74.4 ± 6.9 on a 100-point scale, p = 0.999). Subjective understanding of consent (p = 0.194) and parent trial study accrual rates (56% vs. 69%, p = 0.142) were similar. Conclusion Telemedicine is non-inferior to face-to-face consent for delivering research informed consent, with no detected differences in comprehension and patient-reported understanding. This consent study will inform the design of future telemedicine-enabled clinical trials. PMID:26990899
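The non-inferiority logic here amounts to checking that the lower confidence bound on the score difference clears a pre-specified margin. A simplified sketch using the reported QuIC means and SDs (the per-arm sample sizes and the margin are assumptions, not values from the study):

```python
import math

m_tele, sd_tele, n_tele = 74.4, 8.1, 50   # telemedicine arm (n assumed)
m_f2f,  sd_f2f,  n_f2f  = 74.4, 6.9, 51   # face-to-face arm (n assumed)
margin = -5.0                             # assumed non-inferiority margin

diff = m_tele - m_f2f
se = math.sqrt(sd_tele**2 / n_tele + sd_f2f**2 / n_f2f)
lower = diff - 1.645 * se                 # one-sided 95% lower bound
print(f"diff {diff:+.1f}, lower bound {lower:+.2f}, "
      f"non-inferior: {lower > margin}")
```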
MannDB: A microbial annotation database for protein characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, C; Lam, M; Smith, J
2006-05-19
MannDB was created to meet a need for rapid, comprehensive automated protein sequence analyses to support selection of proteins suitable as targets for driving the development of reagents for pathogen or protein toxin detection. Because a large number of open-source tools were needed, it was necessary to produce a software system to scale the computations for whole-proteome analysis. Thus, we built a fully automated system for executing software tools and for storage, integration, and display of automated protein sequence analysis and annotation data. MannDB is a relational database that organizes data resulting from fully automated, high-throughput protein-sequence analyses using open-source tools. Types of analyses provided include predictions of cleavage, chemical properties, classification, features, functional assignment, post-translational modifications, motifs, antigenicity, and secondary structure. Proteomes (lists of hypothetical and known proteins) are downloaded and parsed from Genbank and then inserted into MannDB, and annotations from SwissProt are downloaded when identifiers are found in the Genbank entry or when identical sequences are identified. Currently 36 open-source tools are run against MannDB protein sequences either on local systems or by means of batch submission to external servers. In addition, BLAST against protein entries in MvirDB, our database of microbial virulence factors, is performed. A web client browser enables viewing of computational results and downloaded annotations, and a query tool enables structured and free-text search capabilities. When available, links to external databases, including MvirDB, are provided. MannDB contains whole-proteome analyses for at least one representative organism from each category of biological threat organism listed by APHIS, CDC, HHS, NIAID, USDA, USFDA, and WHO. MannDB comprises a large number of genomes and comprehensive protein sequence analyses representing organisms listed as high-priority agents on the websites of several governmental organizations concerned with bio-terrorism. MannDB provides the user with a BLAST interface for comparison of native and non-native sequences and a query tool for conveniently selecting proteins of interest. In addition, the user has access to a web-based browser that compiles comprehensive and extensive reports.
A theoretical treatment of technical risk in modern propulsion system design
NASA Astrophysics Data System (ADS)
Roth, Bryce Alexander
2000-09-01
A prevalent trend in modern aerospace systems is increasing complexity and cost, which in turn drives increased risk. Consequently, there is a clear and present need for the development of formalized methods to analyze the impact of risk on the design of aerospace vehicles. The objective of this work is to develop such a method that enables analysis of risk via a consistent, comprehensive treatment of aerothermodynamic and mass properties aspects of vehicle design. The key elements enabling the creation of this methodology are recent developments in the analytical estimation of work potential based on the second law of thermodynamics. This dissertation develops the theoretical foundation of a vehicle analysis method based on work potential and validates it using the Northrop F-5E with GE J85-GE-21 engines as a case study. Although the method is broadly applicable, emphasis is given to aircraft propulsion applications. Three work potential figures of merit are applied using this method: exergy, available energy, and thrust work potential. It is shown that each possesses unique properties making them useful for specific vehicle analysis tasks, though the latter two are actually special cases of exergy. All three are demonstrated on the analysis of the J85-GE-21 propulsion system, resulting in a comprehensive description of propulsion system thermodynamic loss. This "loss management" method is used to analyze aerodynamic drag loss of the F-5E and is then used in conjunction with the propulsive loss model to analyze the usage of fuel work potential throughout the F-5E design mission. The results clearly show how and where work potential is used during flight and yield considerable insight as to where the greatest opportunity for design improvement is. Next, usage of work potential is translated into fuel weight so that the aerothermodynamic performance of the F-5E can be expressed entirely in terms of vehicle gross weight. This technique is then applied as a means to quantify the impact of engine cycle technologies on the F-5E airframe. Finally, loss management methods are used in conjunction with probabilistic analysis methods to quantify the impact of risk on F-5E aerothermodynamic performance.
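For a steady gas stream, the work potential in question is the flow exergy, ex = (h - h0) - T0*(s - s0), which for an ideal gas with constant cp reduces to a closed form. A small sketch of that textbook relation (the state values are illustrative, not taken from the F-5E study):

```python
import math

def flow_exergy(T, p, T0=288.15, p0=101325.0, cp=1004.5, R=287.05):
    """Specific flow exergy (J/kg) of an ideal-gas stream vs. the ambient dead state.

    ex = cp*(T - T0) - T0*(cp*ln(T/T0) - R*ln(p/p0)), neglecting kinetic
    and potential energy terms.
    """
    return cp * (T - T0) - T0 * (cp * math.log(T / T0) - R * math.log(p / p0))

# e.g., an engine-exit stream at 600 K and 2 bar (illustrative state)
print(f"specific flow exergy ~ {flow_exergy(600.0, 2.0e5) / 1e3:.1f} kJ/kg")
```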
Santamaria, Monica; Fosso, Bruno; Licciulli, Flavio; Balech, Bachir; Larini, Ilaria; Grillo, Giorgio; De Caro, Giorgio; Liuni, Sabino; Pesole, Graziano
2018-01-04
A holistic understanding of environmental communities is the new challenge of metagenomics. Accordingly, the amplicon-based or metabarcoding approach, largely applied to investigate bacterial microbiomes, is moving to the eukaryotic world too. Indeed, the analysis of metabarcoding data may provide a comprehensive assessment of both bacterial and eukaryotic composition in a variety of environments, including the human body. In this respect, whereas hypervariable regions of the 16S rRNA are the de facto standard barcode for bacteria, the Internal Transcribed Spacer 1 (ITS1) of the ribosomal RNA gene cluster has shown a high potential in discriminating eukaryotes at deep taxonomic levels. As metabarcoding data analysis relies on the availability of a well-curated barcode reference resource, a comprehensive collection of ITS1 sequences, supplied with robust taxonomies, is highly needed. To address this issue, we created ITSoneDB (available at http://itsonedb.cloud.ba.infn.it/), which in its current version hosts 985 240 ITS1 sequences spanning over 134 000 eukaryotic species. Each ITS1 is mapped on the NCBI reference taxonomy with its start and end positions precisely annotated. ITSoneDB has been developed in agreement with the FAIR guidelines, enabling users to query and download its content through a simple web interface and to access relevant metadata by cross-linking to the European Nucleotide Archive. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Baglai, Anna; Gargano, Andrea F G; Jordens, Jan; Mengerink, Ynze; Honing, Maarten; van der Wal, Sjoerd; Schoenmakers, Peter J
2017-12-29
Recent advancements in separation science have resulted in the commercialization of multidimensional separation systems that provide higher peak capacities and, hence, enable a more-detailed characterization of complex mixtures. In particular, two powerful analytical tools are increasingly used by analytical scientists, namely online comprehensive two-dimensional liquid chromatography (LC×LC, having a second-dimension separation in the liquid phase) and liquid chromatography-ion mobility spectrometry (LC-IMS, second-dimension separation in the gas phase). The goal of the current study was a general assessment of the liquid chromatography-trapped ion mobility-mass spectrometry (LC-TIMS-MS) and comprehensive two-dimensional liquid chromatography-mass spectrometry (LC×LC-MS) platforms for untargeted lipid mapping in human plasma. For the first time, trapped ion mobility spectrometry (TIMS) was employed for the separation of the major lipid classes, and ion-mobility-derived collision-cross-section values were determined for a number of lipid standards. The general effects of a number of influencing parameters have been inspected and possible directions for improvements are discussed. We aimed to provide a general indication and practical guidelines for the analyst to choose an efficient multidimensional separation platform according to the particular requirements of the application. Analysis time, orthogonality, peak capacity, and an indicative measure for the resolving power are discussed as the main characteristics of multidimensional separation systems. Copyright © 2017 Elsevier B.V. All rights reserved.
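Two of the figures of merit named above have simple operational forms: the theoretical 2D peak capacity is the product of the one-dimensional capacities, and orthogonality can be scored as the fraction of occupied bins in the normalized separation plane (a Gilar-style surface-coverage metric). A sketch with invented retention data and peak capacities:

```python
import numpy as np

n1, n2 = 60, 25                               # illustrative 1D peak capacities
rng = np.random.default_rng(6)
rt1, rt2 = rng.random(400), rng.random(400)   # normalized retention pairs

bins = 8
H, _, _ = np.histogram2d(rt1, rt2, bins=bins, range=[[0, 1], [0, 1]])
coverage = np.count_nonzero(H) / bins**2      # fraction of occupied bins

print(f"theoretical n2D = {n1 * n2}, coverage {coverage:.2f}, "
      f"effective ~ {n1 * n2 * coverage:.0f}")
```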
A Comprehensive Analysis of Alternative Splicing in Paleopolyploid Maize.
Mei, Wenbin; Liu, Sanzhen; Schnable, James C; Yeh, Cheng-Ting; Springer, Nathan M; Schnable, Patrick S; Barbazuk, William B
2017-01-01
Identifying and characterizing alternative splicing (AS) enables our understanding of the biological role of transcript isoform diversity. This study describes the use of publicly available RNA-Seq data to identify and characterize the global diversity of AS isoforms in maize using the inbred lines B73 and Mo17, and a related species, sorghum. Identification and characterization of AS within maize tissues revealed that genes expressed in seed exhibit the largest differential AS relative to other tissues examined. Additionally, differences in AS between the two genotypes B73 and Mo17 are greatest within genes expressed in seed. We demonstrate that changes in the level of alternatively spliced transcripts (intron retention and exon skipping) do not solely reflect differences in total transcript abundance, and we present evidence that intron retention may act to fine-tune gene expression across seed development stages. Furthermore, we have identified temperature-sensitive AS in maize and demonstrate that drought-induced changes in AS involve distinct sets of genes in reproductive and vegetative tissues. Examining our identified AS isoforms within B73 × Mo17 recombinant inbred lines (RILs) identified splicing QTL (sQTL). In our analysis, 43.3% of cis-sQTL-regulated junctions are identified as alternatively spliced junctions, while 10-Mb windows on each side of 48.2% of trans-sQTLs overlap with splicing-related genes. Using sorghum as an out-group enabled direct examination of loss or conservation of AS between homeologous genes representing the two subgenomes of maize. We identify several instances where AS isoforms that are conserved between one maize homeolog and its sorghum ortholog are absent from the second maize homeolog, suggesting that these AS isoforms may have been lost after the maize whole genome duplication event. This comprehensive analysis provides new insights into the complexity of AS in maize.
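A standard quantification for the exon-skipping events counted in such analyses is percent spliced-in (PSI) computed from junction reads, with the two inclusion junctions averaged. A sketch with invented counts (not data from the study):

```python
def psi(inclusion_reads, skipping_reads):
    """PSI for a cassette exon; inclusion is supported by two junctions."""
    inc = inclusion_reads / 2.0
    return inc / (inc + skipping_reads)

b73, mo17 = psi(84, 10), psi(30, 41)
print(f"PSI B73 {b73:.2f}, Mo17 {mo17:.2f}, dPSI {b73 - mo17:+.2f}")
# A large |dPSI| between genotypes marks a candidate differential AS event.
```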
Simplify and Accelerate Earth Science Data Preparation to Systemize Machine Learning
NASA Astrophysics Data System (ADS)
Kuo, K. S.; Rilee, M. L.; Oloso, A.
2017-12-01
Data preparation is the most laborious and time-consuming part of machine learning. The effort required is usually more than linearly proportional to the varieties of data used. From a system science viewpoint, useful machine learning in Earth Science likely involves diverse datasets. Thus, simplifying data preparation to ease the systemization of machine learning in Earth Science is of immense value. The technologies we have developed and applied to an array database, SciDB, are explicitly designed for the purpose, including the innovative SpatioTemporal Adaptive-Resolution Encoding (STARE), a remapping tool suite, and an efficient implementation of connected component labeling (CCL). STARE serves as a universal Earth data representation that homogenizes data varieties and facilitates spatiotemporal data placement as well as alignment, to maximize query performance on massively parallel, distributed computing resources for a major class of analysis. Moreover, it converts spatiotemporal set operations into fast and efficient integer interval operations, supporting in turn moving-object analysis. Integrative analysis requires more than overlapping spatiotemporal sets. For example, meaningful comparison of temperature fields obtained with different means and resolutions requires their transformation to the same grid. Therefore, remapping has been implemented to enable integrative analysis. Finally, Earth Science investigations are generally studies of phenomena, e.g. tropical cyclone, atmospheric river, and blizzard, through their associated events, like hurricanes Katrina and Sandy. Unfortunately, except for a few high-impact phenomena, comprehensive episodic records are lacking. Consequently, we have implemented an efficient CCL tracking algorithm, enabling event-based investigations within climate data records beyond mere event presence. In summary, we have implemented the core unifying capabilities on a Big Data technology to enable systematic machine learning in Earth Science.
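The connected component labeling step, turning thresholded grid cells into discrete events that can then be tracked, is nearly a one-liner with scipy; a sketch on a synthetic gridded field (the grid size and threshold are arbitrary):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(7)
field = rng.random((180, 360))           # stand-in for one gridded time step
mask = field > 0.97                      # threshold defining candidate events

labels, n = ndimage.label(mask)          # 4-connected components by default
sizes = ndimage.sum_labels(mask, labels, index=np.arange(1, n + 1))
print(f"{n} components, {(sizes >= 3).sum()} spanning 3+ cells")
# Tracking then links labels across time steps, e.g., by spatial overlap.
```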
LiPD and CSciBox: A Case Study in Why Data Standards are Important for Paleoscience
NASA Astrophysics Data System (ADS)
Weiss, I.; Bradley, E.; McKay, N.; Emile-Geay, J.; de Vesine, L. R.; Anderson, K. A.; White, J. W. C.; Marchitto, T. M., Jr.
2016-12-01
CSciBox [1] is an integrated software system that helps geoscientists build and evaluate age models. Its user chooses from a number of built-in analysis tools, composing them into an analysis workflow and applying it to paleoclimate proxy datasets. CSciBox employs modern database technology to store both the data and the analysis results in an easily accessible and searchable form, and offers the user access to the computational toolbox, the data, and the results via a graphical user interface and a sophisticated plotter. Standards are a staple of modern life, and underlie any form of automation. Without data standards, it is difficult, if not impossible, to construct effective computer tools for paleoscience analysis. The LiPD (Linked Paleo Data) framework [2] enables the storage of both data and metadata in systematic, meaningful, machine-readable ways. LiPD has been a primary enabler of CSciBox's goals of usability, interoperability, and reproducibility. Building LiPD capabilities into CSciBox's importer, for instance, eliminated the need to ask the user about file formats, variable names, relationships between columns in the input file, etc. Building LiPD capabilities into the exporter facilitated the storage of complete details about the input data (provenance, preprocessing steps, etc.) as well as full descriptions of any analyses that were performed using the CSciBox tool, along with citations to appropriate references. This comprehensive collection of data and metadata, which is all linked together in a semantically meaningful, machine-readable way, not only completely documents the analyses and makes them reproducible; it also enables interoperability with any other software system that employs the LiPD standard. [1] www.cs.colorado.edu/~lizb/cscience.html [2] McKay & Emile-Geay, Climate of the Past 12:1093 (2016)
Tameem, Hussain Z.; Sinha, Usha S.
2011-01-01
Osteoarthritis (OA) is a heterogeneous and multi-factorial disease characterized by the progressive loss of articular cartilage. Magnetic Resonance Imaging has been established as an accurate technique to assess cartilage damage through both cartilage morphology (volume and thickness) and cartilage water mobility (Spin-lattice relaxation, T2). The Osteoarthritis Initiative, OAI, is a large scale serial assessment of subjects at different stages of OA including those with pre-clinical symptoms. The electronic availability of the comprehensive data collected as part of the initiative provides an unprecedented opportunity to discover new relationships in complex diseases such as OA. However, imaging data, which provides the most accurate non-invasive assessment of OA, is not directly amenable for data mining. Changes in morphometry and relaxivity with OA disease are both complex and subtle, making manual methods extremely difficult. This chapter focuses on the image analysis techniques to automatically localize the differences in morphometry and relaxivity changes in different population sub-groups (normal and OA subjects segregated by age, gender, and race). The image analysis infrastructure will enable automatic extraction of cartilage features at the voxel level; the ultimate goal is to integrate this infrastructure to discover relationships between the image findings and other clinical features. PMID:21785520
NASA Astrophysics Data System (ADS)
Bhanumurthy, V.; Venugopala Rao, K.; Srinivasa Rao, S.; Ram Mohan Rao, K.; Chandra, P. Satya; Vidhyasagar, J.; Diwakar, P. G.; Dadhwal, V. K.
2014-11-01
Geographical Information Science (GIS) has now graduated from traditional desktop systems to Internet-based systems. Internet GIS is emerging as one of the most promising technologies for addressing Emergency Management. Web services with different privileges play an important role in disseminating emergency services to decision makers. A spatial database is one of the most important components in the successful implementation of Emergency Management. It contains spatial data in raster and vector form, linked with non-spatial information. Comprehensive data are required to handle emergency situations in different phases. These database elements comprise core data, hazard-specific data, corresponding attribute data, and live data coming from remote locations. Core data sets are the minimum data required to handle disasters, including base, thematic, and infrastructure layers. Disaster-specific information is required to handle a particular disaster situation such as flood, cyclone, forest fire, earthquake, landslide, or drought. In addition, Emergency Management requires many types of data with spatial and temporal attributes that should be made available to the key players in the right format at the right time. The vector database needs to be complemented with satellite imagery of the required resolution for visualisation and analysis in disaster management. The database must therefore be interconnected and comprehensive to meet the requirements of Emergency Management. This kind of integrated, comprehensive, and structured database with appropriate information is needed to deliver the right information at the right time to the right people. However, building a spatial database for Emergency Management is a challenging task because of key issues such as data availability, sharing policies, compatible geospatial standards, and data interoperability. Therefore, to facilitate using, sharing, and integrating the spatial data, there is a need to define standards for building emergency database systems. These include aspects such as i) data integration procedures, namely a standard coding scheme, schema, metadata format, and spatial format; ii) database organisation mechanisms covering data management, catalogues, and data models; and iii) database dissemination through a suitable environment, as a standard service for effective service dissemination. The National Database for Emergency Management (NDEM) is such a comprehensive database for addressing disasters in India at the national level. This paper explains standards for integrating and organising multi-scale and multi-source data to support effective emergency response using customized user interfaces for NDEM. It presents a standard procedure for building comprehensive emergency information systems that enable emergency-specific functions through geospatial technologies.
Pang, Xiao-Na; Li, Zhao-Jie; Chen, Jing-Yu; Gao, Li-Juan; Han, Bei-Zhong
2017-03-01
Standards and regulations related to spirit drinks have been established by different countries and international organizations to ensure the safety and quality of spirits. Here, we introduce the principles of food safety and quality standards for alcoholic beverages and then compare the key indicators used in the distinct standards of the Codex Alimentarius Commission, the European Union, the People's Republic of China, the United States, Canada, and Australia. We also discuss in detail the "maximum level" of the following main contaminants of spirit drinks: methanol, higher alcohols, ethyl carbamate, hydrocyanic acid, heavy metals, mycotoxins, phthalates, and aldehydes. Furthermore, the control measures used for potential hazards are introduced. Harmonization of the current requirements based on comprehensive scope analysis and the risk assessment approach will enhance both the trade and quality of distilled spirits. This review article provides valuable information that will enable producers, traders, governments, and researchers to increase their knowledge of spirit drink safety requirements, control measures, and research trends.
Simpson, Mary Jane; Doughty, Benjamin; Das, Sanjib; Xiao, Kai; Ma, Ying-Zhong
2017-07-20
A comprehensive understanding of electronic excited-state phenomena underlying the impressive performance of solution-processed hybrid halide perovskite solar cells requires access to both spatially resolved electronic processes and corresponding sample morphological characteristics. Here, we demonstrate an all-optical multimodal imaging approach that enables us to obtain both electronic excited-state and morphological information on a single optical microscope platform with simultaneous high temporal and spatial resolution. Specifically, images were acquired for the same region of interest in thin films of chloride-containing mixed lead halide perovskites (CH3NH3PbI3-xClx) using femtosecond transient absorption, time-integrated photoluminescence, confocal reflectance, and transmission microscopies. Comprehensive image analysis revealed the presence of surface- and bulk-dominated contributions to the various images, which describe either spatially dependent electronic excited-state properties or morphological variations across the probed region of the thin films. These results show that photoluminescence (PL) effectively probes species near or at the film surface.
Network Intrusion Detection and Visualization using Aggregations in a Cyber Security Data Warehouse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Czejdo, Bogdan; Ferragut, Erik M; Goodall, John R
2012-01-01
The challenge of achieving situational understanding is a limiting factor in effective, timely, and adaptive cyber-security analysis. Anomaly detection fills a critical role in network assessment and trend analysis, both of which underlie the establishment of comprehensive situational understanding. To that end, we propose a cyber security data warehouse implemented as a hierarchical graph of aggregations that captures anomalies at multiple scales. Each node of our proposed graph is a summarization table of cyber event aggregations, and the edges are aggregation operators. The cyber security data warehouse enables domain experts to quickly traverse a multi-scale aggregation space systematically. We describe the architecture of a test bed system and a summary of results on the IEEE VAST 2012 Cyber Forensics data.
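The following pandas sketch illustrates the multi-scale aggregation idea on synthetic events; the field names and scales are hypothetical, not the paper's warehouse schema.

```python
# Illustrative multi-scale aggregation in the spirit of the proposed
# warehouse: summarize raw cyber events at one scale, then aggregate the
# summaries again at a coarser scale. Synthetic events; field names and
# the anomaly rule are invented examples, not the paper's schema.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
events = pd.DataFrame({
    "ts": pd.date_range("2012-04-05", periods=10_000, freq="s"),
    "src_ip": rng.integers(0, 50, 10_000),
    "alerts": rng.poisson(0.05, 10_000),
})

# Node 1: per-IP, per-minute summarization table
per_min = (events.set_index("ts")
                 .groupby("src_ip")
                 .resample("1min")["alerts"].sum()
                 .reset_index())

# Edge (aggregation operator) to Node 2: per-IP, per-hour summary
per_hour = (per_min.set_index("ts")
                   .groupby("src_ip")
                   .resample("1h")["alerts"].sum())

# A crude anomaly flag at the coarser scale
anomalous = per_hour[per_hour > per_hour.mean() + 3 * per_hour.std()]
print(anomalous.head())
```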
A Comprehensive Toolset for General-Purpose Private Computing and Outsourcing
2016-12-08
project and scientific advances made towards each of the research thrusts throughout the project duration. 1 Project Objectives Cloud computing enables...possibilities that the cloud enables is computation outsourcing, when the client can utilize any necessary computing resources for its computational task...Security considerations, however, stand in the way of harnessing the full benefits of cloud computing and prevent clients from
Real time gamma-ray signature identifier
Rowland, Mark [Alamo, CA; Gosnell, Tom B [Moraga, CA; Ham, Cheryl [Livermore, CA; Perkins, Dwight [Livermore, CA; Wong, James [Dublin, CA
2012-05-15
A real time gamma-ray signature/source identification method and system using principal components analysis (PCA) for transforming and substantially reducing one or more comprehensive spectral libraries of nuclear materials types and configurations into a corresponding concise representation/signature(s) representing and indexing each individual predetermined spectrum in principal component (PC) space, wherein an unknown gamma-ray signature may be compared against the representative signature to find a match or at least characterize the unknown signature from among all the entries in the library with a single regression or simple projection into the PC space, so as to substantially reduce processing time and computing resources and enable real-time characterization and/or identification.
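A minimal sklearn-based sketch of the library-matching idea described in the patent abstract, on synthetic spectra; it illustrates the single-projection lookup, not the patented implementation.

```python
# Sketch of PCA-based spectral matching in the spirit of the patent:
# project a library of reference gamma-ray spectra into principal
# component (PC) space once, then identify an unknown spectrum by a
# single projection and nearest-neighbor lookup. Synthetic data only.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
n_lib, n_channels = 200, 1024
library = rng.poisson(lam=50.0, size=(n_lib, n_channels)).astype(float)

pca = PCA(n_components=20)
lib_pc = pca.fit_transform(library)        # offline: index library in PC space

unknown = library[42] + rng.normal(0, 5, n_channels)  # noisy copy of entry 42
u_pc = pca.transform(unknown.reshape(1, -1))          # single fast projection

dists = np.linalg.norm(lib_pc - u_pc, axis=1)         # simple nearest neighbor
print("best match: library entry", int(np.argmin(dists)))
```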
The iPlant Collaborative: Cyberinfrastructure for Enabling Data to Discovery for the Life Sciences.
Merchant, Nirav; Lyons, Eric; Goff, Stephen; Vaughn, Matthew; Ware, Doreen; Micklos, David; Antin, Parker
2016-01-01
The iPlant Collaborative provides life science research communities access to comprehensive, scalable, and cohesive computational infrastructure for data management; identity management; collaboration tools; and cloud, high-performance, high-throughput computing. iPlant provides training, learning material, and best practice resources to help all researchers make the best use of their data, expand their computational skill set, and effectively manage their data and computation when working as distributed teams. iPlant's platform permits researchers to easily deposit and share their data and deploy new computational tools and analysis workflows, allowing the broader community to easily use and reuse those data and computational analyses.
Fu, Guifang; Wan, Nicholas J A; Baker, Joseph M; Montgomery, James W; Evans, Julia L; Gillam, Ronald B
2016-01-01
Functional near infrared spectroscopy (fNIRS) is a neuroimaging technology that enables investigators to indirectly monitor brain activity in vivo through relative changes in the concentration of oxygenated and deoxygenated hemoglobin. One of the key features of fNIRS is its superior temporal resolution, with dense measurements over very short periods of time (100 ms increments). Unfortunately, most statistical analysis approaches in the existing literature have not fully utilized the high temporal resolution of fNIRS. For example, many analysis procedures are based on linearity assumptions that only extract partial information, thereby neglecting the overall dynamic trends in fNIRS trajectories. The main goal of this article is to assess the ability of a functional data analysis (FDA) approach to detect significant differences in hemodynamic responses recorded by fNIRS. Children with and without specific language impairment (SLI) wore two 3 × 5 fNIRS caps situated over the bilateral parasylvian areas as they completed a language comprehension task. FDA was used to decompose the high-dimensional hemodynamic curves into the mean function and a few eigenfunctions representing the overall trend and variation structures over time. Unlike the widely used general linear model (GLM), this approach assumes no parametric structure and lets the data speak for themselves. This analysis identified significant differences between the case and control groups in the oxygenated hemodynamic mean trends in the bilateral inferior frontal and left inferior posterior parietal brain regions. We also detected significant group differences in the deoxygenated hemodynamic mean trends in the right inferior posterior parietal cortex and left temporal parietal junction. These findings, using dramatically different approaches, experimental designs, data sets, and foci, were consistent with several other reports, confirming group differences in the importance of these two areas for syntax comprehension. The proposed FDA was consistent with the temporal characteristics of fNIRS, thus providing an alternative methodology for fNIRS analyses.
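A plain numpy sketch of the decomposition step (mean function plus leading eigenfunctions via SVD), on synthetic curves; it illustrates the FDA idea, not the study's actual pipeline.

```python
# Minimal functional-data-style decomposition of hemodynamic curves:
# subtract the pointwise mean function, then use SVD to obtain leading
# eigenfunctions capturing the dominant variation over time. Synthetic
# curves and dimensions; illustrative of the idea, not the paper's code.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 20, 200)                    # 100 ms sampling over 20 s
curves = (np.sin(t / 3.0)[None, :]
          + 0.3 * rng.standard_normal((60, t.size)))   # 60 subject curves

mean_fn = curves.mean(axis=0)                  # overall trend (mean function)
centered = curves - mean_fn
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

var_explained = s**2 / np.sum(s**2)
eigenfunctions = Vt[:3]                        # top 3 modes of variation
scores = centered @ eigenfunctions.T           # per-curve FPCA scores
print("variance explained by top 3 modes:", var_explained[:3].round(3))
```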
Unal, Emre; Idilman, Ilkay Sedakat; Karçaaltıncaba, Muşturay
2017-02-01
New advances in liver magnetic resonance imaging (MRI) may enable diagnosis of pathologies unseen by conventional techniques. Normal T1 (550-620 ms at 1.5 T and 700-850 ms at 3 T), T2, T2* (>20 ms), T1rho (40-50 ms) mapping, proton density fat fraction (PDFF) (≤5%), and stiffness (2-3 kPa) values can enable differentiation of a normal liver from chronic liver and diffuse diseases. Gd-EOB-DTPA can enable assessment of liver function by using the postcontrast hepatobiliary phase or the T1 reduction rate (normally above 60%). T1 mapping can be important for the assessment of fibrosis, amyloidosis, and copper overload. T1rho mapping is promising for the assessment of liver collagen deposition. PDFF can allow objective treatment assessment in nonalcoholic fatty liver disease (NAFLD) and nonalcoholic steatohepatitis (NASH) patients. T2 and T2* are used for iron overload determination. MR fingerprinting may enable single-slice acquisition, easy implementation of multiparametric MRI, and follow-up of patients. Areas covered: T1, T2, T2*, PDFF and stiffness, diffusion weighted imaging, intravoxel incoherent motion imaging (ADC, D, D* and f values), and function analysis are reviewed. Expert commentary: Multiparametric MRI can enable biopsy-free diagnosis and more objective staging of diffuse liver disease, cirrhosis, and predisposing diseases. A comprehensive approach is needed to understand and overcome the effects of iron, fat, fibrosis, edema, inflammation, and copper on MR relaxometry values in diffuse liver disease.
Next-generation phenomics for the Tree of Life.
Burleigh, J Gordon; Alphonse, Kenzley; Alverson, Andrew J; Bik, Holly M; Blank, Carrine; Cirranello, Andrea L; Cui, Hong; Daly, Marymegan; Dietterich, Thomas G; Gasparich, Gail; Irvine, Jed; Julius, Matthew; Kaufman, Seth; Law, Edith; Liu, Jing; Moore, Lisa; O'Leary, Maureen A; Passarotti, Maria; Ranade, Sonali; Simmons, Nancy B; Stevenson, Dennis W; Thacker, Robert W; Theriot, Edward C; Todorovic, Sinisa; Velazco, Paúl M; Walls, Ramona L; Wolfe, Joanna M; Yu, Mengjie
2013-06-26
The phenotype represents a critical interface between the genome and the environment in which organisms live and evolve. Phenotypic characters also are a rich source of biodiversity data for tree building, and they enable scientists to reconstruct the evolutionary history of organisms, including most fossil taxa, for which genetic data are unavailable. Therefore, phenotypic data are necessary for building a comprehensive Tree of Life. In contrast to molecular sequencing, which has become faster and cheaper through recent technological advances, phenotypic data collection often remains prohibitively slow and expensive. The next-generation phenomics project is a collaborative, multidisciplinary effort to leverage advances in image analysis, crowdsourcing, and natural language processing to develop and implement novel approaches for discovering and scoring the phenome, the collection of phenotypic characters for a species. This research represents a new approach to data collection that has the potential to transform phylogenetics research and to enable rapid advances in constructing the Tree of Life. Our goal is to assemble large phenomic datasets built using new methods and to provide the public and scientific community with tools for phenomic data assembly that will enable rapid and automated study of phenotypes across the Tree of Life.
Support for comprehensive reuse
NASA Technical Reports Server (NTRS)
Basili, V. R.; Rombach, H. D.
1991-01-01
Reuse of products, processes, and other knowledge will be the key to enabling the software industry to achieve the dramatic improvement in productivity and quality required to satisfy anticipated growing demands. Although experience shows that certain kinds of reuse can be successful, general success has been elusive. A software life-cycle technology which allows comprehensive reuse of all kinds of software-related experience could provide the means to achieving the desired order-of-magnitude improvements. A comprehensive framework of models, model-based characterization schemes, and support mechanisms for better understanding, evaluating, planning, and supporting all aspects of reuse is introduced.
Integrated web visualizations for protein-protein interaction databases.
Jeanquartier, Fleur; Jean-Quartier, Claire; Holzinger, Andreas
2015-06-16
Understanding living systems is crucial for curing diseases. To achieve this task we have to understand biological networks based on protein-protein interactions. Bioinformatics has produced a great number of databases and tools that support analysts in exploring protein-protein interactions on an integrated level for knowledge discovery. They provide predictions and correlations, indicate possibilities for future experimental research, and fill the gaps to complete the picture of biochemical processes. There are numerous, huge databases of protein-protein interactions used to gain insights into answering some of the many questions of systems biology. Many computational resources integrate interaction data with additional information on molecular background. However, the vast number of diverse Bioinformatics resources poses an obstacle to the goal of understanding. We present a survey of databases that enable the visual analysis of protein networks. We selected M=10 out of N=53 resources supporting visualization and tested them against the following set of criteria: interoperability, data integration, quantity of possible interactions, data visualization quality, and data coverage. The study reveals differences in usability, visualization features, and quality, as well as in the quantity of interactions. StringDB is the recommended first choice. CPDB presents a comprehensive dataset and IntAct lets the user change the network layout. A comprehensive comparison table is available via web. The supplementary table can be accessed at http://tinyurl.com/PPI-DB-Comparison-2015. Only some web resources featuring graph visualization can be successfully applied to interactive visual analysis of protein-protein interactions. Study results underline the necessity for further enhancements of visualization integration in biochemical analysis tools. Identified challenges are data comprehensiveness, confidence, interactive features, and visualization maturity.
Review of Software Tools for Design and Analysis of Large scale MRM Proteomic Datasets
Colangelo, Christopher M.; Chung, Lisa; Bruce, Can; Cheung, Kei-Hoi
2013-01-01
Selected or multiple reaction monitoring (SRM/MRM) is a liquid-chromatography (LC)/tandem-mass spectrometry (MS/MS) method that enables the quantitation of specific proteins in a sample by analyzing precursor ions and the fragment ions of their selected tryptic peptides. Instrumentation software has advanced to the point that thousands of transitions (pairs of primary and secondary m/z values) can be measured in a triple quadrupole instrument coupled to an LC, by a well-designed scheduling and selection of m/z windows. The design of a good MRM assay relies on the availability of peptide spectra from previous discovery-phase LC-MS/MS studies. The tedious aspect of manually developing and processing MRM assays involving thousands of transitions has spurred the development of software tools to automate this process. Software packages have been developed for project management, assay development, assay validation, data export, peak integration, quality assessment, and biostatistical analysis. No single tool provides a complete end-to-end solution; this article therefore reviews the current state and discusses future directions of these software tools, in order to enable researchers to combine them into a comprehensive targeted proteomics workflow. PMID:23702368
Guidelines for the functional annotation of microRNAs using the Gene Ontology
D'Eustachio, Peter; Smith, Jennifer R.; Zampetaki, Anna
2016-01-01
MicroRNA regulation of developmental and cellular processes is a relatively new field of study, and the available research data have not been organized to enable their inclusion in pathway and network analysis tools. The association of gene products with terms from the Gene Ontology is an effective method to analyze functional data, but until recently there has been no substantial effort dedicated to applying Gene Ontology terms to microRNAs. Consequently, when performing functional analysis of microRNA data sets, researchers have had to rely instead on the functional annotations associated with the genes encoding microRNA targets. In consultation with experts in the field of microRNA research, we have created comprehensive recommendations for the Gene Ontology curation of microRNAs. This curation manual will enable provision of a high-quality, reliable set of functional annotations for the advancement of microRNA research. Here we describe the key aspects of the work, including development of the Gene Ontology to represent this data, standards for describing the data, and guidelines to support curators making these annotations. The full microRNA curation guidelines are available on the GO Consortium wiki (http://wiki.geneontology.org/index.php/MicroRNA_GO_annotation_manual). PMID:26917558
Comparative Genomics and Host Resistance against Infectious Diseases
Qureshi, Salman T.; Skamene, Emil
1999-01-01
The large size and complexity of the human genome have limited the identification and functional characterization of components of the innate immune system that play a critical role in front-line defense against invading microorganisms. However, advances in genome analysis (including the development of comprehensive sets of informative genetic markers, improved physical mapping methods, and novel techniques for transcript identification) have reduced the obstacles to discovery of novel host resistance genes. Study of the genomic organization and content of widely divergent vertebrate species has shown a remarkable degree of evolutionary conservation and enables meaningful cross-species comparison and analysis of newly discovered genes. Application of comparative genomics to host resistance will rapidly expand our understanding of human immune defense by facilitating the translation of knowledge acquired through the study of model organisms. We review the rationale and resources for comparative genomic analysis and describe three examples of host resistance genes successfully identified by this approach. PMID:10081670
Methods, Tools and Current Perspectives in Proteogenomics
Ruggles, Kelly V.; Krug, Karsten; Wang, Xiaojing; Clauser, Karl R.; Wang, Jing; Payne, Samuel H.; Fenyö, David; Zhang, Bing; Mani, D. R.
2017-01-01
With combined technological advancements in high-throughput next-generation sequencing and deep mass spectrometry-based proteomics, proteogenomics, i.e. the integrative analysis of proteomic and genomic data, has emerged as a new research field. Early efforts in the field were focused on improving protein identification using sample-specific genomic and transcriptomic sequencing data. More recently, integrative analysis of quantitative measurements from genomic and proteomic studies have identified novel insights into gene expression regulation, cell signaling, and disease. Many methods and tools have been developed or adapted to enable an array of integrative proteogenomic approaches and in this article, we systematically classify published methods and tools into four major categories, (1) Sequence-centric proteogenomics; (2) Analysis of proteogenomic relationships; (3) Integrative modeling of proteogenomic data; and (4) Data sharing and visualization. We provide a comprehensive review of methods and available tools in each category and highlight their typical applications. PMID:28456751
TASI: A software tool for spatial-temporal quantification of tumor spheroid dynamics.
Hou, Yue; Konen, Jessica; Brat, Daniel J; Marcus, Adam I; Cooper, Lee A D
2018-05-08
Spheroid cultures derived from explanted cancer specimens are an increasingly utilized resource for studying complex biological processes like tumor cell invasion and metastasis, representing an important bridge between the simplicity and practicality of 2-dimensional monolayer cultures and the complexity and realism of in vivo animal models. Temporal imaging of spheroids can capture the dynamics of cell behaviors and microenvironments, and when combined with quantitative image analysis methods, enables deep interrogation of biological mechanisms. This paper presents a comprehensive open-source software framework for Temporal Analysis of Spheroid Imaging (TASI) that allows investigators to objectively characterize spheroid growth and invasion dynamics. TASI performs spatiotemporal segmentation of spheroid cultures, extraction of features describing spheroid morpho-phenotypes, mathematical modeling of spheroid dynamics, and statistical comparisons of experimental conditions. We demonstrate the utility of this tool in an analysis of non-small cell lung cancer spheroids that exhibit variability in metastatic and proliferative behaviors.
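The following scikit-image sketch illustrates the per-frame segmentation and morphology extraction that such a pipeline performs, on a synthetic movie; it is in the spirit of TASI, not the TASI codebase itself.

```python
# Illustrative spatiotemporal spheroid quantification: segment each frame,
# then track area and shape over time. Synthetic frames standing in for
# microscopy images; a toy analogue of TASI's workflow, not its code.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

rng = np.random.default_rng(5)
frames = []
for radius in np.linspace(10, 30, 8):          # spheroid "grows" over time
    yy, xx = np.mgrid[:128, :128]
    img = ((yy - 64) ** 2 + (xx - 64) ** 2 < radius ** 2).astype(float)
    frames.append(img + 0.1 * rng.standard_normal(img.shape))

for i, frame in enumerate(frames):
    mask = frame > threshold_otsu(frame)       # per-frame segmentation
    regions = regionprops(label(mask))
    largest = max(regions, key=lambda r: r.area)   # assume one spheroid/frame
    print(f"frame {i}: area={largest.area}, "
          f"eccentricity={largest.eccentricity:.2f}")
```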
Bhavnani, Suresh K.; Chen, Tianlong; Ayyaswamy, Archana; Visweswaran, Shyam; Bellala, Gowtham; Divekar, Rohit; Bassler, Kevin E.
2017-01-01
A primary goal of precision medicine is to identify patient subgroups based on their characteristics (e.g., comorbidities or genes) with the goal of designing more targeted interventions. While network visualization methods such as Fruchterman-Reingold have been used to successfully identify such patient subgroups in small to medium sized data sets, they often fail to reveal comprehensible visual patterns in large and dense networks despite having significant clustering. We therefore developed an algorithm called ExplodeLayout, which exploits the existence of significant clusters in bipartite networks to automatically “explode” a traditional network layout with the goal of separating overlapping clusters, while at the same time preserving key network topological properties that are critical for the comprehension of patient subgroups. We demonstrate the utility of ExplodeLayout by visualizing a large dataset extracted from Medicare consisting of readmitted hip-fracture patients and their comorbidities, demonstrate its statistically significant improvement over a traditional layout algorithm, and discuss how the resulting network visualization enabled clinicians to infer mechanisms precipitating hospital readmission in specific patient subgroups. PMID:28815099
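A minimal networkx/numpy sketch of the "explode" idea as described in the abstract: translate each cluster radially outward from the layout's center so overlapping clusters separate while within-cluster geometry is preserved. The graph, cluster assignments, and explosion factor are invented; this is not the authors' code.

```python
# Minimal sketch of the "explode" idea: compute a traditional layout,
# then shift each cluster's nodes radially outward from the global
# center, keeping within-cluster geometry intact. Illustrative only.
import networkx as nx
import numpy as np

G = nx.connected_caveman_graph(4, 8)            # 4 dense clusters of 8 nodes
pos = nx.spring_layout(G, seed=6)               # traditional layout
clusters = {n: n // 8 for n in G}               # known cluster membership

center = np.array([pos[n] for n in G]).mean(axis=0)
factor = 2.0                                    # explosion strength (tunable)

exploded = {}
for c in set(clusters.values()):
    members = [n for n in G if clusters[n] == c]
    centroid = np.mean([pos[n] for n in members], axis=0)
    shift = factor * (centroid - center)        # push whole cluster outward
    for n in members:
        exploded[n] = pos[n] + shift

# exploded can now be passed to nx.draw(G, pos=exploded) for plotting
```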
Upton, Rosie; Bell, Leonard; Guy, Colin; Caldwell, Paul; Estdale, Sian; Barran, Perdita E; Firth, David
2016-10-18
In the development of therapeutic antibodies and biosimilars, an appropriate biopharmaceutical CMC control strategy that connects critical quality attributes with mechanism of action should enable product assessment at an early stage of development in order to mitigate risk. Here we demonstrate a new analytical workflow using trastuzumab which comprises "middle-up" analysis using a combination of IdeS and the endoglycosidases EndoS and EndoS2 to comprehensively map the glycan content. Enzymatic cleavage between the two N-acetyl glucosamine residues of the chitobiose core of N-glycans significantly simplifies the oligosaccharide component enabling facile distinction of GlcNAc from GlcNAc with core fucose. This approach facilitates quantitative determination of total Fc-glycan core-afucosylation, which was in turn correlated with receptor binding affinity by surface plasmon resonance and in vitro ADCC potency with a cell based bioassay. The strategy also quantifies Fc-glycan occupancy and the relative contribution from high mannose glycans.
Exploring a Multiphysics Resolution Approach for Additive Manufacturing
NASA Astrophysics Data System (ADS)
Estupinan Donoso, Alvaro Antonio; Peters, Bernhard
2018-06-01
Metal additive manufacturing (AM) is a fast-evolving technology aiming to efficiently produce complex parts while saving resources. Worldwide, active research is being performed to solve the existing challenges of this growing technique. Constant computational advances have enabled multiscale and multiphysics numerical tools that complement the traditional physical experimentation. In this contribution, an advanced discrete-continuous concept is proposed to address the physical phenomena involved during laser powder bed fusion. The concept treats powder as discrete by the extended discrete element method, which predicts the thermodynamic state and phase change for each particle. The fluid surrounding is solved with multiphase computational fluid dynamics techniques to determine momentum, heat, gas and liquid transfer. Thus, results track the positions and thermochemical history of individual particles in conjunction with the prevailing fluid phases' temperature and composition. It is believed that this methodology can be employed to complement experimental research by analysis of the comprehensive results, which can be extracted from it to enable AM processes optimization for parts qualification.
Generating community-built tools for data sharing and analysis in environmental networks
Read, Jordan S.; Gries, Corinna; Read, Emily K.; Klug, Jennifer; Hanson, Paul C.; Hipsey, Matthew R.; Jennings, Eleanor; O'Reilley, Catherine; Winslow, Luke A.; Pierson, Don; McBride, Christopher G.; Hamilton, David
2016-01-01
Rapid data growth in many environmental sectors has necessitated tools to manage and analyze these data. The development of tools often lags behind the proliferation of data, however, which may slow exploratory opportunities and scientific progress. The Global Lake Ecological Observatory Network (GLEON) collaborative model supports an efficient and comprehensive data–analysis–insight life cycle, including implementations of data quality control checks, statistical calculations/derivations, models, and data visualizations. These tools are community-built and openly shared. We discuss the network structure that enables tool development and a culture of sharing, leading to optimized output from limited resources. Specifically, data sharing and a flat collaborative structure encourage the development of tools that enable scientific insights from these data. Here we provide a cross-section of scientific advances derived from global-scale analyses in GLEON. We document enhancements to science capabilities made possible by the development of analytical tools and highlight opportunities to expand this framework to benefit other environmental networks.
Athey, Brian D; Braxenthaler, Michael; Haas, Magali; Guo, Yike
2013-01-01
tranSMART is an emerging global open source public private partnership community developing a comprehensive informatics-based analysis and data-sharing cloud platform for clinical and translational research. The tranSMART consortium includes pharmaceutical and other companies, not-for-profits, academic entities, patient advocacy groups, and government stakeholders. The tranSMART value proposition relies on the concept that the global community of users, developers, and stakeholders are the best source of innovation for applications and for useful data. Continued development and use of the tranSMART platform will create a means to enable "pre-competitive" data sharing broadly, saving money and potentially accelerating the translation of research into cures. Significant transformative effects of tranSMART include 1) allowing its entire user community to benefit from experts globally, 2) capturing the best of innovation in analytic tools, 3) a growing 'big data' resource, 4) convergent standards, and 5) new informatics-enabled translational science in the pharma, academic, and not-for-profit sectors.
Radiative transport produced by oblique illumination of turbid media with collimated beams
NASA Astrophysics Data System (ADS)
Gardner, Adam R.; Kim, Arnold D.; Venugopalan, Vasan
2013-06-01
We examine the general problem of light transport initiated by oblique illumination of a turbid medium with a collimated beam. This situation has direct relevance to the analysis of cloudy atmospheres, terrestrial surfaces, soft condensed matter, and biological tissues. We introduce a solution approach to the equation of radiative transfer that governs this problem, and develop a comprehensive spherical harmonics expansion method utilizing Fourier decomposition (SHEFN). The SHEFN approach enables the solution of problems lacking azimuthal symmetry and provides both the spatial and directional dependence of the radiance. We also introduce the method of sequential-order smoothing that enables the calculation of accurate solutions from the results of two sequential low-order approximations. We apply the SHEFN approach to determine the spatial and angular dependence of both internal and boundary radiances from strongly and weakly scattering turbid media. These solutions are validated using more costly Monte Carlo simulations and reveal important insights regarding the evolution of the radiant field generated by oblique collimated beams spanning ballistic and diffusely scattering regimes.
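For readers unfamiliar with the approach, the standard device underlying such methods, stated here in a generic textbook form rather than as the authors' exact SHEFN formulation, is to expand the radiance in an azimuthal Fourier series whose modes decouple, each mode then being expanded in associated Legendre (spherical harmonic) functions:

```latex
% Generic azimuthal Fourier / spherical-harmonics expansion of the
% radiance, as commonly used in radiative transfer solvers; the
% notation is illustrative, not necessarily that of the SHEFN paper.
L(\tau,\mu,\phi) = \sum_{m=0}^{M} L^{m}(\tau,\mu)\,\cos m(\phi_{0}-\phi),
\qquad
L^{m}(\tau,\mu) = \sum_{\ell=m}^{N} c_{\ell}^{m}(\tau)\, P_{\ell}^{m}(\mu)
```

Because the azimuthal modes decouple, each m can be solved as an independent one-dimensional problem, which is what makes problems lacking azimuthal symmetry tractable.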
Montesinos-López, Osval A.; Montesinos-López, Abelardo; Crossa, José; Toledo, Fernando H.; Montesinos-López, José C.; Singh, Pawan; Juliana, Philomin; Salinas-Ruiz, Josafhat
2017-01-01
When a plant scientist wishes to make genomic-enabled predictions of multiple traits measured in multiple individuals in multiple environments, the most common strategy for performing the analysis is to use a single trait at a time taking into account genotype × environment interaction (G × E), because there is a lack of comprehensive models that simultaneously take into account the correlated count traits and G × E. For this reason, in this study we propose a multiple-trait and multiple-environment model for count data. The proposed model was developed under the Bayesian paradigm, for which we developed a Markov Chain Monte Carlo (MCMC) method with noninformative priors. This allows obtaining all required full conditional distributions of the parameters, leading to an exact Gibbs sampler for the posterior distribution. Our model was tested with simulated data and a real data set. Results show that the proposed multi-trait, multi-environment model is an attractive alternative for modeling multiple count traits measured in multiple environments. PMID:28364037
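To make the "exact Gibbs sampler from full conditionals" idea concrete, here is a deliberately simplified toy analogue, a conjugate hierarchical Poisson-gamma count model, not the authors' multi-trait, multi-environment model; hyperparameter values are arbitrary examples.

```python
# Toy Gibbs sampler for a hierarchical Poisson-gamma count model:
#   y[i, j] ~ Poisson(lam[i]), lam[i] ~ Gamma(a, rate=b), b ~ Gamma(c, rate=d)
# All full conditionals are conjugate, so exact Gibbs updates exist.
# Minimal illustration of the sampling scheme only; a, c, d are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
y = rng.poisson(lam=7.0, size=(5, 40))          # 5 units, 40 counts each
I, n = y.shape
a, c, d = 2.0, 1.0, 1.0                          # fixed hyperparameters

lam = y.mean(axis=1) + 0.1                       # initial values
b = 1.0
draws = []
for it in range(2000):
    # lam[i] | b, y ~ Gamma(a + sum_j y[i, j], rate = b + n)
    lam = rng.gamma(shape=a + y.sum(axis=1), scale=1.0 / (b + n))
    # b | lam       ~ Gamma(c + I*a, rate = d + sum_i lam[i])
    b = rng.gamma(shape=c + I * a, scale=1.0 / (d + lam.sum()))
    if it >= 500:                                # discard burn-in
        draws.append(lam.copy())

print("posterior means:", np.vstack(draws).mean(axis=0).round(2))
```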
Deport, Coralie; Ratel, Jérémy; Berdagué, Jean-Louis; Engel, Erwan
2006-05-26
The current work describes a new method, the comprehensive combinatory standard correction (CCSC), for the correction of instrumental signal drifts in GC-MS systems. The method consists of analyzing, together with the products of interest, a mixture of n selected internal standards, normalizing the peak area of each analyte by the sum of standard areas, and then selecting, among the Σ_{p=1}^{n} C(n,p) = 2^n − 1 possible sums, the sum that enables the best product discrimination. The CCSC method was compared with classical techniques of data pre-processing such as internal normalization (IN) or single standard correction (SSC) on their ability to correct raw data from the main drifts occurring in a dynamic headspace-gas chromatography-mass spectrometry system. Three edible oils with closely similar compositions in volatile compounds were analysed using a device whose performance was modulated by using new or used dynamic headspace traps and GC columns, and by modifying the tuning of the mass spectrometer. According to one-way ANOVA, the CCSC method increased the number of analytes discriminating the products (31 after CCSC versus 25 with raw data or after IN, and 26 after SSC). Moreover, CCSC enabled a satisfactory discrimination of the products irrespective of the drifts. In a factorial discriminant analysis, 100% of the samples (n = 121) were well-classified after CCSC, versus 45% for raw data and 90% and 93%, respectively, after IN and SSC.
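The selection step lends itself to a brute-force sketch. The following illustrative code, with synthetic peak areas and hypothetical variable names rather than the authors' software, normalizes an analyte's area by the summed areas of every non-empty subset of standards and keeps the subset giving the largest one-way ANOVA F statistic across products:

```python
# Illustrative brute-force CCSC selection: for each non-empty subset of
# internal standards, normalize the analyte peak area by the subset's
# summed areas and keep the subset that best discriminates the products
# (largest one-way ANOVA F). Synthetic data; sketches the idea only.
import itertools
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
n_std, n_runs = 4, 30
std_areas = rng.normal(1e5, 1e4, size=(n_runs, n_std))  # standards per run
analyte = rng.normal(5e4, 5e3, size=n_runs)             # one analyte per run
product = np.repeat([0, 1, 2], 10)                      # 3 products, 10 runs

best_F, best_subset = -np.inf, None
for p in range(1, n_std + 1):
    for subset in itertools.combinations(range(n_std), p):
        norm = analyte / std_areas[:, subset].sum(axis=1)
        F, _ = f_oneway(*(norm[product == g] for g in np.unique(product)))
        if F > best_F:
            best_F, best_subset = F, subset

print(f"best standard subset: {best_subset}, F = {best_F:.2f}")
```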
ERIC Educational Resources Information Center
Spencer, Mercedes; Wagner, Richard K.
2018-01-01
The purpose of this meta-analysis was to examine the comprehension problems of children who have a specific reading comprehension deficit (SCD), which is characterized by poor reading comprehension despite adequate decoding. The meta-analysis included 86 studies of children with SCD who were assessed in reading comprehension and oral language…
Creating an enabling environment for WR&R implementation.
Stathatou, P-M; Kampragou, E; Grigoropoulou, H; Assimacopoulos, D; Karavitis, C; Gironás, J
2017-09-01
Reclaimed water is receiving growing attention worldwide as an effective solution for alleviating the growing water scarcity in many areas. Despite the various benefits associated with reclaimed water, water recycling and reuse (WR&R) practices are not widely applied around the world. This is mostly due to complex and inadequate local legal and institutional frameworks and socio-economic structures, which pose barriers to wider WR&R implementation. An integrated approach is therefore needed while planning the implementation of WR&R schemes, considering all the potential barriers, and aiming to develop favourable conditions for enhancing reclaimed water use. This paper proposes a comprehensive methodology supporting the development of an enabling environment for WR&R implementation. The political, economic, social, technical, legal and institutional factors that may influence positively (drivers) or negatively (barriers) WR&R implementation in the regional water systems are identified, through the mapping of local stakeholder perceptions. The identified barriers are further analysed, following a Cross-Impact/System analysis, to recognize the most significant barriers inhibiting system transition, and to prioritize the enabling instruments and arrangements that are needed to boost WR&R implementation. The proposed methodology was applied in the Copiapó River Basin in Chile, which faces severe water scarcity. Through the analysis, it was observed that barriers outweigh drivers for the implementation of WR&R schemes in the Copiapó River Basin, while the key barriers which could be useful for policy formulation towards an enabling environment in the area concern the unclear legal framework regarding the ownership of treated wastewater, the lack of environmental policies focusing on pollution control, the limited integration of reclaimed water use in current land use and development policies, the limited public awareness on WR&R, and the limited availability of governmental funding sources for WR&R.
Tobias, Herbert J.; Zhang, Ying; Auchus, Richard J.; Brenna, J. Thomas
2011-01-01
We report the first demonstration of comprehensive two-dimensional gas chromatography combustion isotope ratio mass spectrometry (GC×GCC-IRMS) for the analysis of urinary steroids to detect illicit synthetic testosterone use, of interest in sport doping. GC coupled to IRMS (GCC-IRMS) is currently used to measure the carbon isotope ratios (CIR, δ13C) of urinary steroids in anti-doping efforts; however, extensive cleanup of urine extracts is required prior to analysis to enable baseline separation of target steroids. With its greater separation capabilities, GC×GC has the potential to reduce sample preparation requirements and enable CIR analysis of minimally processed urine extracts. Challenges addressed include on-line reactors with minimized dimensions to retain narrow peak shapes, baseline separation of peaks in some cases, and reconstruction of isotopic information from sliced steroid chromatographic peaks. Remaining difficulties include the long-term robustness of on-line reactors and urine matrix effects that preclude baseline separation and isotopic analysis of low-concentration and trace components. In this work, steroids were extracted, acetylated, and analyzed using a refined, home-built GC×GCC-IRMS system. 11-hydroxy-androsterone (11OHA) and 11-ketoetiocholanolone (11KE) were chosen as endogenous reference compounds (ERC) because of their satisfactory signal intensity, and their CIR was compared to the target compounds (TC) androsterone (A) and etiocholanolone (E). Separately, a GC×GC-qMS system was used to measure testosterone (T)/EpiT concentration ratios. Urinary extracts of urine pooled from professional athletes, urine from one individual who received testosterone gel (T-gel), and urine from one individual who received testosterone injections (T-shot) were analyzed. The average precision of the δ13C and Δδ13C measurements was SD(δ13C) ≈ ±1‰ (n=11). The T-shot sample resulted in a positive finding for T use, with a T/EpiT ratio > 9 and CIR measurements of Δδ13C > 5‰, both fulfilling World Anti-Doping Agency criteria. These data show for the first time that synthetic steroid use is detectable by GC×GCC-IRMS without the need for extensive urine cleanup. PMID:21846122
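For reference, the CIR values quoted here follow the standard per mil notation (a textbook definition, not something specific to this paper), with Δδ13C comparing endogenous reference compounds (ERC) with target compounds (TC):

```latex
% Standard carbon isotope ratio notation (per mil, relative to the
% VPDB standard), with the Delta-delta comparison used in the abstract.
\delta^{13}\mathrm{C}\ (\text{‰}) =
\left(\frac{({}^{13}\mathrm{C}/{}^{12}\mathrm{C})_{\mathrm{sample}}}
           {({}^{13}\mathrm{C}/{}^{12}\mathrm{C})_{\mathrm{VPDB}}} - 1\right)
\times 1000,
\qquad
\Delta\delta^{13}\mathrm{C} =
\delta^{13}\mathrm{C}_{\mathrm{ERC}} - \delta^{13}\mathrm{C}_{\mathrm{TC}}
```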
Measuring the Strategic Value of the Armed Forces Health Longitudinal Technology Application (AHLTA)
2008-01-01
Mission Centered Care IP2 Beneficiaries partner with us to improve health outcomes IP1 Evidence-based medicine is used to improve quality, safety, and...Battlefield" IP6 Comprehensive globally accessible health and business information enable medical surveillance, evidence-based medicine and effective...information enables medical surveillance, evidence-based medicine, and effective healthcare operations. 4 OASD (2007a). Measures for the MHS Strategic
PyPathway: Python Package for Biological Network Analysis and Visualization.
Xu, Yang; Luo, Xiao-Chun
2018-05-01
Life science studies represent one of the biggest generators of large data sets, mainly because of rapid sequencing technological advances. Biological networks, including interaction networks and human-curated pathways, are essential to understanding these high-throughput data sets. Biological network analysis offers a method to explore systematically not only the molecular complexity of a particular disease but also the molecular relationships among apparently distinct phenotypes. Currently, several packages for the Python community have been developed, such as BioPython and Goatools. However, tools to perform comprehensive network analysis and visualization are still needed. Here, we have developed PyPathway, an extensible, free, and open source Python package for functional enrichment analysis, network modeling, and network visualization. The network process module supports various interaction network and pathway databases such as Reactome, WikiPathway, STRING, and BioGRID. The network analysis module implements overrepresentation analysis, gene set enrichment analysis, network-based enrichment, and de novo network modeling. Finally, the visualization and data publishing modules enable users to share their analysis by using an easy web application. For package availability, see the first Reference.
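As one example of what an overrepresentation analysis module computes, here is a generic hypergeometric-test sketch; it uses scipy and arbitrary example numbers, and does not reproduce PyPathway's actual API.

```python
# Generic overrepresentation analysis (ORA): given a study gene list and
# a pathway gene set, test enrichment with the hypergeometric distribution.
# Sketches the statistic such modules compute; gene counts are arbitrary.
from scipy.stats import hypergeom

M = 20000   # background (annotated) genes
n = 150     # genes belonging to the pathway
N = 500     # genes in the study list (e.g., differentially expressed)
k = 12      # study genes that fall in the pathway

# P(X >= k) when drawing N genes from M containing n pathway genes
p_value = hypergeom.sf(k - 1, M, n, N)
print(f"enrichment p-value: {p_value:.3g}")
```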
EVA Suit R and D for Performance Optimization
NASA Technical Reports Server (NTRS)
Cowley, Matthew S.; Harvill, Lauren; Benson, Elizabeth; Rajulu, Sudhakar
2014-01-01
Designing a planetary suit is very complex and often requires difficult trade-offs between performance, cost, mass, and system complexity. To verify that new suit designs meet requirements, full prototypes must be built and tested with human subjects. However, numerous design iterations will occur before the hardware meets those requirements. Traditional draw-prototype-test paradigms for R&D are prohibitively expensive with today's shrinking Government budgets. Personnel at NASA are developing modern simulation techniques that focus on human-centric designs by creating virtual prototype simulations and fully adjustable physical prototypes of suit hardware. During the R&D design phase, these easily modifiable representations of an EVA suit's hard components will allow designers to think creatively and exhaust design possibilities before they build and test working prototypes with human subjects. This allows scientists to comprehensively benchmark current suit capabilities and limitations for existing suit sizes and for sizes that do not exist. This is extremely advantageous: it enables comprehensive design down-selections to be made early in the design process, enables the use of human performance as design criteria, and enables designs to target specific populations.
Kasper, Jürgen; Köpke, Sascha; Mühlhauser, Ingrid; Heesen, Christoph
2006-07-01
This study analyses the comprehension and emotional responses of people suffering from multiple sclerosis when provided with an evidence-based information module, a core module of a comprehensive decision aid about immunotherapy. The core module is designed to enable patients to process scientific uncertainty without adverse effects. It considers existing standards for risk communication and presentation of data. Using a mailing approach, we investigated 169 patients with differing courses of disease in a before-after design. Items addressed competence in processing relative and absolute risk information and patients' emotional response to the tool, comprising grade of familiarity with the information, understanding, relevance, emotional arousal, and certainty. Overall, numeracy improved (p < 0.001), although 99 of 169 patients did not complete the numeracy task correctly. Understanding depended on the relevance related to the course of disease. A moderate level of uncertainty was induced. No adverse emotional responses could be shown, neither in those who did comprehend the information nor in those who did not develop numeracy skills. In conclusion, the tool supports people suffering from multiple sclerosis in processing evidence-based medical information and scientific uncertainty without burdening them emotionally. This study is an example of the documentation of an important step in the development process of a complex intervention.
Blanco-Elorrieta, Esti; Pylkkänen, Liina
2016-01-13
For multilingual individuals, adaptive goal-directed behavior as enabled by cognitive control includes the management of two or more languages. This work used magnetoencephalography (MEG) to investigate the degree of neural overlap between language control and domain-general cognitive control both in action and perception. Highly proficient Arabic-English bilingual individuals participated in maximally parallel language-switching tasks in production and comprehension as well as in analogous tasks in which, instead of the used language, the semantic category of the comprehended/produced word changed. Our results indicated a clear dissociation of language control mechanisms in production versus comprehension. Language-switching in production recruited dorsolateral prefrontal regions bilaterally and, importantly, these regions were similarly recruited by category-switching. Conversely, effects of language-switching in comprehension were observed in the anterior cingulate cortex and were not shared by category-switching. These results suggest that bilingual individuals rely on adaptive language control strategies and that the neural involvement during language-switching could be extensively influenced by whether the switch is active (e.g., in production) or passive (e.g., in comprehension). In addition, these results support that humans require high-level cognitive control to switch languages in production, but the comprehension of language switches recruits a distinct neural circuitry. The use of MEG enabled us to obtain the first characterization of the spatiotemporal profile of these effects, establishing that switching processes begin ∼ 400 ms after stimulus presentation. This research addresses the neural mechanisms underlying multilingual individuals' ability to successfully manage two or more languages, critically targeting whether language control is uniform across linguistic domains (production and comprehension) and whether it is a subdomain of general cognitive control. The results showed that language production and comprehension rely on different networks: whereas language control in production recruited domain-general networks, the brain bases of switching during comprehension seemed language specific. Therefore, the crucial assumption of the bilingual advantage hypothesis, that there is a close relationship between language control and general cognitive control, seems to only hold during production. Copyright © 2016 the authors.
Granatum: a graphical single-cell RNA-Seq analysis pipeline for genomics scientists.
Zhu, Xun; Wolfgruber, Thomas K; Tasato, Austin; Arisdakessian, Cédric; Garmire, David G; Garmire, Lana X
2017-12-05
Single-cell RNA sequencing (scRNA-Seq) is an increasingly popular platform to study heterogeneity at the single-cell level. Computational methods to process scRNA-Seq data are not very accessible to bench scientists as they require a significant amount of bioinformatic skills. We have developed Granatum, a web-based scRNA-Seq analysis pipeline to make analysis more broadly accessible to researchers. Without a single line of programming code, users can click through the pipeline, setting parameters and visualizing results via the interactive graphical interface. Granatum conveniently walks users through various steps of scRNA-Seq analysis. It has a comprehensive list of modules, including plate merging and batch-effect removal, outlier-sample removal, gene-expression normalization, imputation, gene filtering, cell clustering, differential gene expression analysis, pathway/ontology enrichment analysis, protein network interaction visualization, and pseudo-time cell series construction. Granatum enables broad adoption of scRNA-Seq technology by empowering bench scientists with an easy-to-use graphical interface for scRNA-Seq data analysis. The package is freely available for research use at http://garmiregroup.org/granatum/app.
NASA Astrophysics Data System (ADS)
Evans, B. J. K.; Wyborn, L. A.; Druken, K. A.; Richards, C. J.; Trenham, C. E.; Wang, J.
2016-12-01
The Australian National Computational Infrastructure (NCI) manages a large geospatial repository (10+ PBytes) of Earth systems, environmental, water management, and geophysics research data, co-located with a petascale supercomputer and an integrated research cloud. NCI has applied the principles of the "Common Framework for Earth-Observation Data" (the Framework) to the organisation of these collections, enabling a diverse range of researchers to explore different aspects of the data and, in particular, enabling seamless programmatic data analysis, both via in-situ access and via data services. NCI provides access to the collections through the National Environmental Research Data Interoperability Platform (NERDIP), a comprehensive and integrated data platform with both common and emerging services designed to enable data accessibility and citability. Applying the Framework across the range of datasets ensures that programmatic access, by both in-situ and network methods, works as uniformly as possible for any dataset, using both APIs and data services. NCI has also created a comprehensive quality assurance framework to regularise compliance checks across the data, library APIs, and data services, and to establish a comprehensive set of benchmarks quantifying the Framework from both functionality and performance perspectives. The quality assurance includes organisation of datasets through a data management plan, which anchors the data directory structure, version controls, and data information services so that they are kept aligned with operational changes over time. Specific attention has been placed on the way data are packed inside the files. Our experience has shown that complying with standards such as CF and ACDD is still not enough to ensure that all data services or software packages correctly read the data. Further, data may not be optimally organised for the different access patterns, which causes poor CPU and bandwidth utilisation. We will also discuss some gaps in the Framework that have emerged and our approach to resolving them.
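An attribute-level compliance check of the kind such a QA framework automates might look like the following xarray sketch; the required-attribute list is a small hedged example, not NCI's actual benchmark suite, and the file path is hypothetical.

```python
# Illustrative CF/ACDD global-attribute compliance check for a netCDF
# file. The REQUIRED_GLOBAL_ATTRS list is an invented minimal example;
# real QA suites check many more attributes plus variable-level metadata.
import xarray as xr

REQUIRED_GLOBAL_ATTRS = ["title", "summary", "Conventions", "license"]

def check_global_attrs(path: str) -> list[str]:
    """Return the required global attributes missing from the file."""
    ds = xr.open_dataset(path)
    return [a for a in REQUIRED_GLOBAL_ATTRS if a not in ds.attrs]

# Example usage (hypothetical file path):
# missing = check_global_attrs("example_collection.nc")
# print("missing attributes:", missing or "none")
```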
A Single and Comprehensive Helios Data Archive
NASA Astrophysics Data System (ADS)
Salem, C. S.
2017-12-01
Helios 1 & 2 rank among the most important missions in Heliophysics, and the more than 11 years of data returned by the two spacecraft remain of paramount interest to researchers. Their unique trajectories, which brought them closer to the Sun than any spacecraft before or since, enabled their diverse suite of in-situ instruments to return measurements of unprecedented scientific richness. There is, however, no comprehensive public repository of all Helios in-situ data. Currently, most of the highest resolution data can be obtained from a variety of places, although highly processed and with very little documentation, especially on calibration. Analysis of this data set requires overcoming a number of technical and instrumental issues, knowledge and expertise of which is only possessed by the original PIs of the Helios experiments. We present here a NASA-funded effort to aggregate, analyze, evaluate, document, and archive the available Helios 1 and 2 in-situ data. This work at the UC Berkeley Space Sciences Laboratory is being undertaken in close collaboration with colleagues at the University of Köln, the University of Kiel, Imperial College London, and the Paris Observatory. A careful, detailed analysis of the Helios fluxgate and search coil magnetic field data as well as plasma data has revealed numerous issues and problems with the available, processed datasets, which we are still working to solve. We anticipate that this comprehensive single archive of all Helios in-situ data, beyond its inherent scientific value, will also be an invaluable asset to both the Solar Probe Plus and Solar Orbiter missions.
Hadjithomas, Michalis; Chen, I-Min Amy; Chu, Ken; Ratner, Anna; Palaniappan, Krishna; Szeto, Ernest; Huang, Jinghua; Reddy, T B K; Cimermančič, Peter; Fischbach, Michael A; Ivanova, Natalia N; Markowitz, Victor M; Kyrpides, Nikos C; Pati, Amrita
2015-07-14
In the discovery of secondary metabolites, analysis of sequence data is a promising exploration path that remains largely underutilized due to the lack of computational platforms that enable such a systematic approach on a large scale. In this work, we present IMG-ABC (https://img.jgi.doe.gov/abc), an atlas of biosynthetic gene clusters within the Integrated Microbial Genomes (IMG) system, which is aimed at harnessing the power of "big" genomic data for discovering small molecules. IMG-ABC relies on IMG's comprehensive integrated structural and functional genomic data for the analysis of biosynthetic gene clusters (BCs) and associated secondary metabolites (SMs). SMs and BCs serve as the two main classes of objects in IMG-ABC, each with a rich collection of attributes. A unique feature of IMG-ABC is the incorporation of both experimentally validated and computationally predicted BCs in genomes as well as metagenomes, thus identifying BCs in uncultured populations and rare taxa. We demonstrate the strength of IMG-ABC's focused integrated analysis tools in enabling the exploration of microbial secondary metabolism on a global scale, through the discovery of phenazine-producing clusters for the first time in Alphaproteobacteria. IMG-ABC strives to fill the long-existent void of resources for computational exploration of the secondary metabolism universe; its underlying scalable framework enables traversal of uncovered phylogenetic and chemical structure space, serving as a doorway to a new era in the discovery of novel molecules. IMG-ABC is the largest publicly available database of predicted and experimental biosynthetic gene clusters and the secondary metabolites they produce. The system also includes powerful search and analysis tools that are integrated with IMG's extensive genomic/metagenomic data and analysis tool kits. As new research on biosynthetic gene clusters and secondary metabolites is published and more genomes are sequenced, IMG-ABC will continue to expand, with the goal of becoming an essential component of any bioinformatic exploration of the secondary metabolism world. Copyright © 2015 Hadjithomas et al.
Tulla, Kiara A; Maker, Ajay V
2018-03-01
Predicting the biologic behavior of intraductal papillary mucinous neoplasm (IPMN) remains challenging. Current guidelines utilize patient symptoms and imaging characteristics to determine appropriate surgical candidates. However, the majority of resected cysts prove to be low-risk lesions, many of which could feasibly have been managed with surveillance instead. We herein characterize the most promising and up-to-date molecular diagnostics in order to identify optimal components of a molecular signature to distinguish levels of IPMN dysplasia. A comprehensive systematic review of pertinent literature, including our own experience, was conducted based on the PRISMA guidelines. Molecular diagnostics in IPMN patient tissue, duodenal secretions, cyst fluid, saliva, and serum were evaluated and organized into the following categories: oncogenes, tumor suppressor genes, glycoproteins, markers of the immune response, proteomics, DNA/RNA mutations, and next-generation sequencing/microRNA. Specific targets in each of these categories, and in aggregate, were identified by their ability to both characterize a cyst as an IPMN and determine the level of cyst dysplasia. Combining molecular signatures with clinical and imaging features in this era of next-generation sequencing and advanced computational analysis will enable enhanced sensitivity and specificity of current models to predict the biologic behavior of IPMN.
Pesavento, James J; Bullock, Courtney R; LeDuc, Richard D; Mizzen, Craig A; Kelleher, Neil L
2008-05-30
Quantitative proteomics has focused heavily on correlating protein abundances, ratios, and dynamics by developing methods that are protein expression-centric (e.g. isotope coded affinity tag, isobaric tag for relative and absolute quantification, etc.). These methods effectively detect changes in protein abundance but fail to provide a comprehensive perspective of the diversity of proteins such as histones, which are regulated by post-translational modifications. Here, we report the characterization of modified forms of HeLa cell histone H4 with a dynamic range >10^4 using a strictly Top Down mass spectrometric approach coupled with two dimensions of liquid chromatography. This enhanced dynamic range enabled the precise characterization and quantitation of 42 forms uniquely modified by combinations of methylation and acetylation, including those with trimethylated Lys-20, monomethylated Arg-3, and the novel dimethylated Arg-3 (each <1% of all H4 forms). Quantitative analyses revealed distinct trends in acetylation site occupancy depending on Lys-20 methylation state. Because both modifications are dynamically regulated through the cell cycle, we simultaneously investigated acetylation and methylation kinetics through three cell cycle phases and used these data to statistically assess the robustness of our quantitative analysis. This work represents the most comprehensive analysis of histone H4 forms present in human cells reported to date.
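As a rough, hypothetical illustration of the relative quantitation step described above (the intensity values and proteoform names below are invented, not data from the study), each modified H4 form's abundance can be expressed as its share of the total integrated signal:

```python
# Hypothetical integrated intensities for a few H4 proteoforms; the
# relative abundance of each modified form is its share of the summed
# signal across all detected forms.
intensities = {
    "unmod": 5.2e8,
    "K20me2": 3.9e8,
    "K16ac + K20me2": 7.5e7,
    "R3me2 (novel)": 2.1e6,  # a low-abundance form, <1% of the total
}

total = sum(intensities.values())
for form, signal in intensities.items():
    print(f"{form:>16s}: {100 * signal / total:5.2f}% of all H4 forms")
```

A dynamic range above 10^4 means forms four orders of magnitude below the most abundant one, like the last entry here, remain quantifiable.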
Enabling Professionalism: The Master Technician Program.
ERIC Educational Resources Information Center
Wimmer, Doris K.
1988-01-01
Describes Virginia's Master Technician Program, which offers a comprehensive coordinated curriculum in electronics/electromechanical technology that spans high school and community college levels of instruction. Highlights innovations of the project, curriculum design, advantages, and future projections. (DMM)
Recombinational Cloning Using Gateway and In-Fusion Cloning Schemes
Throop, Andrea L.; LaBaer, Joshua
2015-01-01
The comprehensive study of protein structure and function, or proteomics, depends on the availability of full-length cDNAs in species-specific expression vectors and subsequent functional analysis of the expressed protein. Recombinational cloning is a universal cloning technique based on site-specific recombination that is independent of the insert DNA sequence of interest, which differentiates this method from the classical restriction enzyme-based cloning methods. Recombinational cloning enables rapid and efficient parallel transfer of DNA inserts into multiple expression systems. This unit summarizes strategies for generating expression-ready clones using the most popular recombinational cloning technologies, including the commercially available Gateway® (Life Technologies) and In-Fusion® (Clontech) cloning technologies. PMID:25827088
Financial Brownian Particle in the Layered Order-Book Fluid and Fluctuation-Dissipation Relations
NASA Astrophysics Data System (ADS)
Yura, Yoshihiro; Takayasu, Hideki; Sornette, Didier; Takayasu, Misako
2014-03-01
We introduce a novel description of the dynamics of the order book of financial markets as that of an effective colloidal Brownian particle embedded in fluid particles. The analysis of comprehensive market data enables us to identify all motions of the fluid particles. Correlations between the motions of the Brownian particle and its surrounding fluid particles reflect specific layering interactions; in the inner layer the correlation is strong and with short memory, while in the outer layer it is weaker and with long memory. By interpreting and estimating the contribution from the outer layer as a drag resistance, we demonstrate the validity of the fluctuation-dissipation relation in this nonmaterial Brownian motion process.
The iPlant Collaborative: Cyberinfrastructure for Enabling Data to Discovery for the Life Sciences
Merchant, Nirav; Lyons, Eric; Goff, Stephen; Vaughn, Matthew; Ware, Doreen; Micklos, David; Antin, Parker
2016-01-01
The iPlant Collaborative provides life science research communities access to comprehensive, scalable, and cohesive computational infrastructure for data management; identity management; collaboration tools; and cloud, high-performance, high-throughput computing. iPlant provides training, learning material, and best practice resources to help all researchers make the best use of their data, expand their computational skill set, and effectively manage their data and computation when working as distributed teams. iPlant’s platform permits researchers to easily deposit and share their data and deploy new computational tools and analysis workflows, allowing the broader community to easily use and reuse those data and computational analyses. PMID:26752627
Technology developments integrating a space network communications testbed
NASA Technical Reports Server (NTRS)
Kwong, Winston; Jennings, Esther; Clare, Loren; Leang, Dee
2006-01-01
As future manned and robotic space explorations missions involve more complex systems, it is essential to verify, validate, and optimize such systems through simulation and emulation in a low cost testbed environment. The goal of such a testbed is to perform detailed testing of advanced space and ground communications networks, technologies, and client applications that are essential for future space exploration missions. We describe the development of new technologies enhancing our Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE) that enables its integration in a distributed space communications testbed. MACHETE combines orbital modeling, link analysis, and protocol and service modeling to quantify system performance based on comprehensive considerations of different aspects of space missions.
Martins, Goncalo; Moondra, Arul; Dubey, Abhishek; Bhattacharjee, Anirban; Koutsoukos, Xenofon D.
2016-01-01
In modern networked control applications, confidentiality and integrity are important features to address in order to protect against attacks. Moreover, networked control systems are a fundamental part of the communication components of current cyber-physical systems (e.g., automotive communications). Many networked control systems employ Time-Triggered (TT) architectures that provide mechanisms enabling the exchange of precise and synchronous messages. TT systems have computation and communication constraints, and with the aim of enabling secure communications in the network, it is important to evaluate the computational and communication overhead of implementing secure communication mechanisms. This paper presents a comprehensive analysis and evaluation of the effects of adding a Hash-based Message Authentication Code (HMAC) to TT networked control systems. The contributions of the paper include (1) the analysis and experimental validation of the communication overhead, as well as a scalability analysis that utilizes the experimental result for both wired and wireless platforms and (2) an experimental evaluation of the computational overhead of HMAC based on a kernel-level Linux implementation. An automotive application is used as an example, and the results show that it is feasible to implement a secure communication mechanism without interfering with the existing automotive controller execution times. The methods and results of the paper can be used for evaluating the performance impact of security mechanisms and, thus, for the design of secure wired and wireless TT networked control systems. PMID:27463718
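A minimal sketch of the mechanism being benchmarked, using Python's standard hmac library; the key, payload size, and timing loop here are assumptions for illustration, not the paper's kernel-level Linux implementation:

```python
import hmac
import hashlib
import time

# Hypothetical 16-byte TT frame payload and a shared 32-byte key.
key = bytes(32)
payload = bytes(16)

def tag_message(key: bytes, payload: bytes) -> bytes:
    """Compute an HMAC-SHA256 authentication tag for one TT frame."""
    return hmac.new(key, payload, hashlib.sha256).digest()

# Rough per-message computational overhead (wall-clock average).
n = 100_000
start = time.perf_counter()
for _ in range(n):
    tag_message(key, payload)
elapsed = time.perf_counter() - start
print(f"~{elapsed / n * 1e6:.2f} us per HMAC tag")

# Receiver side: constant-time comparison of the received tag.
tag = tag_message(key, payload)
assert hmac.compare_digest(tag, tag_message(key, payload))
```

The communication overhead is simply the extra tag bytes per frame (32 for SHA-256), which is what constrains scheduling in bandwidth-limited TT slots.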
Jorjani, Hadi; Zavolan, Mihaela
2014-04-01
Accurate identification of transcription start sites (TSSs) is an essential step in the analysis of transcription regulatory networks. In higher eukaryotes, cap analysis of gene expression (CAGE) technology enabled comprehensive annotation of TSSs in genomes such as those of mice and humans. In bacteria, an equivalent approach, termed differential RNA sequencing (dRNA-seq), has recently been proposed, but the application of this approach to a large number of genomes is hindered by the paucity of computational analysis methods. With few exceptions, when the method has been used, annotation of TSSs has been largely done manually. In this work, we present a computational method called 'TSSer' that enables the automatic inference of TSSs from dRNA-seq data. The method rests on a probabilistic framework for identifying both genomic positions that are preferentially enriched in the dRNA-seq data as well as preferentially captured relative to neighboring genomic regions. Evaluating our approach for TSS calling on several publicly available datasets, we find that TSSer achieves high consistency with the curated lists of annotated TSSs, but identifies many additional TSSs. Therefore, TSSer can accelerate genome-wide identification of TSSs in bacterial genomes and can aid in further characterization of bacterial transcription regulatory networks. TSSer is freely available under GPL license at http://www.clipz.unibas.ch/TSSer/index.php
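TSSer's actual probabilistic framework is not reproduced here; the following is a simplified sketch, under invented thresholds, of the two enrichment criteria the abstract describes (enrichment in the treated library over a control, and over the local genomic neighborhood):

```python
import numpy as np

def call_tss(plus_treated, minus_treated, min_count=10, enrich=2.0, flank=50):
    """Naive TSS caller: flag positions whose 5'-end read count in the
    enriched dRNA-seq library exceeds both the untreated control and the
    local background within a +/- flank window."""
    plus = np.asarray(plus_treated, dtype=float)
    minus = np.asarray(minus_treated, dtype=float) + 1.0  # pseudocount
    tss = []
    for i, count in enumerate(plus):
        if count < min_count or count / minus[i] < enrich:
            continue
        lo, hi = max(0, i - flank), min(len(plus), i + flank + 1)
        background = (plus[lo:hi].sum() - count) / max(hi - lo - 1, 1)
        if count > enrich * max(background, 1.0):
            tss.append(i)
    return tss

# Toy example: a sharp 5'-end peak at position 100.
plus = np.ones(200); plus[100] = 50
minus = np.ones(200)
print(call_tss(plus, minus))  # -> [100]
```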
A new method for the assessment of the surface topography of NiTi rotary instruments.
Ferreira, F; Barbosa, I; Scelza, P; Russano, D; Neff, J; Montagnana, M; Zaccaro Scelza, M
2017-09-01
To describe a new method for the assessment of nanoscale alterations in the surface topography of nickel-titanium endodontic instruments using a high-resolution optical method and to verify the accuracy of the technique. Noncontact three-dimensional optical profilometry was used to evaluate defects on a size 25, .08 taper reciprocating instrument (WaveOne®), which was subjected to a cyclic fatigue test in a simulated root canal in a clear resin block. For the investigation, an original procedure was established for the analysis of similar areas located 3 mm from the tip of the instrument before and after canal preparation, to enable precise, repeatable and reproducible measurements. All observations and analyses were performed in areas measuring 210 × 210 μm provided by the equipment's software. The three-dimensional high-resolution image analysis showed clear alterations in the surface topography of the examined cutting blade and flute of the instrument, before and after use, with the presence of surface irregularities such as deformations, debris, grooves, cracks, steps and microcavities. Optical profilometry provided accurate qualitative nanoscale evaluation of similar surfaces before and after the fatigue test. The stability and repeatability of the technique enables a more comprehensive understanding of the effects of wear on the surface of endodontic instruments. © 2016 International Endodontic Journal. Published by John Wiley & Sons Ltd.
Ontology-Driven Provenance Management in eScience: An Application in Parasite Research
NASA Astrophysics Data System (ADS)
Sahoo, Satya S.; Weatherly, D. Brent; Mutharaju, Raghava; Anantharam, Pramod; Sheth, Amit; Tarleton, Rick L.
Provenance, from the French word "provenir", describes the lineage or history of a data entity. Provenance is critical information in scientific applications to verify experiment process, validate data quality and associate trust values with scientific results. Current industrial scale eScience projects require an end-to-end provenance management infrastructure. This infrastructure needs to be underpinned by formal semantics to enable analysis of large scale provenance information by software applications. Further, effective analysis of provenance information requires well-defined query mechanisms to support complex queries over large datasets. This paper introduces an ontology-driven provenance management infrastructure for biology experiment data, as part of the Semantic Problem Solving Environment (SPSE) for Trypanosoma cruzi (T.cruzi). This provenance infrastructure, called T.cruzi Provenance Management System (PMS), is underpinned by (a) a domain-specific provenance ontology called Parasite Experiment ontology, (b) specialized query operators for provenance analysis, and (c) a provenance query engine. The query engine uses a novel optimization technique based on materialized views called materialized provenance views (MPV) to scale with increasing data size and query complexity. This comprehensive ontology-driven provenance infrastructure not only allows effective tracking and management of ongoing experiments in the Tarleton Research Group at the Center for Tropical and Emerging Global Diseases (CTEGD), but also enables researchers to retrieve the complete provenance information of scientific results for publication in literature.
Fast interactive exploration of 4D MRI flow data
NASA Astrophysics Data System (ADS)
Hennemuth, A.; Friman, O.; Schumann, C.; Bock, J.; Drexl, J.; Huellebrand, M.; Markl, M.; Peitgen, H.-O.
2011-03-01
1- or 2-directional MRI blood flow mapping sequences are an integral part of standard MR protocols for diagnosis and therapy control in heart diseases. Recent progress in rapid MRI has made it possible to acquire volumetric, 3-directional cine images in reasonable scan time. In addition to flow and velocity measurements relative to arbitrarily oriented image planes, the analysis of 3-dimensional trajectories enables the visualization of flow patterns, local features of flow trajectories or possible paths into specific regions. The anatomical and functional information allows for advanced hemodynamic analysis in different application areas like stroke risk assessment, congenital and acquired heart disease, aneurysms or abdominal collaterals and cranial blood flow. The complexity of the 4D MRI flow datasets and the flow related image analysis tasks makes the development of fast comprehensive data exploration software for advanced flow analysis a challenging task. Most existing tools address only individual aspects of the analysis pipeline such as pre-processing, quantification or visualization, or are difficult to use for clinicians. The goal of the presented work is to provide a software solution that supports the whole image analysis pipeline and enables data exploration with fast intuitive interaction and visualization methods. The implemented methods facilitate the segmentation and inspection of different vascular systems. Arbitrary 2- or 3-dimensional regions for quantitative analysis and particle tracing can be defined interactively. Synchronized views of animated 3D path lines, 2D velocity or flow overlays and flow curves offer a detailed insight into local hemodynamics. The application of the analysis pipeline is shown for 6 cases from clinical practice, illustrating the usefulness for different clinical questions. Initial user tests show that the software is intuitive to learn and even inexperienced users achieve good results within reasonable processing times.
Interactive visual exploration and refinement of cluster assignments.
Kern, Michael; Lex, Alexander; Gehlenborg, Nils; Johnson, Chris R
2017-09-12
With ever-increasing amounts of data produced in biology research, scientists are in need of efficient data analysis methods. Cluster analysis, combined with visualization of the results, is one such method that can be used to make sense of large data volumes. At the same time, cluster analysis is known to be imperfect and depends on the choice of algorithms, parameters, and distance measures. Most clustering algorithms don't properly account for ambiguity in the source data, as records are often assigned to discrete clusters, even if an assignment is unclear. While there are metrics and visualization techniques that allow analysts to compare clusterings or to judge cluster quality, there is no comprehensive method that allows analysts to evaluate, compare, and refine cluster assignments based on the source data, derived scores, and contextual data. In this paper, we introduce a method that explicitly visualizes the quality of cluster assignments, allows comparisons of clustering results and enables analysts to manually curate and refine cluster assignments. Our methods are applicable to matrix data clustered with partitional, hierarchical, and fuzzy clustering algorithms. Furthermore, we enable analysts to explore clustering results in context of other data, for example, to observe whether a clustering of genomic data results in a meaningful differentiation in phenotypes. Our methods are integrated into Caleydo StratomeX, a popular, web-based, disease subtype analysis tool. We show in a usage scenario that our approach can reveal ambiguities in cluster assignments and produce improved clusterings that better differentiate genotypes and phenotypes.
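As a small illustration of judging cluster quality and assignment ambiguity (using scikit-learn's stock metrics rather than the Caleydo StratomeX implementation), records with low silhouette scores can be flagged as candidates for manual review and refinement:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score, silhouette_samples

# Toy data with three partially overlapping groups.
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=2.0, random_state=0)

# Two clusterings to compare (different k and initialization).
labels_a = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
labels_b = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(X)
print("agreement (ARI):", adjusted_rand_score(labels_a, labels_b))

# Per-record silhouette scores expose ambiguous assignments: records with
# low or negative scores sit near cluster boundaries and are the natural
# candidates for manual curation.
sil = silhouette_samples(X, labels_a)
ambiguous = np.where(sil < 0.1)[0]
print(f"{len(ambiguous)} of {len(X)} records flagged as ambiguous")
```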
NASA Astrophysics Data System (ADS)
Lu, Hong; Gargesha, Madhusudhana; Wang, Zhao; Chamie, Daniel; Attizani, Guilherme F.; Kanaya, Tomoaki; Ray, Soumya; Costa, Marco A.; Rollins, Andrew M.; Bezerra, Hiram G.; Wilson, David L.
2013-02-01
Intravascular OCT (iOCT) is an imaging modality with ideal resolution and contrast to provide accurate in vivo assessments of tissue healing following stent implantation. Our Cardiovascular Imaging Core Laboratory has served >20 international stent clinical trials with >2000 stents analyzed. Each stent requires 6-16 hrs of manual analysis time and we are developing highly automated software to reduce this extreme effort. Using a classification technique, physically meaningful image features, forward feature selection to limit overtraining, and leave-one-stent-out cross validation, we detected stent struts. To determine tissue coverage areas, we estimated stent "contours" by fitting detected struts and interpolation points from linearly interpolated tissue depths to a periodic cubic spline. Tissue coverage area was obtained by subtracting lumen area from the stent area. Detection was compared against manual analysis of 40 pullbacks. We obtained recall = 90 ± 3% and precision = 89 ± 6%. When taking struts deemed not bright enough for manual analysis into consideration, precision improved to 94 ± 6%. This approached inter-observer variability (recall = 93%, precision = 96%). Differences in stent and tissue coverage areas are 0.12 ± 0.41 mm² and 0.09 ± 0.42 mm², respectively. We are developing software which will enable visualization, review, and editing of automated results, so as to provide a comprehensive stent analysis package. This should enable better and cheaper stent clinical trials, so that manufacturers can optimize the myriad of parameters (drug, coverage, bioresorbable versus metal, etc.) for stent design.
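The reported recall and precision follow the standard detection-metric definitions; a tiny sketch with invented counts (not the study's numbers) makes the computation explicit:

```python
def detection_metrics(n_true: int, n_detected: int, n_matched: int):
    """Recall and precision for strut detection, where n_matched is the
    number of automated detections matching a manually marked strut."""
    recall = n_matched / n_true        # fraction of true struts found
    precision = n_matched / n_detected  # fraction of detections that are real
    return recall, precision

# Illustrative counts only, not data from the study.
recall, precision = detection_metrics(n_true=1000, n_detected=1010, n_matched=900)
print(f"recall = {recall:.1%}, precision = {precision:.1%}")
```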
Wei, Fang; Hu, Na; Lv, Xin; Dong, Xu-Yan; Chen, Hong
2015-07-24
In this investigation, off-line comprehensive two-dimensional liquid chromatography-atmospheric pressure chemical ionization mass spectrometry using a single column has been applied for the identification and quantification of triacylglycerols in edible oils. A novel mixed-mode phenyl-hexyl chromatographic column was employed in this off-line two-dimensional separation system. The phenyl-hexyl column combined the features of traditional C18 and silver-ion columns, which could provide hydrophobic interactions with triacylglycerols under acetonitrile conditions and can offer π-π interactions with triacylglycerols under methanol conditions. When compared with traditional off-line comprehensive two-dimensional liquid chromatography employing two different chromatographic columns (C18 and silver-ion column) and using elution solvents comprised of two phases (reversed-phase/normal-phase) for triacylglycerols separation, the novel off-line comprehensive two-dimensional liquid chromatography using a single column can be achieved by simply altering the mobile phase between acetonitrile and methanol, which exhibited a much higher selectivity for the separation of triacylglycerols with great efficiency and rapid speed. In addition, an approach based on the use of response factor with atmospheric pressure chemical ionization mass spectrometry has been developed for triacylglycerols quantification. Due to the differences between saturated and unsaturated acyl chains, the use of response factors significantly improves the quantitation of triacylglycerols. This two-dimensional liquid chromatography-mass spectrometry system was successfully applied for the profiling of triacylglycerols in soybean oils, peanut oils and lard oils. A total of 68 triacylglycerols including 40 triacylglycerols in soybean oils, 50 triacylglycerols in peanut oils and 44 triacylglycerols in lard oils have been identified and quantified. The liquid chromatography-mass spectrometry data were analyzed using principal component analysis. The results of the principal component analysis enabled a clear identification of different plant oils. By using this two-dimensional liquid chromatography-mass spectrometry system coupled with principal component analysis, adulterated soybean oils with 5% added lard oil and peanut oils with 5% added soybean oil can be clearly identified. Copyright © 2015 Elsevier B.V. All rights reserved.
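A minimal sketch of the chemometric step, assuming a hypothetical sample-by-TAG quantitation matrix and scikit-learn; the simulated values below stand in for real quantitation data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical TAG quantitation matrix: rows = oil samples, columns =
# relative amounts of individual triacylglycerol species.
rng = np.random.default_rng(0)
soybean = rng.normal(loc=1.0, scale=0.1, size=(10, 68))
peanut = rng.normal(loc=2.0, scale=0.1, size=(10, 68))
X = np.vstack([soybean, peanut])

# Standardize each TAG variable, then project onto two principal components.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))

# Samples from the two oils separate along the first component; an
# adulterated sample would fall between the two clusters.
print(scores[:3])   # soybean sample scores
print(scores[-3:])  # peanut sample scores
```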
Model regulations and plan amendments for multimodal transportation districts
DOT National Transportation Integrated Search
2004-02-01
In 1999, the Florida legislature enabled local governments to establish Multimodal Transportation Districts (MMTD) in their comprehensive plan as a means of promoting a high quality multimodal environment within selected urban areas. The Florida Depa...
MOPED enables discoveries through consistently processed proteomics data
Higdon, Roger; Stewart, Elizabeth; Stanberry, Larissa; Haynes, Winston; Choiniere, John; Montague, Elizabeth; Anderson, Nathaniel; Yandl, Gregory; Janko, Imre; Broomall, William; Fishilevich, Simon; Lancet, Doron; Kolker, Natali; Kolker, Eugene
2014-01-01
The Model Organism Protein Expression Database (MOPED, http://moped.proteinspire.org) is an expanding proteomics resource to enable biological and biomedical discoveries. MOPED aggregates simple, standardized and consistently processed summaries of protein expression and metadata from proteomics (mass spectrometry) experiments from human and model organisms (mouse, worm and yeast). The latest version of MOPED adds new estimates of protein abundance and concentration, as well as relative (differential) expression data. MOPED provides a new updated query interface that allows users to explore information by organism, tissue, localization, condition, experiment, or keyword. MOPED supports the Human Proteome Project's efforts to generate chromosome- and disease-specific proteomes by providing links from proteins to chromosome and disease information, as well as many complementary resources. MOPED supports a new omics metadata checklist in order to harmonize data integration, analysis and use. MOPED's development is driven by the user community, which spans 90 countries, guiding future development that will transform MOPED into a multi-omics resource. MOPED encourages users to submit data in a simple format; they can use the metadata checklist to generate a data publication for this submission. As a result, MOPED will provide even greater insights into complex biological processes and systems and enable deeper and more comprehensive biological and biomedical discoveries. PMID:24350770
RFID in the blood supply chain--increasing productivity, quality and patient safety.
Briggs, Lynne; Davis, Rodeina; Gutierrez, Alfonso; Kopetsky, Matthew; Young, Kassandra; Veeramani, Raj
2009-01-01
As part of an overall design of a new, standardized RFID-enabled blood transfusion medicine supply chain, an assessment was conducted for two hospitals: the University of Iowa Hospital and Clinics (UIHC) and Mississippi Baptist Health System (MBHS). The main objectives of the study were to assess RFID technological and economic feasibility, along with possible impacts on productivity, quality and patient safety. A step-by-step process analysis focused on the factors contributing to process "pain points" (errors, inefficiency, product losses). A process re-engineering exercise produced blueprints of RFID-enabled processes to alleviate or eliminate those pain points. In addition, an innovative model quantifying the potential reduction in adverse patient effects as a result of RFID implementation was created, allowing improvement initiatives to focus on process areas with the greatest potential impact on patient safety. The study concluded that it is feasible to implement RFID-enabled processes, with tangible improvements to productivity and safety expected. Based on a comprehensive cost/benefit model, a large hospital (UIHC) is estimated to recover its investment within two to three years of implementation, while smaller hospitals may need longer to realize ROI. More importantly, the study estimated that RFID technology could reduce morbidity and mortality effects substantially among patients receiving transfusions.
ERIC Educational Resources Information Center
Duda, Richard
The immediate objective of this course in technical English was to enable French-speaking mechanics and technicians to read the instructions for the installation, operation and upkeep of American-made machinery. Although the learners knew very little English, available British and American technical documents were used because of their…
Evensen, Stig; Wisløff, Torbjørn; Lystad, June Ullevoldsæter; Bull, Helen; Ueland, Torill; Falkum, Erik
2016-01-01
Schizophrenia is associated with recurrent hospitalizations, need for long-term community support, poor social functioning, and low employment rates. Despite the wide-ranging financial and social burdens associated with the illness, there is great uncertainty regarding prevalence, employment rates, and the societal costs of schizophrenia. The current study investigates 12-month prevalence of patients treated for schizophrenia, employment rates, and cost of schizophrenia using a population-based top-down approach. Data were obtained from comprehensive and mandatory health and welfare registers in Norway. We identified a 12-month prevalence of 0.17% for the entire population. The employment rate among working-age individuals was 10.24%. The societal costs for the 12-month period were USD 890 million. The average cost per individual with schizophrenia was USD 106 thousand. Inpatient care and lost productivity due to high unemployment represented 33% and 29%, respectively, of the total costs. The use of mandatory health and welfare registers enabled a unique and informative analysis on true population-based datasets. PMID:26433216
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cutler, Dylan; Frank, Stephen; Slovensky, Michelle
Rich, well-organized building performance and energy consumption data enable a host of analytic capabilities for building owners and operators, from basic energy benchmarking to detailed fault detection and system optimization. Unfortunately, data integration for building control systems is challenging and costly in any setting. Large portfolios of buildings--campuses, cities, and corporate portfolios--experience these integration challenges most acutely. These large portfolios often have a wide array of control systems, including multiple vendors and nonstandard communication protocols. They typically have complex information technology (IT) networks and cybersecurity requirements and may integrate distributed energy resources into their infrastructure. Although the challenges are significant, the integration of control system data has the potential to provide proportionally greater value for these organizations through portfolio-scale analytics, comprehensive demand management, and asset performance visibility. As a large research campus, the National Renewable Energy Laboratory (NREL) experiences significant data integration challenges. To meet them, NREL has developed an architecture for effective data collection, integration, and analysis, providing a comprehensive view of data integration based on functional layers. The architecture is being evaluated on the NREL campus through deployment of three pilot implementations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Genser, Krzysztof; Hatcher, Robert; Kelsey, Michael
The Geant4 simulation toolkit is used to model interactions between particles and matter. Geant4 employs a set of validated physics models that span a wide range of interaction energies. These models rely on measured cross-sections and phenomenological models with the physically motivated parameters that are tuned to cover many application domains. To study what uncertainties are associated with the Geant4 physics models we have designed and implemented a comprehensive, modular, user-friendly software toolkit that allows the variation of one or more parameters of one or more Geant4 physics models involved in simulation studies. It also enables analysis of multiple variants of the resulting physics observables of interest in order to estimate the uncertainties associated with the simulation model choices. Based on modern event-processing infrastructure software, the toolkit offers a variety of attractive features, e.g., flexible run-time configurable workflow, comprehensive bookkeeping, and an easy-to-expand collection of analytical components. Design, implementation technology, and key functionalities of the toolkit are presented in this paper and illustrated with selected results.
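The toolkit itself is not reproduced here; the sketch below only illustrates the general parameter-variation idea under stated assumptions, with an invented stand-in for a physics observable rather than a Geant4 model:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate(param, n=50_000):
    """Stand-in for an observable whose distribution depends on one
    tunable model parameter (purely illustrative, not Geant4)."""
    return rng.exponential(scale=param, size=n)

# Vary the parameter around its nominal value and record the observable
# for each variant of the simulation.
nominal = 1.0
variants = {p: simulate(p) for p in (0.9 * nominal, nominal, 1.1 * nominal)}

# The spread of the observable's mean across variants estimates the
# contribution of this model parameter to the simulation uncertainty.
means = {p: v.mean() for p, v in variants.items()}
spread = max(means.values()) - min(means.values())
print(means, f"spread ~ {spread:.3f}")
```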
Tipton, Christopher M; Hom, Jennifer R; Fucile, Christopher F; Rosenberg, Alexander F; Sanz, Inaki
2018-07-01
Understanding antibody repertoires and in particular, the properties and fates of B cells expressing potentially pathogenic antibodies is critical to define the mechanisms underlying multiple immunological diseases including autoimmune and allergic conditions as well as transplant rejection. Moreover, an integrated knowledge of the antibody repertoires expressed by B cells and plasma cells (PC) of different functional properties and longevity is essential to develop new therapeutic strategies, better biomarkers for disease segmentation, and new assays to measure restoration of B-cell tolerance or, at least, of normal B-cell homeostasis. Reaching these goals, however, will require a more precise phenotypic, functional and molecular definition of B-cell and PC populations, and a comprehensive analysis of the antigenic reactivity of the antibodies they express. While traditionally hampered by technical and ethical limitations in human experimentation, new technological advances currently enable investigators to address these questions in a comprehensive fashion. In this review, we shall discuss these concepts as they apply to the study of Systemic Lupus Erythematosus. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Simpson, Mary Jane; Doughty, Benjamin; Das, Sanjib; ...
2017-07-04
A comprehensive understanding of electronic excited-state phenomena underlying the impressive performance of solution-processed hybrid halide perovskite solar cells requires access to both spatially resolved electronic processes and corresponding sample morphological characteristics. In this paper, we demonstrate an all-optical multimodal imaging approach that enables us to obtain both electronic excited-state and morphological information on a single optical microscope platform with simultaneous high temporal and spatial resolution. Specifically, images were acquired for the same region of interest in thin films of chloride-containing mixed lead halide perovskites (CH₃NH₃PbI₃₋ₓClₓ) using femtosecond transient absorption, time-integrated photoluminescence, confocal reflectance, and transmission microscopies. Comprehensive image analysis revealed the presence of surface- and bulk-dominated contributions to the various images, which describe either spatially dependent electronic excited-state properties or morphological variations across the probed region of the thin films. Finally, these results show that PL effectively probes the species near or at the film surface.
Large-scale gene function analysis with the PANTHER classification system.
Mi, Huaiyu; Muruganujan, Anushya; Casagrande, John T; Thomas, Paul D
2013-08-01
The PANTHER (protein annotation through evolutionary relationship) classification system (http://www.pantherdb.org/) is a comprehensive system that combines gene function, ontology, pathways and statistical analysis tools that enable biologists to analyze large-scale, genome-wide data from sequencing, proteomics or gene expression experiments. The system is built with 82 complete genomes organized into gene families and subfamilies, and their evolutionary relationships are captured in phylogenetic trees, multiple sequence alignments and statistical models (hidden Markov models or HMMs). Genes are classified according to their function in several different ways: families and subfamilies are annotated with ontology terms (Gene Ontology (GO) and PANTHER protein class), and sequences are assigned to PANTHER pathways. The PANTHER website includes a suite of tools that enable users to browse and query gene functions, and to analyze large-scale experimental data with a number of statistical tests. It is widely used by bench scientists, bioinformaticians, computer scientists and systems biologists. In the 2013 release of PANTHER (v.8.0), in addition to an update of the data content, we redesigned the website interface to improve both user experience and the system's analytical capability. This protocol provides a detailed description of how to analyze genome-wide experimental data with the PANTHER classification system.
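As an illustration of the kind of statistical test behind such enrichment analyses (a generic hypergeometric overrepresentation test, not necessarily PANTHER's exact method; all counts below are invented):

```python
from scipy.stats import hypergeom

# Hypothetical overrepresentation test of one functional category.
M = 20000  # genes in the reference genome
K = 400    # reference genes annotated to the category
n = 250    # genes in the user's experimental list
k = 15     # list genes annotated to the category

# P(X >= k): probability of drawing at least k annotated genes by
# chance when sampling n genes from the reference set.
p_value = hypergeom.sf(k - 1, M, K, n)
print(f"enrichment p-value = {p_value:.2e}")
```

With these counts the expected overlap by chance is 250 * 400 / 20000 = 5 genes, so observing 15 yields a small p-value, flagging the category as enriched.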
Review of software tools for design and analysis of large scale MRM proteomic datasets.
Colangelo, Christopher M; Chung, Lisa; Bruce, Can; Cheung, Kei-Hoi
2013-06-15
Selected or multiple reaction monitoring (SRM/MRM) is a liquid-chromatography (LC)/tandem-mass spectrometry (MS/MS) method that enables the quantitation of specific proteins in a sample by analyzing precursor ions and the fragment ions of their selected tryptic peptides. Instrumentation software has advanced to the point that thousands of transitions (pairs of primary and secondary m/z values) can be measured in a triple quadrupole instrument coupled to an LC, by a well-designed scheduling and selection of m/z windows. The design of a good MRM assay relies on the availability of peptide spectra from previous discovery-phase LC-MS/MS studies. The tedious aspect of manually developing and processing MRM assays involving thousands of transitions has spurred the development of software tools to automate this process. Software packages have been developed for project management, assay development, assay validation, data export, peak integration, quality assessment, and biostatistical analysis. No single tool provides a complete end-to-end solution, thus this article reviews the current state and discusses future directions of these software tools in order to enable researchers to combine these tools for a comprehensive targeted proteomics workflow. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
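One recurring design task such tools automate is checking that scheduled retention-time windows never demand more concurrent transitions than the instrument can measure; a simplified sketch with hypothetical transitions:

```python
from collections import namedtuple

Transition = namedtuple("Transition", "precursor_mz product_mz rt window")

def max_concurrent(transitions):
    """Peak number of transitions whose retention-time windows overlap;
    a scheduled MRM method must keep this within instrument limits."""
    events = []
    for t in transitions:
        events.append((t.rt - t.window / 2, 1))   # window opens
        events.append((t.rt + t.window / 2, -1))  # window closes
    load = peak = 0
    for _, delta in sorted(events):  # closings sort before openings at ties
        load += delta
        peak = max(peak, load)
    return peak

# Three hypothetical transitions with 2-minute scheduling windows.
ts = [Transition(523.3, 754.4, rt=12.1, window=2.0),
      Transition(611.8, 902.5, rt=12.6, window=2.0),
      Transition(480.2, 688.3, rt=20.0, window=2.0)]
print(max_concurrent(ts))  # -> 2 (the first two windows overlap)
```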
Abylkassimova, Nikara; Hugall, Andrew F.; O'Hara, Timothy D.; Elphick, Maurice R.
2017-01-01
Neuropeptides are a diverse class of intercellular signalling molecules that mediate neuronal regulation of many physiological and behavioural processes. Recent advances in genome/transcriptome sequencing are enabling identification of neuropeptide precursor proteins in species from a growing variety of animal taxa, providing new insights into the evolution of neuropeptide signalling. Here, detailed analysis of transcriptome sequence data from three brittle star species, Ophionotus victoriae, Amphiura filiformis and Ophiopsila aranea, has enabled the first comprehensive identification of neuropeptide precursors in the class Ophiuroidea of the phylum Echinodermata. Representatives of over 30 bilaterian neuropeptide precursor families were identified, some of which occur as paralogues. Furthermore, homologues of endothelin/CCHamide, eclosion hormone, neuropeptide-F/Y and nucleobinin/nesfatin were discovered here in a deuterostome/echinoderm for the first time. The majority of ophiuroid neuropeptide precursors contain a single copy of a neuropeptide, but several precursors comprise multiple copies of identical or non-identical, but structurally related, neuropeptides. Here, we performed an unprecedented investigation of the evolution of neuropeptide copy number over a period of approximately 270 Myr by analysing sequence data from over 50 ophiuroid species, with reference to a robust phylogeny. Our analysis indicates that the composition of neuropeptide ‘cocktails’ is functionally important, but with plasticity over long evolutionary time scales. PMID:28878039
Comprehensive UAV agricultural remote-sensing research at Texas A&M University
NASA Astrophysics Data System (ADS)
Thomasson, J. Alex; Shi, Yeyin; Olsenholler, Jeffrey; Valasek, John; Murray, Seth C.; Bishop, Michael P.
2016-05-01
Unmanned aerial vehicles (UAVs) have advantages over manned vehicles for agricultural remote sensing. Flying UAVs is less expensive, is more flexible in scheduling, enables lower altitudes, uses lower speeds, and provides better spatial resolution for imaging. The main disadvantage is that, at lower altitudes and speeds, only small areas can be imaged. However, on large farms with contiguous fields, high-quality images can be collected regularly by using UAVs with appropriate sensing technologies that enable high-quality image mosaics to be created with sufficient metadata and ground-control points. In the United States, rules governing the use of aircraft are promulgated and enforced by the Federal Aviation Administration (FAA), and rules governing UAVs are currently in flux. Operators must apply for appropriate permissions to fly UAVs. In the summer of 2015 Texas A&M University's agricultural research agency, Texas A&M AgriLife Research, embarked on a comprehensive program of remote sensing with UAVs at its 568-ha Brazos Bottom Research Farm. This farm is made up of numerous fields where various crops are grown in plots or complete fields. The crops include cotton, corn, sorghum, and wheat. After gaining FAA permission to fly at the farm, the research team used multiple fixed-wing and rotary-wing UAVs along with various sensors to collect images over all parts of the farm at least once per week. This article reports on details of flight operations and sensing and analysis protocols, and it includes some lessons learned in the process of developing a UAV remote-sensing effort of this sort.
Raghunath, Arathi; Sambarey, Awanti; Sharma, Neha; Mahadevan, Usha; Chandra, Nagasuma
2015-04-29
Ultraviolet radiations (UV) serve as an environmental stress for human skin, and result in melanogenesis, with the pigment melanin having protective effects against UV induced damage. This involves a dynamic and complex regulation of various biological processes that results in the expression of melanin in the outer most layers of the epidermis, where it can exert its protective effect. A comprehensive understanding of the underlying cross talk among different signalling molecules and cell types is only possible through a systems perspective. Increasing incidences of both melanoma and non-melanoma skin cancers necessitate the need to better comprehend UV mediated effects on skin pigmentation at a systems level, so as to ultimately evolve knowledge-based strategies for efficient protection and prevention of skin diseases. A network model for UV-mediated skin pigmentation in the epidermis was constructed and subjected to shortest path analysis. Virtual knock-outs were carried out to identify essential signalling components. We describe a network model for UV-mediated skin pigmentation in the epidermis. The model consists of 265 components (nodes) and 429 directed interactions among them, capturing the manner in which one component influences the other and channels information. Through shortest path analysis, we identify novel signalling pathways relevant to pigmentation. Virtual knock-outs or perturbations of specific nodes in the network have led to the identification of alternate modes of signalling as well as enabled determining essential nodes in the process. The model presented provides a comprehensive picture of UV mediated signalling manifesting in human skin pigmentation. A systems perspective helps provide a holistic purview of interconnections and complexity in the processes leading to pigmentation. The model described here is extensive yet amenable to expansion as new data is gathered. Through this study, we provide a list of important proteins essential for pigmentation which can be further explored to better understand normal pigmentation as well as its pathologies including vitiligo and melanoma, and enable therapeutic intervention.
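A toy sketch of shortest-path analysis and virtual knock-outs on a directed signalling graph, using networkx; the node names below are illustrative only, not the study's 265-component model:

```python
import networkx as nx

# Small invented UV-to-melanin signalling graph with two routes.
G = nx.DiGraph([("UV", "p53"), ("p53", "POMC"), ("POMC", "alpha-MSH"),
                ("alpha-MSH", "MC1R"), ("MC1R", "MITF"), ("MITF", "TYR"),
                ("UV", "NO"), ("NO", "MITF"), ("TYR", "melanin")])

# Shortest signalling path from the stimulus to the pigment output.
print(nx.shortest_path(G, "UV", "melanin"))

# Virtual knock-out: remove a node and test whether the signal still
# reaches the output; nodes whose removal disconnects it are essential.
for node in ["MITF", "NO"]:
    H = G.copy()
    H.remove_node(node)
    reachable = nx.has_path(H, "UV", "melanin")
    print(node, "knock-out -> melanin reachable:", reachable)
```

In this toy graph the "MITF" knock-out severs both routes (essential node), while "NO" leaves the p53 route intact, mirroring how the study distinguishes essential components from those with alternate signalling modes.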
A Practical Guide to the Study of Sport Related Injuries.
ERIC Educational Resources Information Center
Alles, Wesley F.; And Others
1985-01-01
A comprehensive injury surveillance system enables local administrators to effectively target their countermeasures, thereby reducing injury risk and increasing the benefits of athletic competition and leisure activity. This article offers a guide to such a system. (MT)
45 CFR Appendix A to Part 96 - Uniform Definitions of Services
Code of Federal Regulations, 2010 CFR
2010-10-01
...-hour day. Component services or activities may include opportunity for social interaction... activities for children, recreation, meals and snacks, transportation, health support services, social... those educational, comprehensive medical or social services or activities which enable individuals...
Liu, Ping; Salvi, Ashwin
2018-01-16
With more than 250 conceptual designs submitted, we are pleased to highlight the winners of the LIghtweighting Technologies Enabling Comprehensive Automotive Redesign (LITECAR) Challenge. These innovative conceptual designs seek to lightweight a vehicle while maintaining or exceeding current U.S. automotive safety standards.
Bedford, Nicholas M; Hughes, Zak E; Tang, Zhenghua; Li, Yue; Briggs, Beverly D; Ren, Yang; Swihart, Mark T; Petkov, Valeri G; Naik, Rajesh R; Knecht, Marc R; Walsh, Tiffany R
2016-01-20
Peptide-enabled nanoparticle (NP) synthesis routes can create and/or assemble functional nanomaterials under environmentally friendly conditions, with properties dictated by complex interactions at the biotic/abiotic interface. Manipulation of this interface through sequence modification can provide the capability for material properties to be tailored to create enhanced materials for energy, catalysis, and sensing applications. Fully realizing the potential of these materials requires a comprehensive understanding of sequence-dependent structure/function relationships that is presently lacking. In this work, the atomic-scale structures of a series of peptide-capped Au NPs are determined using a combination of atomic pair distribution function analysis of high-energy X-ray diffraction data and advanced molecular dynamics (MD) simulations. The Au NPs produced with different peptide sequences exhibit varying degrees of catalytic activity for the exemplar reaction 4-nitrophenol reduction. The experimentally derived atomic-scale NP configurations reveal sequence-dependent differences in structural order at the NP surface. Replica exchange with solute-tempering MD simulations are then used to predict the morphology of the peptide overlayer on these Au NPs and identify factors determining the structure/catalytic properties relationship. We show that the amount of exposed Au surface, the underlying surface structural disorder, and the interaction strength of the peptide with the Au surface all influence catalytic performance. A simplified computational prediction of catalytic performance is developed that can potentially serve as a screening tool for future studies. Our approach provides a platform for broadening the analysis of catalytic peptide-enabled metallic NP systems, potentially allowing for the development of rational design rules for property enhancement.
Speciated Elemental and Isotopic Characterization of Atmospheric Aerosols - Recent Advances
NASA Astrophysics Data System (ADS)
Shafer, M.; Majestic, B.; Schauer, J.
2007-12-01
Detailed elemental, isotopic, and chemical speciation analysis of aerosol particulate matter (PM) can provide valuable information on PM sources, atmospheric processing, and climate forcing. Certain PM sources may best be resolved using trace metal signatures, and elemental and isotopic fingerprints can supplement and enhance molecular maker analysis of PM for source apportionment modeling. In the search for toxicologically relevant components of PM, health studies are increasingly demanding more comprehensive characterization schemes. It is also clear that total metal analysis is at best a poor surrogate for the bioavailable component, and analytical techniques that address the labile component or specific chemical species are needed. Recent sampling and analytical developments advanced by the project team have facilitated comprehensive characterization of even very small masses of atmospheric PM. Historically, this level of detail was rarely achieved due to limitations in analytical sensitivity and a lack of awareness concerning the potential for contamination. These advances have enabled the coupling of advanced chemical characterization to vital field sampling approaches that typically supply only very limited PM mass; e.g. (1) particle size-resolved sampling; (2) personal sampler collections; and (3) fine temporal scale sampling. The analytical tools that our research group is applying include: (1) sector field (high-resolution-HR) ICP-MS, (2) liquid waveguide long-path spectrophotometry (LWG-LPS), and (3) synchrotron x-ray absorption spectroscopy (sXAS). When coupled with an efficient and validated solubilization method, the HR-ICP-MS can provide quantitative elemental information on over 50 elements in microgram quantities of PM. The high mass resolution and enhanced signal-to-noise of HR-ICP-MS significantly advance data quality and quantity over that possible with traditional quadrupole ICP-MS. The LWG-LPS system enables an assessment of the soluble/labile components of PM, while simultaneously providing critical oxidation state speciation data. Importantly, the LWG-LPS can be deployed in a semi-real-time configuration to probe fine temporal scale variations in atmospheric processing or sources of PM. The sXAS is providing complementary oxidation state speciation of bulk PM. Using examples from our research; we will illustrate the capabilities and applications of these new methods.
Innovative Near Real-Time Data Dissemination Tools Developed by the Space Weather Research Center
NASA Astrophysics Data System (ADS)
Maddox, Marlo M.; Mullinix, Richard; Mays, M. Leila; Kuznetsova, Maria; Zheng, Yihua; Pulkkinen, Antti; Rastaetter, Lutz
2013-03-01
Access to near real-time and real-time space weather data is essential to accurately specifying and forecasting the space environment. The Space Weather Research Center at NASA Goddard Space Flight Center's Space Weather Laboratory provides vital space weather forecasting services primarily to NASA robotic mission operators, as well as external space weather stakeholders including the Air Force Weather Agency. A key component in this activity is the iNtegrated Space Weather Analysis System which is a joint development project at NASA GSFC between the Space Weather Laboratory, Community Coordinated Modeling Center, Applied Engineering & Technology Directorate, and NASA HQ Office of Chief Engineer. The iSWA system was developed to address technical challenges in acquiring and disseminating space weather environment information. A key design driver for the iSWA system was to generate and present vast amounts of space weather resources in an intuitive, user-configurable, and adaptable format - thus enabling users to respond to current and future space weather impacts as well as enabling post-impact analysis. Having access to near real-time and real-time data is essential to not only ensuring that relevant observational data is available for analysis - but also in ensuring that models can be driven with the requisite input parameters at proper and efficient temporal and spatial resolutions. The iSWA system currently manages over 300 unique near real-time and real-time data feeds from various sources consisting of both observational and simulation data. A comprehensive suite of actionable space weather analysis tools and products are generated and provided utilizing a mixture of the ingested data - enabling new capabilities in quickly assessing past, present, and expected space weather effects. This paper will highlight current and future iSWA system capabilities including the utilization of data from the Solar Dynamics Observatory mission. http://iswa.gsfc.nasa.gov/
Assessing the environmental impacts of aircraft noise and emissions
NASA Astrophysics Data System (ADS)
Mahashabde, Anuja; Wolfe, Philip; Ashok, Akshay; Dorbian, Christopher; He, Qinxian; Fan, Alice; Lukachko, Stephen; Mozdzanowska, Aleksandra; Wollersheim, Christoph; Barrett, Steven R. H.; Locke, Maryalice; Waitz, Ian A.
2011-01-01
With the projected growth in demand for commercial aviation, many anticipate increased environmental impacts associated with noise, air quality, and climate change. Therefore, decision-makers and stakeholders are seeking policies, technologies, and operational procedures that balance environmental and economic interests. The main objective of this paper is to address shortcomings in current decision-making practices for aviation environmental policies. We review knowledge of the noise, air quality, and climate impacts of aviation, and demonstrate how including environmental impact assessment and quantifying uncertainties can enable a more comprehensive evaluation of aviation environmental policies. A comparison is presented between the cost-effectiveness analysis currently used for aviation environmental policy decision-making and an illustrative cost-benefit analysis. We focus on assessing a subset of the engine NO X emissions certification stringency options considered at the eighth meeting of the International Civil Aviation Organization’s Committee on Aviation Environmental Protection. The FAA Aviation environmental Portfolio Management Tool (APMT) is employed to conduct the policy assessments. We show that different conclusions may be drawn about the same policy options depending on whether benefits and interdependencies are estimated in terms of health and welfare impacts versus changes in NO X emissions inventories as is the typical practice. We also show that these conclusions are sensitive to a variety of modeling uncertainties. While our more comprehensive analysis makes the best policy option less clear, it represents a more accurate characterization of the scientific and economic uncertainties underlying impacts and the policy choices.
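The kind of cost-benefit analysis under uncertainty discussed above can be illustrated with a simple Monte Carlo sketch; the distributions and dollar figures below are invented for illustration, not APMT outputs:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # Monte Carlo draws

# Hypothetical policy: compliance costs are fairly well characterized,
# while monetized health/climate benefits carry wide scientific and
# economic uncertainty (hence the skewed lognormal).
costs = rng.normal(loc=100.0, scale=10.0, size=n)                # $M
benefits = rng.lognormal(mean=np.log(120.0), sigma=0.5, size=n)  # $M

net = benefits - costs
print(f"mean net benefit: {net.mean():.1f} $M")
print(f"P(net benefit > 0): {(net > 0).mean():.2f}")
print(f"5th-95th percentile: {np.percentile(net, [5, 95])}")
```

Reporting the full distribution of net benefit, rather than a single emissions-inventory change, is what allows policy options to be compared while acknowledging the underlying uncertainty.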
Audrézet, Marie Pierre; Munck, Anne; Scotet, Virginie; Claustres, Mireille; Roussey, Michel; Delmas, Dominique; Férec, Claude; Desgeorges, Marie
2015-02-01
Newborn screening (NBS) for cystic fibrosis (CF) was implemented throughout France in 2002. It involves a four-tiered procedure: immunoreactive trypsin (IRT)/DNA/IRT/sweat test. The aim of this study was to assess the performance of molecular CFTR gene analysis from the French NBS cohort, to evaluate CF incidence, mutation detection rate, and allelic heterogeneity. During the 8-year period, 5,947,148 newborns were screened for cystic fibrosis. The data were collected by the Association Française pour le Dépistage et la Prévention des Handicaps de l'Enfant. The mutations identified were classified into four groups based on their potential for causing disease, and a diagnostic algorithm was proposed. Combining the genetic and sweat test results, 1,160 neonates were diagnosed as having cystic fibrosis. The corresponding incidence, including both the meconium ileus (MI) and false-negative cases, was calculated at 1 in 4,726 live births. The CF30 kit, completed with a comprehensive CFTR gene analysis, provides an excellent detection rate of 99.77% for the mutated alleles, enabling the identification of a complete genotype in 99.55% of affected neonates. With more than 200 different mutations characterized, we confirmed the French allelic heterogeneity. The very good sensitivity, specificity, and positive predictive value obtained suggest that the four-tiered IRT/DNA/IRT/sweat test procedure may provide an effective strategy for newborn screening for cystic fibrosis.
Next-Generation Molecular Testing of Newborn Dried Blood Spots for Cystic Fibrosis.
Lefterova, Martina I; Shen, Peidong; Odegaard, Justin I; Fung, Eula; Chiang, Tsoyu; Peng, Gang; Davis, Ronald W; Wang, Wenyi; Kharrazi, Martin; Schrijver, Iris; Scharfe, Curt
2016-03-01
Newborn screening for cystic fibrosis enables early detection and management of this debilitating genetic disease. Implementing comprehensive CFTR analysis using Sanger sequencing as a component of confirmatory testing of all screen-positive newborns has remained impractical due to relatively lengthy turnaround times and high cost. Here, we describe CFseq, a highly sensitive, specific, rapid (<3 days), and cost-effective assay for comprehensive CFTR gene analysis from dried blood spots, the common newborn screening specimen. The unique design of CFseq integrates optimized dried blood spot sample processing, a novel multiplex amplification method from as little as 1 ng of genomic DNA, and multiplex next-generation sequencing of 96 samples in a single run to detect all relevant CFTR mutation types. Sequence data analysis utilizes publicly available software supplemented by an expert-curated compendium of >2000 CFTR variants. Validation studies across 190 dried blood spots demonstrated 100% sensitivity and a positive predictive value of 100% for single-nucleotide variants and insertions and deletions and complete concordance across the polymorphic poly-TG and consecutive poly-T tracts. Additionally, we accurately detected both a known exon 2,3 deletion and a previously undetected exon 22,23 deletion. CFseq is thus able to replace all existing CFTR molecular assays with a single robust, definitive assay at significant cost and time savings and could be adapted to high-throughput screening of other inherited conditions. Copyright © 2016 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
Cytogenetics of melanoma and nonmelanoma skin cancer.
Carless, Melanie A; Griffiths, Lyn R
2014-01-01
Cytogenetic analysis of melanoma and nonmelanoma skin cancers has revealed recurrent aberrations, the frequency of which is reflective of malignant potential. Highly aberrant karyotypes are seen in melanoma, squamous cell carcinoma, actinic keratosis, Merkel cell carcinoma and cutaneous lymphomas with more stable karyotypes seen in basal cell carcinoma, keratoacanthoma, Bowen's disease and dermatofibrosarcoma protuberans. Some aberrations are common among a number of skin cancer types including rearrangements and numerical abnormalities of chromosome 1, -3p, +3q, partial or entire trisomy 6, trisomy 7, +8q, -9p, +9q, partial or entire loss of chromosome 10, -17p, +17q and partial or entire gain of chromosome 20. Combination of cytogenetic analysis with other molecular genetic techniques has enabled the identification of not only aberrant chromosomal regions, but also the genes that contribute to a malignant phenotype. This review provides a comprehensive summary of the pertinent cytogenetic aberrations associated with a variety of melanoma and nonmelanoma skin cancers.
Pasi, Marco; Maddocks, John H.; Lavery, Richard
2015-01-01
Microsecond molecular dynamics simulations of B-DNA oligomers carried out in an aqueous environment with a physiological salt concentration enable us to perform a detailed analysis of how potassium ions interact with the double helix. The oligomers studied contain all 136 distinct tetranucleotides and we are thus able to make a comprehensive analysis of base sequence effects. Using a recently developed curvilinear helicoidal coordinate method we are able to analyze the details of ion populations and densities within the major and minor grooves and in the space surrounding DNA. The results show higher ion populations than have typically been observed in earlier studies and sequence effects that go beyond the nature of individual base pairs or base pair steps. We also show that, in some special cases, ion distributions converge very slowly and, on a microsecond timescale, do not reflect the symmetry of the corresponding base sequence. PMID:25662221
Performance and Sizing Tool for Quadrotor Biplane Tailsitter UAS
NASA Astrophysics Data System (ADS)
Strom, Eric
The Quadrotor-Biplane-Tailsitter (QBT) configuration is the basis for a mechanically simple rotorcraft capable of both long-range, high-speed cruise and hovering flight. This work presents the development and validation of a set of preliminary design tools built specifically for this aircraft to enable its further development, including a QBT weight model, a preliminary sizing framework, and vehicle analysis tools. The preliminary sizing tool presented here shows the advantage afforded by QBT designs in missions with aggressive cruise requirements, such as offshore wind turbine inspections, wherein transition from a quadcopter configuration to a QBT allows for a 5:1 trade of battery weight for wing weight. A 3D, unsteady panel method utilizing a nonlinear implementation of the Kutta-Joukowsky condition is also presented as a means of computing aerodynamic interference effects and, through the implementation of rotor, body, and wing geometry generators, is prepared for coupling with a comprehensive rotor analysis package.
[Empowerment of women in difficult life situations: the BIG project].
Rütten, A; Röger, U; Abu-Omar, K; Frahsa, A
2008-12-01
BIG is a project for the promotion of physical activity among women in difficult life situations. Following the main health promotion principles of the WHO, the women shall be enabled or empowered to take control of determinants of their health. A comprehensive participatory approach was applied and women were included in planning, implementing and evaluating the project. For measuring the effects of BIG on the empowerment of participating women, qualitative semi-structured interviews with 15 women participating in BIG were conducted. For data analysis, qualitative content analysis was used. Results showed the empowerment of the women on the individual level as they gained different competencies and perceived self-efficacy. These effects were supported through the empowerment process on the organizational and community levels where women gained control over their life situations and over policies influencing them. Therefore, the participatory approach of BIG is a key success factor for empowerment promotion of women in difficult life situations.
Computational knowledge integration in biopharmaceutical research.
Ficenec, David; Osborne, Mark; Pradines, Joel; Richards, Dan; Felciano, Ramon; Cho, Raymond J; Chen, Richard O; Liefeld, Ted; Owen, James; Ruttenberg, Alan; Reich, Christian; Horvath, Joseph; Clark, Tim
2003-09-01
An initiative to increase biopharmaceutical research productivity by capturing, sharing and computationally integrating proprietary scientific discoveries with public knowledge is described. This initiative involves both organisational process change and multiple interoperating software systems. The software components rely on mutually supporting integration techniques. These include a richly structured ontology, statistical analysis of experimental data against stored conclusions, natural language processing of public literature, secure document repositories with lightweight metadata, web services integration, enterprise web portals and relational databases. This approach has already begun to increase scientific productivity in our enterprise by creating an organisational memory (OM) of internal research findings, accessible on the web. Through bringing together these components it has also been possible to construct a very large and expanding repository of biological pathway information linked to this repository of findings which is extremely useful in analysis of DNA microarray data. This repository, in turn, enables our research paradigm to be shifted towards more comprehensive systems-based understandings of drug action.
NASA Technical Reports Server (NTRS)
Padovan, Joe
1986-01-01
In a three part series of papers, a generalized finite element analysis scheme is developed to handle the steady and transient response of moving/rolling nonlinear viscoelastic structure. This paper considers the development of the moving/rolling element strategy, including the effects of large deformation kinematics and viscoelasticity modelled by fractional integro-differential operators. To improve the solution strategy, a special hierarchical constraint procedure is developed for the case of steady rolling/translating as well as a transient scheme involving the use of a Grunwaldian representation of the fractional operator. In the second and third parts of the paper, 3-D extensions are developed along with transient contact strategies enabling the handling of impacts with obstructions. Overall, the various developments are benchmarked via comprehensive 2- and 3-D simulations. These are correlated with experimental data to define modelling capabilities.
Cunningham, Charles E; Chen, Yvonne; Vaillancourt, Tracy; Rimas, Heather; Deal, Ken; Cunningham, Lesley J; Ratcliffe, Jenna
2015-01-01
Adaptive choice-based conjoint analysis was used to study the anti-cyberbullying program preferences of 1,004 university students. More than 60% reported involvement in cyberbullying as witnesses (45.7%), victims (5.7%), perpetrator-victims (4.9%), or perpetrators (4.5%). Men were more likely to report involvement as perpetrators and perpetrator-victims than were women. Students recommended advertisements featuring famous people who emphasized the impact of cyberbullying on victims. They preferred a comprehensive approach teaching skills to prevent cyberbullying, encouraging students to report incidents, enabling anonymous online reporting, and terminating the internet privileges of students involved as perpetrators. Those who cyberbully were least likely, and victims of cyberbullying were most likely, to support an approach combining prevention and consequences. Simulations introducing mandatory reporting, suspensions, or police charges predicted a substantial reduction in the support of uninvolved students, witnesses, victims, and perpetrators. © 2014 Wiley Periodicals, Inc.
Measurement of sulfur isotope compositions by tunable laser spectroscopy of SO2.
Christensen, Lance E; Brunner, Benjamin; Truong, Kasey N; Mielke, Randall E; Webster, Christopher R; Coleman, Max
2007-12-15
Sulfur isotope measurements offer comprehensive information on the origin and history of natural materials. Tunable laser spectroscopy is a powerful analytical technique for isotope analysis that has proven itself readily adaptable for in situ terrestrial and planetary measurements. Measurements of δ34S in SO2 were made using tunable laser spectroscopy of combusted gas samples from six sulfur-bearing solids with δ34S ranging from -34 to +22 per thousand (also measured with mass spectrometry). The standard deviation between laser and mass spectrometer measurements was 3.7 per thousand for sample sizes of 200 ± 75 nmol SO2. Although SO2(g) decreased 9% over 15 min upon entrainment in the analysis cell owing to wall uptake, the observed fractionation was insignificant (+0.2 ± 0.6 per thousand). We also describe a strong, distinct 33SO2 rovibrational transition in the same spectral region, which may enable simultaneous δ34S and Δ33S measurements.
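For reference, the δ notation used above expresses an isotope ratio relative to a standard, in per-mil units. A minimal sketch follows; the V-CDT 34S/32S reference ratio below is a published standard value, included only for illustration.

```python
def delta_per_mil(r_sample: float, r_standard: float) -> float:
    """Delta value in per mil: (R_sample / R_standard - 1) * 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Example: a 34S/32S ratio measured against the V-CDT standard (~0.0441626).
R_VCDT = 0.0441626  # commonly cited reference ratio, for illustration only
print(delta_per_mil(0.0426, R_VCDT))  # a strongly negative delta-34S value
```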
[Development of indicators for evaluating public dental healthcare services].
Bueno, Vera Lucia Ribeiro de Carvalho; Cordoni Júnior, Luiz; Mesas, Arthur Eumann
2011-07-01
The objective of this article is to describe and analyze the development of indicators used to identify strengths and deficiencies in public dental healthcare services in the municipality of Cambé, Paraná. The methodology employed was a historical-organizational case study. A theoretical model of the service was developed for evaluation planning. To achieve this, information was collected from triangulation of methods (interviews, document analysis and observation). A matrix was then developed which presents analysis dimensions, criteria, indicators, punctuation, parameters and sources of information. Three workshops were staged during the process with local service professionals in order to verify whether both the logical model and the matrix represented the service adequately. The period for collecting data was from November 2006 through July, 2007. As a result, a flowchart of the organization of the public dental health service and a matrix with two-dimensional analysis, twelve criteria and twenty-four indicators, was developed. The development of indicators favoring the participation of people involved with the practice has enabled more comprehensive and realistic evaluation planning.
Two-Phase Extraction for Comprehensive Analysis of the Plant Metabolome by NMR.
Schripsema, Jan; Dagnino, Denise
2018-01-01
Metabolomics is the area of research that strives to obtain complete metabolic fingerprints, to detect differences between them, and to provide hypotheses to explain those differences [1]. But obtaining complete metabolic fingerprints is not an easy task. Metabolite extraction is a key step in this process, and much research has been devoted to finding the best solvent mixture to extract as many metabolites as possible. Here a procedure is described for the analysis of both polar and apolar metabolites using a two-phase extraction system. D2O and CDCl3 are the solvents of choice, and their major advantage is that standard databases can be used for compound identification, because D2O and CDCl3 are the solvents most commonly used for pure-compound NMR spectra. The procedure enables the absolute quantification of components via the addition of suitable internal standards. The extracts are also suitable for further analysis with other systems like LC-MS or GC-MS.
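Absolute quantification against an internal standard follows the standard qNMR relation: an analyte's molar concentration scales with the ratio of its signal integral to the standard's, corrected for the number of protons each integrated signal represents. A small sketch, with illustrative names and numbers only:

```python
def qnmr_concentration(integral_analyte, integral_standard,
                       protons_analyte, protons_standard,
                       conc_standard_mM):
    """Molar concentration from relative NMR integrals (internal standard)."""
    return (integral_analyte / integral_standard) \
        * (protons_standard / protons_analyte) * conc_standard_mM

# Example: an analyte signal from 3 equivalent protons, quantified against a
# 9-proton standard (e.g., the TSP methyl groups) at 2.0 mM.
print(qnmr_concentration(1.5, 1.0, 3, 9, conc_standard_mM=2.0))  # -> 9.0 mM
```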
A high-resolution 7-Tesla fMRI dataset from complex natural stimulation with an audio movie.
Hanke, Michael; Baumgartner, Florian J; Ibe, Pierre; Kaule, Falko R; Pollmann, Stefan; Speck, Oliver; Zinke, Wolf; Stadler, Jörg
2014-01-01
Here we present a high-resolution functional magnetic resonance imaging (fMRI) dataset - 20 participants recorded at high field strength (7 Tesla) during prolonged stimulation with an auditory feature film ("Forrest Gump"). In addition, a comprehensive set of auxiliary data (T1w, T2w, DTI, susceptibility-weighted image, angiography) as well as measurements to assess technical and physiological noise components have been acquired. An initial analysis confirms that these data can be used to study common and idiosyncratic brain response patterns to complex auditory stimulation. Among the potential uses of this dataset are the study of auditory attention and cognition, language and music perception, and social perception. The auxiliary measurements enable a large variety of additional analysis strategies that relate functional response patterns to structural properties of the brain. Alongside the acquired data, we provide source code and detailed information on all employed procedures - from stimulus creation to data analysis. In order to facilitate replicative and derived works, only free and open-source software was utilized.
GAMES identifies and annotates mutations in next-generation sequencing projects.
Sana, Maria Elena; Iascone, Maria; Marchetti, Daniela; Palatini, Jeff; Galasso, Marco; Volinia, Stefano
2011-01-01
Next-generation sequencing (NGS) methods have the potential for changing the landscape of biomedical science, but at the same time pose several problems in analysis and interpretation. Currently, there are many commercial and public software packages that analyze NGS data. However, these applications are limited by output that is insufficiently annotated and difficult for end users to interpret functionally. We developed GAMES (Genomic Analysis of Mutations Extracted by Sequencing), a pipeline aiming to serve as an efficient middleman between the data deluge and investigators. GAMES performs multiple levels of filtering and annotation, such as aligning the reads to a reference genome, performing quality control and mutational analysis, integrating results with genome annotations and sorting each mismatch/deletion according to a range of parameters. Variations are matched to known polymorphisms. The prediction of functional mutations is achieved using different approaches. Overall, GAMES enables an effective complexity reduction in large-scale DNA-sequencing projects. GAMES is available free of charge to academic users and may be obtained from http://aqua.unife.it/GAMES.
Santos, Sandra; Cadime, Irene; Viana, Fernanda L; Chaves-Sousa, Séli; Gayo, Elena; Maia, José; Ribeiro, Iolanda
2017-02-01
Reading comprehension assessment should rely on valid instruments that enable adequate conclusions to be taken regarding students' reading comprehension performance. In this article, two studies were conducted to collect validity evidence for the vertically scaled forms of two Tests of Reading Comprehension for Portuguese elementary school students in the second to fourth grades, one with narrative texts (TRC-n) and another with expository ones (TRC-e). Two samples of 950 and 990 students participated in Study 1, the study of the dimensionality of the TRC-n and TRC-e forms, respectively. Confirmatory factor analyses provided evidence of an acceptable fit for the one-factor solution for all test forms. Study 2 included 218 students to collect criterion-related validity. The scores obtained in each of the test forms were significantly correlated with the ones obtained in other reading comprehension measures and with the results obtained in oral reading fluency, vocabulary and working memory tests. Evidence suggests that the test forms are valid measures of reading comprehension. © 2016 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
Transcriptome and proteomic analysis of mango (Mangifera indica Linn) fruits.
Wu, Hong-xia; Jia, Hui-min; Ma, Xiao-wei; Wang, Song-biao; Yao, Quan-sheng; Xu, Wen-tian; Zhou, Yi-gang; Gao, Zhong-shan; Zhan, Ru-lin
2014-06-13
Here we used Illumina RNA-seq technology for transcriptome sequencing of a mixed fruit sample from 'Zill' mango (Mangifera indica Linn) fruit pericarp and pulp during the development and ripening stages. RNA-seq generated 68,419,722 sequence reads that were assembled into 54,207 transcripts with a mean length of 858 bp, including 26,413 clusters and 27,794 singletons. A total of 42,515 (78.43%) transcripts were annotated using public protein databases, with a cut-off E-value of 10⁻⁵, of which 35,198 and 14,619 transcripts were assigned to gene ontology terms and clusters of orthologous groups, respectively. Functional annotation against the Kyoto Encyclopedia of Genes and Genomes database identified 23,741 (43.79%) transcripts that mapped to 128 pathways, revealing many previously unknown transcripts. We also applied the transcriptome data to mass spectrometry-based characterization of the ripe fruit proteome: LC-MS/MS analysis was performed by tandem mass spectrometry (MS/MS) on an LTQ Orbitrap Velos (Thermo) coupled online to the HPLC. This approach enabled the identification of 7,536 peptides that matched 2,754 proteins. Our study provides a comprehensive sequence resource for a systemic view of both the transcriptome and proteome of mango fruit, including the most comprehensive fruit proteome to date, and a valuable reference for further research on gene expression and protein identification. This article is part of a Special Issue entitled: Proteomics of non-model organisms. Copyright © 2014 Elsevier B.V. All rights reserved.
Rosas, Samuel; Krill, Michael K; Amoo-Achampong, Kelms; Kwon, KiHyun; Nwachukwu, Benedict U; McCormick, Frank
2017-08-01
Clinical examination of the shoulder joint has gained attention as clinicians aim to use an evidence-based examination of the biceps tendon, with the desire for a proper diagnosis while minimizing costly imaging procedures. The purpose of this study is to create a decision tree analysis that enables the development of a clinical algorithm for diagnosing long head of biceps (LHB) pathology. A literature review of Level I and II diagnostic studies was conducted to extract characteristics of clinical tests for LHB pathology through a systematic review of PubMed, Medline, Ovid, and Cochrane Review databases. Tests were combined in series and parallel to determine sensitivities and specificities, and positive and negative likelihood ratios were determined for each combination using a subjective pretest probability. The "gold standard" for diagnosis in all included studies was arthroscopy or arthrotomy. The optimal testing modality was the uppercut test combined with tenderness to palpation of the biceps tendon. This combination achieved a sensitivity of 88.4% when performed in parallel and a specificity of 93.8% when performed in series. These tests used in combination optimize post-test probability accuracy more than any single individual test. Performing the uppercut test and the biceps groove tenderness to palpation test together has the highest sensitivity and specificity of known physical examination maneuvers to aid in the diagnosis of LHB pathology compared with diagnostic arthroscopy (practical, evidence-based, comprehensive examination). A decision tree analysis aids the practical, evidence-based, comprehensive examination by estimating post-test diagnostic accuracy from an ordinal-scale pretest probability. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
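The quoted combined accuracies follow from the standard series/parallel combination rules, under the usual assumption that the tests are conditionally independent given disease status. A small sketch; the input accuracies below are placeholders, not the study's extracted values:

```python
def parallel(se1, sp1, se2, sp2):
    """Combined result is positive if EITHER test is positive."""
    se = 1 - (1 - se1) * (1 - se2)  # misses only if both tests miss
    sp = sp1 * sp2                  # negative only if both tests are negative
    return se, sp

def series(se1, sp1, se2, sp2):
    """Combined result is positive only if BOTH tests are positive."""
    se = se1 * se2
    sp = 1 - (1 - sp1) * (1 - sp2)
    return se, sp

# Placeholder accuracies for two hypothetical tests:
print(parallel(0.70, 0.80, 0.65, 0.75))  # sensitivity rises in parallel
print(series(0.70, 0.80, 0.65, 0.75))    # specificity rises in series
```

This is why the same pair of tests is reported with its parallel sensitivity (ruling out) and its series specificity (ruling in).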
Podshivalov, L; Fischer, A; Bar-Yoseph, P Z
2011-04-01
This paper describes a new alternative for individualized mechanical analysis of bone trabecular structure. The new method closes the gap between the classic homogenization approach applied to macro-scale models and the modern micro-finite element method applied directly to micro-scale high-resolution models. The method is based on multiresolution geometrical modeling that generates intermediate structural levels. A new method for estimating multiscale material properties has also been developed to facilitate reliable and efficient mechanical analysis. What makes this method unique is that it enables direct and interactive analysis of the model at every intermediate level. Such flexibility is of principal importance in the analysis of trabecular porous structure. The method enables physicians to zoom in dynamically and focus on the volume of interest (VOI), thus paving the way for a large class of investigations into the mechanical behavior of bone structure. This is one of the very few methods in the field of computational biomechanics that applies mechanical analysis adaptively to large-scale, high-resolution models. The proposed computational multiscale FE method can serve as infrastructure for a future comprehensive computerized system for the diagnosis of bone structures. The aim of such a system is to assist physicians in diagnosis, prognosis, drug-treatment simulation and monitoring. Such a system can provide a better understanding of the disease, and hence benefit patients by providing better and more individualized treatment and high-quality healthcare. In this paper, we demonstrate the feasibility of our method on a high-resolution model of vertebra L3. Copyright © 2010 Elsevier Inc. All rights reserved.
High resolution production water footprints of the United States
NASA Astrophysics Data System (ADS)
Marston, L.; Yufei, A.; Konar, M.; Mekonnen, M.; Hoekstra, A. Y.
2017-12-01
The United States is the largest producer and consumer of goods and services in the world. Rainfall, surface water supplies, and groundwater aquifers represent fundamental inputs to this economic production. Despite the importance of water resources to economic activity, we do not have consistent information on water use for specific locations and economic sectors. A national, high-resolution database of water use by sector would provide insight into US utilization of and dependence on water resources for economic production. To this end, we calculate the water footprint of over 500 food, energy, mining, services, and manufacturing industries and goods produced in the US. To do this, we employ a data-intensive approach that integrates water footprint and input-output techniques into a novel methodological framework. This approach enables us to present the most detailed and comprehensive water footprint analysis of any country to date. This study broadly contributes to our understanding of water in the US economy, enables supply chain managers to assess direct and indirect water dependencies, and provides opportunities to reduce water use through benchmarking.
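The input-output side of such an analysis typically rests on the Leontief inverse: total (direct plus indirect) water intensities w satisfy w = d (I - A)^(-1), where d holds each sector's direct water use per dollar of output and A is the inter-industry technical coefficient matrix. A toy sketch with an invented three-sector economy:

```python
import numpy as np

# Technical coefficients A[i, j]: dollars of sector i input per dollar of
# sector j output (invented three-sector economy).
A = np.array([[0.10, 0.30, 0.05],
              [0.05, 0.10, 0.20],
              [0.02, 0.05, 0.10]])

# Direct water use per dollar of output (e.g., liters per $), invented values.
d = np.array([50.0, 5.0, 1.0])

# Total intensity = direct use + water embodied in all upstream inputs.
w = d @ np.linalg.inv(np.eye(3) - A)
print(w)  # total (direct + indirect) water intensities per sector
```

The gap between w and d is the supply-chain (indirect) water dependency that the abstract says managers can assess.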
Montesinos-López, Osval A; Montesinos-López, Abelardo; Crossa, José; Toledo, Fernando H; Montesinos-López, José C; Singh, Pawan; Juliana, Philomin; Salinas-Ruiz, Josafhat
2017-05-05
When a plant scientist wishes to make genomic-enabled predictions of multiple traits measured in multiple individuals in multiple environments, the most common strategy is to analyze a single trait at a time while taking into account genotype × environment interaction (G × E), because there is a lack of comprehensive models that simultaneously account for correlated count traits and G × E. For this reason, in this study we propose a multiple-trait and multiple-environment model for count data. The proposed model was developed under the Bayesian paradigm, for which we developed a Markov chain Monte Carlo (MCMC) scheme with noninformative priors. This allows all required full conditional distributions of the parameters to be obtained, leading to an exact Gibbs sampler for the posterior distribution. Our model was tested with simulated data and a real data set. Results show that the proposed multi-trait, multi-environment model is an attractive alternative for modeling multiple count traits measured in multiple environments. Copyright © 2017 Montesinos-López et al.
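A minimal sketch of the kind of data such a model addresses (correlated count traits with genotype × environment structure), simulated under a Poisson log link with correlated trait effects. This illustrates the data-generating idea only, not the authors' Bayesian Gibbs sampler, and all variance parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n_geno, n_env, n_traits = 20, 3, 2

# Correlated genetic effects across traits (shared covariance, invented).
Sigma_t = np.array([[0.30, 0.15],
                    [0.15, 0.30]])
g = rng.multivariate_normal(np.zeros(n_traits), Sigma_t, size=n_geno)
e = rng.normal(0.0, 0.2, size=(n_env, n_traits))           # environment effects
ge = rng.normal(0.0, 0.1, size=(n_geno, n_env, n_traits))  # G x E interaction

mu = 1.0                                       # overall intercept (log scale)
eta = mu + g[:, None, :] + e[None, :, :] + ge  # linear predictor
counts = rng.poisson(np.exp(eta))              # observed correlated count traits
print(counts.shape)                            # (genotypes, environments, traits)
```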
Microfluidic droplet enrichment for targeted sequencing
Eastburn, Dennis J.; Huang, Yong; Pellegrino, Maurizio; Sciambi, Adam; Ptáček, Louis J.; Abate, Adam R.
2015-01-01
Targeted sequence enrichment enables better identification of genetic variation by providing increased sequencing coverage for genomic regions of interest. Here, we report the development of a new target enrichment technology that is highly differentiated from other approaches currently in use. Our method, MESA (Microfluidic droplet Enrichment for Sequence Analysis), isolates genomic DNA fragments in microfluidic droplets and performs TaqMan PCR reactions to identify droplets containing a desired target sequence. The TaqMan positive droplets are subsequently recovered via dielectrophoretic sorting, and the TaqMan amplicons are removed enzymatically prior to sequencing. We demonstrated the utility of this approach by generating an average 31.6-fold sequence enrichment across 250 kb of targeted genomic DNA from five unique genomic loci. Significantly, this enrichment enabled a more comprehensive identification of genetic polymorphisms within the targeted loci. MESA requires low amounts of input DNA, minimal prior locus sequence information and enriches the target region without PCR bias or artifacts. These features make it well suited for the study of genetic variation in a number of research and diagnostic applications. PMID:25873629
Cell wall evolution and diversity
Fangel, Jonatan U.; Ulvskov, Peter; Knox, J. P.; Mikkelsen, Maria D.; Harholt, Jesper; Popper, Zoë A.; Willats, William G.T.
2012-01-01
Plant cell walls display a considerable degree of diversity in their compositions and molecular architectures. In some cases the functional significance of a particular cell wall type appears to be easy to discern: secondary cell walls are often reinforced with lignin that provides durability; the thin cell walls of pollen tubes have particular compositions that enable their tip growth; lupin seed cell walls are characteristically thickened with galactan used as a storage polysaccharide. However, more frequently the evolutionary mechanisms and selection pressures that underpin cell wall diversity and evolution are unclear. For diverse green plants (chlorophytes and streptophytes) the rapidly increasing availability of transcriptome and genome data sets, the development of cell wall analysis methods that require less material, and the expansion of molecular probe sets are providing new insights into the diversity and occurrence of cell wall polysaccharides and associated biosynthetic genes. Such research is important for refining our understanding of some of the fundamental processes that enabled plants to colonize land and to subsequently radiate so comprehensively. The study of cell wall structural diversity is also an important aspect of the industrial utilization of global polysaccharide bio-resources. PMID:22783271
SITE TECHNOLOGY CAPSULE: GIS\\KEY ENVIRONMENTAL DATA MANAGEMENT SYSTEM
GIS/Key™ is a comprehensive environmental database management system that integrates site data and graphics, enabling the user to create geologic cross-sections; boring logs; potentiometric, isopleth, and structure maps; summary tables; and hydrographs. GIS/Key™ is menu-driven an...
Ranninger, Christina; Rurik, Marc; Limonciel, Alice; Ruzek, Silke; Reischl, Roland; Wilmes, Anja; Jennings, Paul; Hewitt, Philip; Dekant, Wolfgang; Kohlbacher, Oliver; Huber, Christian G.
2015-01-01
Untargeted metabolomics has the potential to improve the predictivity of in vitro toxicity models and therefore may aid the replacement of expensive and laborious animal models. Here we describe a long term repeat dose nephrotoxicity study conducted on the human renal proximal tubular epithelial cell line, RPTEC/TERT1, treated with 10 and 35 μmol·liter⁻¹ of chloroacetaldehyde, a metabolite of the anti-cancer drug ifosfamide. Our study outlines the establishment of an automated and easy to use untargeted metabolomics workflow for HPLC-high resolution mass spectrometry data. Automated data analysis workflows based on open source software (OpenMS, KNIME) enabled a comprehensive and reproducible analysis of the complex and voluminous metabolomics data produced by the profiling approach. Time- and concentration-dependent responses were clearly evident in the metabolomic profiles. To obtain a more comprehensive picture of the mode of action, transcriptomics and proteomics data were also integrated. For toxicity profiling of chloroacetaldehyde, 428 and 317 metabolite features were detectable in positive and negative modes, respectively, after stringent removal of chemical noise and unstable signals. Changes upon treatment were explored using principal component analysis, and statistically significant differences were identified using linear models for microarray assays. The analysis revealed toxic effects only for the treatment with 35 μmol·liter⁻¹ for 3 and 14 days. The most regulated metabolites were glutathione and metabolites related to the oxidative stress response of the cells. These findings are corroborated by proteomics and transcriptomics data, which show, among other things, an activation of the Nrf2 and ATF4 pathways. PMID:26055719
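The exploratory step described here, principal component analysis over a sample-by-feature intensity matrix, can be sketched in a few lines. The feature values below are randomly generated stand-ins; the authors' actual pipeline used OpenMS/KNIME rather than this code:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Rows: samples (e.g., treated/control at several time points);
# columns: metabolite feature intensities (invented, 428 as in positive mode).
X = rng.lognormal(mean=2.0, sigma=0.5, size=(24, 428))

# Log-transform and scale, a common normalization before PCA on intensities.
Xs = StandardScaler().fit_transform(np.log2(X))
scores = PCA(n_components=2).fit_transform(Xs)
print(scores.shape)  # (24, 2): coordinates for a 2-D scores plot
```

In real data, treatment- and time-dependent effects appear as separation of sample groups in the scores plot.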
NASA Astrophysics Data System (ADS)
Dufaux, Frederic
2011-06-01
The issue of privacy in video surveillance has drawn a lot of interest lately. However, thorough performance analysis and validation is still lacking, especially regarding the fulfillment of privacy-related requirements. In this paper, we first review recent Privacy Enabling Technologies (PET). Next, we discuss pertinent evaluation criteria for effective privacy protection. We then put forward a framework to assess the capacity of PET solutions to hide distinguishing facial information and to conceal identity. We conduct comprehensive and rigorous experiments to evaluate the performance of face recognition algorithms applied to images altered by PET. Results show the ineffectiveness of naïve PET such as pixelization and blur. Conversely, they demonstrate the effectiveness of more sophisticated scrambling techniques to foil face recognition.
Paek, Hye-Jin; Hilyard, Karen; Freimuth, Vicki; Barge, J Kevin; Mindlin, Michele
2010-06-01
Recent natural and human-caused disasters have awakened public health officials to the importance of emergency preparedness. Guided by health behavior and media effects theories, the analysis of a statewide survey in Georgia reveals that self-efficacy, subjective norm, and emergency news exposure are positively associated with the respondents' possession of emergency items and their stages of emergency preparedness. Practical implications suggest less focus on demographics as the sole predictor of emergency preparedness and more comprehensive measures of preparedness, including both a person's cognitive stage of preparedness and checklists of emergency items on hand. We highlight the utility of theory-based approaches for understanding and predicting public emergency preparedness as a way to enable more effective health and risk communication.
A bioinformatics roadmap for the human vaccines project.
Scheuermann, Richard H; Sinkovits, Robert S; Schenkelberg, Theodore; Koff, Wayne C
2017-06-01
Biomedical research has become a data-intensive science in which high throughput experimentation is producing comprehensive data about biological systems at an ever-increasing pace. The Human Vaccines Project is a new public-private partnership, with the goal of accelerating development of improved vaccines and immunotherapies for global infectious diseases and cancers by decoding the human immune system. To achieve its mission, the Project is developing a Bioinformatics Hub as an open-source, multidisciplinary effort with the overarching goal of providing an enabling infrastructure to support the data processing, analysis and knowledge extraction procedures required to translate high-throughput, high-complexity human immunology research data into biomedical knowledge, to determine the core principles driving specific and durable protective immune responses.
MIPSPlantsDB—plant database resource for integrative and comparative plant genome research
Spannagl, Manuel; Noubibou, Octave; Haase, Dirk; Yang, Li; Gundlach, Heidrun; Hindemitt, Tobias; Klee, Kathrin; Haberer, Georg; Schoof, Heiko; Mayer, Klaus F. X.
2007-01-01
Genome-oriented plant research delivers a rapidly increasing amount of plant genome data. Comprehensive and structured information resources are required to organize and communicate genome and associated analytical data for model organisms as well as for crops. The increase in available plant genomic data enables powerful comparative analysis and integrative approaches. PlantsDB aims to provide data and information resources for individual plant species and, in addition, to build a platform for integrative and comparative plant genome research. PlantsDB is constituted from genome databases for Arabidopsis, Medicago, Lotus, rice, maize and tomato. Complementary data resources for cis elements, repetitive elements and extensive cross-species comparisons are implemented. The PlantsDB portal can be reached at . PMID:17202173
On the Computation of Comprehensive Boolean Gröbner Bases
NASA Astrophysics Data System (ADS)
Inoue, Shutaro
We show that a comprehensive Boolean Gröbner basis of an ideal $I$ in a Boolean polynomial ring $B(\bar{A}, \bar{X})$ with main variables $\bar{X}$ and parameters $\bar{A}$ can be obtained by simply computing a usual Boolean Gröbner basis of $I$, regarding both $\bar{X}$ and $\bar{A}$ as variables, with a certain block term order such that $\bar{X} \gg \bar{A}$. The result, together with the fact that a finite Boolean ring is isomorphic to a direct product of the Galois field $\mathbb{GF}_2$, enables us to compute a comprehensive Boolean Gröbner basis by computing only the corresponding Gröbner bases in a polynomial ring over $\mathbb{GF}_2$. Our implementation in the computer algebra system Risa/Asir shows that our method is extremely efficient compared with existing algorithms for computing comprehensive Boolean Gröbner bases.
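The reduction described here can be imitated with any Gröbner engine over GF(2) by adjoining the Boolean field equations v² + v = 0 for each variable and using an elimination order that ranks the main variables above the parameters. A minimal sketch in sympy (not the authors' Risa/Asir implementation); lex order is used as a simple instance of a block order with x, y ≫ a, and the ideal is a toy example:

```python
from sympy import symbols, groebner

x, y, a = symbols('x y a')

# Toy ideal generators plus the Boolean field equations v**2 + v.
polys = [x*y + a*x + 1, x**2 + x, y**2 + y, a**2 + a]

# Lex with x > y > a eliminates the main variables first (block order X >> A),
# and modulus=2 performs the computation over GF(2).
G = groebner(polys, x, y, a, order='lex', modulus=2)
print(G.exprs)
```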
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Bednarcyk, Brett A.; Pineda, Evan J.; Walton, Owen J.; Arnold, Steven M.
2016-01-01
Stochastic-based, discrete-event progressive damage simulations of ceramic-matrix composite and polymer matrix composite material structures have been enabled through the development of a unique multiscale modeling tool. This effort involves coupling three independently developed software programs: (1) the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC), (2) the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program (CARES/Life), and (3) the Abaqus finite element analysis (FEA) program. MAC/GMC contributes multiscale modeling capabilities and micromechanics relations to determine stresses and deformations at the microscale of the composite material repeating unit cell (RUC). CARES/Life contributes statistical multiaxial failure criteria that can be applied to the individual brittle-material constituents of the RUC. Abaqus is used at the global scale to model the overall composite structure. An Abaqus user-defined material (UMAT) interface, referred to here as "FEAMAC/CARES," was developed that enables MAC/GMC and CARES/Life to operate seamlessly with the Abaqus FEA code. For each FEAMAC/CARES simulation trial, the stochastic nature of brittle material strength results in random, discrete damage events, which incrementally progress and lead to ultimate structural failure. This report describes the FEAMAC/CARES methodology and discusses examples that illustrate the performance of the tool. A comprehensive example problem, simulating the progressive damage of laminated ceramic matrix composites under various off-axis loading conditions and including a double notched tensile specimen geometry, is described in a separate report.
sRNAdb: A small non-coding RNA database for gram-positive bacteria
2012-01-01
Background: The class of small non-coding RNA molecules (sRNA) regulates gene expression by different mechanisms and enables bacteria to mount a physiological response due to adaptation to the environment or infection. Over the last decades the number of sRNAs has been increasing rapidly. Several databases like Rfam or fRNAdb were extended to include sRNAs as a class of its own. Furthermore, new specialized databases like sRNAMap (gram-negative bacteria only) and sRNATarBase (target prediction) were established. To the best of the authors’ knowledge, no database focusing on sRNAs from gram-positive bacteria is publicly available so far. Description: In order to understand sRNAs' functional and phylogenetic relationships we have developed sRNAdb and provide tools for data analysis and visualization. The data compiled in our database are assembled from experiments as well as from bioinformatics analyses. The software enables comparison and visualization of gene loci surrounding the sRNAs of interest. To accomplish this, we use a client–server based approach. Offline versions of the database, including analysis and visualization tools, can easily be installed locally on the user's computer. This feature facilitates customized local addition of unpublished sRNA candidates and related information such as promoters or terminators using tab-delimited files. Conclusion: sRNAdb allows a user-friendly and comprehensive comparative analysis of sRNAs from available sequenced gram-positive prokaryotic replicons. Offline versions including analysis and visualization tools facilitate complex user-specific bioinformatics analyses. PMID:22883983
NASA Astrophysics Data System (ADS)
Weisbrod, Chad R.; Kaiser, Nathan K.; Syka, John E. P.; Early, Lee; Mullen, Christopher; Dunyach, Jean-Jacques; English, A. Michelle; Anderson, Lissa C.; Blakney, Greg T.; Shabanowitz, Jeffrey; Hendrickson, Christopher L.; Marshall, Alan G.; Hunt, Donald F.
2017-09-01
High resolution mass spectrometry is a key technology for in-depth protein characterization. High-field Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) enables high-level interrogation of intact proteins in the most detail to date. However, an appropriate complement of fragmentation technologies must be paired with FTMS to provide comprehensive sequence coverage, as well as characterization of sequence variants, and post-translational modifications. Here we describe the integration of front-end electron transfer dissociation (FETD) with a custom-built 21 tesla FT-ICR mass spectrometer, which yields unprecedented sequence coverage for proteins ranging from 2.8 to 29 kDa, without the need for extensive spectral averaging (e.g., 60% sequence coverage for apo-myoglobin with four averaged acquisitions). The system is equipped with a multipole storage device separate from the ETD reaction device, which allows accumulation of multiple ETD fragment ion fills. Consequently, an optimally large product ion population is accumulated prior to transfer to the ICR cell for mass analysis, which improves mass spectral signal-to-noise ratio, dynamic range, and scan rate. We find a linear relationship between protein molecular weight and minimum number of ETD reaction fills to achieve optimum sequence coverage, thereby enabling more efficient use of instrument data acquisition time. Finally, real-time scaling of the number of ETD reaction fills during method-based acquisition is shown, and the implications for LC-MS/MS top-down analysis are discussed.
Toward the 1,000 dollars human genome.
Bennett, Simon T; Barnes, Colin; Cox, Anthony; Davies, Lisa; Brown, Clive
2005-06-01
Revolutionary new technologies, capable of transforming the economics of sequencing, are providing an unparalleled opportunity to analyze human genetic variation comprehensively at the whole-genome level within a realistic timeframe and at affordable costs. Current estimates suggest that it would cost somewhere in the region of 30 million US dollars to sequence an entire human genome using Sanger-based sequencing, and on one machine it would take about 60 years. Solexa is widely regarded as a company with the necessary disruptive technology to be the first to achieve the ultimate goal of the so-called 1,000 dollars human genome - the conceptual cost-point needed for routine analysis of individual genomes. Solexa's technology is based on completely novel sequencing chemistry capable of sequencing billions of individual DNA molecules simultaneously, a base at a time, to enable highly accurate, low cost analysis of an entire human genome in a single experiment. When applied over a large enough genomic region, these new approaches to resequencing will enable the simultaneous detection and typing of known, as well as unknown, polymorphisms, and will also offer information about patterns of linkage disequilibrium in the population being studied. Technological progress, leading to the advent of single-molecule-based approaches, is beginning to dramatically drive down costs and increase throughput to unprecedented levels, each being several orders of magnitude better than that which is currently available. A new sequencing paradigm based on single molecules will be faster, cheaper and more sensitive, and will permit routine analysis at the whole-genome level.
Counter Piracy: A More Comprehensive Approach
2012-04-25
almost non-existent. The lack of governance allowed local and foreign fishing vessels to take advantage and overfish Somali waters. The local populace was powerless to stop them from overfishing those grounds that once provided subsistence. This situation enabled the...
The iPlant collaborative: cyberinfrastructure for enabling data to discovery for the life sciences
USDA-ARS?s Scientific Manuscript database
The iPlant Collaborative provides life science research communities access to comprehensive, scalable, and cohesive computational infrastructure for data management; identity management; collaboration tools; and cloud, high-performance, high-throughput computing. iPlant provides training, learning m...
GIS\\KEY™ ENVIRONMENTAL DATA MANAGEMENT SYSTEM - INNOVATIVE TECHNOLOGY EVALUATION REPORT
GIS/Key™ is a comprehensive environmental database management system that integrates site data and graphics, enabling the user to create geologic cross-sections; boring logs; potentiometric, isopleth, and structure maps; summary tables; and hydrographs. GIS/Key™ is menu-driven an...
The iPlant collaborative: cyberinfrastructure for plant biology
USDA-ARS?s Scientific Manuscript database
The iPlant Collaborative (iPlant) is a United States National Science Foundation (NSF)funded project that aims to create an innovative, comprehensive, and foundational cyberinfrastructure in support of plant biology research (PSCIC, 2006). iPlant is developing cyberinfrastructure that uniquely enabl...
NASA Astrophysics Data System (ADS)
Kotlarski, Sven; Gutiérrez, José M.; Boberg, Fredrik; Bosshard, Thomas; Cardoso, Rita M.; Herrera, Sixto; Maraun, Douglas; Mezghani, Abdelkader; Pagé, Christian; Räty, Olle; Stepanek, Petr; Soares, Pedro M. M.; Szabo, Peter
2016-04-01
VALUE is an open European network to validate and compare downscaling methods for climate change research (http://www.value-cost.eu). A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of downscaling methods. Such assessments can be expected to crucially depend on the existence of accurate and reliable observational reference data. In dynamical downscaling, observational data can influence model development itself and, later on, model evaluation, parameter calibration and added value assessment. In empirical-statistical downscaling, observations serve as predictand data and directly influence model calibration with corresponding effects on downscaled climate change projections. We here present a comprehensive assessment of the influence of uncertainties in observational reference data and of scale-related issues on several of the above-mentioned aspects. First, temperature and precipitation characteristics as simulated by a set of reanalysis-driven EURO-CORDEX RCM experiments are validated against three different gridded reference data products, namely (1) the EOBS dataset, (2) the recently developed EURO4M-MESAN regional re-analysis, and (3) several national high-resolution and quality-controlled gridded datasets that recently became available. The analysis reveals a considerable influence of the choice of the reference data on the evaluation results, especially for precipitation. It is also illustrated how differences between the reference data sets influence the ranking of RCMs according to a comprehensive set of performance measures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronevich, Joseph Allen; Balch, Dorian K.; San Marchi, Christopher W.
2015-12-01
This project was intended to enable SNL-CA to produce appropriate specimens of relevant stainless steels for testing and to perform baseline testing of weld heat-affected zone and weld fusion zone. One of the key deliverables in this project was to establish a procedure for fracture testing stainless steel weld fusion zones and heat-affected zones that were pre-charged with hydrogen. Following the establishment of the procedure, a round robin was planned between SNL-CA and SRNL to ensure testing consistency between laboratories. SNL-CA and SRNL would then develop a comprehensive test plan, which would include tritium exposures of several years at SRNL on samples delivered by SNL-CA. Testing would follow the procedures developed at SNL-CA. SRNL would also purchase tritium charging vessels to perform the tritium exposures. Although a comprehensive understanding of isotope-induced fracture in GTS reservoir materials is a several-year effort, the FY15 work would enable us to jump-start the tests and initiate long-term tritium exposures to aid comprehensive future investigations. Development of a procedure and laboratory testing consistency between SNL-CA and SRNL ensures reliability in results as future evaluations are performed on aluminum alloys and potentially additively-manufactured components.
Heatley, Emer M; Harris, Melanie; Battersby, Malcolm; McEvoy, R Doug; Chai-Coetzer, Ching Li; Antic, Nicholas A
2013-10-01
Obstructive sleep apnoea (OSA) is a common disorder that has all the characteristics of a chronic condition. As with other chronic conditions, OSA requires ongoing management of treatments and problems, such as residual symptoms, deficits and co-morbidities. Also, many OSA patients have modifiable lifestyle factors that contribute to their disease, which could be improved with intervention. As health systems are in the process of developing more comprehensive chronic care structures and supports, tools such as chronic condition management programs are available to enable OSA patients and their health care providers to further engage and collaborate in health management. This review explains why the OSA patient group requires a more comprehensive approach to disease management, describes the chronic care model as a platform for management of chronic conditions, and assesses the suitability of particular chronic disease management programs in relation to the needs of the OSA population. Implementation of an evidence-based health-professional-led chronic condition management program into OSA patient care is likely to provide a context in which health risks are properly acknowledged and addressed. Such programs present an important opportunity to enable more optimal health outcomes than is possible by device-focused management alone. Copyright © 2012 Elsevier Ltd. All rights reserved.
Adrenocortical carcinoma: the dawn of a new era of genomic and molecular biology analysis.
Armignacco, R; Cantini, G; Canu, L; Poli, G; Ercolino, T; Mannelli, M; Luconi, M
2018-05-01
Over the last decade, the development of novel, high-penetrance genomic approaches to the analysis of biological samples has provided new insights into the molecular biology and genetics of tumors. The use of these techniques, comprising exome sequencing and transcriptome, miRNome, chromosome-alteration, genome, and epigenome analysis, has also been successfully applied to adrenocortical carcinoma (ACC). In fact, the analysis of large cohorts of patients allowed the stratification of ACC into groups with different patterns of molecular alterations, associated with different outcomes, thus providing a novel molecular classification of the malignancy to complement classical pathological analysis. Improving our knowledge of ACC molecular features will result not only in better diagnostic and prognostic accuracy, but also in the identification of more specific therapeutic targets for the development of more effective pharmacological anti-cancer approaches. In particular, the specific molecular alteration profiles identified in ACC may represent targetable events for already developed or newly designed drugs, enabling better and more efficacious management of ACC patients in the context of the new frontiers of personalized precision medicine.
Fast flux module detection using matroid theory.
Reimers, Arne C; Bruggeman, Frank J; Olivier, Brett G; Stougie, Leen
2015-05-01
Flux balance analysis (FBA) is one of the most often applied methods on genome-scale metabolic networks. Although FBA uniquely determines the optimal yield, the pathway that achieves this is usually not unique. The analysis of the optimal-yield flux space has been an open challenge. Flux variability analysis captures only some properties of the flux space, while elementary mode analysis is intractable due to the enormous number of elementary modes. However, it has been found by Kelk et al. (2012) that the space of optimal-yield fluxes decomposes into flux modules. These decompositions allow a much easier but still comprehensive analysis of the optimal-yield flux space. Using the mathematical definition of module introduced by Müller and Bockmayr (2013b), we discovered useful connections to matroid theory, through which efficient algorithms enable us to compute the decomposition into modules in a few seconds for genome-scale networks. Using the fact that every module can be represented by a single reaction that captures its function, we also present a method that uses this decomposition to visualize the interplay of modules. We expect the new method to replace flux variability analysis in the pipelines for metabolic networks.
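FBA itself reduces to a linear program: maximize an objective c·v subject to steady-state mass balance S v = 0 and flux bounds. A toy sketch with an invented three-reaction chain (scipy minimizes, so the objective is negated); this illustrates plain FBA, not the module decomposition itself:

```python
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S (metabolites x reactions): uptake -> conversion -> biomass.
S = np.array([[ 1, -1,  0],   # metabolite A: produced by R1, consumed by R2
              [ 0,  1, -1]])  # metabolite B: produced by R2, consumed by R3
c = np.array([0, 0, 1])       # objective: maximize flux through R3 ("biomass")

res = linprog(-c, A_eq=S, b_eq=np.zeros(2), bounds=[(0, 10)] * 3)
print(res.x)  # optimal flux distribution; here all fluxes hit the uptake limit
```

In genome-scale networks the optimum value is unique but the optimal flux vector is not, which is exactly the degenerate space the module decomposition organizes.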
Fostering Creativity in Tablet-Based Interactive Classrooms
ERIC Educational Resources Information Center
Kim, Hye Jeong; Park, Ji Hyeon; Yoo, Sungae; Kim, Hyeoncheol
2016-01-01
This article aims to examine the effects of an instructional model that leverages innovative technologies in the classroom to cultivate collaboration that improves students' comprehension, fosters their creativity, and enables them to better express and communicate their ideas through drawing. This discussion focuses on classroom interaction…
Hadjithomas, Michalis; Chen, I-Min Amy; Chu, Ken; ...
2015-07-14
In the discovery of secondary metabolites, analysis of sequence data is a promising exploration path that remains largely underutilized due to the lack of computational platforms that enable such a systematic approach on a large scale. In this work, we present IMG-ABC (https://img.jgi.doe.gov/abc), an atlas of biosynthetic gene clusters within the Integrated Microbial Genomes (IMG) system, which is aimed at harnessing the power of “big” genomic data for discovering small molecules. IMG-ABC relies on IMG’s comprehensive integrated structural and functional genomic data for the analysis of biosynthetic gene clusters (BCs) and associated secondary metabolites (SMs). SMs and BCs serve as the two main classes of objects in IMG-ABC, each with a rich collection of attributes. A unique feature of IMG-ABC is the incorporation of both experimentally validated and computationally predicted BCs in genomes as well as metagenomes, thus identifying BCs in uncultured populations and rare taxa. We demonstrate the strength of IMG-ABC’s focused integrated analysis tools in enabling the exploration of microbial secondary metabolism on a global scale, through the discovery of phenazine-producing clusters for the first time in Alphaproteobacteria. IMG-ABC strives to fill the long-standing void of resources for computational exploration of the secondary metabolism universe; its underlying scalable framework enables traversal of uncovered phylogenetic and chemical structure space, serving as a doorway to a new era in the discovery of novel molecules. IMG-ABC is the largest publicly available database of predicted and experimental biosynthetic gene clusters and the secondary metabolites they produce. The system also includes powerful search and analysis tools that are integrated with IMG’s extensive genomic/metagenomic data and analysis tool kits. As new research on biosynthetic gene clusters and secondary metabolites is published and more genomes are sequenced, IMG-ABC will continue to expand, with the goal of becoming an essential component of any bioinformatic exploration of the secondary metabolism world.
Campbell, Jared M; Umapathysivam, Kandiah; Xue, Yifan; Lockwood, Craig
2015-12-01
Clinicians and other healthcare professionals need access to summaries of evidence-based information in order to provide effective care to their patients at the point-of-care. Evidence-based practice (EBP) point-of-care resources have been developed and are available online to meet this need. This study aimed to develop a comprehensive list of available EBP point-of-care resources and evaluate their processes and policies for the development of content, in order to provide a critical analysis based upon rigor, transparency and measures of editorial quality to inform healthcare providers and promote quality improvement amongst publishers of EBP resources. A comprehensive and systematic search (Pubmed, CINAHL, and Cochrane Central) was undertaken to identify available EBP point-of-care resources, defined as "web-based medical compendia specifically designed to deliver predigested, rapidly accessible, comprehensive, periodically updated, and evidence-based information (and possibly also guidance) to clinicians." A pair of investigators independently extracted information on general characteristics, content presentation, editorial quality, evidence-based methodology, and breadth and volume. Twenty-seven summary resources were identified, of which 22 met the predefined inclusion criteria for EBP point-of-care resources, and 20 could be accessed for description and assessment. Overall, the upper quartile of EBP point-of-care providers was assessed to be UpToDate, Nursing Reference Centre, Mosby's Nursing Consult, BMJ Best Practice, and JBI COnNECT+. The choice of which EBP point-of-care resources are suitable for an organization is a decision that depends heavily on the unique requirements of that organization and the resources it has available. However, the results presented in this study should enable healthcare providers to make that assessment in a clear, evidence-based manner, and provide a comprehensive list of the available options. © 2015 Sigma Theta Tau International.
Image analysis and modeling in medical image computing. Recent developments and advances.
Handels, H; Deserno, T M; Meinzer, H-P; Tolxdorff, T
2012-01-01
Medical image computing is of growing importance in medical diagnostics and image-guided therapy. Nowadays, image analysis systems integrating advanced image computing methods are used in practice, e.g. to extract quantitative image parameters or to support the surgeon during a navigated intervention. However, the degree of automation, accuracy, reproducibility and robustness of medical image computing methods needs to be increased to meet the requirements of clinical routine. In the focus theme, recent developments and advances in the field of modeling and model-based image analysis are described. The introduction of models into the image analysis process enables improvements of image analysis algorithms in terms of automation, accuracy, reproducibility and robustness. Furthermore, model-based image computing techniques open up new perspectives for the prediction of organ changes and risk analysis of patients. Selected contributions are assembled to present the latest advances in the field. The authors were invited to present their recent work and results based on their outstanding contributions to the Conference on Medical Image Computing BVM 2011 held at the University of Lübeck, Germany. All manuscripts had to pass a comprehensive peer review. Modeling approaches and model-based image analysis methods showing new trends and perspectives in model-based medical image computing are described. Complex models are used in different medical applications, and medical images like radiographic images, dual-energy CT images, MR images, diffusion tensor images as well as microscopic images are analyzed. The applications emphasize the high potential and the wide application range of these methods. The use of model-based image analysis methods can improve segmentation quality as well as the accuracy and reproducibility of quantitative image analysis. Furthermore, image-based models enable new insights and can lead to a deeper understanding of complex dynamic mechanisms in the human body. Hence, model-based image computing methods are important tools for improving medical diagnostics and patient treatment in the future.
The Role of Genome Accessibility in Transcription Factor Binding in Bacteria.
Gomes, Antonio L C; Wang, Harris H
2016-04-01
ChIP-seq enables genome-scale identification of regulatory regions that govern gene expression. However, the biological insights generated from ChIP-seq analysis have been limited to predictions of binding sites and cooperative interactions. Furthermore, ChIP-seq data often poorly correlate with in vitro measurements or predicted motifs, highlighting that binding affinity alone is insufficient to explain transcription factor (TF)-binding in vivo. One possibility is that binding sites are not equally accessible across the genome. A more comprehensive biophysical representation of TF-binding is required to improve our ability to understand, predict, and alter gene expression. Here, we show that genome accessibility is a key parameter that impacts TF-binding in bacteria. We developed a thermodynamic model that parameterizes ChIP-seq coverage in terms of genome accessibility and binding affinity. The role of genome accessibility is validated using a large-scale ChIP-seq dataset of the M. tuberculosis regulatory network. We find that accounting for genome accessibility leads to a model that explains 63% of the ChIP-seq profile variance, while a model based on motif score alone explains only 35% of the variance. Moreover, our framework enables de novo ChIP-seq peak prediction and is useful for inferring TF-binding peaks in new experimental conditions, thereby reducing the need for additional experiments. We observe that the genome is more accessible in intergenic regions, and that increased accessibility is positively correlated with gene expression and anti-correlated with distance to the origin of replication. Our biophysically motivated model provides a more comprehensive description of TF-binding in vivo from first principles towards a better representation of gene regulation in silico, with promising applications in systems biology.
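To make the decomposition concrete, the following is a minimal sketch of the kind of two-factor model the abstract describes, regressing log coverage on an affinity (motif) term with and without an accessibility term; the synthetic data, the multiplicative form, and the 0.8 coefficient are illustrative assumptions, not the study's actual parameterization.

```python
# Sketch: decompose ChIP-seq coverage into accessibility and binding-affinity
# terms, in the spirit of the thermodynamic model described above.
# All inputs are synthetic placeholders, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
n = 500
motif_score = rng.normal(0, 1, n)          # per-site binding-affinity proxy
accessibility = rng.uniform(0.2, 1.0, n)   # per-site genome accessibility in [0, 1]

# Assumed generative form: occupancy ~ accessibility * exp(motif_score) * noise
coverage = accessibility * np.exp(0.8 * motif_score) * rng.lognormal(0, 0.3, n)

y = np.log(coverage)
X_full = np.column_stack([np.ones(n), motif_score, np.log(accessibility)])
X_motif = X_full[:, :2]                    # motif-only model for comparison

def r_squared(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print(f"motif only:            R^2 = {r_squared(X_motif, y):.2f}")
print(f"motif + accessibility: R^2 = {r_squared(X_full, y):.2f}")
```

In this toy setting, as in the study, the motif-only fit leaves the accessibility-driven share of the variance unexplained.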
NASA Astrophysics Data System (ADS)
Khaire, Vikram
2017-08-01
There exists a large void in our understanding of the intergalactic medium (IGM) at z=0.5-1.5, spanning a significant cosmic time of 4 Gyr. This hole resulted from a paucity of near-UV QSO spectra, which were historically very expensive to obtain. However, with the advent of COS and the HST UV initiative, sufficient STIS/COS NUV spectra have finally become available, enabling the first statistical analyses. We propose a comprehensive study of the z ≈ 1 IGM using the Ly-alpha forest of 26 archival QSO spectra. This analysis will: (1) measure the distribution of HI absorbers to several percent precision down to log N_HI < 13 to test our model of the IGM, and determine the extragalactic UV background (UVB) at that epoch; (2) measure the Ly-alpha forest power spectrum to 12%, providing another precision test of LCDM and our theory of the IGM; (3) measure the thermal state of the IGM, which reflects the balance of heating (photoheating, HI/HeII reionization) and cooling (Hubble expansion) of cosmic baryons, and directly verify the predicted cooldown of IGM gas after reionization for the first time; (4) generate high-quality reductions, coadds, and continuum fits that will be released to the public to enable other science cases. These results, along with our state-of-the-art hydrodynamical simulations, and theoretical models of the UVB, will fill the 4 Gyr hole in our understanding of the IGM. When combined with existing HST and ground-based data from lower and higher z, they will lead to a complete, empirical description of the IGM from HI reionization to the present, spanning more than 10 Gyr of cosmic history, adding substantially to Hubble's legacy of discovery on the IGM.
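For reference, the 1D flux power spectrum targeted in goal (2) is conventionally defined from the flux contrast along each sightline as follows (standard notation; the proposal's exact convention may differ):

```latex
\delta_F(v) = \frac{F(v)}{\langle F \rangle} - 1, \qquad
P_F(k) = L^{-1} \left\langle \left| \tilde{\delta}_F(k) \right|^2 \right\rangle
```

where F is the transmitted flux, L is the sightline length in velocity units, and the tilde denotes the Fourier transform.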
Western, Max J.; Peacock, Oliver J.; Stathi, Afroditi; Thompson, Dylan
2015-01-01
Background Innovative physical activity monitoring technology can be used to depict rich visual feedback that encompasses the various aspects of physical activity known to be important for health. However, it is unknown whether patients who are at risk of chronic disease would understand such sophisticated personalised feedback or whether they would find it useful and motivating. The purpose of the present study was to determine whether technology-enabled multidimensional physical activity graphics and visualisations are comprehensible and usable for patients at risk of chronic disease. Method We developed several iterations of graphics depicting minute-by-minute activity patterns and integrated physical activity health targets. Subsequently, patients at moderate/high risk of chronic disease (n=29) and healthcare practitioners (n=15) from South West England underwent full 7-day activity monitoring followed by individual semi-structured interviews in which they were asked to comment on their own personalised visual feedback. Framework analysis was used to gauge their interpretation and use of the personalised feedback, graphics and visualisations. Results We identified two main components focussing on (a) the interpretation of feedback designs and data and (b) the impact of personalised visual physical activity feedback on facilitation of health behaviour change. Participants demonstrated a clear ability to understand the sophisticated personal information, together with enhanced physical activity knowledge. They reported that receiving multidimensional feedback was motivating and could be usefully applied to facilitate their efforts in becoming more physically active. Conclusion Multidimensional physical activity feedback can be made comprehensible, informative and motivational by using appropriate graphics and visualisations. There is an opportunity to exploit the full potential created by technological innovation and provide sophisticated personalised physical activity feedback as an adjunct to support behaviour change. PMID:25938455
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bedford, Nicholas M.; Hughes, Zak E.; Tang, Zhenghua
Peptide-enabled nanoparticle (NP) synthesis routes can create and/or assemble functional nanomaterials under environmentally friendly conditions, with properties dictated by complex interactions at the biotic/abiotic interface. Manipulation of this interface through sequence modification can provide the capability for material properties to be tailored to create enhanced materials for energy, catalysis, and sensing applications. Fully realizing the potential of these materials requires a comprehensive understanding of sequence-dependent structure/function relationships that is presently lacking. In this work, the atomic-scale structures of a series of peptide-capped Au NPs are determined using a combination of atomic pair distribution function analysis of high-energy X-ray diffraction data and advanced molecular dynamics (MD) simulations. The Au NPs produced with different peptide sequences exhibit varying degrees of catalytic activity for the exemplar reaction 4-nitrophenol reduction. The experimentally derived atomic-scale NP configurations reveal sequence-dependent differences in structural order at the NP surface. Replica exchange with solute-tempering MD simulations are then used to predict the morphology of the peptide overlayer on these Au NPs and identify factors determining the structure/catalytic properties relationship. We show that the amount of exposed Au surface, the underlying surface structural disorder, and the interaction strength of the peptide with the Au surface all influence catalytic performance. A simplified computational prediction of catalytic performance is developed that can potentially serve as a screening tool for future studies. Our approach provides a platform for broadening the analysis of catalytic peptide-enabled metallic NP systems, potentially allowing for the development of rational design rules for property enhancement.
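As background on the structural analysis, the reduced pair distribution function used in such studies is conventionally obtained from the total scattering structure function S(Q) by a sine Fourier transform (standard definition, not specific to this work):

```latex
G(r) = 4\pi r \left[ \rho(r) - \rho_0 \right]
     = \frac{2}{\pi} \int_0^{\infty} Q \left[ S(Q) - 1 \right] \sin(Qr) \, dQ
```

where \rho(r) is the atomic pair density and \rho_0 the average number density; peaks in G(r) correspond to frequently occurring interatomic distances in the nanoparticle.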
Costello, Michelle; Taylor, Jane; O'Hara, Lily
2015-01-01
A comprehensive primary health care approach is required to address complex health issues and reduce inequities. However, there has been limited uptake of this approach by health services nationally or internationally. Reorienting health services towards becoming more health promoting provides a mechanism to support the delivery of comprehensive primary health care. The aim of this study was to determine the impact of a health promotion-focused organisational development strategy on the capacity of a primary health care service to deliver comprehensive primary health care. A questionnaire and semistructured individual interviews were used to collect quantitative and qualitative impact evaluation data, respectively, from 13 health service staff across three time points with regard to 37 indicators of organisational capacity. There were significant increases in mean scores for 31 indicators, with effect sizes ranging from moderate to nearly perfect. A range of key enablers and barriers to support the delivery of comprehensive primary health care was identified. In conclusion, an organisational development strategy to reorient health services towards becoming more health promoting may increase the capacity to deliver comprehensive primary health care.
NASA Astrophysics Data System (ADS)
Salim, Mohd Faiz; Roslan, Ridha; Ibrahim, Mohd Rizal Mamat @
2014-02-01
Deterministic Safety Analysis (DSA) is one of the mandatory requirements in the Nuclear Power Plant licensing process, with the aim of ensuring safety compliance with relevant regulatory acceptance criteria. DSA is a technique whereby a set of conservative deterministic rules and requirements are applied to the design and operation of facilities or activities. Computer codes are normally used to assist in performing all required analysis under DSA. To ensure a comprehensive analysis, the conduct of DSA should follow a systematic approach. One of the proposed methodologies is the Standardized and Consolidated Reference Experimental (and Calculated) Database (SCRED) developed by the University of Pisa. Based on this methodology, the use of a Reference Data Set (RDS) as a prerequisite reference document for developing input nodalization was proposed. This paper describes the application of RDS with the purpose of assessing its effectiveness. Two RDS documents were developed for the LOBI-MOD2 Integral Test Facility and the associated Test A1-83. Data and information from various reports and drawings were consulted in preparing the RDS. The results showed that developing an RDS makes it possible to consolidate all relevant information in a single document. This is beneficial as it enables the preservation of information, promotes quality assurance, allows traceability, facilitates continuous improvement, promotes the resolution of contradictions, and assists in developing the thermal-hydraulic input regardless of the code selected. However, some disadvantages were also recognized, such as the need for experience in making engineering judgments, language barriers in accessing foreign information, and limited resources. Some possible improvements are suggested to overcome these challenges.
Enterohepatic helicobacter in ulcerative colitis: potential pathogenic entities?
Thomson, John M; Hansen, Richard; Berry, Susan H; Hope, Mairi E; Murray, Graeme I; Mukhopadhya, Indrani; McLean, Mairi H; Shen, Zeli; Fox, James G; El-Omar, Emad; Hold, Georgina L
2011-02-23
Changes in bacterial populations termed "dysbiosis" are thought central to ulcerative colitis (UC) pathogenesis. In particular, the possibility that novel Helicobacter organisms play a role in human UC has been debated but not comprehensively investigated. The aim of this study was to develop a molecular approach to investigate the presence of Helicobacter organisms in adults with and without UC. A dual molecular approach to detect Helicobacter was developed. Oligonucleotide probes against the genus Helicobacter were designed and optimised alongside a validation of published H. pylori probes. A comprehensive evaluation of Helicobacter genus and H. pylori PCR primers was also undertaken. The combined approach was then assessed in a range of gastrointestinal samples prior to assessment of a UC cohort. Archival colonic samples were available from 106 individuals for FISH analysis (57 with UC and 49 non-IBD controls). A further 118 individuals were collected prospectively for dual FISH and PCR analysis (86 UC and 32 non-IBD controls). An additional 27 non-IBD controls were available for PCR analysis. All Helicobacter PCR-positive samples were sequenced. The association between Helicobacter and each study group was statistically analysed using the two-tailed Pearson chi-squared test. Helicobacter genus PCR positivity was significantly higher in UC than controls (32 of 77 versus 11 of 59, p = 0.004). Sequence analysis indicated enterohepatic Helicobacter species prevalence was significantly higher in the UC group compared to the control group (30 of 77 versus 2 of 59, p<0.0001). PCR and FISH results were concordant in 74 (67.9%) of subjects. The majority of discordant results were attributable to a higher positivity rate with FISH than PCR. Helicobacter organisms warrant consideration as potential pathogenic entities in UC. Isolation of these organisms from colonic tissue is needed to enable interrogation of pathogenicity against established criteria.
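As a quick check on the reported statistics, the genus-level comparison can be reproduced directly from the counts given in the abstract; a minimal sketch, assuming the reported p = 0.004 comes from an uncorrected chi-squared test:

```python
from scipy.stats import chi2_contingency

# 2x2 table from the abstract: Helicobacter genus PCR positivity
# in UC (32 of 77) versus non-IBD controls (11 of 59).
table = [[32, 45],   # UC: positive, negative
         [11, 48]]   # controls: positive, negative
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")  # ~8.1, p ~ 0.004
```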
Spacecraft Alignment Determination and Control for Dual Spacecraft Precision Formation Flying
NASA Technical Reports Server (NTRS)
Calhoun, Philip; Novo-Gradac, Anne-Marie; Shah, Neerav
2017-01-01
Many proposed formation flying missions seek to advance the state of the art in spacecraft science imaging by utilizing precision dual spacecraft formation flying to enable a virtual space telescope. Using precision dual spacecraft alignment, very long focal lengths can be achieved by locating the optics on one spacecraft and the detector on the other. Proposed science missions include astrophysics concepts with spacecraft separations from 1000 km to 25,000 km, such as the Milli-Arc-Second Structure Imager (MASSIM) and the New Worlds Observer, and Heliophysics concepts for solar coronagraphs and X-ray imaging with smaller separations (50m-500m). All of these proposed missions require advances in guidance, navigation, and control (GNC) for precision formation flying. In particular, very precise astrometric alignment control and estimation is required for precise inertial pointing of the virtual space telescope to enable science imaging orders of magnitude better than can be achieved with conventional single spacecraft instruments. This work develops design architectures, algorithms, and performance analysis of proposed GNC systems for precision dual spacecraft astrometric alignment. These systems employ a variety of GNC sensors and actuators, including laser-based alignment and ranging systems, optical imaging sensors (e.g. guide star telescope), inertial measurement units (IMU), as well as microthruster and precision stabilized platforms. A comprehensive GNC performance analysis is given for Heliophysics dual spacecraft PFF imaging mission concept.
Comparative genetic screens in human cells reveal new regulatory mechanisms in WNT signaling
Lebensohn, Andres M; Dubey, Ramin; Neitzel, Leif R; Tacchelly-Benites, Ofelia; Yang, Eungi; Marceau, Caleb D; Davis, Eric M; Patel, Bhaven B; Bahrami-Nejad, Zahra; Travaglini, Kyle J; Ahmed, Yashi; Lee, Ethan; Carette, Jan E; Rohatgi, Rajat
2016-01-01
The comprehensive understanding of cellular signaling pathways remains a challenge due to multiple layers of regulation that may become evident only when the pathway is probed at different levels or critical nodes are eliminated. To discover regulatory mechanisms in canonical WNT signaling, we conducted a systematic forward genetic analysis through reporter-based screens in haploid human cells. Comparison of screens for negative, attenuating and positive regulators of WNT signaling, mediators of R-spondin-dependent signaling and suppressors of constitutive signaling induced by loss of the tumor suppressor adenomatous polyposis coli or casein kinase 1α uncovered new regulatory features at most levels of the pathway. These include a requirement for the transcription factor AP-4, a role for the DAX domain of AXIN2 in controlling β-catenin transcriptional activity, a contribution of glycophosphatidylinositol anchor biosynthesis and glypicans to R-spondin-potentiated WNT signaling, and two different mechanisms that regulate signaling when distinct components of the β-catenin destruction complex are lost. The conceptual and methodological framework we describe should enable the comprehensive understanding of other signaling systems. DOI: http://dx.doi.org/10.7554/eLife.21459.001 PMID:27996937
Community Coordinated Modeling Center Support of Science Needs for Integrated Data Environment
NASA Technical Reports Server (NTRS)
Kuznetsova, M. M.; Hesse, M.; Rastatter, L.; Maddox, M.
2007-01-01
Space science models are an essential component of an integrated data environment. They are indispensable tools for making effective use of a wide variety of distributed scientific sources and for placing multi-point local measurements into global context. The Community Coordinated Modeling Center (CCMC) hosts a set of state-of-the-art space science models ranging from the solar atmosphere to the Earth's upper atmosphere. The majority of models residing at the CCMC are comprehensive, computationally intensive, physics-based models. To allow the models to be driven by data relevant to particular events, the CCMC developed an online data file generation tool that automatically downloads data from data providers and transforms them into the required format. The CCMC provides a tailored web-based visualization interface for the model output, as well as the capability to download simulation output in a portable standard format with comprehensive metadata, and a user-friendly library of model-output analysis routines that can be called from any language that supports C bindings. The CCMC is developing data interpolation tools that enable model output to be presented in the same format as observations. The CCMC invites community comments and suggestions to better address science needs for the integrated data environment.
PoMaMo--a comprehensive database for potato genome data.
Meyer, Svenja; Nagel, Axel; Gebhardt, Christiane
2005-01-01
A database for potato genome data (PoMaMo, Potato Maps and More) was established. The database contains molecular maps of all twelve potato chromosomes with about 1000 mapped elements, sequence data, putative gene functions, results from BLAST analysis, SNP and InDel information from different diploid and tetraploid potato genotypes, publication references, links to other public databases like GenBank (http://www.ncbi.nlm.nih.gov/) or SGN (Solanaceae Genomics Network, http://www.sgn.cornell.edu/), etc. Flexible search and data visualization interfaces enable easy access to the data via internet (https://gabi.rzpd.de/PoMaMo.html). The Java servlet tool YAMB (Yet Another Map Browser) was designed to interactively display chromosomal maps. Maps can be zoomed in and out, and detailed information about mapped elements can be obtained by clicking on an element of interest. The GreenCards interface allows a text-based data search by marker-, sequence- or genotype name, by sequence accession number, gene function, BLAST Hit or publication reference. The PoMaMo database is a comprehensive database for different potato genome data, and to date the only database containing SNP and InDel data from diploid and tetraploid potato genotypes.
A method for evaluating discoverability and navigability of recommendation algorithms.
Lamprecht, Daniel; Strohmaier, Markus; Helic, Denis
2017-01-01
Recommendations are increasingly used to support and enable discovery, browsing, and exploration of items. This is especially true for entertainment platforms such as Netflix or YouTube, where frequently no clear categorization of items exists. Yet, the suitability of a recommendation algorithm to support these use cases cannot be comprehensively evaluated by any recommendation evaluation measures proposed so far. In this paper, we propose a method to expand the repertoire of existing recommendation evaluation techniques with a method to evaluate the discoverability and navigability of recommendation algorithms. The proposed method first evaluates the discoverability of recommendation algorithms by investigating structural properties of the resulting recommender systems in terms of bow-tie structure and path lengths. Second, the method evaluates navigability by simulating three different models of information-seeking scenarios and measuring the success rates. We show the feasibility of our method by applying it to four non-personalized recommendation algorithms on three data sets and also illustrate its applicability to personalized algorithms. Our work expands the arsenal of evaluation techniques for recommendation algorithms, extends from a one-click-based evaluation towards multi-click analysis, and presents a general, comprehensive method for evaluating the navigability of arbitrary recommendation algorithms.
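A minimal sketch of the navigability half of the method, assuming a toy directed recommendation graph and a blind random-surfer seeking model; the paper evaluates three information-seeking models on real recommender graphs, so the graph, model, and parameters here are illustrative only.

```python
# Sketch: simulate an information-seeking agent that clicks through
# recommendation links and measure how often it reaches a target item
# within a hop budget (the "success rate" named above).
import random
import networkx as nx

def navigation_success_rate(G, trials=1000, max_hops=10, seed=0):
    rng = random.Random(seed)
    nodes = list(G.nodes)
    successes = 0
    for _ in range(trials):
        current, target = rng.sample(nodes, 2)
        for _ in range(max_hops):
            neighbors = list(G.successors(current))
            if not neighbors:
                break                        # dead end: navigation fails
            current = rng.choice(neighbors)  # blind "random surfer" model
            if current == target:
                successes += 1
                break
    return successes / trials

# Toy recommender graph: each item links to a few pseudo-"similar" items.
G = nx.gnp_random_graph(200, 0.03, seed=1, directed=True)
print(f"success rate: {navigation_success_rate(G):.3f}")
```

Structural discoverability (the bow-tie decomposition and path lengths) would be computed on the same graph before running such simulations.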
RFA Guardian: Comprehensive Simulation of Radiofrequency Ablation Treatment of Liver Tumors.
Voglreiter, Philip; Mariappan, Panchatcharam; Pollari, Mika; Flanagan, Ronan; Blanco Sequeiros, Roberto; Portugaller, Rupert Horst; Fütterer, Jurgen; Schmalstieg, Dieter; Kolesnik, Marina; Moche, Michael
2018-01-15
The RFA Guardian is a comprehensive application for high-performance patient-specific simulation of radiofrequency ablation of liver tumors. We address a wide range of usage scenarios. These include pre-interventional planning, sampling of the parameter space for uncertainty estimation, treatment evaluation and, in the worst case, failure analysis. The RFA Guardian is the first of its kind that exhibits sufficient performance for simulating treatment outcomes during the intervention. We achieve this by combining a large number of high-performance image processing, biomechanical simulation and visualization techniques into a generalized technical workflow. Further, we wrap the feature set into a single, integrated application, which exploits all available resources of standard consumer hardware, including massively parallel computing on graphics processing units. This allows us to predict or reproduce treatment outcomes on a single personal computer with high computational performance and high accuracy. The resulting low demand for infrastructure enables easy and cost-efficient integration into the clinical routine. We present a number of evaluation cases from the clinical practice where users performed the whole technical workflow from patient-specific modeling to final validation and highlight the opportunities arising from our fast, accurate prediction techniques.
Role of Knowledge Management in Development and Lifecycle Management of Biopharmaceuticals.
Rathore, Anurag S; Garcia-Aponte, Oscar Fabián; Golabgir, Aydin; Vallejo-Diaz, Bibiana Margarita; Herwig, Christoph
2017-02-01
Knowledge Management (KM) is a key enabler for achieving quality in a lifecycle approach for the production of biopharmaceuticals. Due to the important role that it plays in the successful implementation of Quality by Design (QbD), an analysis of KM solutions is needed. This work provides a comprehensive review of the interface between KM and QbD-driven biopharmaceutical production systems as perceived from academic as well as industrial viewpoints. A comprehensive set of 356 publications addressing the applications of KM tools to QbD-related tasks was screened, and a query to gather industrial inputs from 17 major biopharmaceutical organizations was performed. Three KM tool classes were identified as having high relevance for biopharmaceutical production systems and have been further explored: knowledge indicators, ontologies, and process modeling. A proposed categorization of 16 distinct KM tool classes allowed for the identification of holistic technologies supporting QbD. In addition, the classification allowed for addressing the disparity between industrial and academic expectations regarding the application of KM methodologies. This is a first-of-its-kind attempt, and we therefore think this paper will be of considerable interest to those in academia and industry who are engaged in accelerating the development and commercialization of biopharmaceuticals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saccone, Scott F; Chesler, Elissa J; Bierut, Laura J
Commercial SNP microarrays now provide comprehensive and affordable coverage of the human genome. However, some diseases have biologically relevant genomic regions that may require additional coverage. Addiction, for example, is thought to be influenced by complex interactions among many relevant genes and pathways. We have assembled a list of 486 biologically relevant genes nominated by a panel of experts on addiction. We then added 424 genes that showed evidence of association with addiction phenotypes through mouse QTL mappings and gene co-expression analysis. We demonstrate that there are a substantial number of SNPs in these genes that are not well represented by commercial SNP platforms. We address this problem by introducing a publicly available SNP database for addiction. The database is annotated using numeric prioritization scores indicating the extent of biological relevance. The scores incorporate a number of factors such as SNP/gene functional properties (including synonymy and promoter regions), data from mouse systems genetics and measures of human/mouse evolutionary conservation. We then used HapMap genotyping data to determine if a SNP is tagged by a commercial microarray through linkage disequilibrium. This combination of biological prioritization scores and LD tagging annotation will enable addiction researchers to supplement commercial SNP microarrays to ensure comprehensive coverage of biologically relevant regions.
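A purely hypothetical sketch of how such numeric prioritization might combine evidence into one score; the field names, weights, and array-tagging penalty are invented for illustration and are not the database's actual scoring scheme.

```python
# Hypothetical scoring sketch in the spirit of the database's numeric
# prioritization: combine per-SNP evidence into a single relevance score.
def prioritize_snp(snp):
    weights = {
        "nonsynonymous": 3.0,   # coding change (invented weight)
        "promoter": 2.0,        # regulatory region (invented weight)
        "mouse_qtl": 1.5,       # mouse systems-genetics support
        "conserved": 1.0,       # human/mouse conservation
    }
    score = sum(w for key, w in weights.items() if snp.get(key))
    # Downweight SNPs already tagged by a commercial array via LD,
    # since there is less need to add them to a supplemental panel.
    if snp.get("tagged_by_array"):
        score *= 0.5
    return score

snp = {"nonsynonymous": True, "conserved": True, "tagged_by_array": False}
print(prioritize_snp(snp))  # 4.0
```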
RFID Tag as a Sensor - A Review on the Innovative Designs and Applications
NASA Astrophysics Data System (ADS)
Meng, Zhaozong; Li, Zhen
2016-12-01
The Radio Frequency Identification (RFID) technology has gained interest in both academia and industry since its invention. In addition to applications in access control and supply chains, RFID is also a cost-efficient solution for Non-Destructive Testing (NDT) and pervasive monitoring. Battery-free RFID tags are used as independent electromagnetic sensors, or as energy-harvesting and data-transmission interfaces of sensor modules, for different measurement purposes. This review paper aims to provide a comprehensive overview of the innovative designs and applications of RFID sensor technology with new insights, identify the technical challenges, and outline future perspectives. With a brief introduction to the fundamentals of RFID measurement, the enabling technologies and recent technical progress are illustrated, followed by an extensive discussion of novel designs and applications. Then, based on an in-depth analysis, the potential constraints are identified and envisaged future directions are suggested, including printable/wearable RFID, System-on-Chip (SoC), ultra-low power, etc. The comprehensive discussion of RFID sensor technology will be inspirational and useful for academic and industrial communities in investigating, developing, and applying RFID for various measurement applications.
Zhao, Qiang; Lv, Qin; Wang, Hailin
2015-08-15
We previously reported a fluorescence anisotropy (FA) approach for small molecules using tetramethylrhodamine (TMR)-labeled aptamers. It relies on a target-binding-induced change of the intramolecular interaction between TMR and a guanine (G) base. TMR-labeling sites are crucial for this approach. Only the terminal ends and thymine (T) bases could be tested for TMR labeling in our previous work, possibly limiting the analysis of different targets with this FA strategy. Here, taking the analysis of adenosine triphosphate (ATP) as an example, we demonstrate successful conjugation of TMR to the adenine (A) and cytosine (C) bases of an aptamer, achieving a full mapping of the possible labeling sites. We successfully constructed aptamer FA sensors for ATP. We conjugated a single TMR to adenine (A), cytosine (C) or thymine (T) bases, or to the terminals, of a 25-mer aptamer against ATP and tested the FA responses of the 14 TMR-labeled aptamers to ATP. The aptamers having TMR labeled on the 16th base (C) or the 23rd base (A) were screened out and exhibited significant FA-decreasing or FA-increasing responses upon ATP binding, respectively. These two favorable TMR-labeled aptamers enabled direct FA sensing of ATP with a detection limit of 1 µM and the analysis of ATP in diluted serum. The comprehensive screening of various TMR labeling sites on aptamers facilitates the successful construction of FA sensors using TMR-labeled aptamers. It will expand the application of the TMR-G interaction-based aptamer FA strategy to a variety of targets. Copyright © 2015 Elsevier B.V. All rights reserved.
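For context, steady-state fluorescence anisotropy is computed from the parallel and perpendicular polarized emission components in the standard way (general definition, not specific to this assay):

```latex
r = \frac{I_{\parallel} - I_{\perp}}{I_{\parallel} + 2 I_{\perp}}
```

Target binding that alters the TMR-G interaction changes the fluorophore's effective rotational freedom, and hence r, which is the signal the screened labeling sites exploit.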
Q-nexus: a comprehensive and efficient analysis pipeline designed for ChIP-nexus.
Hansen, Peter; Hecht, Jochen; Ibn-Salem, Jonas; Menkuec, Benjamin S; Roskosch, Sebastian; Truss, Matthias; Robinson, Peter N
2016-11-04
ChIP-nexus, an extension of the ChIP-exo protocol, can be used to map the borders of protein-bound DNA sequences at nucleotide resolution, requires less input DNA and enables selective PCR duplicate removal using random barcodes. However, the use of random barcodes requires additional preprocessing of the mapping data, which complicates the computational analysis. To date, only a very limited number of software packages are available for the analysis of ChIP-exo data, and these have not yet been systematically tested and compared on ChIP-nexus data. Here, we present a comprehensive software package for ChIP-nexus data that exploits the random barcodes for selective removal of PCR duplicates and for quality control. Furthermore, we developed bespoke methods to estimate the width of the protected region resulting from protein-DNA binding and to infer binding positions from ChIP-nexus data. Finally, we applied our peak calling method as well as the two other methods MACE and MACS2 to the available ChIP-nexus data. The Q-nexus software is efficient and easy to use. Novel statistics on duplication rates that take random barcodes into account are calculated. Our method for the estimation of the width of the protected region yields unbiased signatures that are highly reproducible for biological replicates and at the same time very specific for the respective factors analyzed. As judged by the irreproducible discovery rate (IDR), our peak calling algorithm shows substantially better reproducibility. An implementation of Q-nexus is available at http://charite.github.io/Q/ .
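A minimal sketch of barcode-aware duplicate removal of the kind the abstract describes: reads are treated as PCR duplicates only if they share both the mapping coordinates and the random barcode, so independent biological reads from the same position survive. The read representation is a placeholder, not Q-nexus's internal format.

```python
# Sketch: selective PCR duplicate removal using random barcodes.
# A read is (chrom, pos, strand, barcode); only exact (position, barcode)
# collisions are discarded as PCR duplicates.
def dedup(reads):
    seen = set()
    kept = []
    for read in reads:
        if read not in seen:
            seen.add(read)
            kept.append(read)
    return kept

reads = [("chr1", 100, "+", "ACGT"),
         ("chr1", 100, "+", "ACGT"),   # PCR duplicate: same position AND barcode
         ("chr1", 100, "+", "TTAG")]   # same position, new barcode: kept
print(len(dedup(reads)))  # 2
```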
The product is the presentation - a draft set of slides is attached. The presentation is organized around a comprehensive framework for nanomaterial evaluation published last year (Boyes et al., 2017). The framework considers nanomaterial enabled products across their life span...
Watershed Influences on Nearshore Waters Across the Entire US Great Lakes Coastal Region
We have combined three elements of observation to enable a comprehensive characterization of the Great Lakes nearshore that links nearshore conditions with their adjacent coastal watersheds. The three elements are: 1) a shore-parallel, high-resolution survey of the nearshore usin...
group depend on the project phase and the maturity of the NLC design. Currently the NLC project is in design approaches that will enable cost estimates, schedules, risk assessment and risk reduction availability are utilized in generating and selecting among design alternatives. A more comprehensive version
Differentiating Instruction for Disabled Students in Inclusive Classrooms
ERIC Educational Resources Information Center
Broderick, Alicia; Mehta-Parekh, Heeral; Reid, D. Kim
2005-01-01
Differentiating instruction, a comprehensive approach to teaching, enables the successful inclusion of all students, including the disabled, in general-education classrooms. As inclusive educators, we argue that disability is an enacted, interactional process and not an empirical, stable fact or condition. We recommend planning responsive lessons…
Planning for School Emergencies.
ERIC Educational Resources Information Center
Della-Giustina, Daniel E.
This document is designed to provide civil leaders and school administrators with a resource that will enable them to develop comprehensive contingency plans for specific emergency situations. A discussion of disaster and emergency management planning includes an outline of the objectives of emergency planning that were established for this guide.…
Learning from Animation Enabled by Collaboration
ERIC Educational Resources Information Center
Rebetez, Cyril; Betrancourt, Mireille; Sangin, Mirweis; Dillenbourg, Pierre
2010-01-01
Animated graphics are extensively used in multimedia instructions explaining how natural or artificial dynamic systems work. As animation directly depicts spatial changes over time, it is legitimate to believe that animated graphics will improve comprehension over static graphics. However, the research failed to find clear evidence in favour of…
Administrative Uses of Computers in the Schools.
ERIC Educational Resources Information Center
Bluhm, Harry P.
This book, intended for school administrators, provides a comprehensive account of how computer information systems can enable administrators at both middle and top management levels to manage the educational enterprise. It can be used as a textbook in an educational administration course emphasizing computer technology in education, an…
Cartoon Violence: Is It as Detrimental to Preschoolers as We Think?
ERIC Educational Resources Information Center
Peters, Kristen M.; Blumberg, Fran C.
2002-01-01
Critically reviews research on effects of cartoon violence on children's moral understanding and behavior to enable early childhood educators and parents to make informed decisions about what constitutes potentially harmful television viewing. Focuses on preschoolers' limited comprehension of television content and relatively sophisticated moral…
Li, Zhucui; Lu, Yan; Guo, Yufeng; Cao, Haijie; Wang, Qinhong; Shui, Wenqing
2018-10-31
Data analysis represents a key challenge for untargeted metabolomics studies, as it commonly requires extensive processing of the thousands of metabolite peaks included in raw high-resolution MS data. Although a number of software packages have been developed to facilitate untargeted data processing, their capabilities in feature detection, quantification and marker selection have not been comprehensively scrutinized using a well-defined benchmark sample set. In this study, we acquired a benchmark dataset from standard mixtures consisting of 1100 compounds with specified concentration ratios, including 130 compounds with significant variation of concentrations. The five software packages evaluated here (MS-Dial, MZmine 2, XCMS, MarkerView, and Compound Discoverer) showed similar performance in the detection of true features derived from compounds in the mixtures. However, significant differences among the packages were observed in the relative quantification of true features in the benchmark dataset. MZmine 2 outperformed the other software in terms of quantification accuracy, and it reported the most true discriminating markers together with the fewest false markers. Furthermore, we assessed the selection of discriminating markers by the different packages using both the benchmark dataset and a real-case metabolomics dataset, and propose the combined usage of two packages to increase the confidence of biomarker identification. Our findings from this comprehensive evaluation of untargeted metabolomics software should help guide future improvements of these widely used bioinformatics tools and enable users to properly interpret their metabolomics results. Copyright © 2018 Elsevier B.V. All rights reserved.
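A minimal sketch of the benchmark logic described above, assuming synthetic designed and measured ratios and a toy significance rule; the study's actual statistics and thresholds are not reproduced here.

```python
# Sketch: score a package's quantification accuracy by comparing measured
# intensity ratios between two mixtures against the known, designed
# concentration ratios, then count true/false discriminating markers.
import numpy as np

expected_ratio = np.array([1.0, 1.0, 2.0, 0.5, 4.0])   # designed ratios
measured_ratio = np.array([1.1, 0.9, 1.8, 0.6, 3.5])   # per-tool peak ratios

log_err = np.log2(measured_ratio) - np.log2(expected_ratio)
print(f"median |log2 error|: {np.median(np.abs(log_err)):.3f}")

# A compound is a "true discriminating marker" if its designed ratio != 1;
# compare against which compounds the tool flags as significant.
truth = expected_ratio != 1.0
flagged = np.abs(np.log2(measured_ratio)) > 0.5        # toy decision rule
tp = np.sum(flagged & truth)
fp = np.sum(flagged & ~truth)
print(f"true markers found: {tp}, false markers: {fp}")
```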
The UEA sRNA Workbench (version 4.4): a comprehensive suite of tools for analyzing miRNAs and sRNAs.
Stocks, Matthew B; Mohorianu, Irina; Beckers, Matthew; Paicu, Claudia; Moxon, Simon; Thody, Joshua; Dalmay, Tamas; Moulton, Vincent
2018-05-02
RNA interference, a highly conserved regulatory mechanism, is mediated via small RNAs. Recent technical advances have enabled the analysis of larger, more complex datasets and the investigation of microRNAs and the less well known small interfering RNAs. However, the size and intricacy of current data require a comprehensive set of tools able to discriminate genuine patterns from low-level, noise-like variation. Numerous and varied suggestions from the community represent an invaluable source of ideas for future tools, so the ability of the community to contribute to this software is essential. We present a new version of the UEA sRNA Workbench, reconfigured to allow the easy insertion of new tools and workflows. In its released form, it comprises a suite of tools in a user-friendly environment, with enhanced capabilities for comprehensive processing of sRNA-seq data, e.g. tools for accurate prediction of sRNA loci (CoLIde) and miRNA loci (miRCat2), as well as workflows that guide users through common first steps in sRNA-seq analyses such as quality checking of the input data, normalization of abundances and detection of differential expression. The UEA sRNA Workbench is available at: http://srna-workbench.cmp.uea.ac.uk The source code is available at: https://github.com/sRNAworkbenchuea/UEA_sRNA_Workbench. v.moulton@uea.ac.uk.
Spencer, Mercedes; Wagner, Richard K
2018-06-01
The purpose of this meta-analysis was to examine the comprehension problems of children who have a specific reading comprehension deficit (SCD), which is characterized by poor reading comprehension despite adequate decoding. The meta-analysis included 86 studies of children with SCD who were assessed in reading comprehension and oral language (vocabulary, listening comprehension, storytelling ability, and semantic and syntactic knowledge). Results indicated that children with SCD had deficits in oral language (d = -0.78, 95% CI [-0.89, -0.68]), but these deficits were not as severe as their deficit in reading comprehension (d = -2.78, 95% CI [-3.01, -2.54]). When compared to reading comprehension age-matched normal readers, the oral language skills of the two groups were comparable (d = 0.32, 95% CI [-0.49, 1.14]), which suggests that the oral language weaknesses of children with SCD represent a developmental delay rather than developmental deviance. Theoretical and practical implications of these findings are discussed.
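For readers unfamiliar with the effect-size machinery behind these numbers, the standardized mean difference and its confidence interval are conventionally computed as follows (textbook formulas; the meta-analysis's exact estimator, e.g. a Hedges' g correction or random-effects pooling, may differ):

```latex
d = \frac{M_1 - M_2}{s_p}, \qquad
s_p = \sqrt{\frac{(n_1 - 1) s_1^2 + (n_2 - 1) s_2^2}{n_1 + n_2 - 2}}, \qquad
\mathrm{SE}(d) \approx \sqrt{\frac{n_1 + n_2}{n_1 n_2} + \frac{d^2}{2(n_1 + n_2)}}
```

with the 95% confidence interval given by d ± 1.96 SE(d).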
Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks
Kaltenbacher, Barbara; Hasenauer, Jan
2017-01-01
Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, computational methods for the analysis of ODE models describing hundreds or thousands of biochemical species and reactions have so far been lacking. While individual simulations are feasible, the inference of model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large-scale biochemical reaction networks. We present the approach for time-discrete measurements and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351
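The scaling claim rests on the structure of the adjoint method. In outline, for an ODE model and an integral objective (a standard formulation, simplified here to a continuous measurement term rather than the paper's time-discrete case):

```latex
\dot{x} = f(x, \theta), \quad x(0) = x_0, \qquad
J(\theta) = \int_0^T \ell\big(y(t), x(t)\big)\, dt
```

the adjoint state p(t) is obtained from a single backward solve,

```latex
\dot{p} = -\left( \frac{\partial f}{\partial x} \right)^{\!\top} p
          - \left( \frac{\partial \ell}{\partial x} \right)^{\!\top},
\qquad p(T) = 0,
```

after which the full gradient follows from one quadrature,

```latex
\nabla_\theta J = \int_0^T \left( \frac{\partial f}{\partial \theta} \right)^{\!\top} p \, dt,
```

so the cost is dominated by one forward and one backward ODE solve and is essentially independent of the number of parameters, unlike forward sensitivities, which require one extra system per parameter.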
Cavity-Enhanced Raman Spectroscopy for Food Chain Management
Sandfort, Vincenz; Goldschmidt, Jens; Wöllenstein, Jürgen
2018-01-01
Comprehensive food chain management requires the monitoring of many parameters including temperature, humidity, and multiple gases. The latter is highly challenging because no low-cost technology for the simultaneous chemical analysis of multiple gaseous components currently exists. This contribution proposes the use of cavity enhanced Raman spectroscopy to enable online monitoring of all relevant components using a single laser source. A laboratory scale setup is presented and characterized in detail. Power enhancement of the pump light is achieved in an optical resonator with a Finesse exceeding 2500. A simulation for the light scattering behavior shows the influence of polarization on the spatial distribution of the Raman scattered light. The setup is also used to measure three relevant showcase gases to demonstrate the feasibility of the approach, including carbon dioxide, oxygen and ethene. PMID:29495501
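To put the quoted finesse in perspective, an idealized back-of-envelope bound for the resonant power buildup (lossless, impedance-matched two-mirror cavity; any real setup falls below this):

```latex
F = \frac{\pi \sqrt{r_1 r_2}}{1 - r_1 r_2}, \qquad
\frac{P_{\mathrm{circ}}}{P_{\mathrm{in}}} \approx \frac{F}{\pi}
```

so a finesse above 2500 corresponds to a circulating pump power roughly 800 times the input, which is what makes otherwise weak spontaneous Raman scattering practical for multi-gas sensing.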
NASA Technical Reports Server (NTRS)
1990-01-01
Since the publication of the Final Environmental Impact Statement (FEIS) and the Record of Decision on the FEIS describing the potential impacts to human health and the environment associated with the program, three factors have caused NASA to initiate additional studies regarding these issues. These factors are: (1) the U.S. Army Corps of Engineers and the Environmental Protection Agency (EPA) agreed to use the same comprehensive procedures to identify and delineate wetlands; (2) EPA has given NASA further guidance on how best to simulate the exhaust plume from Advanced Solid Rocket Motor (ASRM) testing through computer modeling, enabling more realistic analysis of emission impacts; and (3) public concerns have been raised concerning short- and long-term impacts on human health and the environment from ASRM testing.
Aszyk, Justyna; Kot-Wasik, Agata
Non-targeted screening of drugs present in herbal products known as "legal highs", and in hair, a biological matrix commonly used in toxicological investigations, was accomplished with the use of high-pressure liquid chromatography coupled with quadrupole time-of-flight mass spectrometry (HPLC-Q-TOF-MS). In total, 25 and 14 therapeutic drugs and psychoactive substances/metabolites were detected in the investigated hair samples and herbal products, respectively. We demonstrate that the HPLC-Q-TOF methodology is a powerful tool for qualitative analysis in the identification of these designer drugs, enabling a laboratory to stay up-to-date with the drugs being sold as legal high products on the black market.
Olokundun, Maxwell; Iyiola, Oluwole; Ibidunni, Stephen; Ogbari, Mercy; Falola, Hezekiah; Salau, Odunayo; Peter, Fred; Borishade, Taiye
2018-06-01
The article presents data on the effectiveness of entrepreneurship curriculum contents for university students' entrepreneurial interest and knowledge. The study focused on the perceptions of Nigerian university students, with emphasis on the first four universities in Nigeria to offer a degree programme in entrepreneurship. The study adopted a quantitative approach with a descriptive research design to establish trends related to the objective of the study. A survey was used as the quantitative research method. The population of this study included all students in the selected universities. Data were analyzed with the Statistical Package for the Social Sciences (SPSS), using mean scores as the statistical tool of analysis. The field data set is made widely accessible to enable critical or more comprehensive investigation.
Navigating legal constraints in clinical data warehousing: a case study in personalized medicine.
Jefferys, Benjamin R; Nwankwo, Iheanyi; Neri, Elias; Chang, David C W; Shamardin, Lev; Hänold, Stefanie; Graf, Norbert; Forgó, Nikolaus; Coveney, Peter
2013-04-06
Personalized medicine relies in part upon comprehensive data on patient treatment and outcomes, both for analysis leading to improved models that provide the basis for enhanced treatment, and for direct use in clinical decision-making. A data warehouse is an information technology for combining and standardizing multiple databases. Data warehousing of clinical data is constrained by many legal and ethical considerations, owing to the sensitive nature of the data being stored. We describe an unconstrained clinical data warehousing architecture, some of the legal constraints that have led us to reconsider this architecture, and the legal and technical solutions to these constraints developed for the clinical data warehouse in the personalized medicine project p-medicine. We also propose some changes to the legal constraints that will further enable clinical research.
Naidoo, P; Liu, V J; Bergin, S
2015-01-01
Diabetic complications in the lower extremity are associated with significant morbidity and mortality, and impact heavily upon the public health system. Early and accurate recognition of these abnormalities is crucial, enabling the early initiation of treatments and thus avoiding or minimizing deformity, dysfunction and amputation. Following careful clinical assessment, radiological imaging is central to the diagnostic and follow-up process. We aim to provide a comprehensive review of diabetic lower limb complications designed to assist radiologists and to contribute to better outcomes for these patients. PMID:26111070
Implications of the Joint Comprehensive Plan of Action
NASA Astrophysics Data System (ADS)
Perkovich, George
2017-11-01
This essay describes the background behind the July 2015 Joint Comprehensive Plan of Action that was negotiated to redress the crisis that had developed around Iran's nuclear activities, and summarizes some of the agreement's key features. The essay then highlights political and strategic factors that enabled the diplomatic breakthrough, and draws lessons that could inform approaches to future proliferation challenges. The conclusion suggests how some of the agreement's innovative features could be built upon and applied more broadly to reduce risks that civilian nuclear energy programs could be diverted for military purposes and to inform approaches to nuclear disarmament in the future.
Comprehensiveness and humanization of nursing care management in the Intensive Care Unit.
Medeiros, Adriane Calvetti de; Siqueira, Hedi Crecencia Heckler de; Zamberlan, Claudia; Cecagno, Diana; Nunes, Simone Dos Santos; Thurow, Mara Regina Bergmann
2016-01-01
Identifying the elements that promote comprehensiveness and humanization of nursing care management in the Intensive Care Unit, with an ecosystemic approach. A qualitative documentary study; the method of documentary analysis was used for data analysis. Four pre-established categories were identified: the Technical, Organizational, Technological and Humanizing dimensions. The data yielded two subcategories within the Humanizing Dimension category, 'Comprehensiveness in healthcare actions' and 'Integrating processes and promoters of humanization', which bring forth implications and challenges for the ways health work processes are managed, enabling organizational, structural and managerial changes to the care provided. It was considered that all structural elements in managing nursing care with a focus on the needs of users should be in line with public policies and the principles of comprehensiveness and humanization, and thus possess strong potential for transforming health practices.
Teaching Basic Reading Skills in Secondary Schools.
ERIC Educational Resources Information Center
Carnine, Linda
1980-01-01
This document presents diagnostic and prescriptive techniques that will enable teachers to enhance secondary school students' learning through reading in content areas. Three terms used in the document are defined in Section I: "vocabulary skills" include word attack skills, sight word skills, and word meanings; "comprehension skills" are literal,…
Monitoring Knowledge Base (MKB)
The Monitoring Knowledge Base (MKB) is a compilation of emissions measurement and monitoring techniques associated with air pollution control devices, industrial process descriptions, and permitting techniques, including flexible permit development. Using MKB, one can gain a comprehensive understanding of emissions sources, control devices, and monitoring techniques, enabling one to determine appropriate permit terms and conditions.
Research in Education: Evidence-Based Inquiry, 7th Edition. MyEducationLab Series
ERIC Educational Resources Information Center
McMillan, James H.; Schumacher, Sally
2010-01-01
This substantially revised text provides a comprehensive, highly accessible, and student friendly introduction to the principles, concepts, and methods currently used in educational research. This text provides a balanced combination of quantitative and qualitative methods and enables students to master skills in reading, understanding,…
ERIC Educational Resources Information Center
Kanan, Linda M.
2010-01-01
Much has been written about the use of threat assessment. Schools are encouraged to have threat assessment teams and a threat assessment process as part of a comprehensive safe schools effort. Encouraging and enabling members of the school community to report possible threats in a timely manner is an essential component of an effective threat…
USDA-ARS?s Scientific Manuscript database
Advances in sequencing and genotyping technologies have enabled generation of several thousand markers including SSRs, SNPs, DArTs, hundreds of thousands transcript reads and BAC-end sequences in chickpea, pigeonpea and groundnut, three major legume crops of the semi-arid tropics. Comprehensive tran...
CASAS: An Effective Measurement System for Life Skills.
ERIC Educational Resources Information Center
Stiles, Richard L.; And Others
The California Adult Student Assessment System (CASAS) is a comprehensive educational system designed to enable adult educators to develop and evaluate a life skills curriculum for competency based educational programs. The system comprises the CASAS Competency List, the CASAS Item Bank, the User's Manual, the Curriculum Index and Matrix, and…
Standards Handbook. Version 4.0. What Works Clearinghouse™
ERIC Educational Resources Information Center
What Works Clearinghouse, 2017
2017-01-01
The What Works Clearinghouse (WWC) systematic review process is the basis of many of its products, enabling the WWC to use consistent, objective, and transparent standards and procedures in its reviews, while also ensuring comprehensive coverage of the relevant literature. The WWC systematic review process consists of five steps: (1) Developing…
Fitnessgram: Part 1--Critical Elements and Cues
ERIC Educational Resources Information Center
Masterson, Carolyn; Walkuski, Jeffrey J.
2004-01-01
The National Association for Sport and Physical Education's (NASPE) Physical Best program and the Cooper Institute for Aerobic Research's (CIAR) FITNESSGRAM have established a partnership to create a comprehensive health-related fitness education and assessment package. The goal of the Physical Best program is to enable students to acquire the…
Career Assessment and Planning Strategies for Postsecondary Students with Disabilities
ERIC Educational Resources Information Center
Roessler, Richard T.; Hennessey, Mary L.; Hogan, Ebony M.; Savickas, Suzanne
2009-01-01
Career assessment and planning services that enable students with disabilities to make successful transitions from higher education to careers are an important component often missing in the postsecondary educational experience. Comprehensive services in this regard involve students in considering how to incorporate their preferences, assets, and…
Leveraging ARRA Funding for Developing Comprehensive State Longitudinal Data Systems
ERIC Educational Resources Information Center
Pfeiffer, Jay; Klein, Steven; Levesque, Karen
2009-01-01
The American Recovery and Reinvestment Act (ARRA) provides several funding opportunities that can assist states in designing, developing, and implementing statewide education longitudinal data systems. These new and enhanced information systems will enable states to track student progress within and across the secondary and postsecondary education…
ERIC Educational Resources Information Center
Woito, Robert, Ed.
This kit presents a comprehensive introduction for students to arms control and disarmament issues. Included are copies of published and unpublished articles for each topic. Section I provides a self-survey to enable students to assess their own attitudes, values, and knowledge. The survey poses questions for which students select one of several…
Learning Resources for Community Education: Design Notes on Delivery Systems.
ERIC Educational Resources Information Center
Bhola, H. S.
A comprehensive and adaptable system of organizational arrangements is proposed in this document that will enable educational planners in Latin American countries to develop and deliver learning resources for community education and community action programs. A three-tier system of learning resources centers for community education is described.…
Building Staff Competencies and Selecting Communications Methods for Waste Management Programs.
ERIC Educational Resources Information Center
Richardson, John G.
The Waste Management Institute provided in-service training to interested County Extension agents in North Carolina to enable them to provide leadership in developing and delivering a comprehensive county-level waste management program. Training included technical, economic, environmental, social, and legal aspects of waste management presented in…
Examining the Effects of Classroom Discussion on Students' Comprehension of Text: A Meta-Analysis
ERIC Educational Resources Information Center
Murphy, P. Karen; Wilkinson, Ian A. G.; Soter, Anna O.; Hennessey, Maeghan N.; Alexander, John F.
2009-01-01
The role of classroom discussions in comprehension and learning has been the focus of investigations since the early 1960s. Despite this long history, no syntheses have quantitatively reviewed the vast body of literature on classroom discussions for their effects on students' comprehension and learning. This comprehensive meta-analysis of…
ERIC Educational Resources Information Center
Lan, Yi-Chin; Lo, Yu-Ling; Hsu, Ying-Shao
2014-01-01
Comprehension is the essence of reading. Finding appropriate and effective reading strategies to support students' reading comprehension has always been a critical issue for educators. This article presents findings from a meta-analysis of 17 studies of metacognitive strategy instruction on students' reading comprehension in computerized…
Commiskey, Patricia; Afshinnik, Arash; Cothren, Elizabeth; Gropen, Toby; Iwuchukwu, Ifeanyi; Jennings, Bethany; McGrade, Harold C; Mora-Guillot, Julia; Sabharwal, Vivek; Vidal, Gabriel A; Zweifler, Richard M; Gaines, Kenneth
2017-04-01
United States (US) and worldwide telestroke programs frequently focus only on emergency room hyper-acute stroke management. This article describes a comprehensive, telemedicine-enabled stroke care delivery system that combines "drip and ship" and "drip and keep" models with a comprehensive stroke center primary hub at Ochsner Medical Center in New Orleans, advanced stroke-capable regional hubs, and geographically-aligned, "stroke-ready" spokes. The primary hub provides vascular neurology expertise via telemedicine and monitors care for patients remaining at regional hubs and spokes using a multidisciplinary team approach. By 2014, primary hub telestroke consults had grown to ≈1000/year, with an average of 16 minutes from door to consult initiation and 20 minutes to consult completion, and 29% of ischemic stroke patients received recombinant tissue-type plasminogen activator (rtPA), a 275% increase. Most patients remained in hospitals close to home, but neurointensive care and interventional procedures were common reasons for transfer to the primary hub. Given the time sensitivity and expert consultation needed for complex acute stroke care delivery paradigms, telestroke programs are effective for fulfilling unmet care needs. Combining drip and ship and drip and keep management allows more patients to stay "local," limiting primary hub transfer unless more advanced services are required. Post-admission telestroke management at spokes increases personnel efficiency and can positively impact stroke outcomes.
Zhang, Bofei; Hu, Senyang; Baskin, Elizabeth; Patt, Andrew; Siddiqui, Jalal K; Mathé, Ewy A
2018-02-22
The value of metabolomics in translational research is undeniable, and metabolomics data are increasingly generated in large cohorts. The functional interpretation of disease-associated metabolites though is difficult, and the biological mechanisms that underlie cell type or disease-specific metabolomics profiles are oftentimes unknown. To help fully exploit metabolomics data and to aid in its interpretation, analysis of metabolomics data with other complementary omics data, including transcriptomics, is helpful. To facilitate such analyses at a pathway level, we have developed RaMP (Relational database of Metabolomics Pathways), which combines biological pathways from the Kyoto Encyclopedia of Genes and Genomes (KEGG), Reactome, WikiPathways, and the Human Metabolome DataBase (HMDB). To the best of our knowledge, an off-the-shelf, public database that maps genes and metabolites to biochemical/disease pathways and can readily be integrated into other existing software is currently lacking. For consistent and comprehensive analysis, RaMP enables batch and complex queries (e.g., list all metabolites involved in glycolysis and lung cancer), can readily be integrated into pathway analysis tools, and supports pathway overrepresentation analysis given a list of genes and/or metabolites of interest. For usability, we have developed a RaMP R package (https://github.com/Mathelab/RaMP-DB), including a user-friendly RShiny web application, that supports basic simple and batch queries, pathway overrepresentation analysis given a list of genes or metabolites of interest, and network visualization of gene-metabolite relationships. The package also includes the raw database file (mysql dump), thereby providing a stand-alone downloadable framework for public use and integration with other tools. In addition, the Python code needed to recreate the database on another system is also publicly available (https://github.com/Mathelab/RaMP-BackEnd). Updates for databases in RaMP will be checked multiple times a year and RaMP will be updated accordingly.
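RaMP's pathway overrepresentation analysis is, at its core, a counting test. Below is a minimal sketch of the underlying statistic, a hypergeometric overrepresentation p-value with made-up counts; this is not RaMP's API, which is provided through the R package above.

```python
from scipy.stats import hypergeom

# Hypothetical counts: N annotated metabolites in the database, K of them in
# "glycolysis", n metabolites in the user's list, k of those in the pathway.
N, K, n, k = 2500, 40, 100, 8

# Overrepresentation p-value: P(X >= k) under the hypergeometric null.
p = hypergeom.sf(k - 1, N, K, n)
print(f"glycolysis enrichment p = {p:.3g}")
```

In practice a tool like RaMP repeats this test across all pathways and corrects the resulting p-values for multiple testing.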
Yoshida, Yoko; Miyata, Toshiyuki; Matsumoto, Masanori; Shirotani-Ikejima, Hiroko; Uchida, Yumiko; Ohyama, Yoshifumi; Kokubo, Tetsuro; Fujimura, Yoshihiro
2015-01-01
For thrombotic microangiopathies (TMAs), the diagnosis of atypical hemolytic uremic syndrome (aHUS) is made by ruling out Shiga toxin-producing Escherichia coli (STEC)-associated HUS and ADAMTS13 activity-deficient thrombotic thrombocytopenic purpura (TTP), often using the exclusion criteria for secondary TMAs. Nowadays, assays for ADAMTS13 activity and evaluation for STEC infection can be performed within a few hours. However, a confident diagnosis of aHUS often requires comprehensive gene analysis of the alternative complement activation pathway, which usually takes at least several weeks. Moreover, predisposing genetic abnormalities are identified in only approximately 70% of aHUS cases. To facilitate the diagnosis of complement-mediated aHUS, we describe a quantitative hemolytic assay using sheep red blood cells (RBCs) and human citrated plasma, spiked with or without a novel inhibitory anti-complement factor H (CFH) monoclonal antibody. Among 45 aHUS patients in Japan, 24% (11/45) had moderate-to-severe (≥50%) hemolysis, whereas the remaining 76% (34/45) of patients had mild or no hemolysis (<50%). The former group is largely attributed to CFH-related abnormalities, and the latter group has C3-p.I1157T mutations (16/34), which were identified by restriction fragment length polymorphism (RFLP) analysis. Thus, a quantitative hemolytic assay coupled with RFLP analysis enabled the early diagnosis of complement-mediated aHUS in 60% (27/45) of patients in Japan within a week of presentation. We hypothesize that this novel quantitative hemolytic assay would be more useful in a Caucasian population, who may have a higher proportion of CFH mutations than Japanese patients. PMID:25951460
A Method for Identification and Analysis of Non-Overlapping Myeloid Immunophenotypes in Humans
Gustafson, Michael P.; Lin, Yi; Maas, Mary L.; Van Keulen, Virginia P.; Johnston, Patrick B.; Peikert, Tobias; Gastineau, Dennis A.; Dietz, Allan B.
2015-01-01
The development of flow cytometric biomarkers in human studies and clinical trials has been slowed by inconsistent sample processing, use of cell surface markers, and reporting of immunophenotypes. Additionally, the function(s) of distinct cell types as biomarkers cannot be accurately defined without the proper identification of homogeneous populations. As such, we developed a method for the identification and analysis of human leukocyte populations by the use of eight 10-color flow cytometric protocols in combination with novel software analyses. This method utilizes un-manipulated biological sample preparation that allows for the direct quantitation of leukocytes and non-overlapping immunophenotypes. We specifically designed myeloid protocols that enable us to define distinct phenotypes that include mature monocytes, granulocytes, circulating dendritic cells, immature myeloid cells, and myeloid derived suppressor cells (MDSCs). We also identified CD123 as an additional distinguishing marker for the phenotypic characterization of immature LIN-CD33+HLA-DR- MDSCs. Our approach permits the comprehensive analysis of all peripheral blood leukocytes and yields data that is highly amenable for standardization across inter-laboratory comparisons for human studies. PMID:25799053
Real Option in Capital Budgeting for SMEs: Insight from Steel Company
NASA Astrophysics Data System (ADS)
Muharam, F. M.; Tarrazon, M. A.
2017-06-01
Complex investment projects can be analysed accurately only if flexibility and a comprehensive treatment of uncertainty are incorporated into valuation. Discounted cash flow (DCF) analysis fails to cope with the strategic future alternatives that affect the true value of investment projects. Real option valuation (ROV) is the right tool for this purpose, since it enables calculation of the enlarged, or strategic, Net Present Value (ENPV). This study provides insight into the use of ROV in the capital budgeting and investment decision-making processes of SMEs. Focusing on first-stage processing in the steel industry, we analyse the alternatives to cancel, expand, defer, or abandon. Complemented by an analysis of multiple interacting options and a sensitivity analysis, our findings show that ROV is beneficial for complex investment projects independently of company size and is particularly suitable in scenarios with scarce resources. ROV is thus a plausible and beneficial addition to the strategic decision-making process of SMEs.
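To make the ENPV idea concrete, here is a minimal sketch, not the authors' model, that values an option to defer a project on a standard Cox-Ross-Rubinstein binomial lattice; all figures (project value, cost, volatility, rate) are hypothetical.

```python
import math

def defer_option_value(V, I, r, sigma, T, steps=500):
    """Value an American-style option to defer investing: pay I at any time
    up to T to receive a project worth V (CRR binomial lattice)."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))      # up factor
    d = 1.0 / u                              # down factor
    p = (math.exp(r * dt) - d) / (u - d)     # risk-neutral up probability
    disc = math.exp(-r * dt)
    # Option payoffs at maturity: invest only if project value exceeds cost.
    vals = [max(V * u**j * d**(steps - j) - I, 0.0) for j in range(steps + 1)]
    # Roll back through the lattice, allowing early exercise at every node.
    for i in range(steps - 1, -1, -1):
        vals = [max(disc * (p * vals[j + 1] + (1 - p) * vals[j]),
                    V * u**j * d**(i - j) - I) for j in range(i + 1)]
    return vals[0]

# Hypothetical small-firm project: value 1.0M, cost 0.9M, 2-year deferral window.
V, I = 1_000_000, 900_000
static_npv = V - I                           # now-or-never NPV
enpv = defer_option_value(V, I, r=0.04, sigma=0.35, T=2.0)
print(f"static NPV   = {static_npv:,.0f}")
print(f"ENPV (defer) = {enpv:,.0f}, flexibility premium = {enpv - static_npv:,.0f}")
```

The flexibility premium is exactly the gap DCF misses: the deferral option is worth more than the now-or-never NPV because management can wait out unfavorable states.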
Greenwood, Edward JD; Matheson, Nicholas J; Wals, Kim; van den Boomen, Dick JH; Antrobus, Robin; Williamson, James C; Lehner, Paul J
2016-01-01
Viruses manipulate host factors to enhance their replication and evade cellular restriction. We used multiplex tandem mass tag (TMT)-based whole cell proteomics to perform a comprehensive time course analysis of >6500 viral and cellular proteins during HIV infection. To enable specific functional predictions, we categorized cellular proteins regulated by HIV according to their patterns of temporal expression. We focussed on proteins depleted with similar kinetics to APOBEC3C, and found the viral accessory protein Vif to be necessary and sufficient for CUL5-dependent proteasomal degradation of all members of the B56 family of regulatory subunits of the key cellular phosphatase PP2A (PPP2R5A-E). Quantitative phosphoproteomic analysis of HIV-infected cells confirmed Vif-dependent hyperphosphorylation of >200 cellular proteins, particularly substrates of the aurora kinases. The ability of Vif to target PPP2R5 subunits is found in primate and non-primate lentiviral lineages, and remodeling of the cellular phosphoproteome is therefore a second ancient and conserved Vif function. DOI: http://dx.doi.org/10.7554/eLife.18296.001 PMID:27690223
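The temporal-pattern categorization described here can be approximated by clustering z-scored time courses. The sketch below runs k-means on synthetic profiles and is only an illustration of the general idea, not the study's actual pipeline; the APOBEC3C index is a placeholder.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical data: 1,000 proteins x 6 infection time points (log2 ratios).
profiles = rng.normal(size=(1000, 6)).cumsum(axis=1)

# z-score each time course so clusters reflect shape, not magnitude.
z = (profiles - profiles.mean(axis=1, keepdims=True)) / profiles.std(axis=1, keepdims=True)

labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(z)

# Proteins sharing a cluster with a reference protein (placeholder index for
# APOBEC3C) are candidates for depletion with similar kinetics.
apobec3c_idx = 0
co_regulated = np.where(labels == labels[apobec3c_idx])[0]
print(f"{co_regulated.size} proteins share the reference temporal pattern")
```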
Ellrott, Kyle; Bailey, Matthew H; Saksena, Gordon; Covington, Kyle R; Kandoth, Cyriac; Stewart, Chip; Hess, Julian; Ma, Singer; Chiotti, Kami E; McLellan, Michael; Sofia, Heidi J; Hutter, Carolyn; Getz, Gad; Wheeler, David; Ding, Li
2018-03-28
The Cancer Genome Atlas (TCGA) cancer genomics dataset includes over 10,000 tumor-normal exome pairs across 33 different cancer types, in total >400 TB of raw data files requiring analysis. Here we describe the Multi-Center Mutation Calling in Multiple Cancers project, our effort to generate a comprehensive encyclopedia of somatic mutation calls for the TCGA data to enable robust cross-tumor-type analyses. Our approach accounts for variance and batch effects introduced by the rapid advancement of DNA extraction, hybridization-capture, sequencing, and analysis methods over time. We present best practices for applying an ensemble of seven mutation-calling algorithms with scoring and artifact filtering. The dataset created by this analysis includes 3.5 million somatic variants and forms the basis for PanCan Atlas papers. The results have been made available to the research community along with the methods used to generate them. This project is the result of collaboration from a number of institutes and demonstrates how team science drives extremely large genomics projects. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
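The ensemble logic, keeping variants supported by multiple callers and then filtering artifacts, can be sketched roughly as follows; the caller names, consensus threshold, and filter stub are illustrative rather than the project's exact rules.

```python
from collections import defaultdict

# Hypothetical per-caller outputs: sets of (chrom, pos, ref, alt) variants.
calls = {
    "muse":    {("chr17", 7577120, "C", "T"), ("chr12", 25398284, "C", "A")},
    "mutect":  {("chr17", 7577120, "C", "T"), ("chr12", 25398284, "C", "A")},
    "strelka": {("chr17", 7577120, "C", "T")},
}

support = defaultdict(set)
for caller, variants in calls.items():
    for v in variants:
        support[v].add(caller)

MIN_CALLERS = 2            # illustrative consensus threshold

def artifact(v):
    """Stand-in for artifact filters (e.g., OxoG, strand bias, panel of normals)."""
    return False

consensus = [v for v, c in support.items() if len(c) >= MIN_CALLERS and not artifact(v)]
for v in consensus:
    print(v, sorted(support[v]))
```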
Semantic Information Processing of Physical Simulation Based on Scientific Concept Vocabulary Model
NASA Astrophysics Data System (ADS)
Kino, Chiaki; Suzuki, Yoshio; Takemiya, Hiroshi
Scientific Concept Vocabulary (SCV) has been developed to actualize the Cognitive methodology based Data Analysis System (CDAS), which supports researchers in analyzing large-scale data efficiently and comprehensively. SCV is an information model for processing semantic information in physics and engineering. In the SCV model, all semantic information is related to concrete data and algorithms. Consequently, SCV enables a data analysis system to recognize the meaning of execution results output from a numerical simulation. This method has allowed a data analysis system to extract important information from a scientific viewpoint. Previous research has shown that SCV is able to describe simple scientific indices and scientific perceptions. However, it is difficult to describe complex scientific perceptions with the SCV model as currently proposed. In this paper, a new data structure for SCV is proposed in order to describe scientific perceptions in more detail. Additionally, a prototype of the new model has been constructed and applied to actual numerical simulation data. The results indicate that the new SCV is able to describe more complex scientific perceptions.
A high-resolution 7-Tesla fMRI dataset from complex natural stimulation with an audio movie
Hanke, Michael; Baumgartner, Florian J.; Ibe, Pierre; Kaule, Falko R.; Pollmann, Stefan; Speck, Oliver; Zinke, Wolf; Stadler, Jörg
2014-01-01
Here we present a high-resolution functional magnetic resonance imaging (fMRI) dataset – 20 participants recorded at high field strength (7 Tesla) during prolonged stimulation with an auditory feature film (“Forrest Gump”). In addition, a comprehensive set of auxiliary data (T1w, T2w, DTI, susceptibility-weighted image, angiography) as well as measurements to assess technical and physiological noise components have been acquired. An initial analysis confirms that these data can be used to study common and idiosyncratic brain response patterns to complex auditory stimulation. Among the potential uses of this dataset are the study of auditory attention and cognition, language and music perception, and social perception. The auxiliary measurements enable a large variety of additional analysis strategies that relate functional response patterns to structural properties of the brain. Alongside the acquired data, we provide source code and detailed information on all employed procedures – from stimulus creation to data analysis. In order to facilitate replicative and derived works, only free and open-source software was utilized. PMID:25977761
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beiter, Philipp; Musial, Walter; Smith, Aaron
This report describes a comprehensive effort undertaken by the National Renewable Energy Laboratory (NREL) to understand the cost of offshore wind energy for markets in the United States. The study models the cost impacts of a range of offshore wind locational cost variables for more than 7,000 potential coastal sites in U.S. offshore wind resource areas. It also assesses the impact of more than 50 technology innovations on potential future costs for both fixed-bottom and floating wind systems. Comparing these costs to an initial site-specific assessment of local avoided generating costs, the analysis provides a framework for estimating the economic potential for offshore wind. The analysis is intended to inform a broad set of stakeholders and enable an assessment of offshore wind as part of energy development and energy portfolio planning. It provides information that federal and state agencies and planning commissions could use to inform initial strategic decisions about offshore wind developments in the United States.
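A basic building block of this kind of cost modeling is a levelized cost of energy (LCOE) calculation. The simplified fixed-charge-rate form below is a generic sketch with hypothetical inputs, not NREL's full site-specific model.

```python
def lcoe_usd_per_mwh(capex_usd_per_kw, fcr, opex_usd_per_kw_yr, net_cf):
    """Simplified LCOE = (CapEx * FCR + annual OpEx) / annual energy production."""
    annual_mwh_per_kw = 8760 * net_cf / 1000.0   # MWh per installed kW per year
    return (capex_usd_per_kw * fcr + opex_usd_per_kw_yr) / annual_mwh_per_kw

# Hypothetical fixed-bottom site: $4,000/kW CapEx, 8% fixed charge rate,
# $120/kW-yr OpEx, 40% net capacity factor.
print(f"LCOE ~ ${lcoe_usd_per_mwh(4000, 0.08, 120, 0.40):.0f}/MWh")
```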
Integrating System Dynamics and Bayesian Networks with Application to Counter-IED Scenarios
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jarman, Kenneth D.; Brothers, Alan J.; Whitney, Paul D.
2010-06-06
The practice of choosing a single modeling paradigm for predictive analysis can limit the scope and relevance of predictions and their utility to decision-making processes. Considering multiple modeling methods simultaneously may improve this situation, but a better solution provides a framework for directly integrating different, potentially complementary modeling paradigms to enable more comprehensive modeling and predictions, and thus better-informed decisions. The primary challenges of this kind of model integration are to bridge language and conceptual gaps between modeling paradigms, and to determine whether natural and useful linkages can be made in a formal mathematical manner. To address these challenges in the context of two specific modeling paradigms, we explore mathematical and computational options for linking System Dynamics (SD) and Bayesian network (BN) models and incorporating data into the integrated models. We demonstrate that integrated SD/BN models can naturally be described as either state space equations or Dynamic Bayes Nets, which enables the use of many existing computational methods for simulation and data integration. To demonstrate, we apply our model integration approach to techno-social models of insurgent-led attacks and security force counter-measures centered on improvised explosive devices.
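One minimal way to realize the SD/BN linkage is to let a Bayesian network posterior set a rate parameter inside a stock-and-flow simulation. The toy model below, a two-node BN driving the inflow of an Euler-integrated stock, is our illustration of the idea, not the paper's implementation; all probabilities and rates are hypothetical.

```python
# Toy linkage: a two-node BN (Intel -> AttackLikely) sets the emplacement
# rate that drives a System Dynamics stock of emplaced IEDs.

# Hypothetical prior on the intelligence signal and CPT for attacks.
p_intel = 0.3
p_attack_given_intel = {True: 0.7, False: 0.1}

# BN inference by enumeration: marginal probability of an attack.
p_attack = sum(
    (p_intel if intel else 1 - p_intel) * p_attack_given_intel[intel]
    for intel in (True, False)
)

# SD piece: stock of emplaced IEDs, inflow scaled by the BN posterior.
stock, dt = 0.0, 0.1
emplacement_capacity, clearance_rate = 5.0, 0.2   # per week (hypothetical)
for _ in range(int(52 / dt)):                     # simulate one year
    inflow = emplacement_capacity * p_attack
    outflow = clearance_rate * stock
    stock += dt * (inflow - outflow)

print(f"P(attack) = {p_attack:.2f}, stock after 1 yr: {stock:.1f}")
```

In a full integration the BN would be re-queried as evidence arrives, so the SD rates become time-varying, which is exactly the state-space/Dynamic-Bayes-Net view the authors describe.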
NASA Technical Reports Server (NTRS)
Friedman, S. Z.; Walker, R. E.; Aitken, R. B.
1986-01-01
The Image Based Information System (IBIS) has been under development at the Jet Propulsion Laboratory (JPL) since 1975. It is a collection of more than 90 programs that enable processing of image, graphical, and tabular data for spatial analysis. IBIS can be used to create comprehensive geographic data bases. From these data, an analyst can study various attributes describing the characteristics of a given study area. Even complex combinations of disparate data types can be synthesized to obtain a new perspective on spatial phenomena. In 1984, new query software was developed that enables direct Boolean queries of IBIS data bases through the submission of easily understood expressions. An improved syntax methodology, a data dictionary, and display software simplified the analyst's tasks associated with building, executing, and subsequently displaying the results of a query. The primary purpose of this report is to describe the features and capabilities of the new query software. A secondary purpose is to compare the new query software to the query software developed previously (Friedman, 1982), covering the relative merits and drawbacks of both approaches.
Budczies, Jan; Klauschen, Frederick; Sinn, Bruno V.; Győrffy, Balázs; Schmitt, Wolfgang D.; Darb-Esfahani, Silvia; Denkert, Carsten
2012-01-01
Gene or protein expression data are usually represented by metric or at least ordinal variables. In order to translate a continuous variable into a clinical decision, it is necessary to determine a cutoff point and to stratify patients into two groups, each requiring a different kind of treatment. Currently, there is no standard method or standard software for biomarker cutoff determination. Therefore, we developed Cutoff Finder, a bundle of optimization and visualization methods for cutoff determination that is accessible online. While one of the methods for cutoff optimization is based solely on the distribution of the marker under investigation, other methods optimize the correlation of the dichotomization with respect to an outcome or survival variable. We illustrate the functionality of Cutoff Finder by the analysis of the gene expression of estrogen receptor (ER) and progesterone receptor (PgR) in breast cancer tissues. The distribution of these important markers is analyzed and correlated with immunohistologically determined ER status and distant metastasis-free survival. Cutoff Finder is expected to fill a relevant gap in the available biometric software repertoire and will enable faster optimization of new diagnostic biomarkers. The tool can be accessed at http://molpath.charite.de/cutoff. PMID:23251644
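The outcome-oriented flavor of cutoff optimization can be sketched as scanning candidate cutoffs and keeping the one most strongly associated with a binary outcome, here via Fisher's exact test on simulated data. This is a generic illustration, not Cutoff Finder's code, and a real analysis must correct the optimal p-value for the many cutoffs tested.

```python
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(1)
# Hypothetical marker values and a binary outcome loosely tied to them.
expr = rng.lognormal(mean=0.0, sigma=1.0, size=200)
outcome = (expr + rng.normal(scale=1.0, size=200)) > 1.0

best_cut, best_p = None, 1.0
for cut in np.quantile(expr, np.linspace(0.1, 0.9, 81)):   # candidate cutoffs
    high = expr > cut
    table = [[np.sum(high & outcome),  np.sum(high & ~outcome)],
             [np.sum(~high & outcome), np.sum(~high & ~outcome)]]
    _, p = fisher_exact(table)
    if p < best_p:
        best_cut, best_p = cut, p

print(f"optimal cutoff ~ {best_cut:.2f} (uncorrected Fisher p = {best_p:.2e})")
```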
Chandran, Anil Kumar Nalini; Yoo, Yo-Han; Cao, Peijian; Sharma, Rita; Sharma, Manoj; Dardick, Christopher; Ronald, Pamela C; Jung, Ki-Hong
2016-12-01
Protein kinases catalyze the transfer of a phosphate moiety from a phosphate donor to the substrate molecule, thus playing critical roles in cell signaling and metabolism. Although plant genomes contain more than 1000 genes that encode kinases, knowledge is limited about the function of each of these kinases. A major obstacle that hinders progress towards kinase characterization is functional redundancy. To address this challenge, we previously developed the rice kinase database (RKD) that integrated omics-scale data within a phylogenetics context. An updated version of the rice kinase database (RKD), containing metadata derived from NCBI GEO expression datasets, has been developed. RKD 2.0 facilitates in-depth transcriptomic analyses of kinase-encoding genes in diverse rice tissues and in response to biotic and abiotic stresses and hormone treatments. We identified 261 kinases specifically expressed in particular tissues, 130 that are significantly up-regulated in response to biotic stress, 296 in response to abiotic stress, and 260 in response to hormones. Based on this update and Pearson correlation coefficient (PCC) analysis, we estimated that 19 out of 26 genes characterized through loss-of-function studies confer dominant functions. These were selected because they either had paralogous members with PCC values of <0.5 or had no paralog. Compared with the previous version of RKD, RKD 2.0 enables more effective estimations of functional redundancy or dominance because it uses comprehensive expression profiles rather than individual profiles. The integrated analysis of RKD with PCC establishes a single platform for researchers to select rice kinases for functional analyses.
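The redundancy-versus-dominance heuristic, where paralog pairs with PCC < 0.5 suggest divergent expression and hence a likely dominant function, reduces to a Pearson correlation between paralog expression profiles. A sketch with hypothetical gene names and values:

```python
import numpy as np

# Hypothetical expression profiles (values across conditions/tissues).
profiles = {
    "OsKinaseA":  np.array([1.0, 5.2, 8.9, 2.1, 0.4, 7.7]),
    "OsKinaseA2": np.array([0.9, 1.1, 1.3, 6.5, 7.2, 0.8]),   # paralog
}

def pcc(x, y):
    """Pearson correlation coefficient between two expression profiles."""
    return float(np.corrcoef(x, y)[0, 1])

r = pcc(profiles["OsKinaseA"], profiles["OsKinaseA2"])
# PCC < 0.5 between paralogs -> divergent expression, candidate dominant function.
print(f"PCC = {r:.2f} ->", "likely dominant" if r < 0.5 else "possible redundancy")
```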
NASA Orbital Debris Engineering Model ORDEM2008 (Beta Version)
NASA Technical Reports Server (NTRS)
Stansbery, Eugene G.; Krisko, Paula H.
2009-01-01
This is an interim document intended to accompany the beta-release of the ORDEM2008 model. As such it provides the user with a guide for its use, a list of its capabilities, a brief summary of model development, and appendices included to educate the user as to typical runtimes for different orbit configurations. More detailed documentation will be delivered with the final product. ORDEM2008 supersedes NASA's previous model - ORDEM2000. The availability of new sensor and in situ data, the re-analysis of older data, and the development of new analytical techniques, has enabled the construction of this more comprehensive and sophisticated model. Integrated with the software is an upgraded graphical user interface (GUI), which uses project-oriented organization and provides the user with graphical representations of numerous output data products. These range from the conventional average debris size vs. flux magnitude for chosen analysis orbits, to the more complex color-contoured two-dimensional (2-D) directional flux diagrams in terms of local spacecraft pitch and yaw.
Thought Leaders during Crises in Massive Social Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corley, Courtney D.; Farber, Robert M.; Reynolds, William
The vast amount of social media data that can be gathered from the internet, coupled with workflows that utilize both commodity systems and massively parallel supercomputers, such as the Cray XMT, opens new vistas for research to support health, defense, and national security. Computer technology now enables the analysis of graph structures containing more than 4 billion vertices joined by 34 billion edges, along with metrics and massively parallel algorithms that exhibit near-linear scalability according to number of processors. The challenge lies in making this massive data and analysis comprehensible to analysts and end-users who require actionable knowledge to carry out their duties. Simply stated, we have developed language- and content-agnostic techniques to reduce large graphs built from vast media corpora into forms people can understand. Specifically, our tools and metrics act as a survey tool to identify 'thought leaders' -- those members that lead or reflect the thoughts and opinions of an online community, independent of the source language.
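Although the report does not spell out its metrics, a standard way to surface such thought leaders is influence ranking over a directed mention/reply graph, shown here with networkx at toy scale; the massively parallel Cray XMT implementation would of course differ.

```python
import networkx as nx

# Toy directed "who-replies-to/mentions-whom" graph; edges point at the
# account being replied to or mentioned (all node names hypothetical).
edges = [("u1", "leader"), ("u2", "leader"), ("u3", "leader"),
         ("u2", "u1"), ("u4", "u3"), ("u5", "leader")]
G = nx.DiGraph(edges)

# PageRank scores nodes that attract attention from well-connected nodes.
scores = nx.pagerank(G, alpha=0.85)
for node, s in sorted(scores.items(), key=lambda kv: -kv[1])[:3]:
    print(f"{node}: {s:.3f}")
```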
Design of pressure-driven microfluidic networks using electric circuit analogy.
Oh, Kwang W; Lee, Kangsun; Ahn, Byungwook; Furlani, Edward P
2012-02-07
This article reviews the application of electric circuit methods for the analysis of pressure-driven microfluidic networks with an emphasis on concentration- and flow-dependent systems. The application of circuit methods to microfluidics is based on the analogous behaviour of hydraulic and electric circuits with correlations of pressure to voltage, volumetric flow rate to current, and hydraulic to electric resistance. Circuit analysis enables rapid predictions of pressure-driven laminar flow in microchannels and is very useful for designing complex microfluidic networks in advance of fabrication. This article provides a comprehensive overview of the physics of pressure-driven laminar flow, the formal analogy between electric and hydraulic circuits, applications of circuit theory to microfluidic network-based devices, recent development and applications of concentration- and flow-dependent microfluidic networks, and promising future applications. The lab-on-a-chip (LOC) and microfluidics community will gain insightful ideas and practical design strategies for developing unique microfluidic network-based devices to address a broad range of biological, chemical, pharmaceutical, and other scientific and technical challenges.
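The analogy makes network design a back-of-the-envelope exercise: with Q = ΔP/R_hyd, hydraulic resistances combine in series and parallel exactly as electrical ones. A minimal sketch, using a common approximation for the resistance of a rectangular microchannel with h ≤ w; the channel dimensions are hypothetical.

```python
def rect_channel_resistance(mu, L, w, h):
    """Approximate hydraulic resistance of a rectangular microchannel (h <= w):
    R ~ 12*mu*L / (w*h^3*(1 - 0.63*h/w)), in Pa*s/m^3."""
    assert h <= w, "approximation assumes h <= w"
    return 12 * mu * L / (w * h**3 * (1 - 0.63 * h / w))

mu = 1e-3                                    # water viscosity, Pa*s
R1 = rect_channel_resistance(mu, L=10e-3, w=100e-6, h=50e-6)
R2 = rect_channel_resistance(mu, L=20e-3, w=100e-6, h=50e-6)

# Two channels in parallel fed from a common inlet (Ohm's-law analogy).
R_par = 1 / (1 / R1 + 1 / R2)
dP = 10e3                                    # applied pressure, Pa
Q_total = dP / R_par                         # volumetric flow, m^3/s
print(f"Q1:Q2 = {R2 / R1:.1f}:1, total flow = {Q_total * 1e9 * 60:.1f} uL/min")
```

Because the longer channel has twice the resistance, it carries half the flow of the shorter one, just as two unequal resistors would split current.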
Waks, Zeev; Weissbrod, Omer; Carmeli, Boaz; Norel, Raquel; Utro, Filippo; Goldschmidt, Yaara
2016-12-23
Compiling a comprehensive list of cancer driver genes is imperative for oncology diagnostics and drug development. While driver genes are typically discovered by analysis of tumor genomes, infrequently mutated driver genes often evade detection due to limited sample sizes. Here, we address sample size limitations by integrating tumor genomics data with a wide spectrum of gene-specific properties to search for rare drivers, functionally classify them, and detect features characteristic of driver genes. We show that our approach, CAnceR geNe similarity-based Annotator and Finder (CARNAF), enables detection of potentially novel drivers that eluded over a dozen pan-cancer/multi-tumor type studies. In particular, feature analysis reveals a highly concentrated pool of known and putative tumor suppressors among the <1% of genes that encode very large, chromatin-regulating proteins. Thus, our study highlights the need for deeper characterization of very large, epigenetic regulators in the context of cancer causality.
Does the microbiome and virome contribute to myalgic encephalomyelitis/chronic fatigue syndrome?
Newberry, Fiona; Hsieh, Shen-Yuan; Wileman, Tom; Carding, Simon R.
2018-01-01
Myalgic encephalomyelitis (ME)/chronic fatigue syndrome (CFS) (ME/CFS) is a disabling and debilitating disease of unknown aetiology. It is a heterogeneous disease characterized by various inflammatory, immune, viral, neurological and endocrine symptoms. Several microbiome studies have described alterations in the bacterial component of the microbiome (dysbiosis) consistent with a possible role in disease development. However, in focusing on the bacterial components of the microbiome, these studies have neglected the viral constituent known as the virome. Viruses, particularly those infecting bacteria (bacteriophages), have the potential to alter the function and structure of the microbiome via gene transfer and host lysis. Viral-induced microbiome changes can directly and indirectly influence host health and disease. The contribution of viruses towards disease pathogenesis is therefore an important area for research in ME/CFS. Recent advancements in sequencing technology and bioinformatics now allow more comprehensive and inclusive investigations of human microbiomes. However, as the number of microbiome studies increases, the need for greater consistency in study design and analysis also increases. Comparisons between different ME/CFS microbiome studies are difficult because of differences in patient selection and diagnosis criteria, sample processing, genome sequencing and downstream bioinformatics analysis. It is therefore important that microbiome studies adopt robust, reproducible and consistent study design to enable more reliable and valid comparisons and conclusions to be made between studies. This article provides a comprehensive review of the current evidence supporting microbiome alterations in ME/CFS patients. Additionally, the pitfalls and challenges associated with microbiome studies are discussed. PMID:29523751
O'Flaherty, Brigid M; Li, Yan; Tao, Ying; Paden, Clinton R; Queen, Krista; Zhang, Jing; Dinwiddie, Darrell L; Gross, Stephen M; Schroth, Gary P; Tong, Suxiang
2018-06-01
Next generation sequencing (NGS) technologies have revolutionized the genomics field and are becoming more commonplace for identification of human infectious diseases. However, due to the low abundance of viral nucleic acids (NAs) in relation to host, viral identification using direct NGS technologies often lacks sufficient sensitivity. Here, we describe an approach based on two complementary enrichment strategies that significantly improves the sensitivity of NGS-based virus identification. To start, we developed two sets of DNA probes to enrich virus NAs associated with respiratory diseases. The first set of probes spans the genomes, allowing for identification of known viruses and full genome sequencing, while the second set targets regions conserved among viral families or genera, providing the ability to detect both known and potentially novel members of those virus groups. Efficiency of enrichment was assessed by NGS testing reference virus and clinical samples with known infection. We show significant improvement in viral identification using enriched NGS compared to unenriched NGS. Without enrichment, we observed an average of 0.3% targeted viral reads per sample. However, after enrichment, 50%-99% of the reads per sample were the targeted viral reads for both the reference isolates and clinical specimens using both probe sets. Importantly, dramatic improvements on genome coverage were also observed following virus-specific probe enrichment. The methods described here provide improved sensitivity for virus identification by NGS, allowing for a more comprehensive analysis of disease etiology. © 2018 O'Flaherty et al.; Published by Cold Spring Harbor Laboratory Press.
Patterson, Sara E; Liu, Rangjiao; Statz, Cara M; Durkin, Daniel; Lakshminarayana, Anuradha; Mockus, Susan M
2016-01-16
Precision medicine in oncology relies on rapid associations between patient-specific variations and targeted therapeutic efficacy. Due to the advancement of genomic analysis, a vast literature characterizing cancer-associated molecular aberrations and relative therapeutic relevance has been published. However, data are not uniformly reported or readily available, and accessing relevant information in a clinically acceptable time-frame is a daunting proposition, hampering connections between patients and appropriate therapeutic options. One important therapeutic avenue for oncology patients is through clinical trials. Accordingly, a global view into the availability of targeted clinical trials would provide insight into strengths and weaknesses and potentially enable research focus. However, data regarding the landscape of clinical trials in oncology is not readily available, and as a result, a comprehensive understanding of clinical trial availability is difficult. To support clinical decision-making, we have developed a data loader and mapper that connects sequence information from oncology patients to data stored in an in-house database, the JAX Clinical Knowledgebase (JAX-CKB), which can be queried readily to access comprehensive data for clinical reporting via customized reporting queries. JAX-CKB functions as a repository to house expertly curated clinically relevant data surrounding our 358-gene panel, the JAX Cancer Treatment Profile (JAX CTP), and supports annotation of functional significance of molecular variants. Through queries of data housed in JAX-CKB, we have analyzed the landscape of clinical trials relevant to our 358-gene targeted sequencing panel to evaluate strengths and weaknesses in current molecular targeting in oncology. Through this analysis, we have identified patient indications, molecular aberrations, and targeted therapy classes that have strong or weak representation in clinical trials. Here, we describe the development and disseminate system methods for associating patient genomic sequence data with clinically relevant information, facilitating interpretation and providing a mechanism for informing therapeutic decision-making. Additionally, through customized queries, we have the capability to rapidly analyze the landscape of targeted therapies in clinical trials, enabling a unique view into current therapeutic availability in oncology.
Theory, Instrumentation and Applications of Magnetoelastic Resonance Sensors: A Review
Grimes, Craig A.; Roy, Somnath C.; Rani, Sanju; Cai, Qingyun
2011-01-01
Thick-film magnetoelastic sensors vibrate mechanically in response to a time varying magnetic excitation field. The mechanical vibrations of the magnetostrictive magnetoelastic material launch, in turn, a magnetic field by which the sensor can be monitored. Magnetic field telemetry enables contact-less, remote-query operation that has enabled many practical uses of the sensor platform. This paper builds upon a review paper we published in Sensors in 2002 (Grimes, C.A.; et al. Sensors 2002, 2, 294–313), presenting a comprehensive review on the theory, operating principles, instrumentation and key applications of magnetoelastic sensing technology. PMID:22163768
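Two textbook relations summarize the sensing principle: the fundamental longitudinal resonance of a free ribbon, f0 = (1/2L)·sqrt(E/(ρ(1−ν²))), and the small-load frequency shift Δf ≈ −f0·Δm/(2m0). A quick numerical sketch with nominal, Metglas-like constants; all values are illustrative only.

```python
import math

def f0_hz(L, E, rho, nu):
    """Fundamental longitudinal resonance of a free-standing ribbon."""
    return (1.0 / (2.0 * L)) * math.sqrt(E / (rho * (1.0 - nu**2)))

L, E, rho, nu = 37e-3, 110e9, 7.9e3, 0.33   # 37 mm ribbon, nominal constants
f0 = f0_hz(L, E, rho, nu)

m0 = 0.1e-3                                 # 0.1 g sensor mass (hypothetical)
dm = 1e-9                                   # 1 ug of adsorbed analyte
df = -f0 * dm / (2 * m0)                    # mass-loading frequency shift
print(f"f0 = {f0 / 1e3:.1f} kHz, shift for 1 ug load = {df:.3f} Hz")
```

Tracking this shift remotely through the telemetered magnetic signal is what enables the contact-less chemical and biological sensing applications the review surveys.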
Angularly-selective transmission imaging in a scanning electron microscope.
Holm, Jason; Keller, Robert R
2016-08-01
This work presents recent advances in transmission scanning electron microscopy (t-SEM) imaging control capabilities. A modular aperture system and a cantilever-style sample holder that enable comprehensive angular selectivity of forward-scattered electrons are described. When combined with a commercially available solid-state transmission detector having only basic bright-field and dark-field imaging capabilities, the advances described here enable numerous transmission imaging modes. Several examples demonstrate how contrast ranging from diffraction contrast to mass-thickness contrast can be obtained. Unanticipated image contrast under some imaging conditions is also observed and addressed. Published by Elsevier B.V.
Miller, Haylie L.; Bugnariu, Nicoleta; Patterson, Rita M.; Wijayasinghe, Indika; Popa, Dan O.
2018-01-01
Visuomotor integration (VMI), the use of visual information to guide motor planning, execution, and modification, is necessary for a wide range of functional tasks. To comprehensively, quantitatively assess VMI, we developed a paradigm integrating virtual environments, motion-capture, and mobile eye-tracking. Virtual environments enable tasks to be repeatable, naturalistic, and varied in complexity. Mobile eye-tracking and minimally-restricted movement enable observation of natural strategies for interacting with the environment. This paradigm yields a rich dataset that may inform our understanding of VMI in typical and atypical development. PMID:29876370
Interfacing comprehensive rotorcraft analysis with advanced aeromechanics and vortex wake models
NASA Astrophysics Data System (ADS)
Liu, Haiying
This dissertation describes three aspects of the comprehensive rotorcraft analysis. First, a physics-based methodology for the modeling of hydraulic devices within multibody-based comprehensive models of rotorcraft systems is developed. This newly proposed approach can predict the fully nonlinear behavior of hydraulic devices, and pressure levels in the hydraulic chambers are coupled with the dynamic response of the system. The proposed hydraulic device models are implemented in a multibody code and calibrated by comparing their predictions with test bench measurements for the UH-60 helicopter lead-lag damper. Predicted peak damping forces were found to be in good agreement with measurements, while the model did not predict the entire time history of damper force to the same level of accuracy. The proposed model evaluates relevant hydraulic quantities such as chamber pressures, orifice flow rates, and pressure relief valve displacements. This model could be used to design lead-lag dampers with desirable force and damping characteristics. The second part of this research is in the area of computational aeroelasticity, in which an interface between computational fluid dynamics (CFD) and computational structural dynamics (CSD) is established. This interface enables data exchange between CFD and CSD with the goal of achieving accurate airloads predictions. In this work, a loose coupling approach based on the delta-airloads method is developed in a finite-element method based multibody dynamics formulation, DYMORE. To validate this aerodynamic interface, a CFD code, OVERFLOW-2, is loosely coupled with a CSD program, DYMORE, to compute the airloads of different flight conditions for Sikorsky UH-60 aircraft. This loose coupling approach has good convergence characteristics. The predicted airloads are found to be in good agreement with the experimental data, although not for all flight conditions. In addition, the tight coupling interface between the CFD program, OVERFLOW-2, and the CSD program, DYMORE, is also established. The ability to accurately capture the wake structure around a helicopter rotor is crucial for rotorcraft performance analysis. In the third part of this thesis, a new representation of the wake vortex structure based on Non-Uniform Rational B-Spline (NURBS) curves and surfaces is proposed to develop an efficient model for prescribed and free wakes. NURBS curves and surfaces are able to represent complex shapes with remarkably little data. The proposed formulation has the potential to reduce the computational cost associated with the use of Helmholtz's law and the Biot-Savart law when calculating the induced flow field around the rotor. An efficient free-wake analysis will considerably decrease the computational cost of comprehensive rotorcraft analysis, making the approach more attractive to routine use in industrial settings.
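The delta-airloads method mentioned above follows a well-known fixed-point pattern: on each coupling cycle, the structural code trims the rotor with its internal aerodynamics plus a correction that nudges total airloads toward the CFD result. Below is a schematic sketch of that loop; the scalar stand-ins are purely illustrative, and the function names are placeholders, not DYMORE or OVERFLOW-2 calls.

```python
def coupled_trim(run_csd, run_cfd, n_iter=8, tol=1e-3):
    """Delta-airloads loose coupling: total airloads = internal aero + delta,
    where delta is updated each cycle as (CFD airloads - internal aero)."""
    delta, motions = 0.0, None
    for _ in range(n_iter):
        motions, f_internal = run_csd(delta)   # trim with corrected airloads
        f_cfd = run_cfd(motions)               # CFD airloads for those motions
        new_delta = f_cfd - f_internal
        if abs(new_delta - delta) < tol:       # airload correction converged
            return motions, new_delta
        delta = new_delta
    return motions, delta

# Scalar stand-ins so the loop runs (purely illustrative, not real airloads).
def run_csd(delta):
    f_internal = 1.0                           # internal lifting-line airload
    motions = 0.1 * (f_internal + delta)       # "blade response" to total load
    return motions, f_internal

def run_cfd(motions):
    return 1.5 + 0.2 * motions                 # higher-fidelity airload stand-in

motions, delta = coupled_trim(run_csd, run_cfd)
print(f"converged delta = {delta:.4f}, blade response = {motions:.4f}")
```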
Jo, Kyuri; Kwon, Hawk-Bin; Kim, Sun
2014-06-01
Measuring gene expression at the whole-genome level is useful for many purposes, especially for revealing the biological pathways underlying specific phenotypic conditions. When gene expression is measured over a time period, we gain the opportunity to understand how organisms react to stress conditions over time, so many biologists routinely measure whole-genome gene expression at multiple time points. However, analyzing such whole-genome expression data presents several technical difficulties. Moreover, gene expression is now often measured with RNA-sequencing rather than microarray technologies, which makes the analysis considerably more complicated: the process must begin with mapping short reads and end with differentially activated pathways and, possibly, interactions among pathways. In addition, many useful tools for analyzing microarray gene expression data are not applicable to RNA-seq data. A comprehensive package for analyzing time-series transcriptome data is therefore much needed. In this article, we present such a package, the Time-series RNA-seq Analysis Package (TRAP), which integrates all necessary tasks, including mapping short reads, measuring gene expression levels, finding differentially expressed genes (DEGs), clustering, and pathway analysis for time-series data, in a single environment. In addition to implementing useful algorithms that are not available for RNA-seq data, we extended the existing pathway analysis methods ORA and SPIA to time-series analysis and estimate statistical values for the combined dataset with an advanced metric. TRAP also produces a visual summary of pathway interactions. Gene expression change labeling, a practical clustering method used in TRAP, enables more accurate interpretation of the data when combined with pathway analysis (see the sketch below). We applied our methods to a real dataset for the analysis of rice (Oryza sativa L. Japonica nipponbare) under drought stress. The results showed that TRAP detects pathways more accurately than several existing methods. TRAP is available at http://biohealth.snu.ac.kr/software/TRAP/. Copyright © 2014 Elsevier Inc. All rights reserved.
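Our reading of the gene expression change labeling step, sketched with an arbitrary fold-change threshold and hypothetical genes: assign each gene a discrete up/down/no-change code per time point, then group genes by code string.

```python
import numpy as np
from collections import defaultdict

# Hypothetical log2 fold-changes vs. control (genes x time points).
lfc = {
    "OsGeneA": np.array([0.2, 1.4, 2.1, 1.8]),
    "OsGeneB": np.array([0.1, 1.2, 2.3, 1.6]),
    "OsGeneC": np.array([-1.5, -2.0, -0.3, 0.1]),
}

def label(profile, thr=1.0):
    """'U' = up, 'D' = down, 'N' = no change at each time point."""
    return "".join("U" if v >= thr else "D" if v <= -thr else "N" for v in profile)

clusters = defaultdict(list)
for gene, profile in lfc.items():
    clusters[label(profile)].append(gene)

for pattern, genes in clusters.items():
    print(pattern, genes)   # e.g. genes sharing 'NUUU' respond late to the stress
```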
Tian, Q; Price, N D; Hood, L
2012-02-01
A grand challenge impeding optimal treatment outcomes for patients with cancer arises from the complex nature of the disease: the cellular heterogeneity and the myriad of dysfunctional molecular and genetic networks that result from genetic (somatic) and environmental perturbations. Systems biology, with its holistic approach to understanding fundamental principles in biology, and the empowering technologies in genomics, proteomics, single-cell analysis, microfluidics and computational strategies, enables a comprehensive approach to medicine, which strives to unveil the pathogenic mechanisms of diseases, identify disease biomarkers and begin thinking about new strategies for drug target discovery. The integration of multidimensional high-throughput 'omics' measurements from tumour tissues and corresponding blood specimens, together with new systems strategies for diagnostics, enables the identification of cancer biomarkers that will enable presymptomatic diagnosis, stratification of disease, assessment of disease progression, evaluation of patient response to therapy and the identification of reoccurrences. Whilst some aspects of systems medicine are being adopted in clinical oncology practice through companion molecular diagnostics for personalized therapy, the mounting influx of global quantitative data from both wellness and diseases is shaping up a transformational paradigm in medicine we termed 'predictive', 'preventive', 'personalized', and 'participatory' (P4) medicine, which requires new strategies, both scientific and organizational, to enable bringing this revolution in medicine to patients and to the healthcare system. P4 medicine will have a profound impact on society - transforming the healthcare system, turning around the ever-escalating costs of healthcare, digitizing the practice of medicine and creating enormous economic opportunities for those organizations and nations that embrace this revolution. © 2011 The Association for the Publication of the Journal of Internal Medicine.
Hickey, William J; Shetty, Ameesha R; Massey, Randall J; Toso, Daniel B; Austin, Jotham
2017-01-01
Bacterial biofilms play key roles in environmental and biomedical processes, and understanding their activities requires comprehension of their nanoarchitectural characteristics. Electron microscopy (EM) is an essential tool for nanostructural analysis, but conventional EM methods are limited in that they either provide topographical information alone, or are suitable for imaging only relatively thin (<300 nm) sample volumes. For biofilm investigations, these are significant restrictions. Understanding structural relations between cells requires imaging of a sample volume sufficiently large to encompass multiple cells and the capture of both external and internal details of cell structure. An emerging EM technique with such capabilities is bright-field scanning transmission electron microscopy (BF-STEM), and in the present report BF-STEM was coupled with tomography to elucidate nanostructure in biofilms formed by the polycyclic aromatic hydrocarbon-degrading soil bacterium, Delftia acidovorans Cs1-4. Dual-axis BF-STEM enabled high-resolution (6-10 nm) 3-D tomographic reconstruction and visualization of thick (1250 and 1500 nm) sections. The 3-D data revealed that novel extracellular structures, termed nanopods, were polymorphic and formed complex networks within cell clusters. BF-STEM tomography enabled visualization of conduits formed by nanopods that could enable intercellular movement of outer membrane vesicles, and thereby direct communication between cells. This report is the first to document application of dual-axis BF-STEM tomography to obtain high-resolution 3-D images of novel nanostructures in bacterial biofilms. Future work with dual-axis BF-STEM tomography combined with correlative light electron microscopy may provide deeper insights into physiological functions associated with nanopods as well as other nanostructures. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
ERIC Educational Resources Information Center
Colenbrander, Danielle; Nickels, Lyndsey; Kohnen, Saskia
2017-01-01
Background: Identifying reading comprehension difficulties is challenging. There are many comprehension tests to choose from, and a child's diagnosis can be influenced by various factors such as a test's format and content and the choice of diagnostic criteria. We investigate these issues with reference to the Neale Analysis of Reading Ability…
ERIC Educational Resources Information Center
García, J. Ricardo; Cain, Kate
2014-01-01
The twofold purpose of this meta-analysis was to determine the relative importance of decoding skills to reading comprehension in reading development and to identify which reader characteristics and reading assessment characteristics contribute to differences in the decoding and reading comprehension correlation. A meta-analysis of 110 studies…
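Meta-analyses of correlations such as this typically pool Fisher z-transformed coefficients with inverse-variance weights. A minimal fixed-effect sketch with made-up study data:

```python
import math

# Hypothetical (r, n) pairs from individual decoding-comprehension studies.
studies = [(0.55, 120), (0.48, 85), (0.62, 200), (0.40, 60)]

num = den = 0.0
for r, n in studies:
    z = math.atanh(r)        # Fisher z transform of the correlation
    w = n - 3                # inverse-variance weight, since Var(z) = 1/(n-3)
    num += w * z
    den += w

z_bar = num / den
r_bar = math.tanh(z_bar)     # back-transform the pooled z to a correlation
se = math.sqrt(1 / den)
lo, hi = math.tanh(z_bar - 1.96 * se), math.tanh(z_bar + 1.96 * se)
print(f"pooled r = {r_bar:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

A full meta-analysis of this size would also model between-study heterogeneity (random effects) and test moderators such as the reader and assessment characteristics the authors examine.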
ERIC Educational Resources Information Center
Lonchamp, F.
This is a presentation of the results of a factor analysis of a battery of tests intended to measure listening and reading comprehension in English as a second language. The analysis sought to answer the following questions: (1) whether the factor analysis method yields results when applied to tests which are not specifically designed for this…
A conceptual framework for assessing impacts of roads on aquatic biota
Paul L. Angermeier; Andrew P. Wheeler; Amanda E. Rosenberger
2004-01-01
Roads are pervasive in modern landscapes and adversely affect many aquatic ecosystems. Conventional environmental assessments of roads focus on construction impacts but ignore subsequent impacts. A comprehensive framework for considering all impacts of roads would enable scientists and managers to develop assessment tools that more accurately inform stakeholders and...
Reporting of NSC Additional (A2) Data Elements. Updated July 29, 2014
ERIC Educational Resources Information Center
National Student Clearinghouse, 2014
2014-01-01
Since the 2008-09 academic year, the National Student Clearinghouse has provided its participating institutions with the option to include 13 additional data elements in their enrollment submissions. These additional data elements help make Clearinghouse data more comprehensive and enable StudentTracker participants to utilize a more robust data…
Toward an Understanding of the Emotional Nature of Stage Fright: A Three Factor Theory.
ERIC Educational Resources Information Center
Cahn, Dudley D.
A comprehensive understanding of stage fright will better enable teachers and researchers to select the most appropriate "cure" and to determine those cases in which speech training will help reduce stage fright or other states of communication apprehension. Attempts to understand stage fright have focused on three psychological theories…
Understanding the Financial Bottom Line: Career Decisions and Money.
ERIC Educational Resources Information Center
Martellino, Carl Anthony
Educating career counselors and other practitioners in the career development field on at least the basics of financial planning concepts will enable them to provide clients with a more comprehensive approach to career decisions. A client with an understanding of financial planning basics will be better prepared as an informed, engaged, and…
The Southern Regional Education Board and Member States, January 2014
ERIC Educational Resources Information Center
Southern Regional Education Board (SREB), 2014
2014-01-01
This report provides an overview of Southern Regional Education Board (SREB) programs and services and how each member state participated in them from July 2012 through December 2013. The nation's first regional interstate compact for education, SREB is today the most comprehensive, bringing together states to enable them to achieve…
Adapting Creative and Relaxation Activities to Students with Cancer
ERIC Educational Resources Information Center
Jenko, Nika; Stopar, Mojca Lipec
2015-01-01
The team which forms a comprehensive treatment plan for students with cancer includes, among other experts, special educators. In cooperation with other team members, their role is to enable students to integrate in the educational process, having regard to their individual needs. In the present paper we introduce the study of specific methodical…
Climate Model Diagnostic Analyzer
NASA Technical Reports Server (NTRS)
Lee, Seungwon; Pan, Lei; Zhai, Chengxing; Tang, Benyang; Kubar, Terry; Zhang, Zia; Wang, Wei
2015-01-01
The comprehensive and innovative evaluation of climate models with newly available global observations is critically needed for the improvement of climate model current-state representation and future-state predictability. A climate model diagnostic evaluation process requires physics-based multi-variable analyses that typically involve large-volume and heterogeneous datasets, making them both computation- and data-intensive. Given the exploratory nature of climate data analyses and the explosive growth of datasets and service tools, scientists struggle to keep track of their datasets, tools, and execution/study history, let alone share them with others. In response, we have developed a cloud-enabled, provenance-supported, web-service system called Climate Model Diagnostic Analyzer (CMDA). CMDA enables physics-based, multi-variable model performance evaluations and diagnoses through the comprehensive and synergistic use of multiple observational data, reanalysis data, and model outputs. At the same time, CMDA provides a crowd-sourcing space where scientists can organize their work efficiently and share their work with others. CMDA is empowered by many current state-of-the-art software packages in web service, provenance, and semantic search.
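A minimal sketch of how a client might invoke such a diagnostic web service follows; the endpoint URL, service name, and parameter names are hypothetical placeholders (the actual CMDA API is documented on its portal), so this illustrates only the general call pattern, not the real interface.

```python
# Hypothetical call pattern for a CMDA-style diagnostic web service.
# Endpoint and parameter names are placeholders, not the real CMDA API.
import requests

params = {
    "model": "exampleModel",        # hypothetical model identifier
    "variable": "clt",              # e.g., total cloud fraction
    "startTime": "200701",
    "endTime": "201112",
}
resp = requests.get("https://cmda.example.org/svc/twoDimMap", params=params)
resp.raise_for_status()
result = resp.json()                # service returns result plus provenance
print(result)
```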
Higher-Order Theory: Structural/MicroAnalysis Code (HOTSMAC) Developed
NASA Technical Reports Server (NTRS)
Arnold, Steven M.
2002-01-01
The full utilization of advanced materials (be they composite or functionally graded materials) in lightweight aerospace components requires the availability of accurate analysis, design, and life-prediction tools that enable the assessment of component and material performance and reliability. Recently, a new commercially available software product called HOTSMAC (Higher-Order Theory--Structural/MicroAnalysis Code) was jointly developed by Collier Research Corporation, Engineered Materials Concepts LLC, and the NASA Glenn Research Center under funding provided by Glenn's Commercial Technology Office. The analytical framework for HOTSMAC is based on almost a decade of research into the coupled micromacrostructural analysis of heterogeneous materials. Consequently, HOTSMAC offers a comprehensive approach for analyzing/designing the response of components with various microstructural details, including certain advantages not always available in standard displacement-based finite element analysis techniques. The capabilities of HOTSMAC include combined thermal and mechanical analysis, time-independent and time-dependent material behavior, and internal boundary cells (e.g., those that can be used to represent internal cooling passages, see the preceding figure) to name a few. In HOTSMAC problems, materials can be randomly distributed and/or functionally graded (as shown in the figure, wherein the inclusions are distributed linearly), or broken down by strata, such as in the case of thermal barrier coatings or composite laminates.
Spencer, Mercedes; Wagner, Richard K.
2016-01-01
We conducted a meta-analysis of 16 existing studies to examine the nature of the comprehension problems for children who were second-language learners with poor reading comprehension despite adequate decoding. Results indicated that these children had deficits in oral language (d = −0.80), but these deficits were not as severe as their reading comprehension deficit (d = −2.47). Second-language learners also had weaker oral language skills compared to native-speaking children regardless of comprehension status (d = −0.84). We discuss theoretical and practical implications of the finding that second-language learners who are poor at reading comprehension despite adequate decoding have deficits in oral language but the deficit is not sufficient to explain their deficit in reading comprehension. PMID:28461711
Spencer, Mercedes; Wagner, Richard K
2017-05-01
We conducted a meta-analysis of 16 existing studies to examine the nature of the comprehension problems for children who were second-language learners with poor reading comprehension despite adequate decoding. Results indicated that these children had deficits in oral language (d = -0.80), but these deficits were not as severe as their reading comprehension deficit (d = -2.47). Second-language learners also had weaker oral language skills compared to native-speaking children regardless of comprehension status (d = -0.84). We discuss theoretical and practical implications of the finding that second-language learners who are poor at reading comprehension despite adequate decoding have deficits in oral language but the deficit is not sufficient to explain their deficit in reading comprehension.
Enabling Discoveries in Earth Sciences Through the Geosciences Network (GEON)
NASA Astrophysics Data System (ADS)
Seber, D.; Baru, C.; Memon, A.; Lin, K.; Youn, C.
2005-12-01
Taking advantage of state-of-the-art information technology resources, GEON researchers are building a cyberinfrastructure designed to enable data sharing, semantic data integration, high-end computations and 4D visualization in easy-to-use web-based environments. The GEON Network currently allows users to search and register Earth science resources such as data sets (GIS layers, GMT files, geoTIFF images, ASCII files, relational databases, etc.), software applications or ontologies. Portal-based access mechanisms enable developers to build dynamic user interfaces to conduct advanced processing and modeling efforts across distributed computers and supercomputers. Researchers and educators can access the networked resources through the GEON portal and its portlets, which were developed to support better and more comprehensive scientific and educational studies. For example, the SYNSEIS portlet in GEON enables users to access seismic waveforms in near-real time from the IRIS Data Management Center, easily build a 3D geologic model within the area of the seismic station(s) and the epicenter, and perform a 3D synthetic seismogram analysis to understand the lithospheric structure and earthquake source parameters for any given earthquake in the US. Similarly, GEON's workbench area enables users to create their own work environment and copy, visualize and analyze any data sets within the network, and create subsets of the data sets for their own purposes. Since all these resources are built as part of a Service-oriented Architecture (SOA), they can also be used in other development platforms. One such platform is the Kepler workflow system, which can access web-service-based resources and provides users with graphical programming interfaces to build a model to conduct computations and/or visualization efforts using the networked resources. Developments in the area of semantic integration of the networked datasets continue to advance, and prototype studies can be accessed via the GEON portal at www.geongrid.org.
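The SYNSEIS-style waveform retrieval described above can be reproduced today with the community ObsPy library, which wraps the same IRIS Data Management Center services; the sketch below is illustrative and is not part of GEON itself (the station codes and times are arbitrary examples).

```python
# Fetch waveforms from the IRIS Data Management Center with ObsPy
# (community library; illustrative of the retrieval SYNSEIS wraps).
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("IRIS")
t0 = UTCDateTime("2005-06-14T17:10:00")          # arbitrary example time
st = client.get_waveforms(network="IU", station="ANMO", location="00",
                          channel="BHZ", starttime=t0, endtime=t0 + 600)
print(st)       # one Trace of vertical broadband data, ready for
st.plot()       # comparison against a 3D synthetic seismogram
```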
Zvonok, Nikolai; Xu, Wei; Williams, John; Janero, David R.; Krishnan, Srinivasan C.; Makriyannis, Alexandros
2013-01-01
The human cannabinoid 1 receptor (hCB1), a ubiquitous G protein-coupled receptor (GPCR), transmits cannabinergic signals that participate in diverse (patho)physiological processes. Pharmacotherapeutic hCB1 targeting is considered a tractable approach for treating such prevalent diseases as obesity, mood disorders, and drug addiction. The hydrophobic nature of the transmembrane helices of hCB1 presents a formidable obstacle to direct structural analysis. Comprehensive experimental characterization of functional hCB1 by mass spectrometry (MS) is essential to the targeting of affinity probes that can be used to directly define hCB1 binding domains using a ligand-assisted experimental approach. Such information would greatly facilitate the rational design of hCB1-selective agonists/antagonists with therapeutic potential. We report the first high-coverage MS analysis of the primary sequence of the functional hCB1 receptor, one of the few such comprehensive MS-based analyses of any GPCR. Recombinant C-terminal hexa-histidine-tagged hCB1 (His6-hCB1) was expressed in cultured insect (Spodoptera frugiperda) cells, solubilized by a procedure devised to enhance receptor purity following metal-affinity chromatography, desalted by buffer exchange, and digested in solution with (chymo)trypsin. “Bottom-up” nanoLC-MS/MS of the (chymo)tryptic digests afforded a degree of overall hCB1 coverage (>94%) thus far reported for only two other GPCRs. This MS-compatible procedure devised for His6-hCB1 sample preparation, incorporating in-solution (chymo)trypsin digestion in the presence of a low concentration of CYMAL-5 detergent, may be applicable to the MS-based proteomic characterization of other GPCRs. This work should help enable future ligand-assisted structural characterization of hCB1 binding motifs at the amino-acid level using rationally designed and targeted covalent cannabinergic probes. PMID:20131867
Comprehensive School Reform and Student Achievement: A Meta-Analysis.
ERIC Educational Resources Information Center
Borman, Geoffrey D.; Hewes, Gina M.; Overman, Laura T.; Brown, Shelly
Using 232 studies, this meta-analysis reviewed the research on the achievement effects of the nationally disseminated and externally developed school improvement programs known as "whole-school" or "comprehensive" reforms. In addition to reviewing the overall achievement effects of comprehensive school reform (CSR), the meta…
Mertz, Joseph; Tan, Haiyan; Pagala, Vishwajeeth; Bai, Bing; Chen, Ping-Chung; Li, Yuxin; Cho, Ji-Hoon; Shaw, Timothy; Wang, Xusheng; Peng, Junmin
2015-01-01
The mind bomb 1 (Mib1) ubiquitin ligase is essential for controlling metazoan development by Notch signaling and possibly the Wnt pathway. It is also expressed in postmitotic neurons and regulates neuronal morphogenesis and synaptic activity by mechanisms that are largely unknown. We sought to comprehensively characterize the Mib1 interactome and study its potential function in neuron development utilizing a novel sequential elution strategy for affinity purification, in which Mib1 binding proteins were eluted under different stringencies and then quantified by isobaric labeling. The strategy identified the Mib1 interactome with both deep coverage and the ability to distinguish high-affinity partners from low-affinity partners. A total of 817 proteins were identified during the Mib1 affinity purification, including 56 high-affinity partners and 335 low-affinity partners, whereas the remaining 426 proteins are likely copurified contaminants or extremely weak binding proteins. The analysis detected all previously known Mib1-interacting proteins and revealed a large number of novel components involved in Notch and Wnt pathways, endocytosis and vesicle transport, the ubiquitin-proteasome system, cellular morphogenesis, and synaptic activities. Immunofluorescence studies further showed colocalization of Mib1 with five selected proteins: the Usp9x (FAM) deubiquitinating enzyme, alpha-, beta-, and delta-catenins, and CDKL5. Mutations of CDKL5 are associated with early infantile epileptic encephalopathy-2 (EIEE2), a severe form of mental retardation. We found that the expression of Mib1 down-regulated the protein level of CDKL5 by ubiquitination, and antagonized CDKL5 function during the formation of dendritic spines. Thus, the sequential elution strategy enables biochemical characterization of protein interactomes, and the Mib1 analysis provides a comprehensive interactome for investigating its role in signaling networks and neuronal development. PMID:25931508
Jarque-Bou, N; Gracia-Ibáñez, V; Sancho-Bru, J L; Vergara, M; Pérez-González, A; Andrés, F J
2016-09-01
The kinematic analysis of human grasping is challenging because of the high number of degrees of freedom involved. The use of principal component and factorial analyses is proposed in the present study to reduce the dimensionality of hand kinematics in the analysis of posture for ergonomic purposes, allowing for a comprehensive study without losing accuracy while also enabling velocity and acceleration analyses to be performed. A laboratory study was designed to analyse the effect of weight and diameter on the grasping posture for cylinders. This study measured the hand posture of six subjects while transporting cylinders of different weights and diameters with precision and power grasps. The hand posture was measured using a Vicon® motion-tracking system, and principal component analysis was applied to reduce the kinematic dimensionality. Different ANOVAs were performed on the reduced kinematic variables to check the effect of the weight and diameter of the cylinders, as well as that of the subject. The results show that the original twenty-three degrees of freedom of the hand were reduced to five, which were identified as digit arching, closeness, palmar arching, finger adduction and thumb opposition. Both cylinder diameter and weight significantly affected the precision grasping posture: diameter affects closeness, palmar arching and opposition, while weight affects digit arching, palmar arching and closeness. The power-grasping posture was mainly affected by the cylinder diameter, through digit arching, closeness and opposition. The grasping posture was largely affected by the subject factor, and this effect could not be attributed to hand size alone. In conclusion, this kinematic reduction allowed the effects of cylinder diameter and weight to be identified in a comprehensive way, with diameter being more important than weight. Copyright © 2016 Elsevier Ltd. All rights reserved.
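The reduction step the authors describe can be sketched with a standard PCA implementation; the random array below merely stands in for the measured joint angles, and only the shapes (23 degrees of freedom, 5 retained components) follow the paper.

```python
# Project 23 joint-angle signals onto 5 principal components, as in
# the dimensionality reduction described above (stand-in data).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
angles = rng.normal(size=(1000, 23))    # frames x hand degrees of freedom

pca = PCA(n_components=5)
scores = pca.fit_transform(angles)      # reduced kinematic variables
print(pca.explained_variance_ratio_)    # variance captured per component
# ANOVAs on the columns of `scores` would then test the effects of
# cylinder weight, diameter, and subject.
```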
Unbiased Characterization of Anopheles Mosquito Blood Meals by Targeted High-Throughput Sequencing
Logue, Kyle; Keven, John Bosco; Cannon, Matthew V.; Reimer, Lisa; Siba, Peter; Walker, Edward D.; Zimmerman, Peter A.; Serre, David
2016-01-01
Understanding mosquito host choice is important for assessing vector competence or identifying disease reservoirs. Unfortunately, unbiased methods for comprehensively evaluating the composition of insect blood meals are largely unavailable, as most current molecular assays only test for the presence of a few pre-selected species. These approaches also have limited ability to identify the presence of multiple mammalian hosts in a single blood meal. Here, we describe a novel high-throughput sequencing method that enables analysis of 96 mosquitoes simultaneously and provides a comprehensive and quantitative perspective on the composition of each blood meal. We validated in silico that universal primers targeting the mammalian mitochondrial 16S ribosomal RNA genes (16S rRNA) should amplify more than 95% of the mammalian 16S rRNA sequences present in the NCBI nucleotide database. We applied this method to 442 female Anopheles punctulatus s. l. mosquitoes collected in Papua New Guinea (PNG). While human (52.9%), dog (15.8%) and pig (29.2%) were the most common hosts identified in our study, we also detected DNA from mice, one marsupial species and two bat species. Our analyses also revealed that 16.3% of the mosquitoes fed on more than one host. Analysis of the human mitochondrial hypervariable region I in 102 human blood meals showed that 5 (4.9%) of the mosquitoes unambiguously fed on more than one person. Overall, analysis of PNG mosquitoes illustrates the potential of this approach to identify unsuspected hosts and characterize mixed blood meals, and shows how it can be adapted to evaluate inter-individual variation among human blood meals. Furthermore, this approach can be applied to any disease-transmitting arthropod and can be easily customized to investigate non-mammalian host sources. PMID:26963245
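The in silico primer validation described above amounts to checking how many reference sequences contain the primer site; a minimal sketch follows, with a placeholder primer (not the study's actual primer) and IUPAC degenerate bases expanded into a regular expression.

```python
# Count reference 16S sequences matching a (placeholder) primer,
# expanding IUPAC degenerate bases into a regular expression.
import re

IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "R": "[AG]", "Y": "[CT]",
         "S": "[CG]", "W": "[AT]", "K": "[GT]", "M": "[AC]", "N": "[ACGT]"}

def primer_regex(primer: str) -> re.Pattern:
    return re.compile("".join(IUPAC[base] for base in primer))

primer = primer_regex("CGGTTGGGGTGACCTYGGA")      # placeholder sequence
seqs = []  # mammalian 16S rRNA reference sequences, e.g. from NCBI
hits = sum(bool(primer.search(s)) for s in seqs)
print(f"{hits}/{len(seqs)} references contain the primer site")
```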
Qiu, Shi; Yang, Wen-Zhi; Shi, Xiao-Jian; Yao, Chang-Liang; Yang, Min; Liu, Xuan; Jiang, Bao-Hong; Wu, Wan-Ying; Guo, De-An
2015-09-17
Exploration of new natural compounds is of vital significance for drug discovery and development. Conventional approaches based on systematic phytochemical isolation are inefficient and consume large amounts of organic solvent. This study presents an integrated strategy that combines offline comprehensive two-dimensional liquid chromatography, hybrid linear ion-trap/Orbitrap mass spectrometry, and NMR analysis (2D LC/LTQ-Orbitrap-MS/NMR), aimed at establishing a green protocol for the efficient discovery of new natural molecules. A comprehensive chemical analysis of the total ginsenosides of the stems and leaves of Panax ginseng (SLP), a cardiovascular disease medicine, was performed following this strategy. An offline 2D LC system was constructed with an orthogonality of 0.79 and a practical peak capacity of 11,000. The much greener UHPLC separation and LTQ-Orbitrap-MS detection by data-dependent high-energy C-trap dissociation (HCD)/dynamic exclusion were employed for the separation and characterization of ginsenosides from thirteen fractionated SLP samples. Consequently, a total of 646 ginsenosides were characterized, of which 427 have not previously been isolated from the genus Panax L. The ginsenosides identified from SLP exhibited distinct sapogenin diversity and molecular isomerism. NMR analysis was finally employed to verify and offer complementary structural information to the MS-oriented characterization. The established 2D LC/LTQ-Orbitrap-MS/NMR approach outperforms conventional approaches with respect to efficiency, requiring significantly less drug material and organic solvent. The integrated strategy enables a deep investigation of the therapeutic basis of an herbal medicine and facilitates the discovery of new compounds in an efficient and environmentally friendly manner. Copyright © 2015 Elsevier B.V. All rights reserved.
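For context, the "practical peak capacity" quoted above is conventionally estimated from the two one-dimensional peak capacities, discounted by the fraction of the separation space actually used (an orthogonality measure); this textbook relation is background, not a formula stated in the paper.

```latex
% First-order estimate of practical 2D peak capacity:
% n_1, n_2 are the first- and second-dimension peak capacities and
% f is the fractional coverage of the separation space (orthogonality).
\[
  n_{2D}' \;\approx\; f \cdot n_1 \, n_2
\]
```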
Network Analytical Tool for Monitoring Global Food Safety Highlights China
Nepusz, Tamás; Petróczi, Andrea; Naughton, Declan P.
2009-01-01
Background The Beijing Declaration on food safety and security was signed by over fifty countries with the aim of developing comprehensive programs for monitoring food safety and security on behalf of their citizens. Currently, comprehensive systems for food safety and security are absent in many countries, and the systems that are in place have been developed on different principles, allowing poor opportunities for integration. Methodology/Principal Findings We have developed a user-friendly analytical tool based on network approaches for instant customized analysis of food alert patterns in the European dataset from the Rapid Alert System for Food and Feed. Data taken from alert logs between January 2003 and August 2008 were processed using network analysis to i) capture complexity, ii) analyze trends, and iii) predict possible effects of interventions by identifying patterns of reporting activities between countries. Detector and transgressor relationships between countries are readily identifiable, and countries are ranked using i) Google's PageRank algorithm and ii) the HITS algorithm of Kleinberg. The program identifies Iran, China and Turkey as the transgressors with the largest number of alerts. However, when characterized by impact, accounting for the transgressor index and the number of countries involved, China predominates as a transgressor country. Conclusions/Significance This study reports the first development of a network analysis approach to inform countries on their transgressor and detector profiles as a user-friendly aid for the adoption of the Beijing Declaration. The ability to instantly access the country-specific components of the several thousand annual reports will enable each country to identify the major transgressors and detectors within its trading network. Moreover, the tool can be used to monitor trading countries for improved detector/transgressor ratios. PMID:19688088
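Both ranking schemes named above are available off the shelf; this sketch builds a toy detector-to-transgressor graph (the edge list is invented, not RASFF data) and scores it with PageRank and HITS via networkx.

```python
# Rank countries in a toy alert network: edges point from the
# detecting country to the transgressing country (invented data).
import networkx as nx

G = nx.DiGraph()
alerts = [("DE", "CN"), ("UK", "CN"), ("FR", "CN"),
          ("IT", "TR"), ("DE", "IR")]
G.add_edges_from(alerts)                  # detector -> transgressor

pagerank = nx.pagerank(G)                 # high score = heavily reported
hubs, authorities = nx.hits(G)            # authorities ~ transgressors,
                                          # hubs ~ active detectors
print(sorted(authorities.items(), key=lambda kv: -kv[1]))
```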
Chetty, Verusia; Hanass-Hancock, Jill
2016-01-01
In the era of widespread access to antiretroviral therapy, people living with HIV survive longer; however, survival comes with new experiences of comorbidity and HIV-related disability, posing new challenges to rehabilitation professionals and an already fragile health system in Southern Africa. Public health approaches to HIV need to include not only prevention, treatment and support but also rehabilitation. While some well-resourced countries have developed rehabilitation approaches for HIV, the resource-poor settings of Southern Africa lack a model of care that includes rehabilitation approaches providing accessible and comprehensive care for people living with HIV. In this study, a learning-in-action approach was used to conceptualize a comprehensive model of care that addresses HIV-related disability and a feasible rehabilitation framework for resource-poor settings. The study used qualitative methods in the form of a focus group discussion with thirty participants, including people living with HIV, the multidisciplinary healthcare team and community outreach partners at a semi-rural health facility in South Africa. The discussion focused on barriers to and enablers of access to rehabilitation. Participants identified barriers at various levels, including transport, physical access, financial constraints and poor multi-stakeholder team interaction. The results of the group discussions informed the design of an inclusive model of HIV care, further informed by established integrated rehabilitation models. Participants emphasized that objectives need to respond to policy, improve access to patient-centered care and maintain a multidisciplinary team approach. They proposed that guiding principles should include efficient communication, collaboration of all stakeholders and leadership in teams to enable staff to implement the model. Training of professional staff and lay personnel within task-shifting approaches was seen as an essential enabler of implementation. The health facility, as well as outreach services such as intermediate clinics, home-based care, outreach and community-based rehabilitation, was identified as an important structure for potential rehabilitation interventions.
A business case evaluation of workplace engineering noise control: a net-cost model.
Lahiri, Supriya; Low, Colleen; Barry, Michael
2011-03-01
This article provides a convenient tool for companies to determine the costs and benefits of alternative interventions to prevent noise-induced hearing loss (NIHL). Contextualized for Singapore and in collaboration with Singapore's Ministry of Manpower, the Net-Cost model evaluates costs of intervention for equipment and labor, avoided costs of productivity losses and medical care, and productivity gains from the employer's economic perspective. To pilot this approach, four case studies are presented, with varying degrees of economic benefits to the employer, including one in which multifactor productivity is the main driver. Although compliance agencies may not require economic analysis of NIHL, given scarce resources in a market-driven economy, this tool enables stakeholders to understand and compare the costs and benefits of NIHL interventions comprehensively and helps in determining risk management strategies.
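In outline, the Net-Cost model weighs intervention costs against avoided costs and productivity gains; the sketch below captures that balance with illustrative variable names and figures (not the published model's actual cost categories or data), so a negative result means the intervention pays for itself.

```python
# Net-cost balance in the spirit of the model described above
# (illustrative names and figures, not the published model's data).
def net_cost(equipment, labor,
             avoided_productivity_loss, avoided_medical_care,
             productivity_gain):
    costs = equipment + labor
    benefits = (avoided_productivity_loss + avoided_medical_care
                + productivity_gain)
    return costs - benefits          # negative => intervention pays off

# Example: a 50k engineering control against 55k of combined benefits.
print(net_cost(40_000, 10_000, 30_000, 10_000, 15_000))   # -5000
```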
NASA Astrophysics Data System (ADS)
Kehlenbeck, Matthias; Breitner, Michael H.
Business users define calculated facts based on the dimensions and facts contained in a data warehouse. These business calculation definitions capture knowledge about quantitative relations that is necessary for deep analyses and for the production of meaningful reports, and they are largely independent of implementation and organization. However, no automated procedures exist to facilitate their exchange across organizational and implementation boundaries; each organization currently has to map its own business calculations to analysis and reporting tools separately. This paper presents an innovative approach based on standard Semantic Web technologies. The approach facilitates the exchange of business calculation definitions and allows for their automatic linking to specific data warehouses through semantic reasoning. A novel standard proxy server which enables the immediate application of exchanged definitions is introduced. Benefits of the approach are shown in a comprehensive case study.
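A calculation definition of the kind discussed can be expressed as RDF triples so that standard reasoners can link it to a warehouse schema; the vocabulary below is hypothetical, invented for illustration, and is not the authors' ontology.

```python
# Encode a calculated fact ("gross margin = revenue - cost of goods
# sold") as RDF triples with a hypothetical vocabulary.
from rdflib import Graph, Literal, Namespace, RDF

CALC = Namespace("http://example.org/calc#")
g = Graph()
g.add((CALC.GrossMargin, RDF.type, CALC.CalculatedFact))
g.add((CALC.GrossMargin, CALC.minuend, CALC.Revenue))
g.add((CALC.GrossMargin, CALC.subtrahend, CALC.CostOfGoodsSold))
g.add((CALC.GrossMargin, CALC.label, Literal("Gross margin")))
print(g.serialize(format="turtle"))   # exchangeable, tool-independent form
```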
Levelized cost of energy for a Backward Bent Duct Buoy
Bull, Diana; Jenne, D. Scott; Smith, Christopher S.; ...
2016-07-18
The Reference Model Project, supported by the U.S. Department of Energy, was developed to provide publicly available technical and economic benchmarks for a variety of marine energy converters. The methodology to achieve these benchmarks is to develop public domain designs that incorporate power performance estimates, structural models, anchor and mooring designs, power conversion chain designs, and estimates of the operations and maintenance, installation, and environmental permitting required. The reference model designs are intended to be conservative, robust, and experimentally verified. The Backward Bent Duct Buoy (BBDB) presented in this paper is one of three wave energy conversion devices studied within the Reference Model Project. Furthermore, comprehensive modeling of the BBDB in a Northern California climate has enabled a full levelized cost of energy (LCOE) analysis to be completed on this device.
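For reference, LCOE is conventionally defined as discounted lifetime cost over discounted lifetime energy production; this is the standard textbook form, not the paper's specific cost breakdown.

```latex
% Standard levelized cost of energy definition:
% C_t: capital and O&M expenditures in year t, E_t: energy produced in
% year t, r: discount rate, N: project lifetime.
\[
  \mathrm{LCOE} \;=\;
  \frac{\displaystyle\sum_{t=0}^{N} \frac{C_t}{(1+r)^t}}
       {\displaystyle\sum_{t=1}^{N} \frac{E_t}{(1+r)^t}}
\]
```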
On-site comprehensive analysis of explosives using HPLC-UV-PAED
NASA Astrophysics Data System (ADS)
Marple, Ronita L.; LaCourse, William R.
2004-03-01
High-performance liquid chromatography with ultraviolet and photo-assisted electrochemical detection (HPLC-UV-PAED) has been developed for the sensitive and selective detection of explosives in ground water and soil extracts. Fractionation and preconcentration of explosives are accomplished with on-line solid phase extraction (SPE), which minimizes sample pretreatment and enables faster and more accurate on-site assessment of a contaminated site. Detection limits are equivalent or superior (i.e., <1 part-per-trillion for HMX) to those achieved using the Environmental Protection Agency (EPA) Method 8330. This approach is more broadly applicable, as it is capable of determining a wider range of organic nitro compounds. Soil samples are extracted using pressurized fluid extraction (PFE), a technique that is automatable, field-compatible, and environmentally friendly, adding to the overall efficiency of the methodology.
Levelized cost of energy for a Backward Bent Duct Buoy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bull, Diana; Jenne, D. Scott; Smith, Christopher S.
2016-12-01
The Reference Model Project, supported by the U.S. Department of Energy, was developed to provide publicly available technical and economic benchmarks for a variety of marine energy converters. The methodology to achieve these benchmarks is to develop public domain designs that incorporate power performance estimates, structural models, anchor and mooring designs, power conversion chain designs, and estimates of the operations and maintenance, installation, and environmental permitting required. The reference model designs are intended to be conservative, robust, and experimentally verified. The Backward Bent Duct Buoy (BBDB) presented in this paper is one of three wave energy conversion devices studied within the Reference Model Project. Comprehensive modeling of the BBDB in a Northern California climate has enabled a full levelized cost of energy (LCOE) analysis to be completed on this device.
New Training and Diagnostic Strategies to Counteract Muscle and Bone Loss in Microgravity
NASA Astrophysics Data System (ADS)
Talla, R.; Adamcik, G.; Barta, N.; Kozlovskaya, I. B.; Tschan, H.; Bachl, N.; Angeli, T.
2013-02-01
The Multifunctional Dynamometer for Application in Space (MDS) is a collaboration between the Vienna University of Technology, the Institute of Biomedical Problems of the Russian Academy of Sciences, Moscow, and the University of Vienna to prevent the deterioration of the musculoskeletal and cardiovascular systems under the influence of microgravity. Loading intensity is considered potentially crucial for maintaining certain physiological parameters. The MDS offers a variety of exercises, with the main focus on the sites most affected by atrophy, and is able to generate high training forces by using an electrical motor. Moreover, the electrical motor enables different training modes for each exercise. A comprehensive analysis, including isokinetic and isometric tests, provides monitoring of the progress and compliance of the users. The MDS was implemented in the MARS 500 project.
Navigating legal constraints in clinical data warehousing: a case study in personalized medicine
Jefferys, Benjamin R.; Nwankwo, Iheanyi; Neri, Elias; Chang, David C. W.; Shamardin, Lev; Hänold, Stefanie; Graf, Norbert; Forgó, Nikolaus; Coveney, Peter
2013-01-01
Personalized medicine relies in part upon comprehensive data on patient treatment and outcomes, both for analysis leading to improved models that provide the basis for enhanced treatment, and for direct use in clinical decision-making. A data warehouse is an information technology for combining and standardizing multiple databases. Data warehousing of clinical data is constrained by many legal and ethical considerations, owing to the sensitive nature of the data being stored. We describe an unconstrained clinical data warehousing architecture, some of the legal constraints that have led us to reconsider this architecture, and the legal and technical solutions to these constraints developed for the clinical data warehouse in the personalized medicine project p-medicine. We also propose some changes to the legal constraints that will further enable clinical research. PMID:24427531
Upgrade of the Surface Spectrometer at NEPOMUC for PAES, XPS and STM Investigations
NASA Astrophysics Data System (ADS)
Zimnik, S.; Lippert, F.; Hugenschmidt, C.
2014-04-01
The characterization of the elemental composition of surfaces is of great importance for the understanding of many surface processes, such as surface segregation or oxidation. Positron-annihilation-induced Auger Electron Spectroscopy (PAES) is a powerful technique for gathering information about the elemental composition of only the topmost atomic layer of a sample. The upgraded surface spectrometer at NEPOMUC (NEutron induced POsitron source MUniCh) enables a comprehensive surface analysis with the complementary techniques STM, XPS and PAES. A new X-ray source for X-ray induced photoelectron spectroscopy (XPS) was installed to gather additional information on oxidation states. A new scanning tunneling microscope (STM) is used as a complementary method to investigate the surface electron density with atomic resolution. The combination of PAES, XPS and STM allows the characterization of both the elemental composition and the surface topology.
Deep stroma investigation by confocal microscopy
NASA Astrophysics Data System (ADS)
Rossi, Francesca; Tatini, Francesca; Pini, Roberto; Valente, Paola; Ardia, Roberta; Buzzonetti, Luca; Canovetti, Annalisa; Malandrini, Alex; Lenzetti, Ivo; Menabuoni, Luca
2015-03-01
Laser-assisted keratoplasty is nowadays widely used to perform minimally invasive surgery and partial thickness keratoplasty [1-3]. The femtosecond laser enables customized surgery that addresses the specific problem of the individual patient, allowing the design of new graft profiles and partial thickness keratoplasties (PTK). The common characteristics of PTKs that make them preferable to standard penetrating keratoplasty are: the preservation of eyeball integrity, a reduced risk of graft rejection, and controlled postoperative astigmatism. On the other hand, optimal surgical results after these PTKs depend on a correct understanding of the morphology of the deep stromal layers, which can help in the identification of the correct cleavage plane during surgery. In recent years, several studies have been published giving new insights into posterior stromal morphology in adult subjects [4,5]. In this work we present a study performed on two groups of tissues: one group from 20 adult subjects aged 59 +/- 18 y.o., and the other from 15 young subjects aged 12 +/- 5 y.o. The samples were from tissues not suitable for transplantation. Confocal microscopy and Environmental Scanning Electron Microscopy (ESEM) were used for the analysis of the deep stroma. The preliminary results of this analysis show the main differences between young and adult tissues, improving knowledge of the morphology and biomechanical properties of the human cornea in order to improve surgical results in partial thickness keratoplasty.
Hrovatin, Karin; Kunej, Tanja
2018-01-01
Historically, sex was determined by observation, which is not always feasible. Nowadays, genetic methods prevail owing to their accuracy, simplicity, low cost, and time-efficiency. However, no comprehensive review exists to enable an overview and further development of the field. The studies are heterogeneous and lack a standardized reporting strategy. Therefore, our aim was to collect genetic sexing assays for mammals and assemble them in a catalogue with unified terminology. Publications were extracted from online databases using key words such as sexing and molecular. The collected data were supplemented with species and gene IDs and the type of sex-specific sequence variant (SSSV). We developed a catalogue and graphic presentation of diagnostic tests for molecular sex determination of mammals, based on 58 papers published from 2/1991 to 10/2016. The catalogue consists of five categories: species, genes, SSSVs, methods, and references. Based on the analysis of the published literature, we propose minimal reporting requirements consisting of: species scientific name and ID, genetic sequence with name and ID, SSSV, methodology, genomic coordinates (e.g., restriction sites, SSSVs), amplification system, and description of the detected amplicon and controls. The present study summarizes knowledge that has until now been scattered across databases, representing a first step toward standardization of molecular sexing, enabling a better overview of existing tests and facilitating the planned design of novel tests. The project is ongoing; collecting additional publications, optimizing field development, and standardizing data presentation are still needed.
Natural and Undetermined Sudden Death: Value of Post-Mortem Genetic Investigation
Fernández-Falgueras, Anna; Sarquella-Brugada, Georgia; Cesar, Sergi; Mademont, Irene; Mates, Jesus; Pérez-Serra, Alexandra; Coll, Monica; Pico, Ferran; Iglesias, Anna; Tirón, Coloma; Allegue, Catarina; Carro, Esther; Gallego, María Ángeles; Ferrer-Costa, Carles; Hospital, Anna; Bardalet, Narcís; Borondo, Juan Carlos; Vingut, Albert; Arbelo, Elena; Brugada, Josep; Castellà, Josep; Medallo, Jordi; Brugada, Ramon
2016-01-01
Background Sudden unexplained death may be the first manifestation of an unknown inherited cardiac disease. Current genetic technologies may enable the unraveling of an etiology and the identification of relatives at risk. The aim of our study was to define the etiology of natural deaths in individuals younger than 50 years of age, and to investigate whether genetic defects associated with cardiac diseases could provide a potential etiology for the unexplained cases. Methods and Findings Our cohort included a total of 789 consecutive cases (77.19% males) <50 years old (average 38.6±12.2 years old) who died suddenly from non-violent causes. A comprehensive autopsy was performed according to current forensic guidelines. During autopsy a cause of death was identified in most cases (81.1%), mainly due to cardiac alterations (56.87%). In unexplained cases, genetic analysis of the main genes associated with sudden cardiac death was performed using Next Generation Sequencing technology. Genetic analysis was performed in suspected inherited diseases (cardiomyopathy) and in unexplained death, with identification of potentially pathogenic variants in nearly 50% and 40% of samples, respectively. Conclusions Cardiac disease is the most important cause of sudden death, especially after the age of 40. Close to 10% of cases may remain unexplained after a complete autopsy investigation. Molecular autopsy may provide an explanation for a significant part of these unexplained cases. Identification of genetic variations enables genetic counseling and the undertaking of preventive measures in relatives at risk. PMID:27930701
NASA Astrophysics Data System (ADS)
Carsughi, Flavio; Fonseca, Luis
2017-06-01
NFFA-EUROPE is a European open-access resource for experimental and theoretical nanoscience that provides a platform for comprehensive, multidisciplinary research projects at the nanoscale, extending from synthesis through nano-characterization to theory and numerical simulation. Advanced infrastructures specialized in growth, nano-lithography, nano-characterization, theory and simulation, and fine analysis with synchrotron, FEL and neutron radiation sources are integrated in a multi-site combination to develop frontier research on methods for reproducible nanoscience and to enable European and international researchers from diverse disciplines to carry out advanced proposals impacting science and innovation. NFFA-EUROPE will enable coordinated access to infrastructures covering different aspects of nanoscience research that is not currently available at single specialized facilities, without duplicating their specific scopes. Approved user projects will have access to the best-suited instruments and support competences for performing the research, including access to analytical large-scale facilities, theory and simulation, and high-performance computing facilities. Access is offered free of charge to European users, and users will receive a financial contribution toward their travel, accommodation and subsistence costs. User access will include several "installations" and will be coordinated through a single entry-point portal that will activate an advanced user-infrastructure dialogue to build up a personalized access programme with an increasing return on science and innovation production. NFFA-EUROPE's own research activity will address key bottlenecks of nanoscience research: nanostructure traceability, protocol reproducibility, in-operando nano-manipulation and analysis, and open data.
Enabling Data Access for Environmental Monitoring: SERVIR West Africa
NASA Astrophysics Data System (ADS)
Yetman, G.; de Sherbinin, A. M.
2017-12-01
SERVIR is a joint effort between NASA and the U.S. Agency for International Development to form regional partnerships and bring satellite-based Earth monitoring and geographic information technologies to bear on environmental issues. The recently established SERVIR node for West Africa aims to "connect space to villages" and enable response to environmental change at the national and local levels through partnership with a network of organizations in the region. Comprehensive services—data streams, analysis methods and algorithms, and information products for decision making—to support environmental monitoring of five critical issues identified by West African network members are being designed and developed: ephemeral water, charcoal production, locusts, groundwater, and land use/land cover change. Additionally, climate change information is critical for planning and context in each of these issues. The selection of data and methods is a collaborative effort, with experts in the region working with experts at NASA and the scientific community to best meet information monitoring requirements. Design and delivery of these services requires capacity development in a number of areas, including best practices in data management, analysis methods for combining multiple data streams, and information technology infrastructure. Two research centers at Columbia University are implementing partners for SERVIR West Africa, acting to support capacity development among network members through a combination of workshops, training, and implementation of technologies in the region. The presentation will focus on these centers' efforts to assess current capabilities and improve capacity through requirements gathering, system design, technology selection, technology deployment, training, and workshops.
Dose Monitoring in Radiology Departments: Status Quo and Future Perspectives.
Boos, J; Meineke, A; Bethge, O T; Antoch, G; Kröpil, P
2016-05-01
The number of computed tomography examinations has increased continuously over the last decades and accounts for a major part of the collective radiation dose from medical investigations. For quality assurance in modern radiology, systematic monitoring and analysis of dose-related data from radiological examinations is mandatory. Various ways of collecting dose data are available today, for example the Digital Imaging and Communications in Medicine Structured Report (DICOM-SR), optical character recognition, and DICOM modality performed procedure steps (MPPS). The DICOM-SR is part of the DICOM standard and provides the DICOM Radiation Dose Structured Report (RDSR), an easily applicable and comprehensive solution for collecting radiation dose parameters. This standard simplifies the process of data collection and enables comprehensive dose monitoring. Various commercial dose monitoring software solutions with varying characteristics are available today. In this article, we discuss legal obligations, various ways to monitor dose data, current dose monitoring software solutions and future perspectives with regard to the EU Council Directive 2013/59/EURATOM. • Automated, systematic dose monitoring is an important element in quality assurance of radiology departments. • DICOM-RDSR-capable CT scanners facilitate the monitoring of dose data. • A variety of commercial and non-commercial dose monitoring software tools are available today. • Successful dose monitoring requires a comprehensive infrastructure for monitoring, analysing and optimizing radiation exposure. Citation Format: • Boos J, Meineke A, Bethge OT et al. Dose Monitoring in Radiology Departments: Status Quo and Future Perspectives. Fortschr Röntgenstr 2016; 188: 443-450. © Georg Thieme Verlag KG Stuttgart · New York.
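An RDSR is an ordinary DICOM structured report, so its dose values can be read with a general-purpose DICOM library; the sketch below uses pydicom (the file path is a placeholder, and real reports nest content items more deeply than shown).

```python
# Walk the content tree of a Radiation Dose Structured Report and
# print numeric dose items (e.g., CTDIvol, DLP). Path is a placeholder.
import pydicom

ds = pydicom.dcmread("rdsr.dcm")

def walk(items, depth=0):
    for item in items:
        name = item.ConceptNameCodeSequence[0].CodeMeaning
        if item.ValueType == "NUM":
            value = item.MeasuredValueSequence[0].NumericValue
            print("  " * depth + f"{name}: {value}")
        if "ContentSequence" in item:        # recurse into nested items
            walk(item.ContentSequence, depth + 1)

walk(ds.ContentSequence)
```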
Impact of comprehensive two-dimensional gas chromatography with mass spectrometry on food analysis.
Tranchida, Peter Q; Purcaro, Giorgia; Maimone, Mariarosa; Mondello, Luigi
2016-01-01
Comprehensive two-dimensional gas chromatography with mass spectrometry has been on the separation-science scene for about 15 years. This three-dimensional method has made a great positive impact on various fields of research, and among these that related to food analysis is certainly at the forefront. The present critical review is based on the use of comprehensive two-dimensional gas chromatography with mass spectrometry in the untargeted (general qualitative profiling and fingerprinting) and targeted analysis of food volatiles; attention is focused not only on its potential in such applications, but also on how recent advances in comprehensive two-dimensional gas chromatography with mass spectrometry will potentially be important for food analysis. Additionally, emphasis is devoted to the many instances in which straightforward gas chromatography with mass spectrometry is a sufficiently-powerful analytical tool. Finally, possible future scenarios in the comprehensive two-dimensional gas chromatography with mass spectrometry food analysis field are discussed. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Stinson, Michael S; Stevenson, Susan
2013-01-01
Twenty-two college students who were deaf viewed one instructional video with standard captions and a second with expanded captions, in which key terms were expanded in the form of vocabulary definitions, labeled illustrations, or concept maps. The students performed better on a posttest after viewing either type of caption than on a pretest; however, there was no difference in comprehension between standard and expanded captions. Camtasia recording software enabled examination of the extent to which the students accessed the expanded captions. The students accessed less than 20% of the available expanded captions. Thus, one explanation for the lack of difference in comprehension between the standard and expanded captions is that the students did not access the expanded captions sufficiently. Despite limited use of the expanded captions, the students stated, when interviewed, that they considered these captions beneficial in learning from the instructional video.
Direct-access retrieval during sentence comprehension: Evidence from Sluicing
Martin, Andrea E.; McElree, Brian
2011-01-01
Language comprehension requires recovering meaning from linguistic form, even when the mapping between the two is indirect. A canonical example is ellipsis, the omission of information that is subsequently understood without being overtly pronounced. Comprehension of ellipsis requires retrieval of an antecedent from memory, without prior prediction, a property which enables the study of retrieval in situ (Martin & McElree, 2008, 2009). Sluicing, or inflectional phrase ellipsis, in the presence of a conjunction, presents a test case where a competing antecedent position is syntactically licensed, in contrast with most cases of nonadjacent dependency, including verb phrase ellipsis. We present speed-accuracy tradeoff and eye-movement data inconsistent with the hypothesis that retrieval is accomplished via a syntactically guided search, a particular variant of search not examined in past research. The observed timecourse profiles are consistent with the hypothesis that antecedents are retrieved via a cue-dependent direct-access mechanism susceptible to general memory variables. PMID:21580797
Engler, Karen S; MacGregor, Cynthia J
2018-01-01
At a time when deaf education teacher preparation programs are declining in number, little is known about their actual effectiveness. A phenomenological case study of a graduate-level comprehensive deaf education teacher preparation program at a midwestern university explored empowered and enabled learning of teacher candidates using the Missouri Department of Elementary and Secondary Education educator pillars: (a) commitment to the profession, (b) proficiency in practice, and (c) learning impact, all deemed critical to developing quality teachers. A strong connection was found between the program's comprehensive philosophy and its practice. Embracing diversity of d/Deafness and differentiated instruction were the most prevalent themes expressed by participants. Teacher candidates displayed outstanding commitment to the profession and high proficiency in practice. The findings suggest that additional consideration should be given to classroom and behavior management, teacher candidate workload, teaching beyond academics, and preparation for navigating the public school system.
Instruction of Research-Based Comprehension Strategies in Basal Reading Programs
ERIC Educational Resources Information Center
Pilonieta, Paola
2010-01-01
Research supports using research-based comprehension strategies; however, comprehension strategy instruction is not highly visible in basal reading programs or classroom instruction, resulting in many students who struggle with comprehension. A content analysis examined which research-based comprehension strategies were presented in five…
Comprehensive rotorcraft analysis methods
NASA Technical Reports Server (NTRS)
Stephens, Wendell B.; Austin, Edward E.
1988-01-01
The development and application of comprehensive rotorcraft analysis methods in the field of rotorcraft technology are described. These large-scale analyses and the resulting computer programs are intended to treat the complex aeromechanical phenomena that describe the behavior of rotorcraft. They may be used to predict rotor aerodynamics, acoustics, performance, stability and control, handling qualities, loads and vibrations, structures, dynamics, and aeroelastic stability characteristics for a variety of applications including research, preliminary and detail design, and evaluation and treatment of field problems. The principal comprehensive methods developed or under development in recent years and generally available to the rotorcraft community because of US Army Aviation Research and Technology Activity (ARTA) sponsorship of all or part of the software systems are the Rotorcraft Flight Simulation (C81), Dynamic System Coupler (DYSCO), Coupled Rotor/Airframe Vibration Analysis Program (SIMVIB), Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics (CAMRAD), General Rotorcraft Aeromechanical Stability Program (GRASP), and Second Generation Comprehensive Helicopter Analysis System (2GCHAS).
Zhai, Peng; Yang, Longshu; Guo, Xiao; Wang, Zhe; Guo, Jiangtao; Wang, Xiaoqi; Zhu, Huaiqiu
2017-10-02
During the past decade, the development of high-throughput nucleic acid sequencing and mass spectrometry analysis techniques has enabled the characterization of microbial communities through metagenomics, metatranscriptomics, metaproteomics and metabolomics data. To reveal the diversity of microbial communities and the interactions between living conditions and microbes, it is necessary to introduce comparative analysis based upon integration of all four types of data mentioned above. Comparative meta-omics, especially comparative metagenomics, has been established as a routine process to highlight the significant differences in taxon composition and functional gene abundance among microbiota samples. Meanwhile, biologists are increasingly concerned about the correlations between meta-omics features and environmental factors, which may further decipher the adaptation strategy of a microbial community. We developed a comprehensive graphical analysis software package named MetaComp, comprising a series of statistical analysis approaches with visualized results for metagenomics and other meta-omics data comparison. The software can read files generated by a variety of upstream programs. After data loading, it offers analyses such as multivariate statistics, hypothesis testing of two-sample, multi-sample and two-group designs, and a novel function-regression analysis of environmental factors. Here, regression analysis regards meta-omic features as independent variables and environmental factors as dependent variables. Moreover, MetaComp automatically chooses an appropriate two-group sample test based upon the traits of the input abundance profiles. We further evaluate the performance of this choice, and exhibit applications for metagenomics, metaproteomics and metabolomics samples. MetaComp, an integrative software package applicable to all meta-omics data, distills the influence of the living environment on the microbial community through regression analysis, a novel feature. Moreover, since the automatically chosen two-group sample test is verified to outperform alternatives, MetaComp is friendly to users without adequate statistical training. These improvements aim to overcome the new challenges that all meta-omics data face in the big-data era. MetaComp is available at: http://cqb.pku.edu.cn/ZhuLab/MetaComp/ and https://github.com/pzhaipku/MetaComp/.
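The automatic test choice described above can be illustrated with a simple rule of the kind statistics libraries support (MetaComp's actual decision logic is more elaborate): prefer a t-test when both groups pass a normality check, otherwise fall back to the non-parametric Mann-Whitney U.

```python
# Toy version of an automatic two-group test choice (illustrative;
# not MetaComp's actual decision rule).
import numpy as np
from scipy import stats

def two_group_test(a, b, alpha=0.05):
    # Use Shapiro-Wilk to decide whether both groups look normal.
    normal = (stats.shapiro(a).pvalue > alpha and
              stats.shapiro(b).pvalue > alpha)
    if normal:
        return "t-test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b).pvalue

rng = np.random.default_rng(1)
print(two_group_test(rng.normal(5, 1, 30), rng.normal(6, 1, 30)))
print(two_group_test(rng.exponential(1, 30), rng.exponential(2, 30)))
```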
DOT National Transportation Integrated Search
1995-05-01
In October, of 1992, the Housatonic Area Regional Transit (HART) District published a planning study providing an in-depth analysis of its fixed route bus transit service. This comprehensive operational analysis (COA) was the first detailed analysis ...
DOT National Transportation Integrated Search
1994-07-01
In October, of 1992, the Housatonic Area Regional Transit (HART) District published a planning study providing an in-depth analysis of its fixed route bus transit service. This comprehensive operational analysis (COA) was the first detailed analysis ...
DOT National Transportation Integrated Search
1995-02-01
In October, of 1992, the Housatonic Area Regional Transit (HART) District published a planning study providing an in-depth analysis of its fixed route bus transit service. This comprehensive operational analysis (COA) was the first detailed analysis ...
Collin, Jeff; Amos, Amanda
2016-01-01
Introduction: Coalitions of supporters of comprehensive tobacco control policy have been crucial in achieving policy success nationally and internationally, but the dynamics of such alliances are not well understood. Methods: Qualitative semi-structured, narrative interviews with 35 stakeholders involved in developing the European Council Recommendation on smoke-free environments. These were thematically analyzed to examine the dynamics of coalition-building, collaboration and leadership in the alliance of organizations which successfully called for the development of comprehensive European Union (EU) smoke-free policy. Results: An alliance of tobacco control and public health advocacy organizations, scientific institutions, professional bodies, pharmaceutical companies, and other actors shared the goal of fighting the harms caused by second-hand smoke. Alliance members jointly called for comprehensive EU smoke-free policy and the protection of the political debates from tobacco industry interference. The alliance’s success was enabled by a core group of national and European actors with long-standing experience in tobacco control, who facilitated consensus-building, mobilized allies and synchronized the actions of policy supporters. Representatives of Brussels-based organizations emerged as crucial strategic leaders. Conclusions: The insights gained and identification of key enablers of successful tobacco control advocacy highlight the strategic importance of investing in tobacco control at the European level. Those interested in effective health policy can apply lessons learned from EU smoke-free policy to build effective alliances in tobacco control and other areas of public health. PMID:25634938
VarioML framework for comprehensive variation data representation and exchange.
Byrne, Myles; Fokkema, Ivo Fac; Lancaster, Owen; Adamusiak, Tomasz; Ahonen-Bishopp, Anni; Atlan, David; Béroud, Christophe; Cornell, Michael; Dalgleish, Raymond; Devereau, Andrew; Patrinos, George P; Swertz, Morris A; Taschner, Peter Em; Thorisson, Gudmundur A; Vihinen, Mauno; Brookes, Anthony J; Muilu, Juha
2012-10-03
Sharing of data about variation and the associated phenotypes is a critical need, yet variant information can be arbitrarily complex, making a single standard vocabulary elusive and re-formatting difficult. Complex standards have proven too time-consuming to implement. The GEN2PHEN project addressed these difficulties by developing a comprehensive data model for capturing biomedical observations, Observ-OM, and building the VarioML format around it. VarioML pairs a simplified open specification for describing variants, with a toolkit for adapting the specification into one's own research workflow. Straightforward variant data can be captured, federated, and exchanged with no overhead; more complex data can be described, without loss of compatibility. The open specification enables push-button submission to gene variant databases (LSDBs) e.g., the Leiden Open Variation Database, using the Cafe Variome data publishing service, while VarioML bidirectionally transforms data between XML and web-application code formats, opening up new possibilities for open source web applications building on shared data. A Java implementation toolkit makes VarioML easily integrated into biomedical applications. VarioML is designed primarily for LSDB data submission and transfer scenarios, but can also be used as a standard variation data format for JSON and XML document databases and user interface components. VarioML is a set of tools and practices improving the availability, quality, and comprehensibility of human variation information. It enables researchers, diagnostic laboratories, and clinics to share that information with ease, clarity, and without ambiguity.
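The XML-to-web-application round-trip that VarioML automates can be pictured with a tiny example; the variant record below is invented for illustration and does not follow the actual VarioML schema.

```python
# Convert a made-up XML variant record to JSON (illustrative only;
# not the real VarioML schema or toolkit).
import json
import xml.etree.ElementTree as ET

xml_doc = """<variant gene="LDLR">
  <name scheme="HGVS">c.2140+5G&gt;A</name>
  <pathogenicity>likely pathogenic</pathogenicity>
</variant>"""

root = ET.fromstring(xml_doc)
record = {
    "gene": root.get("gene"),
    "name": root.findtext("name"),
    "scheme": root.find("name").get("scheme"),
    "pathogenicity": root.findtext("pathogenicity"),
}
print(json.dumps(record, indent=2))
```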
VarioML framework for comprehensive variation data representation and exchange
2012-01-01
Background Sharing of data about variation and the associated phenotypes is a critical need, yet variant information can be arbitrarily complex, making a single standard vocabulary elusive and re-formatting difficult. Complex standards have proven too time-consuming to implement. Results The GEN2PHEN project addressed these difficulties by developing a comprehensive data model for capturing biomedical observations, Observ-OM, and building the VarioML format around it. VarioML pairs a simplified open specification for describing variants, with a toolkit for adapting the specification into one's own research workflow. Straightforward variant data can be captured, federated, and exchanged with no overhead; more complex data can be described, without loss of compatibility. The open specification enables push-button submission to gene variant databases (LSDBs) e.g., the Leiden Open Variation Database, using the Cafe Variome data publishing service, while VarioML bidirectionally transforms data between XML and web-application code formats, opening up new possibilities for open source web applications building on shared data. A Java implementation toolkit makes VarioML easily integrated into biomedical applications. VarioML is designed primarily for LSDB data submission and transfer scenarios, but can also be used as a standard variation data format for JSON and XML document databases and user interface components. Conclusions VarioML is a set of tools and practices improving the availability, quality, and comprehensibility of human variation information. It enables researchers, diagnostic laboratories, and clinics to share that information with ease, clarity, and without ambiguity. PMID:23031277
Weishaar, Heide; Collin, Jeff; Amos, Amanda
2016-02-01
Coalitions of supporters of comprehensive tobacco control policy have been crucial in achieving policy success nationally and internationally, but the dynamics of such alliances are not well understood. Qualitative semi-structured narrative interviews were conducted with 35 stakeholders involved in developing the European Council Recommendation on smoke-free environments, and were thematically analyzed to examine the dynamics of coalition-building, collaboration and leadership in the alliance of organizations which successfully called for the development of comprehensive European Union (EU) smoke-free policy. An alliance of tobacco control and public health advocacy organizations, scientific institutions, professional bodies, pharmaceutical companies, and other actors shared the goal of fighting the harms caused by second-hand smoke. Alliance members jointly called for comprehensive EU smoke-free policy and the protection of the political debates from tobacco industry interference. The alliance's success was enabled by a core group of national and European actors with long-standing experience in tobacco control, who facilitated consensus-building, mobilized allies and synchronized the actions of policy supporters. Representatives of Brussels-based organizations emerged as crucial strategic leaders. The insights gained and identification of key enablers of successful tobacco control advocacy highlight the strategic importance of investing in tobacco control at European level. Those interested in effective health policy can apply lessons learned from EU smoke-free policy to build effective alliances in tobacco control and other areas of public health. © The Author 2015. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco.
Gilchrist, Elizabeth S; Nesterenko, Pavel N; Smith, Norman W; Barron, Leon P
2015-03-20
There has recently been increased interest in coupling ion chromatography (IC) to high resolution mass spectrometry (HRMS) to enable highly sensitive and selective analysis. Herein, the first comprehensive study focusing on the direct coupling of suppressed IC to HRMS without the need for post-suppressor organic solvent modification is presented. Chromatographic selectivity and added HRMS sensitivity offered by organic solvent-modified IC eluents on a modern hyper-crosslinked polymeric anion-exchange resin (IonPac AS18) are shown using isocratic eluents containing 5-50 mM hydroxide with 0-80% methanol or acetonitrile for a range of low molecular weight anions (<165 Da). Comprehensive experiments on IC thermodynamics over a temperature range of 20-45 °C with eluents containing up to 60% acetonitrile or methanol revealed markedly different retention behaviour and selectivity for the selected analytes on the same polymer-based ion-exchange resin. Optimised sensitivity with HRMS was achieved with as low as 30-40% organic eluent content. Analytical performance characteristics are presented and compared with other IC-MS-based work. This study also presents the first application of IC-HRMS to forensic detection of trace low-order anionic explosive residues in latent human fingermarks. Copyright © 2015 Elsevier B.V. All rights reserved.
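Retention thermodynamics of this kind are often summarized with a standard van't Hoff treatment, fitting ln k against 1/T to estimate an apparent transfer enthalpy. The sketch below shows that generic calculation on made-up retention factors; it is not the authors' exact analysis.

# Van't Hoff fit of ln k vs 1/T; synthetic retention factors, illustrative only.
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1
temps_c = np.array([20.0, 25.0, 30.0, 35.0, 40.0, 45.0])  # column temperature
k = np.array([2.10, 1.95, 1.82, 1.70, 1.60, 1.51])        # retention factors (synthetic)

inv_T = 1.0 / (temps_c + 273.15)
slope, intercept = np.polyfit(inv_T, np.log(k), 1)

# For linear van't Hoff behaviour, slope = -dH/R
delta_H = -slope * R
print(f"apparent transfer enthalpy: {delta_H / 1000:.1f} kJ/mol")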
Evensen, Stig; Wisløff, Torbjørn; Lystad, June Ullevoldsæter; Bull, Helen; Ueland, Torill; Falkum, Erik
2016-03-01
Schizophrenia is associated with recurrent hospitalizations, need for long-term community support, poor social functioning, and low employment rates. Despite the wide-ranging financial and social burdens associated with the illness, there is great uncertainty regarding prevalence, employment rates, and the societal costs of schizophrenia. The current study investigates 12-month prevalence of patients treated for schizophrenia, employment rates, and cost of schizophrenia using a population-based top-down approach. Data were obtained from comprehensive and mandatory health and welfare registers in Norway. We identified a 12-month prevalence of 0.17% for the entire population. The employment rate among working-age individuals was 10.24%. The societal costs for the 12-month period were USD 890 million. The average cost per individual with schizophrenia was USD 106 thousand. Inpatient care and lost productivity due to high unemployment represented 33% and 29%, respectively, of the total costs. The use of mandatory health and welfare registers enabled a unique and informative analysis on true population-based datasets. © The Author 2015. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. For permissions, please email: journals.permissions@oup.com.
Paulo, Joao A.; O’Connell, Jeremy D.; Gaun, Aleksandr; Gygi, Steven P.
2015-01-01
The global proteomic alterations in the budding yeast Saccharomyces cerevisiae due to differences in carbon sources can be comprehensively examined using mass spectrometry–based multiplexing strategies. In this study, we investigate changes in the S. cerevisiae proteome resulting from cultures grown in minimal media using galactose, glucose, or raffinose as the carbon source. We used a tandem mass tag 9-plex strategy to determine alterations in relative protein abundance due to a particular carbon source, in triplicate, thereby permitting subsequent statistical analyses. We quantified more than 4700 proteins across all nine samples; 1003 proteins demonstrated statistically significant differences in abundance in at least one condition. The majority of altered proteins were classified as functioning in metabolic processes and as having cellular origins of plasma membrane and mitochondria. In contrast, proteins remaining relatively unchanged in abundance included those having nucleic acid–related processes, such as transcription and RNA processing. In addition, the comprehensiveness of the data set enabled the analysis of subsets of functionally related proteins, such as phosphatases, kinases, and transcription factors. As a resource, these data can be mined further in efforts to understand better the roles of carbon source fermentation in yeast metabolic pathways and the alterations observed therein, potentially for industrial applications, such as biofuel feedstock production. PMID:26399295
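A triplicate multiplexing design of this sort permits per-protein significance testing. The sketch below illustrates one common approach on toy log2 intensities, a t-test per protein followed by Benjamini-Hochberg correction; it is a generic illustration, not the authors' pipeline.

# Per-protein t-tests with Benjamini-Hochberg FDR control; toy data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_proteins = 1000
glucose = rng.normal(10.0, 1.0, size=(n_proteins, 3))    # log2 intensities, 3 replicates
galactose = rng.normal(10.0, 1.0, size=(n_proteins, 3))
galactose[:50] += 2.0                                    # spike in 50 "changed" proteins

t, p = stats.ttest_ind(galactose, glucose, axis=1, equal_var=False)

def benjamini_hochberg(pvals: np.ndarray) -> np.ndarray:
    """Return BH-adjusted p-values (q-values)."""
    n = len(pvals)
    order = np.argsort(pvals)
    ranked = pvals[order] * n / np.arange(1, n + 1)
    # enforce monotonicity from the largest rank down
    q = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty(n)
    out[order] = np.clip(q, 0, 1)
    return out

q = benjamini_hochberg(p)
print(f"{(q < 0.05).sum()} proteins significant at 5% FDR")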
Top-down Proteomics: Technology Advancements and Applications to Heart Diseases
Cai, Wenxuan; Tucholski, Trisha M.; Gregorich, Zachery R.; Ge, Ying
2016-01-01
Introduction Diseases of the heart are a leading cause of morbidity and mortality for both men and women worldwide, and impose significant economic burdens on healthcare systems. Despite substantial effort over the last several decades, the molecular mechanisms underlying diseases of the heart remain poorly understood. Areas covered Altered protein post-translational modifications (PTMs) and protein isoform switching are increasingly recognized as important disease mechanisms. Top-down high-resolution mass spectrometry (MS)-based proteomics has emerged as the most powerful method for the comprehensive analysis of PTMs and protein isoforms. Here, we will review recent technology developments in the field of top-down proteomics, as well as highlight recent studies utilizing top-down proteomics to decipher the cardiac proteome for the understanding of the molecular mechanisms underlying diseases of the heart. Expert commentary Top-down proteomics is a premier method for the global and comprehensive study of protein isoforms and their PTMs, enabling the identification of novel protein isoforms and PTMs, characterization of sequence variations, and quantification of disease-associated alterations. Despite significant challenges, continuous development of top-down proteomics technology will greatly aid the dissection of the molecular mechanisms underlying diseases of the heart for the identification of novel biomarkers and therapeutic targets. PMID:27448560
Piriyapongsa, Jittima; Bootchai, Chaiwat; Ngamphiw, Chumpol; Tongsima, Sissades
2014-01-01
microRNA (miRNA)–promoter interaction resource (microPIR) is a public database containing over 15 million predicted miRNA target sites located within human promoter sequences. These predicted targets are presented along with their related genomic and experimental data, making the microPIR database the most comprehensive repository of miRNA promoter target sites. Here, we describe major updates of the microPIR database including new target predictions in the mouse genome and revised human target predictions. The updated database (microPIR2) now provides ∼80 million human and 40 million mouse predicted target sites. In addition to being a reference database, microPIR2 is a tool for comparative analysis of target sites on the promoters of human–mouse orthologous genes. In particular, this new feature was designed to identify potential miRNA–promoter interactions conserved between species that could be stronger candidates for further experimental validation. We also incorporated additional supporting information to microPIR2 such as nuclear and cytoplasmic localization of miRNAs and miRNA–disease association. Extra search features were also implemented to enable various investigations of targets of interest. Database URL: http://www4a.biotec.or.th/micropir2 PMID:25425035
Towards the Application of Fuzzy Logic for Developing a Novel Indoor Air Quality Index (FIAQI).
Javid, Allahbakhsh; Hamedian, Amir Abbas; Gharibi, Hamed; Sowlat, Mohammad Hossein
2016-02-01
In the past few decades, Indoor Air Pollution (IAP) has become a primary concern, to the point that it is increasingly believed to be of equal or greater importance to human health compared to ambient air. However, due to the lack of comprehensive indices for the integrated assessment of indoor air quality (IAQ), we aimed to develop a novel, Fuzzy-Based Indoor Air Quality Index (FIAQI) to bridge the existing gap in this area. We based our index on fuzzy logic, which enables us to overcome the limitations of traditional methods applied to develop environmental quality indices. Fifteen parameters, including the criteria air pollutants, volatile organic compounds, and bioaerosols, were included in the FIAQI due mainly to their significant health effects. Weighting factors were assigned to the parameters based on the medical evidence available in the literature on their health effects. The final FIAQI consisted of 108 rules. In order to demonstrate the performance of the index, data were intentionally generated to cover a variety of quality levels. In addition, a sensitivity analysis was conducted to assess the validity of the index. The FIAQI tends to be a comprehensive tool to classify IAQ and produce accurate results. It seems useful and reliable to be considered by authorities to assess IAQ environments. PMID:27114985
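A fuzzy index of this kind combines membership functions with IF-THEN rules. The following minimal sketch uses triangular memberships and two toy rules for two pollutants; the breakpoints and rules are illustrative assumptions, not the 15-parameter, 108-rule FIAQI.

# Minimal Mamdani-style fuzzy-inference sketch; two toy rules only.
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fiaqi_sketch(co2_ppm: float, voc_ppb: float) -> float:
    # Membership degrees for "high" levels (breakpoints are illustrative)
    co2_high = triangular(co2_ppm, 800, 1500, 2200)
    voc_high = triangular(voc_ppb, 200, 500, 800)
    # Rule 1: IF CO2 high AND VOC high THEN air quality poor (min = AND)
    poor = min(co2_high, voc_high)
    # Rule 2: IF CO2 high OR VOC high THEN air quality moderate (max = OR)
    moderate = 0.5 * max(co2_high, voc_high)
    # Collapse rule activations to a crisp 0-100 score (higher = worse air)
    return 100.0 * max(poor, moderate)

print(f"index: {fiaqi_sketch(co2_ppm=1400, voc_ppb=450):.1f}")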
Systems Biology to Support Nanomaterial Grouping.
Riebeling, Christian; Jungnickel, Harald; Luch, Andreas; Haase, Andrea
2017-01-01
The assessment of potential health risks of engineered nanomaterials (ENMs) is a challenging task due to the high number and great variety of already existing and newly emerging ENMs. Reliable grouping or categorization of ENMs with respect to hazards could help to facilitate prioritization and decision making for regulatory purposes. The development of grouping criteria, however, requires a broad and comprehensive data basis. A promising platform addressing this challenge is the systems biology approach. The different areas of systems biology, most prominently transcriptomics, proteomics and metabolomics, each provide a wealth of data that can be used to reveal novel biomarkers and biological pathways involved in the mode-of-action of ENMs. Combining such data with classical toxicological data would enable a more comprehensive understanding and hence might lead to more powerful and reliable prediction models. Physico-chemical data provide crucial information on the ENMs and need to be integrated, too. Overall statistical analysis should reveal robust grouping and categorization criteria and may ultimately help to identify meaningful biomarkers and biological pathways that sufficiently characterize the corresponding ENM subgroups. This chapter aims to give an overview on the different systems biology technologies and their current applications in the field of nanotoxicology, as well as to identify the existing challenges.
NASA Astrophysics Data System (ADS)
Doi, Masafumi; Tokutomi, Tsukasa; Hachiya, Shogo; Kobayashi, Atsuro; Tanakamaru, Shuhei; Ning, Sheyang; Ogura Iwasaki, Tomoko; Takeuchi, Ken
2016-08-01
NAND flash memory's reliability degrades with increasing endurance, retention-time and/or temperature. After a comprehensive evaluation of 1X nm triple-level cell (TLC) NAND flash, two highly reliable techniques are proposed. The first proposal, quick low-density parity check (Quick-LDPC), requires only one cell read in order to accurately estimate a bit-error rate (BER) that includes the effects of temperature, write and erase (W/E) cycles and retention-time. As a result, 83% read latency reduction is achieved compared to conventional AEP-LDPC. Also, W/E cycling is extended by 100% compared with conventional Bose-Chaudhuri-Hocquenghem (BCH) error-correcting code (ECC). The second proposal, dynamic threshold voltage optimization (DVO), has two parts: adaptive V_Ref shift (AVS) and V_TH space control (VSC). AVS reduces read error and latency by adaptively optimizing the reference voltage (V_Ref) based on temperature, W/E cycles and retention-time. AVS stores the optimal V_Ref values in a table in order to enable one cell read. VSC further improves AVS by optimizing the voltage margins between V_TH states. DVO reduces BER by 80%.
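The table-driven character of AVS can be sketched as a simple lookup keyed by binned temperature, W/E cycles and retention time; the bin edges and offsets below are hypothetical placeholders, not the characterized values from the paper.

# Table-driven adaptive V_Ref shift (AVS) sketch; bins and offsets are made up.
import bisect

TEMP_EDGES = [0, 40, 70]          # degrees C -> 3 temperature bins
CYCLE_EDGES = [0, 1000, 3000]     # W/E cycles -> 3 endurance bins
RETENTION_EDGES = [0, 30, 180]    # days -> 3 retention bins

# V_Ref offset in mV for (temp_bin, cycle_bin, retention_bin); illustrative.
VREF_TABLE = {
    (t, c, r): -10 * c - 5 * r + 2 * t   # toy model of pre-characterized shifts
    for t in range(3) for c in range(3) for r in range(3)
}

def vref_offset(temp_c: float, we_cycles: int, retention_days: float) -> int:
    """Single table lookup -> one cell read at the adapted reference voltage."""
    t = bisect.bisect_right(TEMP_EDGES, temp_c) - 1
    c = bisect.bisect_right(CYCLE_EDGES, we_cycles) - 1
    r = bisect.bisect_right(RETENTION_EDGES, retention_days) - 1
    return VREF_TABLE[(t, c, r)]

print(vref_offset(temp_c=55, we_cycles=2500, retention_days=90))  # -> -13 (mV)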
Koshy, Remya; Ranawat, Anop; Scaria, Vinod
2017-10-01
The Middle East and North Africa (MENA) region encompasses unique populations with a rich history and characteristic ethnic, linguistic and genetic diversity. The genetic diversity of the MENA region has nevertheless been largely unknown. The recent availability of whole-exome and whole-genome sequences from the region has made it possible to collect population-specific allele frequencies. The integration of data sets from this region would provide insights into the landscape of genetic variants in this region. We systematically integrated genetic variants from multiple data sets available from this region to create a compendium of over 26 million genetic variations. The variants were systematically annotated, and their allele frequencies in the data sets were computed and made available through a web interface that enables quick queries. As a proof of principle for application of the compendium to genetic epidemiology, we analyzed the allele frequencies for variants in the transglutaminase 1 (TGM1) gene, associated with autosomal recessive lamellar ichthyosis. Our analysis revealed that the carrier frequency of selected variants differed widely, with significant interethnic differences. To the best of our knowledge, al mena is the first and most comprehensive repertoire of genetic variations from the Arab, Middle Eastern and North African region. We hope al mena will accelerate Precision Medicine in the region.
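Carrier-frequency estimates of this kind typically assume Hardy-Weinberg equilibrium, under which a recessive allele at frequency p yields a heterozygote (carrier) frequency of 2p(1-p). A minimal sketch follows, with made-up allele frequencies and dataset names.

# Hardy-Weinberg carrier-frequency arithmetic; allele frequencies are made up.
def carrier_frequency(p: float) -> float:
    """Heterozygote frequency under Hardy-Weinberg equilibrium."""
    return 2.0 * p * (1.0 - p)

for population, p in [("dataset A", 0.004), ("dataset B", 0.012)]:  # hypothetical
    cf = carrier_frequency(p)
    print(f"{population}: allele freq {p:.3f} -> ~1 carrier in {1 / cf:.0f}")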
Understanding USGS user needs and Earth observing data use for decision making
NASA Astrophysics Data System (ADS)
Wu, Z.
2016-12-01
The US Geological Survey (USGS) initiated the Requirements, Capabilities and Analysis for Earth Observations (RCA-EO) project in the Land Remote Sensing (LRS) program, collaborating with the National Oceanic and Atmospheric Administration (NOAA) to jointly develop the supporting information infrastructure, the Earth Observation Requirements Evaluation System (EORES). RCA-EO enables us to collect information on current data products and projects across the USGS and evaluate the impacts of Earth observation data from all sources, including spaceborne, airborne, and ground-based platforms. EORES allows users to query, filter, and analyze usage and impacts of Earth observation data at different organizational levels within the bureau. We engaged over 500 subject matter experts and evaluated more than 1000 different Earth observing data sources and products. RCA-EO provides a comprehensive way to evaluate impacts of Earth observing data on USGS mission areas and programs through the survey of 345 key USGS products and services. We paid special attention to user feedback about Earth observing data to inform decision making on improving user satisfaction. We believe the approach and philosophy of RCA-EO can be applied in a much broader scope to derive comprehensive knowledge of the impacts and usage of Earth observing systems and to inform data product development and remote sensing technology innovation.
Tapia-Conyer, Roberto; Saucedo-Martinez, Rodrigo; Mujica-Rosales, Ricardo; Gallardo-Rincon, Hector; Campos-Rivera, Paola Abril; Lee, Evan; Waugh, Craig; Guajardo, Lucia; Torres-Beltran, Braulio; Quijano-Gonzalez, Ursula; Soni-Gallardo, Lidia
2016-07-22
The Mexican healthcare system is under increasing strain due to the rising prevalence of non-communicable diseases (especially type 2 diabetes), mounting costs, and a reactive curative approach focused on treating existing diseases and their complications rather than preventing them. Casalud is a comprehensive primary healthcare model that enables proactive prevention and disease management throughout the continuum of care, using innovative technologies and a patient-centred approach. Data were collected over a 2-year period in eight primary health clinics (PHCs) in two states in central Mexico to identify and assess enablers and inhibitors of the implementation process of Casalud. We used mixed quantitative and qualitative data collection tools: surveys, in-depth interviews, and participant and non-participant observations. Transcripts and field notes were analyzed and coded using Framework Analysis, focusing on defining and describing enablers and inhibitors of the implementation process. We identified seven recurring topics in the analyzed textual data. Four topics were categorized as enablers: political support for the Casalud model, alignment with current healthcare trends, ongoing technical improvements (to ease adoption and support), and capacity building. Three topics were categorized as inhibitors: administrative practices, health clinic human resources, and the lack of a shared vision of the model. Enablers are located at PHCs and across all levels of government, and include political support for, and the technological validity of, the model. The main inhibitor is the persistence of obsolete administrative practices at both state and PHC levels, which puts the administrative feasibility of the model's implementation in jeopardy. Constructing a shared vision around the model could facilitate the implementation of Casalud as well as circumvent administrative inhibitors. In order to overcome PHC-level barriers, it is crucial to have an efficient and straightforward adaptation and updating process for technological tools. One of the key lessons learned from the implementation of the Casalud model is that a degree of uncertainty must be tolerated when quickly scaling up a healthcare intervention. Similar patient-centred technology-based models must remain open to change and be able to quickly adapt to changing circumstances.
Yang, Shuai; Zhang, Xinlei; Diao, Lihong; Guo, Feifei; Wang, Dan; Liu, Zhongyang; Li, Honglei; Zheng, Junjie; Pan, Jingshan; Nice, Edouard C; Li, Dong; He, Fuchu
2015-09-04
The Chromosome-centric Human Proteome Project (C-HPP) aims to catalog genome-encoded proteins using a chromosome-by-chromosome strategy. As the C-HPP proceeds, the increasing requirement for data-intensive analysis of the MS/MS data poses a challenge to the proteomic community, especially small laboratories lacking computational infrastructure. To address this challenge, we have updated the previous CAPER browser to a new version, CAPER 3.0, which is a scalable cloud-based system for data-intensive analysis of C-HPP data sets. CAPER 3.0 uses cloud computing technology to facilitate MS/MS-based peptide identification. In particular, it can use both public and private clouds, facilitating the analysis of C-HPP data sets. CAPER 3.0 provides a graphical user interface (GUI) to help users transfer data, configure jobs, track progress, and visualize the results comprehensively. These features enable users without programming expertise to easily conduct data-intensive analysis using CAPER 3.0. Here, we illustrate the usage of CAPER 3.0 with four specific mass spectral data-intensive problems: detecting novel peptides, identifying single amino acid variants (SAVs) derived from known missense mutations, identifying sample-specific SAVs, and identifying exon-skipping events. CAPER 3.0 is available at http://prodigy.bprc.ac.cn/caper3.
Maintenance and Exchange of Learning Objects in a Web Services Based e-Learning System
ERIC Educational Resources Information Center
Vossen, Gottfried; Westerkamp, Peter
2004-01-01
"Web services" enable partners to exploit applications via the Internet. Individual services can be composed to build new and more complex ones with additional and more comprehensive functionality. In this paper, we apply the Web service paradigm to electronic learning, and show how to exchange and maintain learning objects is a…
A Study of the Homogeneity of Items Produced From Item Forms Across Different Taxonomic Levels.
ERIC Educational Resources Information Center
Weber, Margaret B.; Argo, Jana K.
This study determined whether item forms (rules for constructing items related to a domain or set of tasks) would enable naive item writers to generate multiple-choice items at three taxonomic levels--knowledge, comprehension, and application. Students wrote 120 multiple-choice items from 20 item forms, corresponding to educational objectives…
ERIC Educational Resources Information Center
Industry Education Council (Hamilton-Wentworth), Hamilton (Ontario).
This handbook provides basic information about forming and operating an industry-education partnership council (IEPC). Purpose of the handbook is to enable interested communities to replicate the model IEPC and develop strong local industry-education alliances. Part I is an introduction to industry-education partnerships. Part II focuses on…
Guiding Readers and Writers, Grades 3-6: Teaching Comprehension, Genre, and Content Literacy.
ERIC Educational Resources Information Center
Fountas, Irene C.; Pinnell, Gay Su
Exploring all the essential components of a quality upper elementary literacy program, this book is a resource for fostering success that will enable students to enjoy a future filled with literacy journeys. Sections of the book address: special help for struggling readers and writers; a basic structure of the literacy program within a framework…
Interfacing Simulations with Training Content
2006-09-01
a panelist at numerous international training and elearning conferences, ADL Plugfests and IMS Global Learning Consortium Open Technical Forums. Dr...communication technologies has enabled higher quality learning to be made available through increasingly sophisticated modes of presentation. Traditional...However, learning is a comprehensive process which does not simply consist of the transmission and learning of content. While simulations offer the
Well-planning programs give students field-like experience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sifferman, T.R.; Chapman, L.
1983-01-01
The University of Tulsa recently was given a package of computer well planning and drilling programs that will enable petroleum engineering students to gain valuable experience in designing well programs while still in school. Comprehensive homework assignments are now given in areas of drilling fluids programming, hydraulics, directional wells and surveying. Additional programs are scheduled for next semester.
ERIC Educational Resources Information Center
Tatar, Nilgün; Akpinar, Ercan; Feyzioglu, Eylem Yildiz
2013-01-01
The purpose of this study is to investigate the effect of computer-assisted learning integrated with metacognitive prompts on elementary students' affective skills on the subject of electricity. The researchers developed educational software to enable students to easily and comprehensively learn the concepts in the subject of electricity. A…
ERIC Educational Resources Information Center
Sabonis-Chafee, Terry, Ed.
Information is presented on internships in the Washington, D.C., area that enable students to explore the effects of technology and science on society. Science and engineering student interns work in nonlaboratory environments, and nontechnical students may work in issue areas and newly emerging public policy challenges. The directory includes…
ERIC Educational Resources Information Center
Lai, Chun
2015-01-01
Learning takes place across different social contexts, and understanding how learners perceive and traverse different learning contexts enables educators to gain a more comprehensive view of their learning processes and to support their learning better. This study examined how undergraduate foreign language learners perceived their learning…
Transforming K-12 Rural Education through Blended Learning: Barriers and Promising Practices
ERIC Educational Resources Information Center
Werth, Eric; Werth, Lori; Kellerer, Eric
2013-01-01
This report describes the implementation of blended learning programs in Idaho, and three key takeaways are apparent: (1) Blended learning has a positive impact on teachers; (2) Self-pacing enables students to take ownership and achieve mastery; and (3) Teachers must prepare with comprehensive teacher training. The authors emphasize the need for…
An Oasis in This Desert: Parents Talk about the New York City Beacons.
ERIC Educational Resources Information Center
Nevarez, Nancy
This report presents the findings of focus groups convened to determine what the parents of youth participants in the New York City Beacons think about the program. The Beacons initiative is a comprehensive model of school-community-family partnerships undertaken by New York City in 1991. The initiative originally enabled 10 community-based…
Professional Development Needs of School Principals in the Context of Educational Reform
ERIC Educational Resources Information Center
Hussin, Sufean; Al Abri, Saleh
2015-01-01
Retraining and upskilling of human resources in organizations are deemed vital whenever a reform takes place, or whenever a huge policy is being implemented on a comprehensive scale. In an education system, officers, principals, and teachers need to be retrained so as to enable them to implement and manage new changes, which are manifested in the…
ERIC Educational Resources Information Center
Lu, Owen H. T.; Huang, Jeff C. H.; Huang, Anna Y. Q.; Yang, Stephen J. H.
2017-01-01
As information technology continues to evolve rapidly, programming skills become increasingly crucial. To be able to construct superb programming skills, the training must begin before college or even senior high school. However, when developing comprehensive training programs, the learning and teaching processes must be considered. In order to…
ERIC Educational Resources Information Center
Caruana, Viv; Montgomery, Catherine
2015-01-01
This article presents a comprehensive review of research on transnational higher education published between 2006 and 2014. It aims to provide an overview of a highly complex field that is both nascent and shifting, with research developing unevenly and concentrated in particular areas. This overview will enable academics working in transnational…
Elaborative Feedback to Enhance Online Second Language Reading Comprehension
ERIC Educational Resources Information Center
Bown, Andy
2017-01-01
Many higher education students across the world are studying in situations where a high proportion of the academic materials they encounter are online reading texts written in their second language. While the online medium presents a number of challenges to L2 readers, it also enables the provision of a range of support mechanisms, such as instant…
Method Improving Reading Comprehension In Primary Education Program Students
NASA Astrophysics Data System (ADS)
Rohana
2018-01-01
This study aims to determine the influence of the SQ3R learning method on the English reading comprehension skills of PGSD students. This research is pre-experimental: it is not yet a true experiment, because there is no control variable, the sample is not chosen randomly, and external variables may influence the formation of the dependent variable. The research design used is a one-group pretest-posttest design involving a single experimental group. In this design, observation is done twice, before and after the treatment. The observation made before the experiment (O1) is called the pretest and the post-experimental observation (O2) is called the posttest. The difference between O1 and O2, i.e. O2 - O1, is taken as the effect of the treatment. The results showed that there was an improvement in the reading comprehension skills of PGSD students in Class M.4.3 using the SQ3R method, indicating that SQ3R can improve English comprehension skills.
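The O2 - O1 effect in a one-group pretest-posttest design is conventionally tested with a paired t-test; a minimal sketch on toy scores (not the study's data) follows.

# Paired t-test on the pretest-posttest gain (O2 - O1); toy scores only.
import numpy as np
from scipy import stats

pretest = np.array([55, 60, 48, 70, 65, 58, 62, 50])    # O1, toy scores
posttest = np.array([68, 71, 60, 78, 72, 66, 75, 61])   # O2, toy scores

effect = posttest - pretest                             # O2 - O1 per student
t, p = stats.ttest_rel(posttest, pretest)
print(f"mean gain {effect.mean():.1f} points, t = {t:.2f}, p = {p:.4f}")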
Low, Eric; Bountra, Chas; Lee, Wen Hwa
2016-01-01
We are experiencing a new era enabled by unencumbered access to high quality data through the emergence of open science initiatives in the historically challenging area of early stage drug discovery. At the same time, many patient-centric organisations are taking matters into their own hands by participating in, enabling and funding research. Here we present the rationale behind the innovative partnership between the Structural Genomics Consortium (SGC), an open, pre-competitive pre-clinical research consortium, and the research-focused patient organisation Myeloma UK to create a new, comprehensive platform to accelerate the discovery and development of new treatments for multiple myeloma.
Robson, Holly; Keidel, James L; Ralph, Matthew A Lambon; Sage, Karen
2012-01-01
Wernicke's aphasia is a condition which results in severely disrupted language comprehension following a lesion to the left temporo-parietal region. A phonological analysis deficit has traditionally been held to be at the root of the comprehension impairment in Wernicke's aphasia, a view consistent with current functional neuroimaging which finds areas in the superior temporal cortex responsive to phonological stimuli. However, behavioural evidence to support the link between a phonological analysis deficit and auditory comprehension has not yet been shown. This study extends seminal work by Blumstein, Baker, and Goodglass (1977) to investigate the relationship between acoustic-phonological perception, measured through phonological discrimination, and auditory comprehension in a case series of Wernicke's aphasia participants. A novel adaptive phonological discrimination task was used to obtain reliable thresholds of the phonological perceptual distance required between nonwords before they could be discriminated. Wernicke's aphasia participants showed significantly elevated thresholds compared to age and hearing matched control participants. Acoustic-phonological thresholds correlated strongly with auditory comprehension abilities in Wernicke's aphasia. In contrast, nonverbal semantic skills showed no relationship with auditory comprehension. The results are evaluated in the context of recent neurobiological models of language and suggest that impaired acoustic-phonological perception underlies the comprehension impairment in Wernicke's aphasia and favour models of language which propose a leftward asymmetry in phonological analysis. Copyright © 2011 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Wong, Bernice Y. L.
1986-01-01
Successful instructional strategies for enhancing the reading comprehension and comprehension test performance of learning disabled students are described. Students are taught to self-monitor their comprehension of expository materials and stories through recognition and analysis of recurrent elements and problem passages, content summarization,…
Comprehensibility of traffic signs among urban drivers in Turkey.
Kirmizioglu, Erkut; Tuydes-Yaman, Hediye
2012-03-01
Traffic signs are commonly used traffic safety tools, mainly developed to provide crucial information in a short time to support safe driving, but their success depends on their comprehensibility by drivers. Moreover, a sudden change in traditionally used and accepted signs can cause significant safety problems, as in the case of the cancellation of red oblique bars in 2004 as part of Turkey's European Union harmonization process. Given the severe traffic safety problem in Turkey, the main motivation behind this study was the need to assess both the comprehensibility of internationally accepted traffic signs and the current level of driver education. A paper-based survey study in 2009, which reached a sample of 1478 urban drivers in the City of Ankara, focused on determining the comprehensibility of 30 selected traffic signs that are commonly used and critical for safety, including two recently changed signs. The meaning of each sign was sought using an open-ended question format to capture different levels and types of comprehension, which enabled the detection of "opposite" and "partially correct" answers besides "wrong" and "correct" ones. The high comprehensibility of the 9 control-group signs supports the validity of the study. The recently changed signs are among those associated with opposite meanings, demonstrating an increased traffic safety risk and the need for more aggressive campaigns to publicize them. Copyright © 2011 Elsevier Ltd. All rights reserved.
INDIGO – INtegrated Data Warehouse of MIcrobial GenOmes with Examples from the Red Sea Extremophiles
Alam, Intikhab; Antunes, André; Kamau, Allan Anthony; Ba alawi, Wail; Kalkatawi, Manal; Stingl, Ulrich; Bajic, Vladimir B.
2013-01-01
Background The next generation sequencing technologies substantially increased the throughput of microbial genome sequencing. To functionally annotate newly sequenced microbial genomes, a variety of experimental and computational methods are used. Integration of information from different sources is a powerful approach to enhance such annotation. Functional analysis of microbial genomes, necessary for downstream experiments, crucially depends on this annotation but it is hampered by the current lack of suitable information integration and exploration systems for microbial genomes. Results We developed a data warehouse system (INDIGO) that enables the integration of annotations for exploration and analysis of newly sequenced microbial genomes. INDIGO offers an opportunity to construct complex queries and combine annotations from multiple sources starting from genomic sequence to protein domain, gene ontology and pathway levels. This data warehouse is aimed at being populated with information from genomes of pure cultures and uncultured single cells of Red Sea bacteria and Archaea. Currently, INDIGO contains information from Salinisphaera shabanensis, Haloplasma contractile, and Halorhabdus tiamatea - extremophiles isolated from deep-sea anoxic brine lakes of the Red Sea. We provide examples of utilizing the system to gain new insights into specific aspects of the unique lifestyle and adaptations of these organisms to extreme environments. Conclusions We developed a data warehouse system, INDIGO, which enables comprehensive integration of information from various resources to be used for annotation, exploration and analysis of microbial genomes. It will be regularly updated and extended with new genomes. It is aimed to serve as a resource dedicated to the Red Sea microbes. In addition, through INDIGO, we provide our Automatic Annotation of Microbial Genomes (AAMG) pipeline. The INDIGO web server is freely available at http://www.cbrc.kaust.edu.sa/indigo. PMID:24324765
SLAE–CPS: Smart Lean Automation Engine Enabled by Cyber-Physical Systems Technologies
Ma, Jing; Wang, Qiang; Zhao, Zhibiao
2017-01-01
In the context of Industry 4.0, the demand for the mass production of highly customized products will lead to complex products and an increasing demand for production system flexibility. Simply implementing lean production-based human-centered production or high automation to improve system flexibility is insufficient. Currently, lean automation (Jidoka) that utilizes cyber-physical systems (CPS) is considered a cost-efficient and effective approach for improving system flexibility under shrinking global economic conditions. Therefore, a smart lean automation engine enabled by CPS technologies (SLAE–CPS), based on an analysis of Jidoka functions and the smart capacity of CPS technologies, is proposed in this study to provide an integrated and standardized approach to design and implement a CPS-based smart Jidoka system. Achieving this goal requires a comprehensive architecture and a set of standardized key technologies. Therefore, a distributed architecture that joins service-oriented architecture, agents, function blocks (FBs), cloud, and the Internet of things is proposed to support the flexible configuration, deployment, and performance of SLAE–CPS. Then, several standardized key techniques are proposed under this architecture. The first converts heterogeneous physical data into uniform services for subsequent abnormality analysis and detection. The second is a set of Jidoka scene rules, abstracted from analysis of the operator, machine, material, quality, and other factors in different time dimensions; these Jidoka rules support executive FBs in performing different Jidoka functions. Finally, supported by the integrated and standardized approach of the proposed engine, a case study is conducted to verify the current research results. The proposed SLAE–CPS can serve as an important reference for combining the benefits of innovative technology and proper methodology. PMID:28657577
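The flavor of declarative Jidoka scene rules can be sketched as conditions over machine, material and quality signals that trigger stop/alert actions; the rule contents and field names below are hypothetical, not the paper's rule set.

# Toy "Jidoka scene rule" table: conditions -> abnormality-handling actions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Scene:
    spindle_temp_c: float
    defect_rate: float
    material_present: bool

RULES: list[tuple[str, Callable[[Scene], bool], str]] = [
    ("overheat",      lambda s: s.spindle_temp_c > 90.0, "stop_machine"),
    ("quality_drift", lambda s: s.defect_rate > 0.02,    "alert_operator"),
    ("starvation",    lambda s: not s.material_present,  "pause_line"),
]

def evaluate(scene: Scene) -> list[str]:
    """Return the actions fired by all matching rules (abnormality detection)."""
    return [action for name, cond, action in RULES if cond(scene)]

print(evaluate(Scene(spindle_temp_c=95.0, defect_rate=0.01, material_present=True)))
# -> ['stop_machine']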
PyContact: Rapid, Customizable, and Visual Analysis of Noncovalent Interactions in MD Simulations.
Scheurer, Maximilian; Rodenkirch, Peter; Siggel, Marc; Bernardi, Rafael C; Schulten, Klaus; Tajkhorshid, Emad; Rudack, Till
2018-02-06
Molecular dynamics (MD) simulations have become ubiquitous in all areas of life sciences. The size and model complexity of MD simulations are rapidly growing along with increasing computing power and improved algorithms. This growth has led to the production of a large amount of simulation data that need to be filtered for relevant information to address specific biomedical and biochemical questions. One of the most relevant molecular properties that can be investigated by all-atom MD simulations is the time-dependent evolution of the complex noncovalent interaction networks governing such fundamental aspects as molecular recognition, binding strength, and mechanical and structural stability. Extracting, evaluating, and visualizing noncovalent interactions is a key task in the daily work of structural biologists. We have developed PyContact, an easy-to-use, highly flexible, and intuitive graphical user interface-based application, designed to provide a toolkit to investigate biomolecular interactions in MD trajectories. PyContact is designed to facilitate this task by enabling identification of relevant noncovalent interactions in a comprehensible manner. The implementation of PyContact as a standalone application enables rapid analysis and data visualization without any additional programming requirements, and also preserves full in-program customization and extension capabilities for advanced users. The statistical analysis representation is interactively combined with full mapping of the results on the molecular system through the synergistic connection between PyContact and VMD. We showcase the capabilities and scientific significance of PyContact by analyzing and visualizing in great detail the noncovalent interactions underlying the ion permeation pathway of the human P2X3 receptor. As a second application, we examine the protein-protein interaction network of the mechanically ultrastable cohesin-dockerin complex. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
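The core computation behind contact analysis tools of this kind is a per-frame distance check between two atom selections; the NumPy sketch below illustrates that idea on random coordinates and is a generic illustration, not PyContact's actual API.

# Per-frame distance-cutoff contact detection and occupancy; toy coordinates.
import numpy as np

rng = np.random.default_rng(1)
n_frames, n_a, n_b = 100, 30, 40
sel_a = rng.uniform(0, 50, size=(n_frames, n_a, 3))   # toy trajectory, Angstroms
sel_b = rng.uniform(0, 50, size=(n_frames, n_b, 3))

CUTOFF = 4.0  # heavy-atom contact cutoff in Angstroms (a common choice)

# Pairwise distances per frame: shape (frames, n_a, n_b)
diff = sel_a[:, :, None, :] - sel_b[:, None, :, :]
dist = np.linalg.norm(diff, axis=-1)
contacts = dist < CUTOFF

# Contact occupancy: fraction of frames in which each pair is in contact
occupancy = contacts.mean(axis=0)
i, j = np.unravel_index(occupancy.argmax(), occupancy.shape)
print(f"most persistent pair ({i}, {j}) occupied {occupancy[i, j]:.0%} of frames")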
Watanabe, Toshiki
2017-03-02
Adult T-cell leukemia (ATL) is an aggressive T-cell malignancy caused by human T-cell leukemia virus type 1 (HTLV-1) that develops through a multistep carcinogenesis process involving 5 or more genetic events. We provide a comprehensive overview of recently uncovered information on the molecular basis of leukemogenesis in ATL. Broadly, the landscape of genetic abnormalities in ATL includes alterations highly enriched in genes for T-cell receptor-NF-κB signaling, such as PLCG1, PRKCB, and CARD11, and gain-of-function mutations in CCR4 and CCR7. Conversely, the epigenetic landscape of ATL can be summarized as polycomb repressive complex 2 hyperactivation with genome-wide H3K27me3 accumulation as the basis of the unique transcriptome of ATL cells. Expression of the H3K27 methyltransferase enhancer of zeste 2 was shown to be induced by HTLV-1 Tax and NF-κB. Furthermore, provirus integration site analysis with high-throughput sequencing enabled the analysis of clonal composition and cell number of each clone in vivo, whereas multicolor flow cytometric analysis with CD7 and cell adhesion molecule 1 enabled the identification of HTLV-1-infected CD4+ T cells in vivo. Sorted immortalized but untransformed cells displayed epigenetic changes closely overlapping those observed in terminally transformed ATL cells, suggesting that epigenetic abnormalities are likely earlier events in leukemogenesis. These new findings broaden the scope of conceptualization of the molecular mechanisms of leukemogenesis, dissecting them into immortalization and clonal progression. These recent findings also open a new direction of drug development for ATL prevention and treatment because epigenetic marks can be reprogrammed. Mechanisms underlying initial immortalization and progressive accumulation of these abnormalities remain to be elucidated. © 2017 by The American Society of Hematology.
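Integration-site read counts yield per-clone abundances, which are commonly summarized with a normalized Shannon entropy (one minus this value is often reported as a clonality score); a sketch on toy counts, as one standard summary rather than a method attributed to this paper, follows.

# Shannon-entropy-based clonality summary of integration-site counts; toy data.
import numpy as np

clone_reads = np.array([1200, 300, 80, 40, 20, 10, 5])  # reads per integration site
p = clone_reads / clone_reads.sum()

entropy = -(p * np.log(p)).sum()
normalized = entropy / np.log(len(p))     # 1.0 = perfectly polyclonal
clonality = 1.0 - normalized              # near 1.0 = dominated by a single clone
print(f"clonality index: {clonality:.2f}")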
Huang, Yen-Ming; Shiyanbola, Olayinka O; Smith, Paul D; Chan, Hsun-Yu
2018-01-01
Introduction The Newest Vital Sign (NVS) is a survey designed to measure general health literacy whereby an interviewer asks six questions related to information printed on a nutritional label from an ice cream container. It enables researchers to evaluate several health literacy dimensions in a short period of time, including document literacy, comprehension, quantitative literacy (numeracy), application, and evaluation. No study has empirically examined which items belong to which latent dimensions of health literacy in the NVS using factor analysis. Identifying the factor structure of the NVS would enable health care providers to choose appropriate intervention strategies to address patients' health literacy as well as improve their health outcomes accordingly. This study aimed to explore the factor structure of the NVS that is used to assess multiple dimensions of health literacy. Methods A cross-sectional study administering the NVS in a face-to-face manner was conducted at two family medicine clinics in the USA. The 174 participants were at least 20 years old, diagnosed with type 2 diabetes, prescribed at least one oral diabetes medicine, and used English as their primary language. Exploratory factor analysis and confirmatory factor analysis were conducted to investigate the factor structure of the NVS. Results Numeracy and document literacy are two dimensions of health literacy that were identified and accounted for 63.05% of the variance in the NVS. Internal consistency (Cronbach's alpha) was 0.78 for numeracy and 0.91 for document literacy. Conclusion Numeracy and document literacy appropriately represent the factor structure of the NVS and may be used for assessing health literacy in greater detail for patients with type 2 diabetes. PMID:29844661
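The reported internal consistencies follow the standard Cronbach's alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score); a sketch on simulated item scores (not the study's data) follows.

# Cronbach's alpha on a (respondents x items) score matrix; simulated data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (respondents, k items) matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(2)
ability = rng.normal(size=(200, 1))                               # latent trait
items = (ability + rng.normal(scale=0.8, size=(200, 4)) > 0).astype(float)
print(f"alpha = {cronbach_alpha(items):.2f}")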
Comprehensive benefit analysis of regional water resources based on multi-objective evaluation
NASA Astrophysics Data System (ADS)
Chi, Yixia; Xue, Lianqing; Zhang, Hui
2018-01-01
The purpose of comprehensive regional water resources benefit analysis is to maximize the combined social, economic and ecological-environmental benefits. Addressing the defects of the traditional analytic hierarchy process in the evaluation of water resources, this study proposed a comprehensive benefit evaluation index covering social, economic and environmental benefits, viewing the comprehensive benefit of water resources from the perspective of the social, economic and environmental systems; determined the index weights by an improved fuzzy analytic hierarchy process (AHP); calculated the relative index of water resources comprehensive benefit; and analyzed the comprehensive benefit of water resources in Xiangshui County with a multi-objective evaluation model. Based on the water resources data in Xiangshui County, 20 main comprehensive benefit assessment factors across the 5 districts of Xiangshui County were evaluated. The results showed that the comprehensive benefit of Xiangshui County was 0.7317, and that the social economy still has room for further development under the current water resources situation.
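The composite figure is a weighted aggregation of normalized indicator scores, with weights from the (fuzzy) AHP step; the weights and scores below are hypothetical and do not reproduce the study's 0.7317 result.

# Weighted composite benefit from AHP weights; hypothetical numbers.
import numpy as np

weights = np.array([0.40, 0.35, 0.25])   # social, economic, environmental (AHP)
scores = np.array([0.76, 0.71, 0.69])    # normalized indicator scores in [0, 1]

assert abs(weights.sum() - 1.0) < 1e-9   # AHP weights should sum to one
composite = float(weights @ scores)
print(f"comprehensive benefit: {composite:.4f}")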
Automated processing of zebrafish imaging data: a survey.
Mikut, Ralf; Dickmeis, Thomas; Driever, Wolfgang; Geurts, Pierre; Hamprecht, Fred A; Kausler, Bernhard X; Ledesma-Carbayo, María J; Marée, Raphaël; Mikula, Karol; Pantazis, Periklis; Ronneberger, Olaf; Santos, Andres; Stotzka, Rainer; Strähle, Uwe; Peyriéras, Nadine
2013-09-01
Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate terabytes of image data, the handling and analysis of which becomes a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent examples for applications using such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines. PMID:23758125
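One common preprocessing recipe from this literature is smoothing, thresholding and connected-component labeling to count objects (for example, embryos in a well); the sketch below uses a synthetic image and a simple global threshold, whereas real pipelines add many dataset-specific steps.

# Smooth, threshold and label a synthetic image to count bright objects.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
img = rng.normal(0.1, 0.05, size=(128, 128))       # background noise
img[30:50, 30:50] += 0.8                           # two bright "objects"
img[80:100, 70:95] += 0.8

smoothed = ndimage.gaussian_filter(img, sigma=2)
threshold = smoothed.mean() + 2 * smoothed.std()   # simple global threshold
mask = smoothed > threshold

labels, n_objects = ndimage.label(mask)
sizes = ndimage.sum(mask, labels, index=range(1, n_objects + 1))
print(f"{n_objects} objects, sizes: {sizes.astype(int)}")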
Capturing the 'ome': the expanding molecular toolbox for RNA and DNA library construction.
Boone, Morgane; De Koker, Andries; Callewaert, Nico
2018-04-06
All sequencing experiments and most functional genomics screens rely on the generation of libraries to comprehensively capture pools of targeted sequences. In the past decade especially, driven by the progress in the field of massively parallel sequencing, numerous studies have comprehensively assessed the impact of particular manipulations on library complexity and quality, and characterized the activities and specificities of several key enzymes used in library construction. Fortunately, careful protocol design and reagent choice can substantially mitigate many of the biases these manipulations introduce, and enable reliable representation of sequences in libraries. This review aims to guide the reader through the vast expanse of literature on the subject to promote informed library generation, independent of the application.
Beyond MEDLINE for literature searches.
Conn, Vicki S; Isaramalai, Sang-arun; Rath, Sabyasachi; Jantarakupt, Peeranuch; Wadhawan, Rohini; Dash, Yashodhara
2003-01-01
To describe strategies for a comprehensive literature search. MEDLINE searches result in limited numbers of studies that are often biased toward statistically significant findings. Diversified search strategies are needed. Empirical evidence about the recall and precision of diverse search strategies is presented. Challenges and strengths of each search strategy are identified. Search strategies vary in recall and precision. Often sensitivity and specificity are inversely related. Valuable search strategies include examination of multiple diverse computerized databases, ancestry searches, citation index searches, examination of research registries, journal hand searching, contact with the "invisible college," examination of abstracts, Internet searches, and contact with sources of synthesized information. Extending searches beyond MEDLINE enables researchers to conduct more systematic comprehensive searches.
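Recall and precision, the two metrics the authors use to compare strategies, are simple set ratios; the toy calculation below (with invented study IDs and counts) shows why broadening a search typically raises recall while lowering precision.

```python
# Recall = relevant retrieved / all relevant;
# precision = relevant retrieved / all retrieved.
def recall_precision(retrieved: set, relevant: set) -> tuple[float, float]:
    hits = len(retrieved & relevant)
    return hits / len(relevant), hits / len(retrieved)

# Hypothetical example: MEDLINE alone vs. MEDLINE plus an ancestry search.
relevant = set(range(100))                 # 100 truly eligible studies
medline = set(range(40)) | {900, 901}      # finds 40, plus 2 irrelevant hits
combined = set(range(70)) | set(range(900, 920))

print(recall_precision(medline, relevant))   # high precision, low recall
print(recall_precision(combined, relevant))  # recall rises, precision falls
```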
Improving Reading Comprehension Using Digital Text: A Meta-Analysis of Interventions
ERIC Educational Resources Information Center
Berkeley, Sheri; Kurz, Leigh Ann; Boykin, Andrea; Evmenova, Anya S.
2015-01-01
Much is known about how to improve students' comprehension when reading printed text; less is known about outcomes when reading digital text. The purpose of this meta-analysis was to analyze research on the impact of digital text interventions. A comprehensive literature search resulted in 27 group intervention studies with 16,513 participants.…
Araujo, Luiz H.; Timmers, Cynthia; Bell, Erica Hlavin; Shilo, Konstantin; Lammers, Philip E.; Zhao, Weiqiang; Natarajan, Thanemozhi G.; Miller, Clinton J.; Zhang, Jianying; Yilmaz, Ayse S.; Liu, Tom; Coombes, Kevin; Amann, Joseph; Carbone, David P.
2015-01-01
Purpose: Technologic advances have enabled the comprehensive analysis of genetic perturbations in non–small-cell lung cancer (NSCLC); however, African Americans have often been underrepresented in these studies. This ethnic group has higher lung cancer incidence and mortality rates, and some studies have suggested a lower incidence of epidermal growth factor receptor mutations. Herein, we report the most in-depth molecular profile of NSCLC in African Americans to date. Methods: A custom panel was designed to cover the coding regions of 81 NSCLC-related genes and 40 ancestry-informative markers. Clinical samples were sequenced on a massively parallel sequencing instrument, and anaplastic lymphoma kinase translocation was evaluated by fluorescent in situ hybridization. Results: The study cohort included 99 patients (61% males, 94% smokers) comprising 31 squamous and 68 nonsquamous cell carcinomas. We detected 227 nonsilent variants in the coding sequence, including 24 samples with nonoverlapping, classic driver alterations. The frequency of driver mutations was not significantly different from that of whites, and no association was found between genetic ancestry and the presence of somatic mutations. Copy number alteration analysis disclosed distinguishable amplifications in the 3q chromosome arm in squamous cell carcinomas and pointed toward a handful of targetable alterations. We also found frequent SMARCA4 mutations and protein loss, mostly in driver-negative tumors. Conclusion: Our data suggest that African American ancestry may not be significantly different from European/white background for the presence of somatic driver mutations in NSCLC. Furthermore, we demonstrated that using a comprehensive genotyping approach could identify numerous targetable alterations, with potential impact on therapeutic decisions. PMID:25918285
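The reported comparison of driver-mutation frequencies between cohorts is, at its core, a 2x2 contingency-table test. The sketch below runs Fisher's exact test with SciPy on invented counts; it illustrates the style of analysis only and does not reproduce the study's data or code.

```python
# Compare driver-mutation frequency between two cohorts with Fisher's exact test.
from scipy.stats import fisher_exact

# Hypothetical counts: [tumors with a driver, tumors without] per cohort.
african_american = [24, 75]   # e.g., 24 of 99 tumors with a classic driver
white = [30, 90]              # made-up comparison cohort

odds_ratio, p_value = fisher_exact([african_american, white])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
# A large p-value would be consistent with "no significant difference".
```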
Huang, Jingshan; Eilbeck, Karen; Smith, Barry; Blake, Judith A; Dou, Dejing; Huang, Weili; Natale, Darren A; Ruttenberg, Alan; Huan, Jun; Zimmermann, Michael T; Jiang, Guoqian; Lin, Yu; Wu, Bin; Strachan, Harrison J; He, Yongqun; Zhang, Shaojie; Wang, Xiaowei; Liu, Zixing; Borchert, Glen M; Tan, Ming
2016-01-01
In recent years, sequencing technologies have enabled the identification of a wide range of non-coding RNAs (ncRNAs). Unfortunately, annotation and integration of ncRNA data have lagged behind their identification. Given the large quantity of information being obtained in this area, there is an urgent need to integrate what is being discovered by a broad range of relevant communities. To this end, the Non-Coding RNA Ontology (NCRO) is being developed to provide a systematically structured and precisely defined controlled vocabulary for the domain of ncRNAs, thereby facilitating the discovery, curation, analysis, exchange, and reasoning of data about structures of ncRNAs, their molecular and cellular functions, and their impacts upon phenotypes. The goal of NCRO is to serve as a common resource for annotations of diverse research in a way that will significantly enhance integrative and comparative analysis of the myriad resources currently housed in disparate sources. It is our belief that the NCRO ontology can play an important role in the comprehensive unification of ncRNA biology and, indeed, fill a critical gap in both the Open Biological and Biomedical Ontologies (OBO) Library and the National Center for Biomedical Ontology (NCBO) BioPortal. Our initial focus is on the ontological representation of small regulatory ncRNAs, which we see as the first step in providing a resource for the annotation of data about all forms of ncRNAs. The NCRO ontology is free and open to all users, accessible at http://purl.obolibrary.org/obo/ncro.owl.
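Because NCRO is published as an OWL file at a stable PURL, it can be inspected programmatically. A minimal sketch using rdflib, assuming network access and that the PURL serves RDF/XML, which prints a handful of class labels:

```python
# Load the NCRO OWL file and print a few labeled classes.
from itertools import islice
from rdflib import Graph
from rdflib.namespace import OWL, RDF, RDFS

g = Graph()
g.parse("http://purl.obolibrary.org/obo/ncro.owl", format="xml")

labeled = ((cls, label)
           for cls in g.subjects(RDF.type, OWL.Class)
           for label in g.objects(cls, RDFS.label))
for cls, label in islice(labeled, 10):
    print(cls, "->", label)
```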
Baiocchi, Claudio; Medana, Claudio; Giancotti, Valeria; Aigotti, Riccardo; Dal Bello, Frederica; Massolino, Cristina; Gastaldi, Daniela; Grandi, Maurizio
2013-01-01
The many effects of the African medicinal herb Desmodium adscendens were studied in the 1980s and 1990s. In spite of this, a comprehensive analytical protocol for the quality control of its constituents (soyasaponins, alkaloids and flavonoids) has not yet been formulated and reported. This study deals with the optimization of extraction conditions from the plant and qualitative identification of the constituents by HPLC-diode array UV and multistage mass spectrometry. Plant constituents were extracted from leaves by liquid-liquid and solid matrix dispersion extraction. Separation was achieved via RP-C18 liquid chromatography with UV and MS(n) detection, and mass spectrometry analysis was conducted by electrospray ionization ion trap or Orbitrap mass spectrometry. High-resolution mass spectrometry (HRMS) was used for structural identification of active molecules relating to soyasaponins and alkaloids. The flavonoid fragmentations were preliminarily studied by HRMS in order to accurately characterize the more common neutral losses. However, the high number of isomeric species led us to employ a more extensive chromatographic separation so that tandem mass spectrometry and ultraviolet spectral interpretation could support a reasonable chemical classification of these polyphenols. Thirty-five compounds of this class were identified herein, compared with the five previously reported in the literature; in this way, we assembled a comprehensive protocol for the qualitative analysis of the highly complex constituents of this plant. This result paves the way both for reliable quality control of potential phytochemical medicaments and for possible future systematic clinical studies.
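For singly charged ions, the neutral-loss reasoning used for the flavonoids is plain mass arithmetic: the loss is the precursor m/z minus the fragment m/z, matched against known monoisotopic losses within a tolerance. The sketch below uses a generic loss table and tolerance rather than values from the study, and the example precursor/fragment pair is hypothetical.

```python
# Match precursor/fragment m/z differences against common neutral losses
# (singly charged ions assumed, so the m/z difference equals the lost mass).
COMMON_LOSSES = {          # monoisotopic masses in Da
    "H2O": 18.0106,
    "CO": 27.9949,
    "CO2": 43.9898,
    "pentose (C5H8O4)": 132.0423,
    "deoxyhexose (C6H10O4)": 146.0579,
    "hexose (C6H10O5)": 162.0528,
}

def match_neutral_loss(precursor_mz: float, fragment_mz: float,
                       tol_da: float = 0.01) -> list[str]:
    loss = precursor_mz - fragment_mz
    return [name for name, mass in COMMON_LOSSES.items()
            if abs(loss - mass) <= tol_da]

# Hypothetical flavonoid glycoside losing a hexose to give its aglycone.
print(match_neutral_loss(449.1078, 287.0550))  # -> ['hexose (C6H10O5)']
```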
Does the microbiome and virome contribute to myalgic encephalomyelitis/chronic fatigue syndrome?
Newberry, Fiona; Hsieh, Shen-Yuan; Wileman, Tom; Carding, Simon R
2018-03-15
Myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) is a disabling and debilitating disease of unknown aetiology. It is a heterogeneous disease characterized by various inflammatory, immune, viral, neurological and endocrine symptoms. Several microbiome studies have described alterations in the bacterial component of the microbiome (dysbiosis) consistent with a possible role in disease development. However, in focusing on the bacterial components of the microbiome, these studies have neglected the viral constituent known as the virome. Viruses, particularly those infecting bacteria (bacteriophages), have the potential to alter the function and structure of the microbiome via gene transfer and host lysis. Viral-induced microbiome changes can directly and indirectly influence host health and disease. The contribution of viruses towards disease pathogenesis is therefore an important area for research in ME/CFS. Recent advancements in sequencing technology and bioinformatics now allow more comprehensive and inclusive investigations of human microbiomes. However, as the number of microbiome studies increases, so does the need for greater consistency in study design and analysis. Comparisons between different ME/CFS microbiome studies are difficult because of differences in patient selection and diagnostic criteria, sample processing, genome sequencing and downstream bioinformatics analysis. It is therefore important that microbiome studies adopt robust, reproducible and consistent study designs to enable more reliable and valid comparisons and conclusions to be made between studies. This article provides a comprehensive review of the current evidence supporting microbiome alterations in ME/CFS patients. Additionally, the pitfalls and challenges associated with microbiome studies are discussed. © 2018 The Author(s).
Soussan, Christophe; Andersson, Martin; Kjellgren, Anette
2018-02-01
The increasing number of legally ambiguous and precarious Novel Psychoactive Substances (NPS) constitutes a challenge for policy makers and public health. Scientific and more in-depth knowledge about the motivations for using NPS is scarce and often consists of predetermined, non-systematic, or poorly described reasons deduced from top-down approaches. Therefore, the aim of the present study was to explore and characterize the users' self-reported reasons for NPS use inductively and more comprehensively. The self-reported reasons of a self-selected sample of 613 international NPS users were collected via an online survey promoted at the international drug discussion forum bluelight.org and later analyzed qualitatively using inductive thematic analysis. The analysis showed that the participants used NPS because these compounds reportedly: 1) enabled safer and more convenient drug use, 2) satisfied a curiosity and interest about the effects, 3) facilitated a novel and exciting adventure, 4) promoted self-exploration and personal growth, 5) functioned as coping agents, 6) enhanced abilities and performance, 7) fostered social bonding and belonging, and 8) acted as a means for recreation and pleasure. The consumption of NPS was also driven by 9) problematic and unintentional use. The present study contributed to a more comprehensive understanding of the users' own self-reported reasons for using NPS, which need to be acknowledged not only to minimize drug-related harm and drug user alienation but also to improve prevention efforts and reduce the potentially counter-intuitive effects of strictly prohibitive policies. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Einstein, Daniel R.; Del Pin, Facundo; Jiao, Xiangmin; Kuprat, Andrew P.; Carson, James P.; Kunzelman, Karyn S.; Cochran, Richard P.; Guccione, Julius M.; Ratcliffe, Mark B.
2009-01-01
The remodeling that occurs after a posterolateral myocardial infarction can alter mitral valve function by creating conformational abnormalities in the mitral annulus and in the posteromedial papillary muscle, leading to mitral regurgitation (MR). It is generally assumed that this remodeling is caused by a volume load and is mediated by an increase in diastolic wall stress. Thus, mitral regurgitation can be both the cause and effect of an abnormal cardiac stress environment. Computational modeling of ischemic MR and its surgical correction is attractive because it enables an examination of whether a given intervention addresses the correction of regurgitation (fluid flow) at the cost of abnormal tissue stress. This is significant because the negative effects of an increased wall stress due to the intervention will only become evident over time. However, a meaningful fluid-structure interaction model of the left heart is not trivial; it requires a careful characterization of the in vivo cardiac geometry, tissue parameterization through inverse analysis, a robust coupled solver that handles collapsing Lagrangian interfaces, automatic grid-generation algorithms that are capable of accurately discretizing the cardiac geometry, innovations in image analysis, competent and efficient constitutive models, and an understanding of the spatial organization of tissue microstructure. In this manuscript, we profile our work toward a comprehensive fluid-structure interaction model of the left heart by reviewing our early work, presenting our current work, and laying out our future work in four broad categories: data collection, geometry, fluid-structure interaction, and validation. PMID:20454531
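The tissue-parameterization-through-inverse-analysis step can be caricatured in one dimension: choose constitutive parameters that minimize the misfit between predicted and observed responses. The sketch below fits a Fung-type exponential stress-strain law with SciPy on synthetic data; the 1D form and all numbers are illustrative stand-ins, not the authors' formulation.

```python
# 1D inverse analysis sketch: fit sigma = a * (exp(b * eps) - 1) to observations.
import numpy as np
from scipy.optimize import least_squares

def model(params, strain):
    a, b = params
    return a * (np.exp(b * strain) - 1.0)

# Synthetic "measurements" generated from known parameters plus noise.
rng = np.random.default_rng(1)
true_params = (2.0, 8.0)                       # illustrative stiffness values
strain = np.linspace(0.0, 0.2, 25)
observed = model(true_params, strain) + rng.normal(0, 0.05, strain.size)

def residuals(params):
    return model(params, strain) - observed

fit = least_squares(residuals, x0=[1.0, 1.0])  # crude initial guess
print("recovered a, b:", fit.x)                # should approach (2.0, 8.0)
```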
Silagi, Marcela Lima; Rabelo, Camila Maia; Schochat, Eliane; Mansur, Letícia Lessa
2017-11-13
To analyze the effect of education on sentence listening comprehension in cognitively healthy elderly adults. A total of 111 healthy elderly participants of both genders, aged 60-80 years, were divided into two groups according to educational level: low education (0-8 years of formal education) and high education (≥9 years of formal education). The participants were assessed using the Revised Token Test, an instrument that supports the evaluation of auditory comprehension of orders with different working memory and syntactic complexity demands. The indicators used for performance analysis were the number of correct responses (accuracy analysis) and task execution time (temporal analysis) in the different blocks. The low-education group had fewer correct responses than the high-education group on all blocks of the test. In the temporal analysis, participants with low education took longer to execute commands in the first four blocks, which are related to working memory. However, the two groups had similar execution times for the blocks more closely related to syntactic comprehension. Education influenced sentence listening comprehension in the elderly. Temporal analysis allowed us to draw inferences about the relationship between comprehension and other cognitive abilities, and to observe that the low-educated elderly did not use effective compensation strategies to improve their performance on the task. Therefore, low educational level, associated with aging, may increase the risk of language decline.
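Analytically, the study reduces to comparing per-block accuracy and execution time between two groups. The sketch below shows that comparison with pandas and a nonparametric test on fabricated records; none of the numbers come from the study.

```python
# Compare correct responses and execution time between education groups.
import pandas as pd
from scipy.stats import mannwhitneyu

# Fabricated records: one row per participant per block.
df = pd.DataFrame({
    "group": ["low"] * 4 + ["high"] * 4,
    "block": [1, 2, 1, 2, 1, 2, 1, 2],
    "correct": [7, 5, 6, 4, 9, 8, 10, 9],
    "time_s": [41.0, 55.2, 44.5, 58.1, 30.2, 38.7, 28.9, 36.4],
})

print(df.groupby("group")[["correct", "time_s"]].mean())

low = df.loc[df.group == "low", "time_s"]
high = df.loc[df.group == "high", "time_s"]
stat, p = mannwhitneyu(low, high, alternative="two-sided")
print(f"Mann-Whitney U = {stat}, p = {p:.3f}")
```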
Advancements in web-database applications for rabies surveillance.
Rees, Erin E; Gendron, Bruno; Lelièvre, Frédérick; Coté, Nathalie; Bélanger, Denise
2011-08-02
Protection of public health from rabies is informed by the analysis of surveillance data from human and animal populations. In Canada, public health, agricultural and wildlife agencies at the provincial and federal level are responsible for rabies disease control, and this has led to multiple agency-specific data repositories. Aggregation of agency-specific data into one database application would enable more comprehensive data analyses and effective communication among participating agencies. In Québec, RageDB was developed to house surveillance data for the raccoon rabies variant, representing the next generation in web-based database applications that provide a key resource for the protection of public health. RageDB incorporates data from, and grants access to, all agencies responsible for the surveillance of raccoon rabies in Québec. Technological advancements that RageDB brings to rabies surveillance databases include (1) automatic integration of multi-agency data and diagnostic results on a daily basis; (2) a web-based data editing interface that enables authorized users to add, edit and extract data; and (3) an interactive dashboard to help visualize data simply and efficiently, in table, chart, and cartographic formats. Furthermore, RageDB stores data from citizens who voluntarily report sightings of rabies suspect animals. We also discuss how sightings data can indicate public perception of the risk of raccoon rabies and thus aid in directing the allocation of disease control resources for protecting public health. RageDB provides an example in the evolution of spatio-temporal database applications for the storage, analysis and communication of disease surveillance data. The database was fast and inexpensive to develop by using open-source technologies, simple and efficient design strategies, and shared web hosting. The database increases communication among agencies collaborating to protect human health from raccoon rabies. Furthermore, health agencies have real-time access to a wide assortment of data documenting new developments in the raccoon rabies epidemic, and this enables a more timely and appropriate response.
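RageDB's central feature, daily automatic integration of multi-agency records into one repository, can be sketched with the Python standard library alone. The file names and column layout below are a hypothetical schema chosen for illustration; the abstract does not describe RageDB's actual schema or implementation.

```python
# Nightly-job sketch: merge per-agency CSV exports into one SQLite table.
import csv
import sqlite3
from pathlib import Path

DB = "rabies_surveillance.db"          # hypothetical database file
AGENCY_FILES = ["public_health.csv", "agriculture.csv", "wildlife.csv"]

def integrate(files: list[str]) -> None:
    con = sqlite3.connect(DB)
    con.execute("""CREATE TABLE IF NOT EXISTS submissions (
        agency TEXT, sample_id TEXT, species TEXT,
        collected TEXT, diagnosis TEXT,
        PRIMARY KEY (agency, sample_id))""")
    for path in files:
        if not Path(path).exists():
            continue                    # an agency may not export every day
        with open(path, newline="") as fh:
            for row in csv.DictReader(fh):
                con.execute(
                    "INSERT OR REPLACE INTO submissions VALUES (?,?,?,?,?)",
                    (row["agency"], row["sample_id"], row["species"],
                     row["collected"], row["diagnosis"]))
    con.commit()
    con.close()

if __name__ == "__main__":
    integrate(AGENCY_FILES)
```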
High-Resolution Water Footprints of Production of the United States
NASA Astrophysics Data System (ADS)
Marston, Landon; Ao, Yufei; Konar, Megan; Mekonnen, Mesfin M.; Hoekstra, Arjen Y.
2018-03-01
The United States is the largest producer of goods and services in the world. Rainfall, surface water supplies, and groundwater aquifers represent a fundamental input to economic production. Despite the importance of water resources to economic activity, we do not have consistent information on water use for specific locations and economic sectors. A national, spatially detailed database of water use by sector would provide insight into U.S. utilization and dependence on water resources for economic production. To this end, we calculate the water footprint of over 500 food, energy, mining, services, and manufacturing industries and goods produced in the United States. To do this, we employ a data-intensive approach that integrates water footprint and input-output techniques into a novel methodological framework. This approach enables us to present the most detailed and comprehensive water footprint analysis of any country to date. This study broadly contributes to our understanding of water in the U.S. economy, enables supply chain managers to assess direct and indirect water dependencies, and provides opportunities to reduce water use through benchmarking. In fact, we find that 94% of U.S. industries could reduce their total water footprint more by sourcing from more water-efficient suppliers in their supply chain than they could by converting their own operations to be more water-efficient.
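The combination of water-footprint and input-output techniques follows standard Leontief logic: total (direct plus supply-chain) water intensities equal the direct intensities multiplied by the total-requirements matrix (I - A)^-1. A three-sector toy example with made-up coefficients:

```python
# Toy Leontief calculation: total water multipliers from direct intensities.
import numpy as np

# Hypothetical technical-coefficient matrix A (inputs per dollar of output)
# for three sectors: agriculture, energy, manufacturing.
A = np.array([[0.10, 0.02, 0.05],
              [0.05, 0.15, 0.10],
              [0.10, 0.08, 0.20]])

# Hypothetical direct water use per dollar of output (m^3 / $).
w_direct = np.array([0.50, 0.05, 0.02])

L = np.linalg.inv(np.eye(3) - A)   # total-requirements (Leontief inverse)
w_total = w_direct @ L             # direct + indirect water multipliers

print("total water intensity by sector (m^3/$):", np.round(w_total, 3))
# Sectors with small direct use can still carry large embodied-water totals,
# which is why sourcing from water-efficient suppliers matters.
```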
NASA Astrophysics Data System (ADS)
Börries, S.; Metz, O.; Pranzas, P. K.; Bellosta von Colbe, J. M.; Bücherl, T.; Dornheim, M.; Klassen, T.; Schreyer, A.
2016-10-01
For the storage of hydrogen, complex metal hydrides are considered highly promising with respect to capacity, reversibility and safety. The optimization of corresponding storage tanks demands a precise and time-resolved investigation of the hydrogen distribution in scaled-up metal hydride beds. In this study it is shown that in situ fission Neutron Radiography provides unique insights into the spatial distribution of hydrogen even for scaled-up compacts and thereby enables a direct study of hydrogen storage tanks. A technique is introduced for the precise quantification of both time-resolved data and the a priori material distribution, allowing, inter alia, for an optimization of the compact manufacturing process. For the first time, several macroscopic fields are combined, which illustrates the great potential of Neutron Imaging for investigations of metal hydrides by going beyond solely 'imaging' the system: a combination of in situ Neutron Radiography, IR thermography and thermodynamic quantities can reveal the interdependency of different driving forces for a scaled-up sodium alanate pellet by means of a multi-correlation analysis. A decisive, time-resolved and complex influence of material packing density is derived. The results of this study enable a variety of new investigation possibilities that provide essential information for the optimization of future hydrogen storage tanks.
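The quantification rests on the exponential attenuation of neutrons: transmission T = I/I0 = exp(-sigma * N), so referencing a loaded image against the desorbed state isolates the hydrogen contribution. The sketch below applies that relation pixel-wise; the cross-section value and the tiny synthetic "images" are placeholders, not the study's calibration.

```python
# Estimate hydrogen areal density from neutron transmission images
# using the Beer-Lambert law: T = exp(-sigma * N_areal).
import numpy as np

SIGMA_H = 3.0e-23          # assumed effective cross-section, cm^2
                           # (placeholder; real values are energy-dependent)

def hydrogen_areal_density(I_loaded: np.ndarray,
                           I_desorbed: np.ndarray) -> np.ndarray:
    """Referencing the desorbed state cancels the bed's own attenuation."""
    transmission_ratio = I_loaded / I_desorbed
    return -np.log(transmission_ratio) / SIGMA_H   # H atoms per cm^2

# Synthetic 2x2 "images": the loaded pellet transmits less where H accumulates.
I_desorbed = np.array([[1000.0, 1000.0], [1000.0, 1000.0]])
I_loaded = np.array([[600.0, 650.0], [700.0, 980.0]])
print(hydrogen_areal_density(I_loaded, I_desorbed))
```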
Synthetic spike-in standards for high-throughput 16S rRNA gene amplicon sequencing
Tourlousse, Dieter M.; Yoshiike, Satowa; Ohashi, Akiko; Matsukura, Satoko; Noda, Naohiro
2017-01-01
High-throughput sequencing of 16S rRNA gene amplicons (16S-seq) has become a widely deployed method for profiling complex microbial communities but technical pitfalls related to data reliability and quantification remain to be fully addressed. In this work, we have developed and implemented a set of synthetic 16S rRNA genes to serve as universal spike-in standards for 16S-seq experiments. The spike-ins represent full-length 16S rRNA genes containing artificial variable regions with negligible identity to known nucleotide sequences, permitting unambiguous identification of spike-in sequences in 16S-seq read data from any microbiome sample. Using defined mock communities and environmental microbiota, we characterized the performance of the spike-in standards and demonstrated their utility for evaluating data quality on a per-sample basis. Further, we showed that staggered spike-in mixtures added at the point of DNA extraction enable concurrent estimation of absolute microbial abundances suitable for comparative analysis. Results also underscored that template-specific Illumina sequencing artifacts may lead to biases in the perceived abundance of certain taxa. Taken together, the spike-in standards represent a novel bioanalytical tool that can substantially improve 16S-seq-based microbiome studies by enabling comprehensive quality control along with absolute quantification. PMID:27980100
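The absolute-quantification idea reduces to a per-sample scaling factor: spike-in copies added divided by reads assigned to the spike-in converts any taxon's read count into an estimated copy number. A sketch with invented numbers (not values from the paper):

```python
# Convert 16S read counts to absolute copy estimates via spike-in scaling.
def absolute_abundance(taxon_reads: dict, spikein_reads: int,
                       spikein_copies_added: float) -> dict:
    copies_per_read = spikein_copies_added / spikein_reads
    return {taxon: reads * copies_per_read
            for taxon, reads in taxon_reads.items()}

# Hypothetical sample: 1e6 spike-in copies added at DNA extraction,
# 5,000 reads mapped back to the synthetic spike-in sequences.
counts = {"Bacteroides": 40_000, "Faecalibacterium": 25_000, "Escherichia": 3_000}
print(absolute_abundance(counts, spikein_reads=5_000,
                         spikein_copies_added=1e6))
# -> estimated copies per sample: Bacteroides 8.0e6, Faecalibacterium 5.0e6, ...
```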