Sharing behavioral data through a grid infrastructure using data standards
Min, Hua; Ohira, Riki; Collins, Michael A; Bondy, Jessica; Avis, Nancy E; Tchuvatkina, Olga; Courtney, Paul K; Moser, Richard P; Shaikh, Abdul R; Hesse, Bradford W; Cooper, Mary; Reeves, Dianne; Lanese, Bob; Helba, Cindy; Miller, Suzanne M; Ross, Eric A
2014-01-01
Objective In an effort to standardize behavioral measures and their data representation, the present study develops a methodology for incorporating measures found in the National Cancer Institute's (NCI) grid-enabled measures (GEM) portal, a repository for behavioral and social measures, into the cancer data standards registry and repository (caDSR). Methods The methodology consists of four parts for curating GEM measures into the caDSR: (1) develop unified modeling language (UML) models for behavioral measures; (2) create common data elements (CDE) for UML components; (3) bind CDE with concepts from the NCI thesaurus; and (4) register CDE in the caDSR. Results UML models have been developed for four GEM measures, which have been registered in the caDSR as CDE. New behavioral concepts related to these measures have been created and incorporated into the NCI thesaurus. Best practices for representing measures using UML models were applied in practice (eg, in the caDSR). One dataset based on a GEM-curated measure is available for use by other systems and users connected to the grid. Conclusions Behavioral and population science data can be standardized by using and extending current standards. A new branch of CDE for behavioral science was developed for the caDSR, expanding caDSR domain coverage beyond the clinical and biological areas. In addition, missing terms and concepts specific to the behavioral measures addressed in this paper were added to the NCI thesaurus. A methodology was developed and refined for curation of behavioral and population science data. PMID:24076749
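The four-step curation workflow above maps naturally onto a small data model. The following Python sketch illustrates that flow under stated assumptions: every class name, concept code, and the example measure are hypothetical, and this is not the caDSR or GEM API.

```python
# Hypothetical sketch of the four-step GEM-to-caDSR curation flow described
# above. Names and structures are illustrative only; this is not the caDSR API.
from dataclasses import dataclass, field

@dataclass
class Concept:            # an NCI Thesaurus concept (step 3 binds CDEs to these)
    code: str             # a made-up concept code such as "C0001"
    name: str

@dataclass
class CommonDataElement:  # step 2: one CDE per UML attribute
    name: str
    concepts: list[Concept] = field(default_factory=list)

@dataclass
class UMLClass:           # step 1: a UML class modeling one behavioral measure
    measure: str
    attributes: list[str]

def curate(measure: UMLClass, thesaurus: dict[str, Concept],
           registry: dict[str, CommonDataElement]) -> None:
    """Steps 2-4: derive CDEs, bind thesaurus concepts, register them."""
    for attr in measure.attributes:
        cde = CommonDataElement(name=f"{measure.measure}.{attr}")
        if attr in thesaurus:                     # step 3: concept binding
            cde.concepts.append(thesaurus[attr])
        registry[cde.name] = cde                  # step 4: registration

# Example with a made-up GEM measure containing two items
thesaurus = {"smoking_status": Concept("C0001", "Smoking Status")}
registry: dict[str, CommonDataElement] = {}
curate(UMLClass("TobaccoUse", ["smoking_status", "quit_attempts"]),
       thesaurus, registry)
print(sorted(registry))  # ['TobaccoUse.quit_attempts', 'TobaccoUse.smoking_status']
```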
Karaa, Amel; Rahman, Shamima; Lombès, Anne; Yu-Wai-Man, Patrick; Sheikh, Muniza K; Alai-Hansen, Sherita; Cohen, Bruce H; Dimmock, David; Emrick, Lisa; Falk, Marni J; McCormack, Shana; Mirsky, David; Moore, Tony; Parikh, Sumit; Shoffner, John; Taivassalo, Tanja; Tarnopolsky, Mark; Tein, Ingrid; Odenkirchen, Joanne C; Goldstein, Amy
2017-05-01
The common data elements (CDE) project was developed by the National Institute of Neurological Disorders and Stroke (NINDS) to provide clinical researchers with tools to improve data quality and allow for harmonization of data collected in different research studies. CDEs have been created for several neurological diseases; the aim of this project was to develop CDEs specifically curated for mitochondrial disease (Mito) to enhance clinical research. Nine working groups (WGs), composed of international mitochondrial disease experts, provided recommendations for Mito clinical research. They initially reviewed existing NINDS CDEs and instruments, and developed new data elements or instruments when needed. Recommendations were organized, internally reviewed by the Mito WGs, and posted online for external public comment for a period of eight weeks. The final version was again reviewed by all WGs and the NINDS CDE team prior to posting for public use. The NINDS Mito CDEs and supporting documents are publicly available on the NINDS CDE website (https://commondataelements.ninds.nih.gov/), organized into domain categories such as Participant/Subject Characteristics, Assessments, and Examinations. We developed a comprehensive set of CDE recommendations, data definitions, case report forms (CRFs), and guidelines for use in Mito clinical research. The widespread use of CDEs is intended to enhance Mito clinical research endeavors, including natural history studies, clinical trial design, and data sharing. Ongoing international collaboration will facilitate regular review, updates and online publication of Mito CDEs, and support improved consistency of data collection and reporting.
Controllable drug uptake and nongenomic response through estrogen-anchored cyclodextrin drug complex
Yin, Juan-Juan; Shumyak, Stepan P; Burgess, Christopher; Zhou, Zhi-Wei; He, Zhi-Xu; Zhang, Xue-Ji; Pan, Shu-Ting; Yang, Tian-Xin; Duan, Wei; Qiu, Jia-Xuan; Zhou, Shu-Feng
2015-01-01
Breast cancer is a leading killer of women worldwide. Cyclodextrin-based estrogen receptor-targeting drug-delivery systems represent a promising direction in cancer therapy but have rarely been investigated. To seek new targeting therapies for membrane estrogen receptor-positive breast cancer, an estrogen-anchored cyclodextrin encapsulating a doxorubicin derivative Ada-DOX (CDE1-Ada-DOX) has been synthesized and evaluated in human breast cancer MCF-7 cells. First, we synthesized estrone-conjugated cyclodextrin (CDE1), which formed the complex CDE1-Ada-DOX via molecular recognition with the derivative adamantane-doxorubicin (Ada-DOX) (Kd = 1,617 M−1). The structure of the targeting vector CDE1 was fully characterized using 1H- and 13C-nuclear magnetic resonance, mass spectrometry, and electron microscopy. CDE1-Ada-DOX showed two-phase drug-release kinetics with much slower release than Ada-DOX. Fluorescence polarization analysis revealed that CDE1-Ada-DOX binds to recombinant human estrogen receptor α fragments with a Kd of 0.027 µM. A competition assay of the drug complex with estrogen ligands demonstrated that estrone and tamoxifen competed with CDE1-Ada-DOX for membrane estrogen receptor binding in MCF-7 cells. Intermolecular self-assembly of CDE1 molecules was observed, showing tail-in-bucket and wire-like structures confirmed by transmission electron microscopy. CDE1-Ada-DOX had unexpectedly lower drug uptake (when the host–guest ratio was >1) than non-targeting drugs in MCF-7 cells, due to ligands ensconced in cyclodextrin cavities as a result of the intermolecular self-assembly. The uptake of CDE1-Ada-DOX was significantly increased when the host–guest ratio was reduced below one half at CDE1 concentrations over 5 µM, due to the release of the estrone residues. CDE1 elicited rapid activation of mitogen-activated protein kinases (p44/42 MAPK, Erk1/2) within minutes through phosphorylation of Thr202/Tyr204 in MCF-7 cells. These results demonstrate targeted therapeutic delivery of CDE1-Ada-DOX to breast cancer cells in a controlled manner and suggest that the drug vector CDE1 can potentially be employed as a molecular tool to differentiate nongenomic from genomic mechanisms. PMID:26251594
Vicentine, Fernando P P; Herbella, Fernando A M; Allaix, Marco E; Silva, Luciana C; Patti, Marco G
2014-02-01
Idiopathic achalasia (IA) and Chagas' disease esophagopathy (CDE) share several similarities. The comparison between IA and CDE is important to evaluate whether treatment options and their results can be accepted universally. High-resolution manometry (HRM) has proved to be a better diagnostic tool than conventional manometry. This study aims to evaluate HRM classifications for idiopathic achalasia in patients with CDE. We studied 98 patients: 52 patients with CDE (52% females; mean age 57 ± 14 years) and 46 patients with IA (54% females; mean age 48 ± 19 years). All patients underwent HRM and barium esophagogram. The Chicago classification was distributed in IA as Chicago I, 35%; Chicago II, 63%; and Chicago III, 2%, and in CDE as Chicago I, 52%; Chicago II, 48%; and Chicago III, 0% (p = 0.1, 0.1, and 0.5, respectively). All patients had the classic Rochester type. CDE patients had more pronounced degrees of esophageal dilatation (p < 0.002). The degree of esophageal dilatation did not correlate with Chicago classification (p = 0.08). In nine (9%) patients, the HRM pattern changed during the test from Chicago I to II. Our results show that (a) HRM classifications for IA can be applied in patients with CDE and (b) HRM classifications did not correlate with the degree of esophageal dilatation. HRM classifications may reflect esophageal repletion and pressurization rather than muscular contraction. The correlation between manometric findings and treatment outcomes for CDE remains to be established.
A Collaborative Decision Environment for UAV Operations
NASA Technical Reports Server (NTRS)
D'Ortenzio, Matthew V.; Enomoto, Francis Y.; Johan, Sandra L.
2005-01-01
NASA is developing Intelligent Mission Management (IMM) technology for science missions employing long-endurance unmanned aerial vehicles (UAVs). The IMM ground-based component is the Collaborative Decision Environment (CDE), a ground system that provides the Mission/Science team with situational awareness, collaboration, and decision-making tools. The CDE is used for pre-flight planning, mission monitoring, and visualization of acquired data. It integrates external data products used for planning and executing a mission, such as weather, satellite data products, and topographic maps, by leveraging established and emerging Open Geospatial Consortium (OGC) standards to acquire external data products via the Internet, and an industry-standard geographic information system (GIS) toolkit for visualization. As a Science/Mission team may be geographically dispersed, the CDE is capable of providing access to remote users across wide area networks using Web Services technology. A prototype CDE is being developed for an instrument checkout flight on a manned aircraft in the fall of 2005, in preparation for a full deployment in support of the US Forest Service and NASA Ames Western States Fire Mission in 2006.
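Since the CDE pulls external layers via OGC standards, a minimal concrete illustration is a Web Map Service (WMS) GetMap request. The Python sketch below builds a standard WMS 1.1.1 URL; the endpoint and layer name are placeholders, not actual CDE data sources.

```python
# Illustrative sketch of fetching an external map layer over OGC WMS, the kind
# of standards-based acquisition the CDE abstract describes. Endpoint and layer
# are hypothetical.
from urllib.parse import urlencode

def wms_getmap_url(base_url: str, layer: str, bbox: tuple, size=(512, 512)) -> str:
    """Build a standard WMS 1.1.1 GetMap request URL."""
    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": layer, "STYLES": "", "SRS": "EPSG:4326",
        "BBOX": ",".join(map(str, bbox)),            # minx,miny,maxx,maxy
        "WIDTH": size[0], "HEIGHT": size[1], "FORMAT": "image/png",
    }
    return f"{base_url}?{urlencode(params)}"

# Hypothetical weather layer over the western United States
print(wms_getmap_url("https://example.gov/wms", "weather:radar",
                     (-125.0, 32.0, -114.0, 42.0)))
```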
Rizzoli-Córdoba, Antonio; Ortega-Ríosvelasco, Fernando; Villasís-Keever, Miguel Ángel; Pizarro-Castellanos, Mariel; Buenrostro-Márquez, Guillermo; Aceves-Villagrán, Daniel; O'Shea-Cuevas, Gabriel; Muñoz-Hernández, Onofre
The Child Development Evaluation (CDE) is a screening tool designed and validated in Mexico for detecting developmental problems. The result is expressed through a traffic light-style (semaphore) indicator. In the CDE test, both yellow and red results are considered positive, although a different intervention is proposed for each. The aim of this work was to evaluate the reliability of the CDE test to discriminate between children with yellow/red results based on the developmental domain quotient (DDQ) obtained through the Battelle Development Inventory, 2nd edition (in Spanish) (BDI-2). The data for this study were obtained from the validation study. Children with a normal (green) result in the CDE were excluded. Two different cut-off points of the DDQ (BDI-2) were used: <90 to include the low average range, and <80 per domain to define developmental delay. Results were analyzed based on the correlation between the CDE test and each domain from the BDI-2 and by subgroups of age. With a cut-off DDQ<90, 86.8% of tests with a yellow result (CDE) indicated at least one affected domain and 50% indicated 3 or more, compared with 93.8% and 78.8%, respectively, for a red result. There were differences in every domain (P<0.001) in the percentage of children with DDQ<80 between yellow and red results (CDE): cognitive 36.1% vs. 61.9%; communication 27.8% vs. 50.4%; motor 18.1% vs. 39.9%; personal-social 20.1% vs. 28.9%; and adaptive 6.9% vs. 20.4%. The semaphore result (yellow/red) allows identification of different magnitudes of delay in developmental domains or subdomains, supporting the recommendation of a different intervention for each one. Copyright © 2014 Hospital Infantil de México Federico Gómez. Publicado por Masson Doyma México S.A. All rights reserved.
Common Data Elements for Muscle Biopsy Reporting
Dastgir, Jahannaz; Rutkowski, Anne; Alvarez, Rachel; Cossette, Stacy A.; Yan, Ke; Hoffmann, Raymond G.; Sewry, Caroline; Hayashi, Yukiko K.; Goebel, Hans-Hilmar; Bonnemann, Carsten; Lawlor, Michael W.
2016-01-01
Context There is no current standard among myopathologists for reporting muscle biopsy findings. The National Institute of Neurological Disorders and Stroke has recently launched a common data element (CDE) project to standardize neuromuscular data collected in clinical reports and to facilitate their use in research. Objective To develop a more uniform, prospective reporting tool for muscle biopsies, incorporating the elements identified by the CDE project, in an effort to improve reporting and educational resources. Design The variation in current biopsy reporting practice was evaluated through a study of 51 muscle biopsy reports from self-reported diagnoses of genetically confirmed or undiagnosed muscle disease from the Congenital Muscle Disease International Registry. Two reviewers independently extracted data from deidentified reports and entered them into the revised CDE format to identify what was missing and whether or not information provided on the revised CDE report (complete/incomplete) could be successfully interpreted by a neuropathologist. Results Analysis of the data showed (1) inconsistent reporting of key clinical features from referring physicians, and (2) considerable variability in the reporting of pertinent positive and negative histologic findings by pathologists. Conclusions We propose a format for muscle biopsy reporting that includes the elements in the CDE checklist and a brief narrative comment that interprets the data in support of a final interpretation. Such a format standardizes cataloging of pathologic findings across the spectrum of muscle diseases and serves emerging clinical care and research needs with the expansion of genetic testing and therapeutic trials. PMID:26132600
Villasís-Keever, Miguel Ángel; Rizzoli-Córdoba, Antonio; Delgado-Ginebra, Ismael; Mares-Serratos, Blanca Berenice; Martell-Valdez, Liliana; Sánchez-Velázquez, Olivia; Reyes-Morales, Hortensia; O'Shea-Cuevas, Gabriel; Aceves-Villagrán, Daniel; Carrasco-Mendoza, Joaquín; Antillón-Ocampo, Fátima Adriana; Villagrán-Muñoz, Víctor Manuel; Halley-Castillo, Elizabeth; Baqueiro-Hernández, César Iván; Pizarro-Castellanos, Mariel; Martain-Pérez, Itzamara Jacqueline; Palma-Tavera, Josuha Alexander; Vargas-López, Guillermo; Muñoz-Hernández, Onofre
The Child Development Evaluation (CDE) test designed and validated in Mexico has been used as a screening tool for developmental problems in primary care facilities across Mexico. Heterogeneous results were found among the states where it was applied, despite using the same standardized training model for application. The objective was to evaluate a supervision model for quality of application of the CDE test at primary care facilities. A study was carried out in primary care facilities from three Mexican states to evaluate concordance of results between the supervisor and the primary care personnel who administered the test, using two different methods: direct observation (shadow study) or reapplication of the CDE test (consistency study). There were 380 shadow studies applied to 51 psychologists. General concordance in the shadow study was 86.1% according to the supervisor: green 94.5%, yellow 73.2% and red 80.0%. There were 302 re-test evaluations with a concordance of 88.1% (n=266): green 96.8%, yellow 71.7% and red 81.8%. There were no differences between CDE test subgroups by age. Both the shadow and re-test studies were adequate for evaluating the quality of administration of the CDE test and may be useful as a supervision model in primary care facilities. The decision of which method to use relies on the availability of supervisors. Copyright © 2015 Hospital Infantil de México Federico Gómez. Publicado por Masson Doyma México S.A. All rights reserved.
Anno, Takayuki; Higashi, Taishi; Motoyama, Keiichi; Hirayama, Fumitoshi; Uekama, Kaneto; Arima, Hidetoshi
2012-04-01
In this study, we evaluated the polyamidoamine starburst dendrimer (dendrimer, generation 2: G2) conjugate with 6-O-α-(4-O-α-D-glucuronyl)-D-glucosyl-β-cyclodextrin (GUG-β-CDE (G2)) as a gene transfer carrier. The in vitro gene transfer activity of GUG-β-CDE (G2, degree of substitution (DS) of cyclodextrin (CyD) 1.8) was remarkably higher than that of the dendrimer (G2) conjugate with α-CyD (α-CDE (G2, DS 1.2)) and that with β-CyD (β-CDE (G2, DS 1.3)) in A549 and RAW264.7 cells. The particle size, ζ-potential, DNase I-catalyzed degradation, and cellular association of the plasmid DNA (pDNA) complex with GUG-β-CDE (G2, DS 1.8) were almost the same as those of the other CDEs. Fluorescently labeled GUG-β-CDE (G2, DS 1.8) localized in the nucleus 6 h after transfection of its pDNA complex in A549 cells, suggesting that nuclear localization of the pDNA complex with GUG-β-CDE (G2, DS 1.8) contributes, at least in part, to its high gene transfer activity. GUG-β-CDE (G2, DS 1.8) provided higher gene transfer activity than α-CDE (G2, DS 1.2) and β-CDE (G2, DS 1.3) in kidney, with negligible changes in blood chemistry values 12 h after intravenous injection of its pDNA complex in mice. In conclusion, the present findings suggest that GUG-β-CDE (G2, DS 1.8) has potential as a novel polymeric pDNA carrier in vitro and in vivo.
Cihan, Abdullah; Birkholzer, Jens; Bianchi, Marco
2014-12-31
Large-scale pressure increases resulting from carbon dioxide (CO2) injection in the subsurface can potentially impact caprock integrity, induce reactivation of critically stressed faults, and drive CO2 or brine through conductive features into shallow groundwater. Pressure management involving the extraction of native fluids from storage formations can be used to minimize pressure increases while maximizing CO2 storage. However, brine extraction requires pumping, transportation, possibly treatment, and disposal of substantial volumes of extracted brackish or saline water, all of which can be technically challenging and expensive. This paper describes a constrained differential evolution (CDE) algorithm for optimal well placement and injection/extraction control with the goal of minimizing brine extraction while achieving predefined pressure constraints. The CDE methodology was tested on a simple optimization problem whose solution can be partially obtained with a gradient-based optimization methodology. The CDE successfully estimated the true global optimum for both the extraction well location and the extraction rate needed for the test problem. A more complex example application of the developed strategy was also presented for a hypothetical CO2 storage scenario in a heterogeneous reservoir containing a critically stressed fault near an injection zone. Through the CDE optimization algorithm coupled to a numerical vertically-averaged reservoir model, we successfully estimated optimal rates and locations for CO2 injection and brine extraction wells while simultaneously satisfying multiple pressure buildup constraints to avoid fault activation and caprock fracturing. The study shows that the CDE methodology is a very promising tool for solving other optimization problems related to GCS as well, such as reducing the 'Area of Review', monitoring design, reducing the risk of leakage, and increasing storage capacity and trapping.
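For readers unfamiliar with the optimizer, the sketch below shows a plain DE/rand/1/bin differential evolution loop with the constraint folded in as a penalty term. It is a toy stand-in under stated assumptions: the objective and constraint functions are made up, and it does not reproduce the paper's reservoir model or its exact constraint-handling scheme.

```python
# Minimal constrained differential evolution (DE/rand/1/bin) sketch.
# Constraint handling here is a simple penalty; real CDE variants differ.
import random

def cde_optimize(f, penalty, bounds, np_=20, F=0.8, CR=0.9, gens=200):
    """Minimize f(x) subject to penalty(x) <= 0 via penalized DE."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fitness = lambda x: f(x) + 1e6 * max(0.0, penalty(x))  # constraint penalty
    for _ in range(gens):
        for i in range(np_):
            others = [p for j, p in enumerate(pop) if j != i]
            a, b, c = random.sample(others, 3)              # rand/1 base vectors
            trial = []
            for d, (lo, hi) in enumerate(bounds):
                v = a[d] + F * (b[d] - c[d]) if random.random() < CR else pop[i][d]
                trial.append(max(lo, min(hi, v)))           # clamp to bounds
            if fitness(trial) <= fitness(pop[i]):           # greedy selection
                pop[i] = trial
    return min(pop, key=fitness)

# Toy problem: minimize "extraction rate" x[0] subject to a made-up pressure
# surrogate that is feasible only when x[0] * x[1] >= 5.
best = cde_optimize(f=lambda x: x[0],
                    penalty=lambda x: 5.0 - x[0] * x[1],
                    bounds=[(0.0, 10.0), (0.0, 1.0)])
print(best)  # converges near [5.0, 1.0]
```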
CDE-1 affects chromosome segregation through uridylation of CSR-1-bound siRNAs.
van Wolfswinkel, Josien C; Claycomb, Julie M; Batista, Pedro J; Mello, Craig C; Berezikov, Eugene; Ketting, René F
2009-10-02
We have studied the function of a conserved germline-specific nucleotidyltransferase protein, CDE-1, in RNAi and chromosome segregation in C. elegans. CDE-1 localizes specifically to mitotic chromosomes in embryos. This localization requires the RdRP EGO-1, which physically interacts with CDE-1, and the Argonaute protein CSR-1. We found that CDE-1 is required for the uridylation of CSR-1-bound siRNAs, and that in the absence of CDE-1 these siRNAs accumulate to inappropriate levels, accompanied by defects in both meiotic and mitotic chromosome segregation. Elevated siRNA levels are associated with erroneous gene silencing, most likely through the inappropriate loading of CSR-1 siRNAs into other Argonaute proteins. We propose a model in which CDE-1 restricts specific EGO-1-generated siRNAs to the CSR-1-mediated, chromosome-associated RNAi pathway, thus separating it from other endogenous RNAi pathways. The conserved nature of CDE-1 suggests that similar sorting mechanisms may operate in other animals, including mammals.
Chen, Ming; Chen, Mindy
2010-01-01
Mean CDE (cumulative dissipated energy) values were compared for an open hospital-based surgical center and a free-standing surgical center. The same model of phacoemulsifier (Alcon Infiniti Ozil) was used. Mean CDE values showed that surgeons (individual private practice) at the free-standing surgical center were more efficient than surgeons (individual private practice) at the open hospital-based surgical center (mean CDE at the hospital-based surgical center 18.96 seconds [SD = 12.51]; mean CDE at the free-standing surgical center 13.2 seconds [SD = 9.5]). CDE can be used to monitor the efficiency of a cataract surgeon and surgical center in phacoemulsification. The CDE value may be used by institutions as one of the indicators for quality control and audit in phacoemulsification. PMID:21151334
Begic, Sanela; Worobec, Elizabeth A
2008-05-01
Serratia marcescens is an important nosocomial agent with high levels of antibiotic resistance. A major mechanism of S. marcescens antibiotic resistance is active efflux. To ascertain the substrate specificity of the S. marcescens SdeCDE efflux pump, we constructed pump gene deletion mutants. sdeCDE knockout strains showed no change in antibiotic susceptibility in comparison with the parental strains for any of the substrates tested, with the exception of novobiocin. In addition, novobiocin was the only antibiotic to be accumulated by sdeCDE-deficient strains. Based on the substrates used in our study, we conclude that SdeCDE is a Resistance-Nodulation-Cell Division family pump with limited substrate specificity.
2010-04-01
analytical community. 5.1 Towards a Common Understanding of CD&E and CD&E Project Management. Recent developments within NATO have contributed to the... project management purposes it is useful to distinguish four phases [P 21]: a) Preparation, Initiation and Structuring; b) Concept Development Planning... examined in more detail below. While the NATO CD&E policy provides a benchmark for a comprehensive, disciplined management of CD&E projects, it may...
Metabolism of 2-chloro-1,1-difluoroethene to glyoxylic and glycolic acid in rat hepatic microsomes.
Baker, M T; Vasquez, M T; Bates, J N; Chiang, C K
1990-01-01
The complete metabolic fate of the volatile anesthetic halothane is unclear since 2-chloro-1,1-difluoroethene (CDE), a reductive halothane metabolite, is known to readily release inorganic fluoride upon oxidation by cytochrome P-450. This study sought to clarify the metabolism of CDE by determining its metabolites and the roles of induced cytochrome P-450 forms in its metabolism. Upon incubation of [14C]CDE with rat hepatic microsomes, two major radioactive products were found, which accounted for greater than 94% of the total metabolites. These compounds were determined to be the nonhalogenated compounds glyoxylic and glycolic acid, which were formed in a ratio of approximately 1 to 2 of glyoxylic to glycolic acid. No other radioactive metabolites could be detected. Following incubation of CDE with hepatic microsomes isolated from rats treated with cytochrome P-450 inducers, measurement of fluoride release showed that phenobarbital induced CDE metabolism to the greatest degree at high CDE levels, isoniazid was the most effective inducer at low CDE concentrations, and beta-naphthoflavone was ineffective as an inducer. These results suggest that CDE biotransformation primarily involves the generation of an epoxide intermediate, which undergoes mechanisms of decay leading to total dehalogenation of the molecule, and that this metabolism is preferentially carried out by the phenobarbital- and ethanol-inducible forms of cytochrome P-450.
Solbrig, Harold R; Chute, Christopher G
2012-01-01
Objective The objective of this study is to develop an approach to evaluate the quality of terminological annotations on the value set (ie, enumerated value domain) components of the common data elements (CDEs) in the context of clinical research using both unified medical language system (UMLS) semantic types and groups. Materials and methods The CDEs of the National Cancer Institute (NCI) Cancer Data Standards Repository, the NCI Thesaurus (NCIt) concepts and the UMLS semantic network were integrated using a semantic web-based framework for a SPARQL-enabled evaluation. First, the set of CDE-permissible values with corresponding meanings in external controlled terminologies were isolated. The corresponding value meanings were then evaluated against their NCI- or UMLS-generated semantic network mapping to determine whether all of the meanings fell within the same semantic group. Results Of the enumerated CDEs in the Cancer Data Standards Repository, 3093 (26.2%) had elements drawn from more than one UMLS semantic group. A random sample (n=100) of this set of elements indicated that 17% of them were likely to have been misclassified. Discussion The use of existing semantic web tools can support a high-throughput mechanism for evaluating the quality of large CDE collections. This study demonstrates that the involvement of multiple semantic groups in an enumerated value domain of a CDE is an effective anchor to trigger an auditing point for quality evaluation activities. Conclusion This approach produces a useful quality assurance mechanism for a clinical study CDE repository. PMID:22511016
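Once the value-meaning-to-semantic-group mapping has been resolved, the audit rule itself reduces to a few lines. Below is a hedged Python sketch; the mapping table is invented and stands in for the NCIt/UMLS lookups the study performed via SPARQL.

```python
# Sketch of the audit rule described above: flag a CDE whose enumerated value
# meanings span more than one UMLS semantic group. Mapping is illustrative only.
SEMANTIC_GROUP = {            # value meaning -> UMLS semantic group (made up)
    "Lung": "ANAT", "Liver": "ANAT",
    "Adenocarcinoma": "DISO", "Unknown": "CONC",
}

def audit_cde(value_meanings: list[str]) -> set[str]:
    """Return the set of semantic groups spanned by a CDE's value domain."""
    return {SEMANTIC_GROUP.get(v, "UNMAPPED") for v in value_meanings}

groups = audit_cde(["Lung", "Liver", "Adenocarcinoma"])
if len(groups) > 1:           # the study's trigger for manual review
    print("audit flag:", sorted(groups))   # audit flag: ['ANAT', 'DISO']
```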
Wang, Hetang; Li, Jia; Wang, Deming; Huang, Zonghou
2017-01-01
Coal dust explosions (CDE) are one of the main threats to the occupational safety of coal miners. Aiming to identify and assess the risk of CDE, this paper proposes a novel method of fuzzy fault tree analysis combined with the Visual Basic (VB) program. In this methodology, various potential causes of the CDE are identified and a CDE fault tree is constructed. To overcome drawbacks from the lack of exact probability data for the basic events, fuzzy set theory is employed and the probability data of each basic event is treated as intuitionistic trapezoidal fuzzy numbers. In addition, a new approach for calculating the weighting of each expert is also introduced in this paper to reduce the error during the expert elicitation process. Specifically, an in-depth quantitative analysis of the fuzzy fault tree, such as the importance measure of the basic events and the cut sets, and the CDE occurrence probability is given to assess the explosion risk and acquire more details of the CDE. The VB program is applied to simplify the analysis process. A case study and analysis is provided to illustrate the effectiveness of this proposed method, and some suggestions are given to take preventive measures in advance and avoid CDE accidents. PMID:28793348
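As a concrete illustration of the gate arithmetic involved, the sketch below propagates trapezoidal fuzzy probabilities through AND/OR gates and defuzzifies the top event. All numbers are invented, and the paper's full method (intuitionistic fuzzy numbers, expert weighting, cut-set importance measures) is not reproduced here.

```python
# Minimal fault-tree gate arithmetic with trapezoidal fuzzy probabilities.
Trap = tuple  # (a, b, c, d): a trapezoidal fuzzy probability

def and_gate(events: list[Trap]) -> Trap:
    """P(AND) = product of member probabilities, taken parameter-wise."""
    out = [1.0, 1.0, 1.0, 1.0]
    for e in events:
        out = [o * x for o, x in zip(out, e)]
    return tuple(out)

def or_gate(events: list[Trap]) -> Trap:
    """P(OR) = 1 - prod(1 - p), taken parameter-wise."""
    out = [1.0, 1.0, 1.0, 1.0]
    for e in events:
        out = [o * (1.0 - x) for o, x in zip(out, e)]
    return tuple(1.0 - o for o in out)

def defuzzify(t: Trap) -> float:
    """Simple average defuzzification of a trapezoidal number."""
    a, b, c, d = t
    return (a + b + c + d) / 4.0

# Toy tree: explosion requires an ignition source AND suspended dust.
ignition = or_gate([(0.01, 0.02, 0.03, 0.04), (0.02, 0.03, 0.04, 0.05)])
explosion = and_gate([ignition, (0.1, 0.15, 0.2, 0.25)])
print(round(defuzzify(explosion), 4))
```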
75 FR 44053 - Proposed Collection; Comment Request: CDFI/CDE Project Profiles Web Form
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-27
...; Comment Request: CDFI/CDE Project Profiles Web Form ACTION: Notice and request for comments. SUMMARY: The... the Treasury, is soliciting comments concerning the CDFI/CDE Project Profile Web Form, a voluntary... CIIS. The CDFI Fund plans to use the descriptions in CDFI Fund publications, on its Web site and in...
[Impact of a training model for the Child Development Evaluation Test in primary care].
Rizzoli-Córdoba, Antonio; Delgado-Ginebra, Ismael; Cruz-Ortiz, Leopoldo Alfonso; Baqueiro-Hernández, César Iván; Martain-Pérez, Itzamara Jacqueline; Palma-Tavera, Josuha Alexander; Villasís-Keever, Miguel Ángel; Reyes-Morales, Hortensia; O'Shea-Cuevas, Gabriel; Aceves-Villagrán, Daniel; Carrasco-Mendoza, Joaquín; Antillón-Ocampo, Fátima Adriana; Villagrán-Muñoz, Víctor Manuel; Halley-Castillo, Elizabeth; Vargas-López, Guillermo; Muñoz-Hernández, Onofre
The Child Development Evaluation (CDE) Test is a screening tool designed and validated in Mexico for the early detection of child developmental problems. Professionals who administer the test in primary care facilities must first acquire knowledge about the test in order to generate reliable results. The aim of this work was to evaluate the impact of a training model for primary care workers from different professions through the comparison of knowledge acquired during the training course. The study design was a before/after type, with participation in a training course for the CDE test as the intervention. The course took place in six different Mexican states from October to December 2013. The same questions were used before and after. There were 394 participants included. Distribution according to professional profile was as follows: general physicians 73.4%, nursing 7.7%, psychology 7.1%, nutrition 6.1% and other professions 5.6%. The questions with the lowest correct answer rates were associated with the scoring of the CDE test. In the initial evaluation, 64.9% obtained a grade lower than 20 compared with 1.8% in the final evaluation. In the initial evaluation only 1.8% passed compared with 75.15% in the final evaluation. The proposed model allows the participants to acquire general knowledge about the CDE Test. To improve results in future training courses, the scoring and interpretation of the test should be reinforced during training, and participants should read the course material beforehand. Copyright © 2015 Hospital Infantil de México Federico Gómez. Publicado por Masson Doyma México S.A. All rights reserved.
Rao, Bola Sadashiva Satish; Upadhya, Dinesh; Adiga, Satish Kumar
2008-01-01
The radiomodulatory potential of a hydroalcoholic extract of the medicinal plant Cynodon dactylon (family: Poaceae) against radiation-induced cytogenetic damage was analyzed using Chinese hamster lung fibroblast (V79) cells and human peripheral blood lymphocytes (HPBLs) growing in vitro. Induction of micronuclei was used as an index of cytogenetic damage, evaluated in cytokinesis-blocked binucleate cells. The hydroalcoholic Cynodon dactylon extract (CDE) rendered protection against radiation-induced DNA damage, as evidenced by the significant (p<0.001) reduction in micronucleated binucleate cells (MNBNC%) after various doses of CDE treatment in V79 cells and HPBLs. The optimum dose of CDE (40 and 50 microg/ml in HPBLs and V79 cells, respectively), which produced the greatest reduction in micronuclei, was further used in combination with various doses of gamma radiation (0.5, 1, 2, 3, and 4 Gy) delivered 1 h after CDE treatment. A linear dose-dependent increase in MNBNC% was observed in the radiation-alone group, while 40/50 microg/ml CDE resulted in a significant reduction of MNBNC% compared to the respective radiation-alone groups. CDE produced a dose-dependent increase in free radical scavenging ability against various free radicals, viz., 2,2-diphenyl-1-picrylhydrazyl (DPPH); 2,2'-azinobis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS); superoxide anion (O2*-); hydroxyl radical (OH*); and nitric oxide radical (NO*) generated in vitro. Also, an excellent (70%) inhibition of lipid peroxidation in vitro was observed at a dose of 300 microg/ml CDE, reaching saturation at higher doses. The present findings demonstrate the radioprotective effect of CDE, which also rendered protection against radiation-induced genomic instability and DNA damage. The observed radioprotective effect may be partly attributed to the free radical scavenging and antilipid-peroxidative potential of CDE.
Sebai, Hichem; Jabri, Mohamed-Amine; Souli, Abdelaziz; Hosni, Karim; Rtibi, Kais; Tebourbi, Olfa; El-Benna, Jamel; Sakly, Mohsen
2015-07-01
The present study assessed the chemical composition, antioxidant properties, and hepatoprotective effects of subacute pre-treatment with chamomile (Matricaria recutita L.) decoction extract (CDE) against ethanol (EtOH)-induced oxidative stress in rats. Colorimetric analysis demonstrated that the CDE is rich in total polyphenols, total flavonoids and condensed tannins, and exhibited an important in vitro antioxidant activity. The use of the LC/MS technique allowed us to identify 10 phenolic compounds in CDE. We found that CDE pretreatment, in vivo, protected against EtOH-induced liver injury, evident from plasma transaminase activity and preservation of the hepatic tissue structure. The CDE counteracted EtOH-induced liver lipoperoxidation, preserved thiol (-SH) groups and prevented the depletion of the antioxidant enzyme activities of superoxide dismutase (SOD), catalase (CAT) and glutathione peroxidase (GPx). We also showed that acute alcohol administration increased tissue and plasma hydrogen peroxide (H2O2), calcium and free iron levels. More importantly, CDE pre-treatment reversed all EtOH-induced disturbances in intracellular mediators. In conclusion, our data suggest that CDE exerted a potential hepatoprotective effect against EtOH-induced oxidative stress in rats, at least in part, by negatively regulating Fenton reaction components such as H2O2 and free iron, which are known to lead to cytotoxicity mediated by intracellular calcium deregulation.
Yesilirmak, Nilufer; Diakonis, Vasilios F; Sise, Adam; Waren, Daniel P; Yoo, Sonia H; Donaldson, Kendall E
2017-01-01
To compare the mean cumulative dissipated energy (CDE) in patients having femtosecond laser-assisted or conventional phacoemulsification cataract surgery using 2 different phacoemulsification platforms. Bascom Palmer Eye Institute, Miami, Florida, USA. Prospective comparative nonrandomized clinical study. Consecutive patients were scheduled to have femtosecond laser-assisted cataract surgery with the LenSx laser or conventional phacoemulsification using an active-fluidics torsional platform (Centurion) or a gravity-fluidics torsional platform (Infiniti). The mean CDE and cataract grade were recorded. The study comprised 570 eyes (570 patients). There was no statistically significant difference in mean age (P = .41, femtosecond group; P = .33, conventional group) or cataract grade (P = .78 and P = .45, respectively) between the active-fluidics and gravity-fluidics platforms. In femtosecond cases (145 eyes), the mean CDE (percent-seconds) was 5.18 ± 4.58 (SD) with active fluidics and 7.00 ± 6.85 with gravity fluidics; in conventional cases (425 eyes), the mean CDE was 7.77 ± 6.97 and 11.43 ± 9.12, respectively. In both femtosecond and conventional cases, the CDE was lower with the active-fluidics platform than with the gravity-fluidics platform (P = .029, femtosecond group; P < .001, conventional group). With both fluidics platforms, the mean CDE was significantly lower in the femtosecond group than in the conventional group (both P < .001). The active-fluidics phacoemulsification platform achieved lower CDE values than the gravity-fluidics platform for conventional cataract extraction. Femtosecond laser pretreatment with the active-fluidics platform further reduced CDE. Copyright © 2017 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Rizzoli-Córdoba, Antonio; Martell-Valdez, Liliana; Delgado-Ginebra, Ismael; Villasís-Keever, Miguel Ángel; Reyes-Morales, Hortensia; O'Shea-Cuevas, Gabriel; Aceves-Villagrán, Daniel; Carrasco-Mendoza, Joaquín; Villagrán-Muñoz, Víctor Manuel; Halley-Castillo, Elizabeth; Vargas-López, Guillermo; Muñoz-Hernández, Onofre
Evaluación del Desarrollo Infantil or Child Development Evaluation (CDE) test, a screening tool designed and validated in Mexico, classifies child development as normal (green) or abnormal (developmental lag, yellow; or risk of delay, red). Population-based results of child development level with this tool were not previously known. The objective of this work was to evaluate the developmental level of children aged 1-59 months living in poverty (PROSPERA program beneficiaries) through application of the CDE test. CDE tests were applied by specifically trained and standardized personnel to children <5 years old who attended primary care facilities for a scheduled appointment for nutrition, growth and development evaluation from November 2013 to May 2014. There were 5,527 children aged 1-59 months who were evaluated; 83.8% (n=4,632) were classified with normal development (green) and 16.2% (n=895) as abnormal: 11.9% (n=655) as yellow and 4.3% (n=240) as red. The proportion of abnormal results was 9.9% in children <1 year of age compared with 20.8% at 4 years old. The most affected areas according to age were language at 2 years (9.35%) and knowledge at 4 years old (11.1%). Gross motor and social areas were more affected in children from rural areas; fine motor skills, language and knowledge were more affected in males. The proportion of children with abnormal results is similar to other population-based studies. The higher rate in older children reinforces the need for early intervention. The different pattern of affected areas between urban and rural settings suggests the need for a differentiated intervention. Copyright © 2015 Hospital Infantil de México Federico Gómez. Publicado por Masson Doyma México S.A. All rights reserved.
Canto: an online tool for community literature curation.
Rutherford, Kim M; Harris, Midori A; Lock, Antonia; Oliver, Stephen G; Wood, Valerie
2014-06-15
Detailed curation of published molecular data is essential for any model organism database. Community curation enables researchers to contribute data from their papers directly to databases, supplementing the activity of professional curators and improving coverage of a growing body of literature. We have developed Canto, a web-based tool that provides an intuitive curation interface for both curators and researchers, to support community curation in the fission yeast database, PomBase. Canto supports curation using OBO ontologies, and can be easily configured for use with any species. Canto code and documentation are available under an Open Source license from http://curation.pombase.org/. Canto is a component of the Generic Model Organism Database (GMOD) project (http://www.gmod.org/). © The Author 2014. Published by Oxford University Press.
Chen, Ming; Sweeney, Henry W; Luke, Becky; Chen, Mindy; Brown, Mathew
2009-01-01
Cumulative dissipated energy (CDE) was used with the Infiniti® Vision System (Alcon Labs) as an energy delivery guide to compare four different phaco techniques and phaco settings. The supracapsular phaco technique with burst mode is known for efficiency, and surgery is faster compared with the older phaco unit. In this study, we found that supracapsular phaco with burst mode had the lowest CDE in both cataract and nuclear sclerosis cataract with the new Infiniti® unit. We suggest that CDE can be used as one of the references for modifying technique and settings to improve outcomes for surgeons, especially new surgeons. PMID:19688027
A Qualitative Assessment of the Practice Experiences of Certified Diabetes Educator Pharmacists.
Alzahrani, Fahad; Taylor, Jeff; Perepelkin, Jason; Mansell, Kerry
2015-08-01
To describe the practice experiences of Certified Diabetes Educator (CDE) pharmacists in Saskatchewan and determine what impact the CDE designation has had on their personal practices. A qualitative research approach was used. All pharmacists in Saskatchewan were e-mailed about the study, and a purposive sampling method was then used to select a range of CDE pharmacists. Semistructured, in-person interviews were performed. An interview guide was developed to assess the work activities performed, the benefits of becoming a CDE, and the challenges faced and solutions adopted in optimizing the CDE designation. All interviews were transcribed verbatim and coded using deductive thematic analysis, with the aid of QSR NVivo, to identify the main themes that described the experiences of respondents. A total of 14 CDE pharmacists from various communities and work settings chose to participate. All of the participants indicated they were engaging in increased diabetes-related activities since becoming CDEs. All participants indicated they were happy with their decisions to become CDEs and described numerous benefits as a direct result of achieving this designation. Although some solutions were offered, participants still face challenges in optimizing their role as CDEs, such as devoting enough time to diabetes management and remuneration for providing diabetes services. CDE pharmacists in Saskatchewan report performing enhanced diabetes-related activities subsequent to becoming CDEs and state that obtaining this designation has had a positive impact on their personal practices. A larger, cross-country study is necessary to determine whether these results are consistent amongst all pharmacists in Canada. Copyright © 2015 Canadian Diabetes Association. Published by Elsevier Inc. All rights reserved.
Eckhard, A; Müller, M; Salt, A; Smolders, J; Rask-Andersen, H; Löwenheim, H
2014-10-01
The cochlear duct epithelium (CDE) constitutes a tight barrier that effectively separates the inner ear fluids, endolymph and perilymph, thereby maintaining distinct ionic and osmotic gradients that are essential for auditory function. However, in vivo experiments have demonstrated that the CDE allows for rapid water exchange between fluid compartments. The molecular mechanism governing water permeation across the CDE remains elusive. We computationally determined the diffusional (PD) and osmotic (Pf) water permeability coefficients for the mammalian CDE based on in silico simulations of cochlear water dynamics, integrating previously derived in vivo experimental data on fluid flow with expression sites of molecular water channels (aquaporins, AQPs). The PD of the entire CDE (PD = 8.18 × 10⁻⁵ cm s⁻¹) and its individual partitions, including Reissner's membrane (PD = 12.06 × 10⁻⁵ cm s⁻¹) and the organ of Corti (PD = 10.2 × 10⁻⁵ cm s⁻¹), were similar to other epithelia with AQP-facilitated water permeation. The Pf of the CDE (Pf = 6.15 × 10⁻⁴ cm s⁻¹) was also in the range of other epithelia, while an exceptionally high Pf was determined for an epithelial subdomain of outer sulcus cells in the cochlear apex co-expressing AQP4 and AQP5 (OSCs; Pf = 156.90 × 10⁻³ cm s⁻¹). The Pf/PD ratios of the CDE (Pf/PD = 7.52) and OSCs (Pf/PD = 242.02) indicate aqueous pore-facilitated water exchange and reveal a high-transfer region or "water shunt" in the cochlear apex. This "water shunt" explains experimentally determined phenomena of endolymphatic longitudinal flow towards the cochlear apex. The water permeability coefficients of the CDE emphasise the physiological and pathophysiological relevance of water dynamics in the cochlea, in particular for endolymphatic hydrops and Ménière's disease.
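A quick arithmetic check of the quoted ratios, as a sketch using only the coefficients given in the abstract (all in cm/s):

```python
# Pf/PD >> 1 is the classic signature of pore-facilitated (aquaporin-mediated)
# rather than purely diffusive water exchange.
PD_cde, Pf_cde = 8.18e-5, 6.15e-4          # entire CDE (from the abstract)
print(round(Pf_cde / PD_cde, 2))           # 7.52, matching the reported Pf/PD

Pf_osc, ratio_osc = 156.90e-3, 242.02      # OSC subdomain Pf and its Pf/PD
print(f"{Pf_osc / ratio_osc:.2e}")         # implied OSC PD ~ 6.48e-04 cm/s
```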
Can we replace curation with information extraction software?
Karp, Peter D
2016-01-01
Can we use programs for automated or semi-automated information extraction from scientific texts as practical alternatives to professional curation? I show that error rates of current information extraction programs are too high to replace professional curation today. Furthermore, current IEP programs extract single narrow slivers of information, such as individual protein interactions; they cannot extract the large breadth of information extracted by professional curators for databases such as EcoCyc. They also cannot arbitrate among conflicting statements in the literature as curators can. Therefore, funding agencies should not hobble the curation efforts of existing databases on the assumption that a problem that has stymied Artificial Intelligence researchers for more than 60 years will be solved tomorrow. Semi-automated extraction techniques appear to have significantly more potential based on a review of recent tools that enhance curator productivity. But a full cost-benefit analysis for these tools is lacking. Without such analysis it is possible to expend significant effort developing information-extraction tools that automate small parts of the overall curation workflow without achieving a significant decrease in curation costs. © The Author(s) 2016. Published by Oxford University Press.
Jabri, Mohamed-Amine; Sakly, Mohsen; Marzouki, Lamjed; Sebai, Hichem
2017-03-01
The present study aimed to investigate the inhibitory effect of chamomile decoction extract (CDE) on intestinal glucose absorption as well as its protective role against high fat diet (HFD)-induced obesity and lipotoxicity in rats. We used the Ussing chamber system to investigate the effect of CDE on intestinal transport of glucose. Male Wistar rats were fed the HFD for six weeks to provoke obesity. CDE (100 mg/kg, b.w., p.o.) was administered orally to HFD-fed rats. Ex vivo, we found that CDE significantly and dose-dependently increased intestinal absorption of glucose. In vivo, HFD increased the body, liver and kidney weights, while CDE treatment showed a significant protective effect. The high fat diet also induced a lipid profile disorder and disturbances in kidney and liver function parameters. Moreover, liver and kidney lipotoxicity was accompanied by an oxidative stress status characterized by increased lipoperoxidation, depletion of antioxidant enzyme activities and non-enzymatic antioxidant (-SH groups and GSH) levels, as well as increased levels of free iron, hydrogen peroxide (H2O2) and calcium. However, treatment with CDE alleviated all the deleterious effects of HFD feeding. These findings suggest that chamomile decoction extract can be used as a functional beverage against obesity, hyperglycemia and hyperlipidemia. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
Rizzoli-Córdoba, Antonio; Campos-Maldonado, Martha Carmen; Vélez-Andrade, Víctor Hugo; Delgado-Ginebra, Ismael; Baqueiro-Hernández, César Iván; Villasís-Keever, Miguel Ángel; Reyes-Morales, Hortensia; Ojeda-Lara, Lucía; Davis-Martínez, Erika Berenice; O'Shea-Cuevas, Gabriel; Aceves-Villagrán, Daniel; Carrasco-Mendoza, Joaquín; Villagrán-Muñoz, Víctor Manuel; Halley-Castillo, Elizabeth; Sidonio-Aguayo, Beatriz; Palma-Tavera, Josuha Alexander; Muñoz-Hernández, Onofre
The Child Development Evaluation (or CDE Test) was developed in Mexico as a screening tool for child developmental problems. It yields three possible results: normal, slow development or risk of delay. The modified version was elaborated using the information obtained during the validation study, but its properties in the base population are not known. The objective of this work was to establish diagnostic confirmation of developmental delay in children 16- to 59-months of age previously identified as having risk of delay through the CDE Test in primary care facilities. A population-based cross-sectional study was conducted in one Mexican state. The CDE test was administered to 11,455 children 16- to 59-months of age from December/2013 to March/2014. The eligible population represented 6.2% of the children (n=714), who were identified at risk of delay through the CDE Test. For inclusion in the study, block randomization stratified by sex and age group was performed. Each participant included in the study had a diagnostic evaluation using the Battelle Development Inventory, 2nd edition. Of the 355 participants included with risk of delay, 65.9% were male and 80.2% were from rural areas; 6.5% were false positives (Total Development Quotient >90) and 6.8% did not have any domain with delay (Domain Developmental Quotient <80). The proportion of delay for each domain was as follows: communication 82.5%; cognitive 80.8%; social-personal 33.8%; motor 55.5%; and adaptive 41.7%. There were significant differences in the percentages of delay both by age and by domain/subdomain evaluated. In 93.2% of the participants, developmental delay was corroborated in at least one domain evaluated. Copyright © 2015 Hospital Infantil de México Federico Gómez. Publicado por Masson Doyma México S.A. All rights reserved.
Jabri, Mohamed-Amine; Sani, Mamane; Rtibi, Kais; Marzouki, Lamjed; El-Benna, Jamel; Sakly, Mohsen; Sebai, Hichem
2016-03-31
The aim of this study was to evaluate the protective effects of subacute pre-treatment with chamomile (Matricaria recutita L.) decoction extract (CDE) against stimulated-neutrophil ROS production as well as ethanol (EtOH)-induced haematological changes and erythrocyte oxidative stress in rats. Neutrophils were isolated and ROS generation was measured by luminol-amplified chemiluminescence. Superoxide anion generation was detected by the cytochrome c reduction assay. Adult male Wistar rats were used and divided into six groups of ten each: control, EtOH, EtOH + various doses of CDE (25, 50, and 100 mg/kg, b.w.), and EtOH + ascorbic acid (AA). Animals were pre-treated with CDE extract for 10 days. We found that CDE inhibited (P ≤ 0.0003) luminol-amplified chemiluminescence of resting neutrophils and of N-formyl-methionyl-leucyl-phenylalanine (fMLF)- or phorbol myristate acetate (PMA)-stimulated neutrophils, in a dose-dependent manner. CDE had no effect on superoxide anion, but it inhibited (P ≤ 0.0004) H2O2 production in a cell-free system. In vivo, CDE counteracted (P ≤ 0.0034) the effect of a single EtOH administration, which induced (P < 0.0001) an increase in white blood cell (WBC) and platelet (PLT) counts. Our results also demonstrated that alcohol administration significantly (P < 0.0001) increased erythrocyte lipoperoxidation and depleted sulfhydryl group (-SH) content as well as antioxidant enzyme activities, including superoxide dismutase (SOD), catalase (CAT), and glutathione peroxidase (GPx). More importantly, we found that acute alcohol administration increased (P < 0.0001) erythrocyte and plasma hydrogen peroxide (H2O2), free iron, and calcium levels, while CDE pre-treatment reversed (P ≤ 0.0051) all these intracellular disturbances. These findings suggest that CDE inhibits neutrophil ROS production and protects against EtOH-induced haematological parameter changes and erythrocyte oxidative stress. The haematoprotection offered by chamomile might involve in part its antioxidant properties as well as its opposite effect on some intracellular mediators such as H2O2, free iron, and calcium.
Azaïs, Henri; Bresson, Lucie; Bassil, Alfred; Katdare, Ninad; Merlot, Benjamin; Houpeau, Jean-Louis; El Bedoui, Sophie; Meurant, Jean-Pierre; Tresch, Emmanuelle; Narducci, Fabrice
2015-01-01
Totally implantable venous access port systems (TIVAPS) are a widely used and essential tool in the efficient delivery of chemotherapy. Chemotherapy drug extravasation (CDE) can have dire consequences and will delay treatment. The purpose of this study is to both clarify the management of CDE and show the effectiveness of early surgical lavage (ESL). Patients who had presented to the Cancer Center of Lille (France) with TIVAPS inserted between January 2004 and April 2013 and CDE had their medical records reviewed retrospectively. Thirty patients and 33 events were analyzed. Implicated agents were vesicants (51.5%), irritants (45.5%) and non-vesicants (3%). Huber needle malpositioning was involved in 27 cases. Surgery was performed in 97% of cases, 87.5% of which were for ESL, with 53.1% of the latter requiring TIVAPS extraction. Six patients required a second intervention due to adverse outcomes (severe cases). Vesicants were found to be implicated in four of six severe cases and oxaliplatin in two others. The extravasated volume was above 50 ml in 80% of cases. Only one patient required a skin graft. CDEs should be managed in specialized centers. ESL limits tissue contact with the chemotherapy drug whilst using a simple, widely accessible technique. The two main factors that correlate with adverse outcome appear to be the nature of the implicated agent (vesicants) and an extravasated volume above 50 ml. Oxaliplatin should be considered as a vesicant.
Pentoxifylline does not alter the response to inhaled grain dust.
Jagielo, P J; Watt, J L; Quinn, T J; Knapp, H R; Schwartz, D A
1997-05-01
Pentoxifylline (PTX) has been shown to reduce sepsis-induced neutrophil sequestration in the lung and inhibit endotoxin-mediated release of tumor necrosis factor-alpha (TNF-alpha). Previously, we have shown that endotoxin appears to be the principal agent in grain dust causing airway inflammation and airflow obstruction following grain dust inhalation. To determine whether PTX affects the physiologic and inflammatory events following acute grain dust inhalation, 10 healthy, nonsmoking subjects with normal airway reactivity were treated with PTX or placebo (PL) followed by corn dust extract (CDE) inhalation (0.08 mL/kg), using a single-blinded, crossover design. Subjects received PTX (1,200 mg/d) or PL for 4 days prior to CDE inhalation and 400 mg PTX or PL on the exposure day. Both respiratory symptoms and declines in FEV1 and FVC occurred following CDE exposure in both groups, but there were no significant differences in the frequency of symptoms or percent declines from baseline in the FEV1 and FVC at any of the time points measured in the study. Elevations in peripheral blood leukocyte and neutrophil concentrations and BAL total cell, neutrophil, TNF-alpha, and interleukin-8 concentrations were measured 4 h following exposure to CDE in both the PTX- and PL-treated subjects, but no significant differences were found between treatment groups. These results suggest that pretreatment with PTX prior to inhalation of CDE, in the doses used in this study, does not alter the acute physiologic or inflammatory events following exposure to inhaled CDE.
Human reductive halothane metabolism in vitro is catalyzed by cytochrome P450 2A6 and 3A4.
Spracklin, D K; Thummel, K E; Kharasch, E D
1996-09-01
The anesthetic halothane undergoes extensive oxidative and reductive biotransformation, resulting in metabolites that cause hepatotoxicity. Halothane is reduced anaerobically by cytochrome P450 (P450) to the volatile metabolites 2-chloro-1,1-difluoroethene (CDE) and 2-chloro-1,1,1-trifluoroethane (CTE). The purpose of this investigation was to identify the human P450 isoform(s) responsible for reductive halothane metabolism. CDE and CTE formation from halothane metabolism by human liver microsomes was determined by GC/MS analysis. Halothane metabolism to CDE and CTE under reductive conditions was completely inhibited by carbon monoxide, which implicates exclusively P450 in this reaction. Eadie-Hofstee plots of both CDE and CTE formation were nonlinear, suggesting multiple P450 isoform involvement. Microsomal CDE and CTE formation were each inhibited 40-50% by P450 2A6-selective inhibitors (coumarin and 8-methoxypsoralen) and 55-60% by P450 3A4-selective inhibitors (ketoconazole and troleandomycin). P450 1A-, 2B6-, 2C9/10-, and 2D6-selective inhibitors (7,8-benzoflavone, furafylline, orphenadrine, sulfaphenazole, and quinidine) had no significant effect on reductive halothane metabolism. Measurement of product formation catalyzed by a panel of cDNA-expressed P450 isoforms revealed that maximal rates of CDE formation occurred with P450 2A6, followed by P450 3A4. P450 3A4 was the most effective catalyst of CTE formation. Among a panel of 11 different human livers, there were significant linear correlations between the rate of CDE formation and both 2A6 activity (r = 0.64, p < 0.04) and 3A4 activity (r = 0.64, p < 0.03). Similarly, there were significant linear correlations between CTE formation and both 2A6 activity (r = 0.55, p < 0.08) and 3A4 activity (r = 0.77, p < 0.005). The P450 2E1 inhibitors 4-methylpyrazole and diethyldithiocarbamate inhibited CDE and CTE formation by 20-45% and 40-50%, respectively; however, cDNA-expressed P450 2E1 did not catalyze significant amounts of CDE or CTE production, and microsomal metabolite formation was not correlated with P450 2E1 activity. This investigation demonstrated that human liver microsomal reductive halothane metabolism is catalyzed predominantly by P450 2A6 and 3A4. This isoform selectivity for anaerobic halothane metabolism contrasts with that for oxidative human halothane metabolism, which is catalyzed predominantly by P450 2E1.
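For context on the kinetic argument above, a nonlinear Eadie-Hofstee plot is the classic signature of more than one enzyme acting on the same substrate (standard enzyme-kinetics reasoning, not a result specific to this paper): a single Michaelis-Menten isoform gives a straight line, since

    \[ v = \frac{V_{\max} S}{K_m + S} \quad\Longleftrightarrow\quad v = V_{\max} - K_m \frac{v}{S}, \]

so v plotted against v/S has slope -K_m, whereas a sum of two such terms with different K_m values, \( v = \frac{V_1 S}{K_1 + S} + \frac{V_2 S}{K_2 + S} \), is no longer linear in v/S, consistent with the involvement of at least two P450 isoforms.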
Wu, Yan; Hao, Yaqiao; Wei, Xuan; Shen, Qi; Ding, Xuanwei; Wang, Liyan; Zhao, Hongxin; Lu, Yuan
2017-01-01
Enterobacter aerogenes is a facultative anaerobe and is one of the most widely studied bacterial strains because of its ability to use a variety of substrates, to produce hydrogen at a high rate, and its high growth rate during dark fermentation. However, the rate of hydrogen production has not been optimized. The present study investigated three strategies to improve hydrogen production in E. aerogenes: disruption of nuoCDE, overexpression of the small RNA RyhB and of NadE to regulate global anaerobic metabolism, and redistribution of metabolic flux. The goal was to clarify the effect of nuoCDE, RyhB, and NadE on hydrogen production and how the perturbation of NADH influences the yield of hydrogen gas from E. aerogenes. NADH dehydrogenase activity was impaired by knocking out nuoCD or nuoCDE in E. aerogenes IAM1183 using the CRISPR-Cas9 system to explore the consequent effect on hydrogen production. The hydrogen yields from IAM1183-CD (∆nuoC/∆nuoD) and IAM1183-CDE (∆nuoC/∆nuoD/∆nuoE) increased by 24.5 and 45.6%, respectively, in batch culture (100 mL serum bottles). The hydrogen produced via the NADH pathway increased significantly in IAM1183-CDE, suggesting that nuoE plays an important role in regulating NADH concentration in E. aerogenes. Batch-cultivation experiments showed that, with overexpression of NadE (N), the hydrogen yields of IAM1183/N, IAM1183-CD/N, and IAM1183-CDE/N increased 1.06-, 1.35-, and 1.55-fold, respectively, compared with IAM1183. Notably, strain IAM1183-CDE/N reached an H2 yield of 2.28 mol per mole of glucose consumed. IAM1183/R, IAM1183-CD/R, and IAM1183-CDE/R also showed increased H2 yields in batch culture. Metabolic flux analysis indicated that increased expression of RyhB led to a significant shift in metabolic patterns. We further investigated IAM1183-CDE/N, which had the best hydrogen-producing traits, as a potential candidate for industrial applications using a 5-L fermenter; hydrogen production reached up to 1.95 times that measured for IAM1183. Knockout of nuoCD or nuoCDE and overexpression of nadE in E. aerogenes resulted in a redistribution of metabolic flux and improved the hydrogen yield. Overexpression of RyhB significantly changed hydrogen production via the NADH pathway. A combination of these strategies would be a novel approach for developing a more economical and efficient bioprocess for hydrogen production in E. aerogenes. Finally, the latest CRISPR-Cas9 technology was successfully used to edit genes in E. aerogenes to develop our engineered strain for hydrogen production.
[Contribution of medical technologists in team medical care of diabetics].
Sato, Itsuko; Jikimoto, Takumi; Ooyabu, Chinami; Kusuki, Mari; Okano, Yosie; Mukai, Masahiko; Kawano, Seiji; Kumagai, Shunichi
2006-08-01
For the effective treatment of diabetes mellitus (DM), patients are encouraged to self-manage their disease according to the doctor's instructions and advice from certified diabetes educators (CDE) and other co-medical staff. Therefore, the cooperation of a medical team consisting of a doctor, CDE, nurse, pharmacist, dietitian, and medical technologist is important for DM education. Medical technologists licensed as CDEs (MT-CDE) have been participating in the DM education team in Kobe University Hospital since 2000. MT-CDEs are in charge of classes on medical tests, guidance for self-monitoring of blood glucose, and teaching patients how to read the fluctuation graph of the blood glucose level in the education program for hospitalized DM patients. MT-CDEs teach at the bedside how to read the results of medical tests during the first few days of hospitalization, using pamphlets on medical tests. The pamphlets are made comprehensible for patients by using graphics and photographs as much as possible. It is important to create a friendly atmosphere and answer frank questions from patients, since they often feel stress when having medical tests at the early stage of hospitalization. This process of questions and answers promotes their understanding of medical tests and seems to reduce their anxiety about having tests. We repeatedly evaluate their level of understanding during hospitalization. By showing patients the fluctuation graph of the glucose level, they can easily understand the status of their DM. When prescriptions are written on the graph, their therapeutic effects are more comprehensible for the patients. The items written on the graph are chosen to match the level of understanding of each patient and to promote motivation. In summary, the introduction of MT-CDEs has been successful in the education program for DM patients in our hospital. We plan to utilize the skills and knowledge of MT-CDEs more in our program so that our DM education program will help patients cope with life with DM.
NASA Technical Reports Server (NTRS)
Shum, Dana; Bugbee, Kaylin
2017-01-01
This talk explains the ongoing metadata curation activities in the Common Metadata Repository. It explores tools that exist today that are useful for building quality metadata and also opens the floor for discussion of other potentially useful tools.
2010-04-01
There is no ´A´ in CD&E, neither for Analysis nor for Anarchy – Ensuring...analytical support as quality assurance. For managers of CD&E, it is necessary to be able to state that scarce resources are being used to develop the...
Taruscio, Domenica; Mollo, Emanuela; Gainotti, Sabina; Posada de la Paz, Manuel; Bianchi, Fabrizio; Vittozzi, Luciano
2014-01-01
The European Union acknowledges the relevance of registries as key instruments for developing rare disease (RD) clinical research, improving patient care and health service (HS) planning, and it funded the EPIRARE project to improve standardization and data comparability among patient registries and to support new registries and data collections. A reference list of patient registry-based indicators has been prepared, building on the work of previous EU projects and on the platform stakeholders' information needs resulting from the EPIRARE surveys and consultations. The variables necessary to compute these indicators have been analysed for their scope and use and then organized in data domains. The reference indicators span from disease surveillance to socio-economic burden, HS monitoring, research and product development, and policy equity and effectiveness. The variables necessary to compute these reference indicators have been selected and, with the exception of more sophisticated indicators for research and clinical care quality, they can be collected as data elements common (CDE) to all rare diseases. They have been organized in data domains characterized by their contents and main goal, and a limited set of mandatory data elements has been defined, which allows case notification independently of the physician or the health service. The definition of a set of CDE for the European platform for RD patient registration is the first step in the promotion of the use of common tools for the collection of comparable data. The proposed organization of the CDE contributes to the completeness of case ascertainment, with the possible involvement of patients and patient associations in the registration process.
ERIC Educational Resources Information Center
Mihailidis, Paul
2015-01-01
Despite the increased role of digital curation tools and platforms in the daily life of social network users, little research has focused on the competencies and dispositions that young people develop to effectively curate content online. This paper details the results of a mixed method study exploring the curation competencies of young people in…
Grinnon, Stacie T; Miller, Kristy; Marler, John R; Lu, Yun; Stout, Alexandra; Odenkirchen, Joanne; Kunitz, Selma
2012-06-01
In neuroscience clinical research studies, much time and effort are devoted to deciding what data to collect and developing data collection forms and data management systems to capture the data. Many investigators receiving funding from the National Institute of Neurological Disorders and Stroke (NINDS) of the National Institutes of Health (NIH) are required to share their data once their studies are complete, but the multitude of data definitions and formats makes it extremely difficult to aggregate data or perform meta-analyses across studies. In an effort to assist investigators and accelerate data sharing in neuroscience clinical research, the NINDS has embarked upon the Common Data Element (CDE) Project. The data standards developed through the NINDS CDE Project enable clinical investigators to systematically collect data and should facilitate study start-up and data aggregation across the research community. The NINDS CDE Team has taken a systematic, iterative approach to develop the critical core and the disease-specific CDEs. The CDE development process provides a mechanism for community involvement and buy-in, offers a structure for decision making, and includes a technical support team. Upon conclusion of the development process, the CDEs and accompanying tools are available on the Project Web site, http://www.commondataelements.ninds.nih.gov/. The Web site currently includes the critical core (aka general) CDEs that are applicable to all clinical research studies regardless of therapeutic area as well as several disease-specific CDEs. Additional disease-specific CDEs will be added to the Web site once they are developed and vetted over the next 12 months. The CDEs will continue to evolve and will improve only if clinical researchers use and offer feedback about their experience with them. Thus, the NINDS program staff strongly encourages its clinical research grantees to use the CDEs and is expanding its efforts to educate the neuroscience research community about the CDEs and to train research teams to incorporate them into their studies. Version 1.0 of a set of CDEs has been published, but publication is not the end of the development process. All CDEs will be evaluated and revised at least annually to ensure that they reflect current clinical research practices in neuroscience.
The MIntAct project—IntAct as a common curation platform for 11 molecular interaction databases
Orchard, Sandra; Ammari, Mais; Aranda, Bruno; Breuza, Lionel; Briganti, Leonardo; Broackes-Carter, Fiona; Campbell, Nancy H.; Chavali, Gayatri; Chen, Carol; del-Toro, Noemi; Duesbury, Margaret; Dumousseau, Marine; Galeota, Eugenia; Hinz, Ursula; Iannuccelli, Marta; Jagannathan, Sruthi; Jimenez, Rafael; Khadake, Jyoti; Lagreid, Astrid; Licata, Luana; Lovering, Ruth C.; Meldal, Birgit; Melidoni, Anna N.; Milagros, Mila; Peluso, Daniele; Perfetto, Livia; Porras, Pablo; Raghunath, Arathi; Ricard-Blum, Sylvie; Roechert, Bernd; Stutz, Andre; Tognolli, Michael; van Roey, Kim; Cesareni, Gianni; Hermjakob, Henning
2014-01-01
IntAct (freely available at http://www.ebi.ac.uk/intact) is an open-source, open data molecular interaction database populated by data either curated from the literature or from direct data depositions. IntAct has developed a sophisticated web-based curation tool, capable of supporting both IMEx- and MIMIx-level curation. This tool is now utilized by multiple additional curation teams, all of whom annotate data directly into the IntAct database. Members of the IntAct team supply appropriate levels of training, perform quality control on entries and take responsibility for long-term data maintenance. Recently, the MINT and IntAct databases decided to merge their separate efforts to make optimal use of limited developer resources and maximize the curation output. All data manually curated by the MINT curators have been moved into the IntAct database at EMBL-EBI and are merged with the existing IntAct dataset. Both IntAct and MINT are active contributors to the IMEx consortium (http://www.imexconsortium.org). PMID:24234451
75 FR 60169 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-29
... October 29, 2010 to be assured of consideration. Community Development Financial Institutions (CDFI) Fund: CDFI/CDE Fund Project Profile Web Form. Form: CDFI 0030. Description: The voluntary collection of narrative descriptions of projects financed by CDFI Fund awardees and allocatees via the CDFI/CDE Project...
An overview of the BioCreative 2012 Workshop Track III: interactive text mining task
Arighi, Cecilia N.; Carterette, Ben; Cohen, K. Bretonnel; Krallinger, Martin; Wilbur, W. John; Fey, Petra; Dodson, Robert; Cooper, Laurel; Van Slyke, Ceri E.; Dahdul, Wasila; Mabee, Paula; Li, Donghui; Harris, Bethany; Gillespie, Marc; Jimenez, Silvia; Roberts, Phoebe; Matthews, Lisa; Becker, Kevin; Drabkin, Harold; Bello, Susan; Licata, Luana; Chatr-aryamontri, Andrew; Schaeffer, Mary L.; Park, Julie; Haendel, Melissa; Van Auken, Kimberly; Li, Yuling; Chan, Juancarlos; Muller, Hans-Michael; Cui, Hong; Balhoff, James P.; Chi-Yang Wu, Johnny; Lu, Zhiyong; Wei, Chih-Hsuan; Tudor, Catalina O.; Raja, Kalpana; Subramani, Suresh; Natarajan, Jeyakumar; Cejuela, Juan Miguel; Dubey, Pratibha; Wu, Cathy
2013-01-01
In many databases, biocuration primarily involves literature curation, which usually involves retrieving relevant articles, extracting information that will translate into annotations and identifying new incoming literature. As the volume of biological literature increases, the use of text mining to assist in biocuration becomes increasingly relevant. A number of groups have developed tools for text mining from a computer science/linguistics perspective, and there are many initiatives to curate some aspect of biology from the literature. Some biocuration efforts already make use of a text mining tool, but there have not been many broad-based systematic efforts to study which aspects of a text mining tool contribute to its usefulness for a curation task. Here, we report on an effort to bring together text mining tool developers and database biocurators to test the utility and usability of tools. Six text mining systems presenting diverse biocuration tasks participated in a formal evaluation, and appropriate biocurators were recruited for testing. The performance results from this evaluation indicate that some of the systems were able to improve efficiency of curation by speeding up the curation task significantly (∼1.7- to 2.5-fold) over manual curation. In addition, some of the systems were able to improve annotation accuracy when compared with the performance on the manually curated set. In terms of inter-annotator agreement, the factors that contributed to significant differences for some of the systems included the expertise of the biocurator on the given curation task, the inherent difficulty of the curation and attention to annotation guidelines. After the task, annotators were asked to complete a survey to help identify strengths and weaknesses of the various systems. The analysis of this survey highlights how important task completion is to the biocurators’ overall experience of a system, regardless of the system’s high score on design, learnability and usability. In addition, strategies to refine the annotation guidelines and systems documentation, to adapt the tools to the needs and query types the end user might have and to evaluate performance in terms of efficiency, user interface, result export and traditional evaluation metrics have been analyzed during this task. This analysis will help to plan for a more intense study in BioCreative IV. PMID:23327936
Assessment of online continuing dental education in North Carolina.
Francis, B; Mauriello, S M; Phillips, C; Englebardt, S; Grayden, S K
2000-01-01
Dental professionals are discovering the unique advantages of asynchronous lifelong learning through continuing dental education (CDE) opportunities offered online. The purpose of this study was to evaluate both the process and outcomes of online CDE in North Carolina. The assessment was designed to provide a better understanding of practicing dental professionals' experiences with online CDE and to determine the effectiveness of this learning strategy. Dental professionals from four North Carolina Area Health Education Centers regions evaluated two pilot online CDE modules in 1998. Thirty-one participants were recruited and subsequently enrolled, with 23 completing at least one module. Each module included objectives, a multiple-choice pretest, interactive core material, and a post-test. Participants completed three online surveys measuring individual demographics and computer skill level, module design and use, and overall reaction to online learning. Most participants agreed that the modules were comprehensive, were pleasing in appearance, provided clear instructions, provided adequate feedback, and were easy to navigate. Most participants agreed that their knowledge of the material increased; this was validated by a significant increase in mean pre- to post-test scores (p = .0001). Participants agreed that convenience was a definite advantage, and they would choose online courses again to meet their CDE needs. The least-liked aspects included technical and formatting issues. Participants were enthusiastic about online learning and learned effectively with this teaching strategy, but desired much more interactivity than existed in the current design.
76 FR 17660 - National Institute of Mental Health; Notice of Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-30
National Institutes of Health, Neuroscience Center, 6001 Executive Boulevard, Conference Room C/D/E, Rockville, MD 20852. Contact: National Institute of Mental Health, NIH, Neuroscience Center, 6001 Executive Blvd., Room 6154, MSC 9609, Bethesda, MD 20892-9609.
OntoMate: a text-mining tool aiding curation at the Rat Genome Database
Liu, Weisong; Laulederkind, Stanley J. F.; Hayman, G. Thomas; Wang, Shur-Jen; Nigam, Rajni; Smith, Jennifer R.; De Pons, Jeff; Dwinell, Melinda R.; Shimoyama, Mary
2015-01-01
The Rat Genome Database (RGD) is the premier repository of rat genomic, genetic and physiologic data. Converting data from free text in the scientific literature to a structured format is one of the main tasks of all model organism databases. RGD spends considerable effort manually curating gene, Quantitative Trait Locus (QTL) and strain information. The rapidly growing volume of biomedical literature and the active research in the biological natural language processing (bioNLP) community have given RGD the impetus to adopt text-mining tools to improve curation efficiency. Recently, RGD has initiated a project to use OntoMate, an ontology-driven, concept-based literature search engine developed at RGD, as a replacement for the PubMed (http://www.ncbi.nlm.nih.gov/pubmed) search engine in the gene curation workflow. OntoMate tags abstracts with gene names, gene mutations, organism name and most of the 16 ontologies/vocabularies used at RGD. All terms/entities tagged to an abstract are listed with the abstract in the search results. All listed terms are linked both to data entry boxes and a term browser in the curation tool. OntoMate also provides user-activated filters for species, date and other parameters relevant to the literature search. Using the system for literature search and import has streamlined the process compared to using PubMed. The system was built with a scalable and open architecture, including features specifically designed to accelerate the RGD gene curation process. With the use of bioNLP tools, RGD has added more automation to its curation workflow. Database URL: http://rgd.mcw.edu PMID:25619558
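As a rough illustration of the concept-tagging idea described above, a dictionary-based tagger in Python (OntoMate's actual implementation is not described here; the term list, ontology labels, and function are invented for illustration; real systems add tokenization, synonym expansion, and disambiguation):

    # Tag an abstract with ontology terms by simple dictionary lookup.
    ONTOLOGY_TERMS = {
        "hypertension": "Disease Ontology",
        "blood pressure": "Clinical Measurement Ontology",
        "Slc17a5": "Gene",
    }

    def tag_abstract(text):
        """Return (term, source_ontology) pairs found in the text."""
        lowered = text.lower()
        return [(term, onto) for term, onto in ONTOLOGY_TERMS.items()
                if term.lower() in lowered]

    print(tag_abstract("Slc17a5 variants raise blood pressure in SHR rats"))
    # -> [('blood pressure', 'Clinical Measurement Ontology'), ('Slc17a5', 'Gene')]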
Cognitive Curations of Collaborative Curricula
ERIC Educational Resources Information Center
Ackerman, Amy S.
2015-01-01
Assuming the role of learning curators, 22 graduate students (in-service teachers) addressed authentic problems (challenges) within their respective classrooms by selecting digital tools as part of implementation of interdisciplinary lesson plans. Students focused on formative assessment tools as a means to gather evidence to make improvements in…
Biocuration workflows and text mining: overview of the BioCreative 2012 Workshop Track II.
Lu, Zhiyong; Hirschman, Lynette
2012-01-01
Manual curation of data from the biomedical literature is a rate-limiting factor for many expert curated databases. Despite the continuing advances in biomedical text mining and the pressing needs of biocurators for better tools, few existing text-mining tools have been successfully integrated into production literature curation systems such as those used by the expert curated databases. To close this gap and better understand all aspects of literature curation, we invited submissions of written descriptions of curation workflows from expert curated databases for the BioCreative 2012 Workshop Track II. We received seven qualified contributions, primarily from model organism databases. Based on these descriptions, we identified commonalities and differences across the workflows, the common ontologies and controlled vocabularies used and the current and desired uses of text mining for biocuration. Compared to a survey done in 2009, our 2012 results show that many more databases are now using text mining in parts of their curation workflows. In addition, the workshop participants identified text-mining aids for finding gene names and symbols (gene indexing), prioritization of documents for curation (document triage) and ontology concept assignment as those most desired by the biocurators. DATABASE URL: http://www.biocreative.org/tasks/bc-workshop-2012/workflow/.
Ontology-based reusable clinical document template production system.
Nam, Sejin; Lee, Sungin; Kim, James G Boram; Kim, Hong-Gee
2012-01-01
Clinical documents embody professional clinical knowledge. This paper shows an effective clinical document template (CDT) production system that uses a clinical description entity (CDE) model, a CDE ontology, and a knowledge management system called STEP that manages ontology-based clinical description entities. The ontology represents CDEs and their inter-relations, and the STEP system stores and manages CDE ontology-based information regarding CDTs. The system also provides Web Services interfaces for search and reasoning over clinical entities. The system was populated with entities and relations extracted from 35 CDTs that were used in admission, discharge, and progress reports, as well as those used in nursing and operation functions. A clinical document template editor is shown that uses STEP.
ERIC Educational Resources Information Center
Ball, Anna L.; Bowling, Amanda M.; Sharpless, Justin D.
2016-01-01
School Based Agricultural Education (SBAE) teachers can use coaching behaviors, along with their agricultural content knowledge to help their Career Development Event (CDE) teams succeed. This mixed methods, collective case study observed three SBAE teachers preparing multiple CDEs throughout the CDE season. The teachers observed had a previous…
Colorado Department of Education Abbreviated Information Management Annual Plan (CDE IMAP).
ERIC Educational Resources Information Center
Laughlin, Richard
The Colorado Department of Education (CDE), including Access Colorado Library and Information Network (ACLIN) and the Colorado School for the Deaf and the Blind, provides state-level guidance and resources for Colorado's local school districts, local and regional libraries, and special populations. Technology is the enabler that offers basic,…
ERIC Educational Resources Information Center
Levy, Alissa Beth
2012-01-01
The California Department of Education (CDE) has long asserted that success in Algebra I by Grade 8 is the goal for all California public school students. In fact, the state's accountability system penalizes schools that do not require all of their students to take the Algebra I end-of-course examination by Grade 8 (CDE, 2009). In this dissertation,…
77 FR 24972 - National Institute of Mental Health Notice of Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-26
National Institutes of Health, Neuroscience Center, 6001 Executive Boulevard, Conference Room C/D/E, Rockville, MD 20852. Contact Person: Jane..., National Institute of Mental Health, NIH, Neuroscience Center, 6001 Executive Blvd., Room 6154, MSC 9609, Bethesda, MD 20892-9609.
76 FR 51379 - National Institute of Mental Health Notice of Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-18
National Institutes of Health, Neuroscience Center, 6001 Executive Boulevard, Conference Room C/D/E, Rockville, MD 20852. Contact Person: Jane..., National Institute of Mental Health, NIH, Neuroscience Center, 6001 Executive Blvd., Room 6154, MSC 9609, Bethesda, MD 20892-9609.
78 FR 52551 - National Institute of Mental Health; Notice of Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-23
National Institutes of Health, Neuroscience Center, 6001 Executive Boulevard, Conference Room C/D/E, Rockville, MD 20852. Contact Person: Jane..., National Institute of Mental Health, NIH, Neuroscience Center, 6001 Executive Blvd., Room 6154, MSC 9609, Bethesda, MD 20892-9609.
77 FR 48998 - National Institute of Mental Health; Notice of Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-15
National Institutes of Health, Neuroscience Center, 6001 Executive Boulevard, Conference Room C/D/E, Rockville, MD 20852. Contact Person: Jane..., National Institute of Mental Health, NIH, Neuroscience Center, 6001 Executive Blvd., Room 6154, MSC 9609, Bethesda, MD 20892-9609.
The Nanomaterial Data Curation Initiative (NDCI) explores the critical aspect of data curation within the development of informatics approaches to understanding nanomaterial behavior. Data repositories and tools for integrating and interrogating complex nanomaterial datasets are...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akita, Shingo; Kubota, Koji; Kobayashi, Akira, E-mail: kbys@shinshu-u.ac.jp
Highlights: • BMC-derived PSCs play a role in a rat CDE diet-induced pancreatitis model. • BMC-derived PSCs contribute mainly to the early stage of pancreatic fibrosis. • BMC-derived activated PSCs can produce PDGF and TGFβ1. Abstract: Bone marrow cell (BMC)-derived myofibroblast-like cells have been reported in various organs, including the pancreas. However, the contribution of these cells to pancreatic fibrosis has not been fully discussed. The present study examined the possible involvement of pancreatic stellate cells (PSCs) originating from BMCs in the development of pancreatic fibrosis in a clinically relevant rat model of acute pancreatitis induced by a choline-deficient/ethionine-supplemented (CDE) diet. BMCs from female transgenic mice ubiquitously expressing green fluorescent protein (GFP) were transplanted into lethally irradiated male rats. Once chimerism was established, acute pancreatitis was induced by a CDE diet. Chronological changes in the number of PSCs originating from the donor BMCs were examined using double immunofluorescence for GFP and markers for PSCs, such as desmin and alpha smooth muscle actin (αSMA), 1, 3 and 8 weeks after the initiation of CDE feeding. We also used immunohistochemical staining to evaluate whether the PSCs from the BMCs produce growth factors, such as platelet-derived growth factor (PDGF) and transforming growth factor (TGF) β1. The percentage of BMC-derived activated PSCs increased significantly, peaking after 1 week of CDE treatment (accounting for 23.3 ± 0.9% of the total population of activated PSCs) and then decreasing. These cells produced both PDGF and TGFβ1 during the early stage of pancreatic fibrosis. Our results suggest that PSCs originating from BMCs contribute mainly to the early stage of pancreatic injury, at least in part, by producing growth factors in a rat CDE diet-induced pancreatitis model.
Investigating Astromaterials Curation Applications for Dexterous Robotic Arms
NASA Technical Reports Server (NTRS)
Snead, C. J.; Jang, J. H.; Cowden, T. R.; McCubbin, F. M.
2018-01-01
The Astromaterials Acquisition and Curation office at NASA Johnson Space Center is currently investigating tools and methods that will enable the curation of future astromaterials collections. Size and temperature constraints for astromaterials to be collected by current and future proposed missions will require the development of new robotic sample and tool handling capabilities. NASA Curation has investigated the application of robot arms in the past, and robotic 3-axis micromanipulators are currently in use for small particle curation in the Stardust and Cosmic Dust laboratories. While 3-axis micromanipulators have been extremely successful for activities involving the transfer of isolated particles in the 5-20 micron range (e.g. from microscope slide to epoxy bullet tip, beryllium SEM disk), their limited ranges of motion and lack of yaw, pitch, and roll degrees of freedom restrict their utility in other applications. For instance, curators removing particles from cosmic dust collectors by hand often employ scooping and rotating motions to successfully free trapped particles from the silicone oil coatings. Similar scooping and rotating motions are also employed when isolating a specific particle of interest from an aliquot of crushed meteorite. While cosmic dust curators have been remarkably successful with these kinds of particle manipulations using handheld tools, operator fatigue limits the number of particles that can be removed during a given extraction session. The challenges for curation of small particles will be exacerbated by mission requirements that samples be processed in N2 sample cabinets (i.e. gloveboxes). We have been investigating the use of compact robot arms to facilitate sample handling within gloveboxes. Six-axis robot arms potentially have applications beyond small particle manipulation. For instance, future sample return missions may involve biologically sensitive astromaterials that can be easily compromised by physical interaction with a curator; other potential future returned samples may require cryogenic curation. Robot arms may be combined with high-resolution cameras within a sample cabinet and controlled remotely by a curator. Sophisticated robot arm and hand combination systems can be programmed to mimic the movements of a curator wearing a data glove; successful implementation of such a system may ultimately allow a curator to virtually operate in a nitrogen, cryogenic, or biologically sensitive environment with dexterity comparable to that of a curator physically handling samples in a glove box.
Gao, Guangyao; Fu, Bojie; Zhan, Hongbin; Ma, Ying
2013-05-01
Predicting the fate and movement of contaminants in soils and groundwater is essential to assess and reduce the risk of soil contamination and groundwater pollution. Reaction processes of contaminants often decrease monotonically with depth. Time-dependent input sources usually occur at the inlet of natural or human-made systems such as radioactive waste disposal sites. This study presented a one-dimensional convection-dispersion equation (CDE) for contaminant transport in soils with depth-dependent reaction coefficients and time-dependent inlet boundary conditions, and derived its analytical solution. The adsorption coefficient and degradation rate were represented as sigmoidal functions of soil depth. Solute breakthrough curves (BTCs) and concentration profiles obtained from the CDE with depth-dependent and constant reaction coefficients were compared, and a constant effective reaction coefficient, calculated by arithmetically averaging the depth-dependent reaction coefficient, was proposed to reflect the lumped depth-dependent reaction effect. With the effective adsorption coefficient and degradation rate, the CDE could produce BTCs and concentration profiles similar to those from the CDE with depth-dependent reactions in soils with moderate chemical heterogeneity. In contrast, the concentrations predicted by the CDE with fitted reaction coefficients at a certain depth departed significantly from those of the CDE with depth-dependent reactions. Parametric analysis was performed to illustrate the effects of sinusoidally and exponentially decaying input functions on solute BTCs. The BTCs and concentration profiles obtained from the solutions for finite and semi-infinite domains were compared to investigate the effects of the effluent boundary condition. The finite solution produced higher concentrations at the rising limb of the BTCs and possessed a higher peak concentration than the semi-infinite solution, which had a slightly longer tail. Furthermore, the finite solution gave a higher concentration in the immediate vicinity of the exit boundary than the semi-infinite solution. The applicability of the proposed model was tested with a field herbicide and tracer leaching experiment in an agricultural area of northeastern Greece. The simulation results indicated that the proposed CDE with depth-dependent reaction coefficients was able to capture the evolution of metolachlor concentration at the upper soil depths. However, the simulation results at deeper depths were not satisfactory because the proposed model did not account for the preferential flow observed in the field. Copyright © 2013 Elsevier Ltd. All rights reserved.
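A plausible way to write the governing equation this abstract describes (the symbols here are assumed for illustration, not taken from the paper): with concentration C(z, t), pore-water velocity v, dispersion coefficient D, depth-dependent retardation factor R(z) (from the adsorption coefficient), and degradation rate μ(z),

    \[ R(z)\,\frac{\partial C}{\partial t} = D\,\frac{\partial^{2} C}{\partial z^{2}} - v\,\frac{\partial C}{\partial z} - \mu(z)\,C, \qquad \mu(z) = \mu_{1} + \frac{\mu_{2} - \mu_{1}}{1 + e^{-k\,(z - z_{1/2})}}, \]

where the sigmoidal form lets μ(z) shift smoothly from a near-surface value μ₁ toward μ₂ at depth. The constant effective coefficient proposed in the abstract would then be the arithmetic depth average, \( \bar{\mu} = \frac{1}{L}\int_{0}^{L}\mu(z)\,dz \), over a profile of length L.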
Online Learning Policy and Practice Survey: A Survey of the States
ERIC Educational Resources Information Center
Center for Digital Education, 2008
2008-01-01
In 2008, the Center for Digital Education (CDE) conducted a review of state policy and programs to determine the status of online learning policy and practice across the United States. CDE interviewed state education officials across the nation to evaluate the overall landscape of online learning. The rankings reflect the vision, policies,…
A computational platform to maintain and migrate manual functional annotations for BioCyc databases.
Walsh, Jesse R; Sen, Taner Z; Dickerson, Julie A
2014-10-12
BioCyc databases are an important resource for information on biological pathways and genomic data. Such databases represent the accumulation of biological data, some of which has been manually curated from the literature. An essential feature of these databases is the continuing data integration as new knowledge is discovered. As functional annotations are improved, scalable methods are needed for curators to manage annotations without detailed knowledge of the specific design of the BioCyc database. We have developed CycTools, a software tool that allows curators to maintain functional annotations in a model organism database. This tool builds on existing software to improve and simplify imports of user-provided annotation data into BioCyc databases. Additionally, CycTools automatically resolves synonyms and alternate identifiers contained within the database into the appropriate internal identifiers. Automating steps in the manual data entry process can improve curation efforts for major biological databases. The functionality of CycTools is demonstrated by transferring GO term annotations from MaizeCyc to matching proteins in CornCyc, both maize metabolic pathway databases available at MaizeGDB, and by creating strain-specific databases for metabolic engineering.
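A minimal sketch, in Python, of the synonym/alternate-identifier resolution step the abstract describes (the function names, data layout, and example identifiers are hypothetical, not CycTools' actual API):

    # Map user-supplied names (canonical names, synonyms, or alternate
    # identifiers) onto internal database identifiers before import.
    def build_index(records):
        """records: iterable of (internal_id, [names...]) pairs."""
        index = {}
        for internal_id, names in records:
            for name in names:
                index.setdefault(name.lower(), internal_id)
        return index

    def resolve(index, name):
        """Return the internal identifier for a name, or None if unknown."""
        return index.get(name.lower())

    index = build_index([("G-1234", ["adh1", "GRMZM2G442658"])])
    assert resolve(index, "GRMZM2G442658") == "G-1234"   # alternate ID resolved
    assert resolve(index, "unknown-gene") is None        # left for manual review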
USDA-ARS's Scientific Manuscript database
The Maize Genetics and Genomics Database (MaizeGDB) team prepared a survey to identify breeders’ needs for visualizing pedigrees, diversity data, and haplotypes in order to prioritize tool development and curation efforts at MaizeGDB. The survey was distributed to the maize research community on beh...
Data Albums: An Event Driven Search, Aggregation and Curation Tool for Earth Science
NASA Technical Reports Server (NTRS)
Ramachandran, Rahul; Kulkarni, Ajinkya; Maskey, Manil; Bakare, Rohan; Basyal, Sabin; Li, Xiang; Flynn, Shannon
2014-01-01
Approaches used in Earth science research such as case study analysis and climatology studies involve discovering and gathering diverse data sets and information to support the research goals. Gathering relevant data and information for case studies and climatology analysis is both tedious and time consuming. Current Earth science data systems are designed with the assumption that researchers access data primarily by instrument or geophysical parameter. In cases where researchers are interested in studying a significant event, they have to manually assemble a variety of datasets relevant to it by searching the different distributed data systems. This paper presents a specialized search, aggregation and curation tool for Earth science to address these challenges. The search tool automatically creates curated 'Data Albums', aggregated collections of information related to a specific event, containing links to relevant data files [granules] from different instruments, tools and services for visualization and analysis, and information about the event contained in news reports, images or videos to supplement research analysis. Curation in the tool is driven by an ontology-based relevancy ranking algorithm to filter out non-relevant information and data.
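A toy illustration of ontology-driven relevancy ranking in the spirit described above (the concept weights, threshold, and function are invented for illustration; the paper's actual algorithm is not specified here):

    # A document is kept for the "Data Album" only if the ontology concepts
    # it mentions score above a threshold; otherwise it is filtered out.
    EVENT_CONCEPTS = {            # concept -> weight for a "hurricane" album
        "hurricane": 3.0,
        "storm surge": 2.0,
        "precipitation": 1.0,
    }

    def relevancy(text, concepts=EVENT_CONCEPTS, threshold=2.0):
        lowered = text.lower()
        score = sum(w for term, w in concepts.items() if term in lowered)
        return score, score >= threshold

    print(relevancy("Storm surge and precipitation totals for the hurricane"))
    # -> (6.0, True): kept in the album
    print(relevancy("Quarterly budget report"))
    # -> (0.0, False): filtered out as non-relevant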
The Role of Community-Driven Data Curation for Enterprises
NASA Astrophysics Data System (ADS)
Curry, Edward; Freitas, Andre; O'Riáin, Sean
With increased utilization of data within their operational and strategic processes, enterprises need to ensure data quality and accuracy. Data curation is a process that can ensure the quality of data and its fitness for use. Traditional approaches to curation are struggling with increased data volumes, and near real-time demands for curated data. In response, curation teams have turned to community crowd-sourcing and semi-automated metadata tools for assistance. This chapter provides an overview of data curation, discusses the business motivations for curating data and investigates the role of community-based data curation, focusing on internal communities and pre-competitive data collaborations. The chapter is supported by case studies from Wikipedia, The New York Times, Thomson Reuters, Protein Data Bank and ChemSpider upon which best practices for both social and technical aspects of community-driven data curation are described.
A Case Study of Learning, Motivation, and Performance Strategies for Teaching and Coaching CDE Teams
ERIC Educational Resources Information Center
Ball, Anna; Bowling, Amanda; Bird, Will
2016-01-01
This intrinsic case study examined the case of students on CDE (Career Development Event) teams preparing for state competitive events and the teacher preparing them in a school with a previous exemplary track record of winning multiple state and national career development events. The students were interviewed multiple times during the 16-week…
NASA Astrophysics Data System (ADS)
Chang, Ailian; Sun, HongGuang; Zheng, Chunmiao; Lu, Bingqing; Lu, Chengpeng; Ma, Rui; Zhang, Yong
2018-07-01
Fractional-derivative models have been developed recently to interpret various hydrologic dynamics, such as dissolved contaminant transport in groundwater. However, they have not been applied to quantify other fluid dynamics, such as gas transport through complex geological media. This study reviewed previous gas transport experiments conducted in laboratory columns and real-world oil-gas reservoirs and found that gas dynamics exhibit typical sub-diffusive behavior characterized by heavy late-time tailing in the gas breakthrough curves (BTCs), which cannot be effectively captured by classical transport models. Numerical tests and field applications of the time-fractional convection-diffusion equation (fCDE) have shown that the fCDE model can capture the observed gas BTCs including their apparent positive skewness. Sensitivity analysis further revealed that the three parameters used in the fCDE model, including the time index, the convection velocity, and the diffusion coefficient, play different roles in interpreting the delayed gas transport dynamics. In addition, the model comparison and analysis showed that the time fCDE model is efficient in application. Therefore, time-fractional derivative models can be conveniently extended to quantify gas transport through natural geological media such as complex oil-gas reservoirs.
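As context, a standard way to write such a model (the notation below is generic and assumed for illustration; the paper's exact formulation may differ): with time index γ ∈ (0, 1], convection velocity v, and diffusion coefficient D, a Caputo time-fractional convection-diffusion equation for concentration C(x, t) reads

    \[ \frac{\partial^{\gamma} C}{\partial t^{\gamma}} = D\,\frac{\partial^{2} C}{\partial x^{2}} - v\,\frac{\partial C}{\partial x}, \qquad \frac{\partial^{\gamma} C}{\partial t^{\gamma}} := \frac{1}{\Gamma(1-\gamma)} \int_{0}^{t} (t-s)^{-\gamma}\, \frac{\partial C(x,s)}{\partial s}\, ds, \]

where γ = 1 recovers the classical CDE and smaller γ yields heavier late-time tails of the kind observed in the gas BTCs.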
One-loop matching and running with covariant derivative expansion
Henning, Brian; Lu, Xiaochuan; Murayama, Hitoshi
2018-01-24
We develop tools for performing effective field theory (EFT) calculations in a manifestly gauge-covariant fashion. We clarify how functional methods account for one-loop diagrams resulting from the exchange of both heavy and light fields, as some confusion has recently arisen in the literature. To efficiently evaluate functional traces containing these “mixed” one-loop terms, we develop a new covariant derivative expansion (CDE) technique that is capable of evaluating a much wider class of traces than previous methods. The technique is detailed in an appendix, so that it can be read independently from the rest of this work. We review the well-known matching procedure to one-loop order with functional methods. What we add to this story is showing how to isolate one-loop terms coming from diagrams involving only heavy propagators from diagrams with mixed heavy and light propagators. This is done using a non-local effective action, which physically connects to the notion of “integrating out” heavy fields. Lastly, we show how to use a CDE to do running analyses in EFTs, i.e. to obtain the anomalous dimension matrix. We demonstrate the methodologies by several explicit example calculations.
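For orientation, the object that functional matching manipulates is the one-loop effective action (a textbook formula, not a result specific to this paper): integrating out a field Φ with action S[Φ, φ] around its classical solution Φ_c[φ] gives

    \[ \Gamma_{\text{1-loop}}[\phi] = \frac{i}{2}\, \operatorname{Tr} \log \left( -\frac{\delta^{2} S}{\delta \Phi^{2}} \right) \Bigg|_{\Phi = \Phi_{c}[\phi]}, \]

and a covariant derivative expansion evaluates this functional trace while keeping the full covariant derivative D_μ intact (rather than splitting it into ∂_μ and the gauge field), so gauge covariance is manifest term by term.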
Quorum Sensing Regulation of Competence and Bacteriocins in Streptococcus pneumoniae and mutans
Shanker, Erin; Federle, Michael J.
2017-01-01
The human pathogens Streptococcus pneumoniae and Streptococcus mutans have both evolved complex quorum sensing (QS) systems that regulate the production of bacteriocins and the entry into the competent state, a requirement for natural transformation. Natural transformation provides bacteria with a mechanism to repair damaged genes or to acquire new advantageous traits. In S. pneumoniae, the competence pathway is controlled by the two-component signal transduction pathway ComCDE, which directly regulates SigX, the alternative sigma factor required for the initiation into competence. Over the past two decades, effectors of cellular killing (i.e., fratricides) have been recognized as important targets of the pneumococcal competence QS pathway. Recently, direct interactions between the ComCDE and the paralogous BlpRH pathway, regulating bacteriocin production, were identified, further strengthening the interconnections between these two QS systems. Interestingly, a similar theme is being revealed in S. mutans, the primary etiological agent of dental caries. This review compares the relationship between the bacteriocin and the competence QS pathways in both S. pneumoniae and S. mutans, and hopes to provide clues to regulatory pathways across the genus Streptococcus as a potential tool to efficiently investigate putative competence pathways in nontransformable streptococci. PMID:28067778
Grain dust-induced lung inflammation is reduced by Rhodobacter sphaeroides diphosphoryl lipid A.
Jagielo, P J; Quinn, T J; Qureshi, N; Schwartz, D A
1998-01-01
To further determine the importance of endotoxin in grain dust-induced inflammation of the lower respiratory tract, we evaluated the efficacy of pentaacylated diphosphoryl lipid A derived from the lipopolysaccharide of Rhodobacter sphaeroides (RsDPLA) as a partial agonist of grain dust-induced airway inflammation. RsDPLA is a relatively inactive compound compared with lipid A derived from Escherichia coli (LPS) and has been demonstrated to act as a partial agonist of LPS-induced inflammation. To assess the potential stimulatory effect of RsDPLA in relation to LPS, we incubated THP-1 cells with RsDPLA (0.001-100 micrograms/ml), LPS (0.02 microgram endotoxin activity/ml), or corn dust extract (CDE; 0.02 microgram endotoxin activity/ml). Incubation with RsDPLA revealed a tumor necrosis factor (TNF)-alpha stimulatory effect at 100 micrograms/ml. In contrast, incubation with LPS or CDE resulted in TNF-alpha release at 0.02 microgram/ml. Pretreatment of THP-1 cells with varying concentrations of RsDPLA before incubation with LPS or CDE (0.02 microgram endotoxin activity/ml) resulted in a dose-dependent reduction in the LPS- or CDE-induced release of TNF-alpha with concentrations of RsDPLA of up to 10 micrograms/ml but not at 100 micrograms/ml. To further understand the role of endotoxin in grain dust-induced airway inflammation, we utilized the unique LPS inhibitory property of RsDPLA to determine the inflammatory response to inhaled CDE in mice in the presence of RsDPLA. Ten micrograms of RsDPLA intratracheally did not cause a significant inflammatory response compared with intratracheal saline. However, pretreatment of mice with 10 micrograms of RsDPLA intratracheally before exposure to CDE (5.4 and 0.2 micrograms/m3) or LPS (7.2 and 0.28 micrograms/m3) resulted in significant reductions in the lung lavage concentrations of total cells, neutrophils, and specific proinflammatory cytokines compared with mice pretreated with sterile saline. These results confirm the LPS-inhibitory effect of RsDPLA and support the role of endotoxin as the principal agent in grain dust causing airway inflammation.
Foerster, Hartmut; Bombarely, Aureliano; Battey, James N D; Sierro, Nicolas; Ivanov, Nikolai V; Mueller, Lukas A
2018-01-01
SolCyc is the entry portal to pathway/genome databases (PGDBs) for major species of the Solanaceae family hosted at the Sol Genomics Network. Currently, SolCyc comprises six organism-specific PGDBs for tomato, potato, pepper, petunia, tobacco and one Rubiaceae, coffee. The metabolic networks of those PGDBs have been computationally predicted by the pathologic component of the pathway tools software using the manually curated multi-domain database MetaCyc (http://www.metacyc.org/) as reference. SolCyc has been recently extended by taxon-specific databases, i.e. the family-specific SolanaCyc database, containing only curated data pertinent to species of the nightshade family, and NicotianaCyc, a genus-specific database that stores all relevant metabolic data of the Nicotiana genus. Through manual curation of the published literature, new metabolic pathways have been created in those databases, which are complemented by the continuously updated, relevant species-specific pathways from MetaCyc. At present, SolanaCyc comprises 199 pathways and 29 superpathways and NicotianaCyc accounts for 72 pathways and 13 superpathways. Curator-maintained, taxon-specific databases such as SolanaCyc and NicotianaCyc are characterized by an enrichment of data specific to these taxa and are free of falsely predicted pathways. Both databases have been used to update recently created Nicotiana-specific databases for Nicotiana tabacum, Nicotiana benthamiana, Nicotiana sylvestris and Nicotiana tomentosiformis by propagating verifiable data into those PGDBs. In addition, in-depth curation of the pathways in N. tabacum has been carried out, which resulted in the elimination of 156 pathways from the 569 pathways predicted by pathway tools. Together, in-depth curation of the predicted pathway network and the supplementation with curated data from taxon-specific databases has substantially improved the curation status of the species-specific N. tabacum PGDB. The implementation of this strategy will significantly advance the curation status of all organism-specific databases in SolCyc, resulting in improved database accuracy, data analysis and visualization of biochemical networks in those species. Database URL https://solgenomics.net/tools/solcyc/ PMID:29762652
Sharma, Bibek; Patino, Reynaldo
2010-01-01
To assess interaction effects between cadmium (Cd, a putative xenoestrogen) and estradiol-17beta (E2) on sex differentiation and metamorphosis, Xenopus laevis were exposed to solvent control (0.005% ethanol), Cd (10 μg/L), E2 (1 μg/L), or Cd and E2 (Cd+E2) in FETAX medium from fertilization to 75 d postfertilization. Each treatment was applied to four aquaria, each with 30 fertilized eggs. Mortality was recorded, and animals were sampled as they completed metamorphosis (Nieuwkoop and Faber (NF) stage 66). Gonadal sex of individuals (including tadpoles at or beyond NF stage 55 at day 75) was determined gross-morphologically and used to compute sex ratios. Time course and percent completion of metamorphosis, snout-vent length (SVL), hindlimb length (HLL) and weight were analyzed for each sex separately. Survival rates did not differ among treatments. The E2 and Cd+E2 treatments significantly skewed sex ratios towards females; however, no sex-ratio differences were observed between the control and Cd treatments or between the E2 and Cd+E2 treatments. The time course of metamorphosis was generally delayed, and percent completion of metamorphosis generally reduced, in males and females exposed to Cd, E2 or their combination compared with control animals. In males, but not females, the effect of Cd+E2 was greater than that of the individual chemicals. Weight at completion of metamorphosis was reduced only in females and only by the Cd+E2 treatment. In conclusion, although Cd at an environmentally relevant concentration did not exhibit direct or indirect feminizing effects in Xenopus tadpoles, the metal and E2 both had similar inhibitory effects on metamorphosis that were of greater magnitude in males than in females.
The BioCyc collection of microbial genomes and metabolic pathways.
Karp, Peter D; Billington, Richard; Caspi, Ron; Fulcher, Carol A; Latendresse, Mario; Kothari, Anamika; Keseler, Ingrid M; Krummenacker, Markus; Midford, Peter E; Ong, Quang; Ong, Wai Kit; Paley, Suzanne M; Subhraveti, Pallavi
2017-08-17
BioCyc.org is a microbial genome Web portal that combines thousands of genomes with additional information inferred by computer programs, imported from other databases and curated from the biomedical literature by biologist curators. BioCyc also provides an extensive range of query tools, visualization services and analysis software. Recent advances in BioCyc include an expansion in the content of BioCyc in terms of both the number of genomes and the types of information available for each genome; an expansion in the amount of curated content within BioCyc; and new developments in the BioCyc software tools including redesigned gene/protein pages and metabolite pages; new search tools; a new sequence-alignment tool; a new tool for visualizing groups of related metabolic pathways; and a facility called SmartTables, which enables biologists to perform analyses that previously would have required a programmer's assistance. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
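BioCyc's query tools include programmatic web services; as a minimal illustration, the sketch below fetches one database object as Pathway Tools XML. The host, route and parameter name (websvc.biocyc.org, getxml, id) are recalled from BioCyc documentation and should be treated as assumptions; current BioCyc versions may also require account authentication.

```python
# Hedged sketch: retrieve one BioCyc object as XML via the getxml web
# service. The endpoint form is an assumption based on BioCyc docs.
import urllib.request

def fetch_biocyc_xml(object_id: str) -> str:
    """Fetch a BioCyc object (e.g. 'ECOLI:TRP') as Pathway Tools XML."""
    url = f"https://websvc.biocyc.org/getxml?id={object_id}"
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")

if __name__ == "__main__":
    print(fetch_biocyc_xml("ECOLI:TRP")[:200])  # first 200 characters
```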
Correlation transfer and diffusion of ultrasound-modulated multiply scattered light.
Sakadzić, Sava; Wang, Lihong V
2006-04-28
We develop a temporal correlation transfer equation (CTE) and a temporal correlation diffusion equation (CDE) for ultrasound-modulated multiply scattered light. These equations can be applied to an optically scattering medium with embedded optically scattering and absorbing objects to calculate the power spectrum of light modulated by a nonuniform ultrasound field. We present an analytical solution based on the CDE and Monte Carlo simulation results for light modulated by a cylinder of ultrasound in an optically scattering slab. We further validate with experimental measurements the numerical calculations for an actual ultrasound field. The CTE and CDE are valid for moderate ultrasound pressures and on a length scale comparable with the optical transport mean-free path. These equations should be applicable to a wide spectrum of conditions for ultrasound-modulated optical tomography of soft biological tissues.
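For orientation, in conventional diffuse correlation spectroscopy (without ultrasound) the temporal field correlation G1 obeys the standard correlation diffusion equation shown below; the paper's CDE generalizes this picture with terms describing the ultrasound-induced modulation of scatterer positions and refractive index, which are not reproduced here. This is standard background, not the paper's equation:

```latex
\left[\nabla\cdot D(\mathbf{r})\nabla \;-\; v\,\mu_a(\mathbf{r})
  \;-\; \tfrac{1}{3}\,v\,\mu_s'\,k_0^2\,\alpha\,\langle \Delta r^2(\tau)\rangle\right]
  G_1(\mathbf{r},\tau) \;=\; -\,v\,S(\mathbf{r})
```

Here D = v/(3μs′) is the photon diffusion coefficient, v the speed of light in the medium, μa and μs′ the absorption and reduced scattering coefficients, k0 the optical wavenumber, α the fraction of scattering events involving moving scatterers, ⟨Δr²(τ)⟩ their mean-square displacement over lag τ, and S the source term.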
A hybrid human and machine resource curation pipeline for the Neuroscience Information Framework.
Bandrowski, A E; Cachat, J; Li, Y; Müller, H M; Sternberg, P W; Ciccarese, P; Clark, T; Marenco, L; Wang, R; Astakhov, V; Grethe, J S; Martone, M E
2012-01-01
The breadth of information resources available to researchers on the Internet continues to expand, particularly in light of recently implemented data-sharing policies required by funding agencies. However, the nature of dense, multifaceted neuroscience data and the design of contemporary search engine systems makes efficient, reliable and relevant discovery of such information a significant challenge. This challenge is specifically pertinent for online databases, whose dynamic content is 'hidden' from search engines. The Neuroscience Information Framework (NIF; http://www.neuinfo.org) was funded by the NIH Blueprint for Neuroscience Research to address the problem of finding and utilizing neuroscience-relevant resources such as software tools, data sets, experimental animals and antibodies across the Internet. From the outset, NIF sought to provide an accounting of available resources while developing technical solutions to finding, accessing and utilizing them. The curators, therefore, are tasked with identifying and registering resources, examining data, writing configuration files to index and display data and keeping the contents current. In the initial phases of the project, all aspects of the registration and curation processes were manual. However, as the number of resources grew, manual curation became impractical. This report describes our experiences and successes with developing automated resource discovery and semiautomated type characterization with text-mining scripts that facilitate curation team efforts to discover, integrate and display new content. We also describe the DISCO framework, a suite of automated web services that significantly reduce manual curation efforts to periodically check for resource updates. Lastly, we discuss DOMEO, a semi-automated annotation tool that improves the discovery and curation of resources that are not necessarily website-based (i.e. reagents, software tools). Although the ultimate goal of automation was to reduce the workload of the curators, it has resulted in valuable analytic by-products that address accessibility, use and citation of resources that can now be shared with resource owners and the larger scientific community. Database URL: http://neuinfo.org PMID:22434839
Wang, Yanchao; Sunderraman, Rajshekhar
2006-01-01
In this paper, we propose two architectures for curating PDB data to improve its quality. The first, the PDB Data Curation System, adds two components, a Checking Filter and a Curation Engine, between the user interface and the database; it supports basic PDB data curation. The second, the PDB Data Curation System with XCML, is designed for further curation and adds four more components (PDB-XML, PDB, OODB, Protein-OODB) to the first. This architecture uses the XCML language to automatically check PDB data for errors, making the data more consistent and accurate. Both tools can be used to clean existing PDB files and to create new ones. We also outline how constraints and assertions can be added with XCML to obtain better data, and we discuss data provenance issues that may affect data accuracy and consistency.
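As a minimal illustration of what a Checking Filter stage between the user interface and the database might do, the sketch below validates a few fields of a parsed ATOM record before it is accepted for curation. The field rules are illustrative assumptions, not the authors' XCML constraint language.

```python
# Hedged sketch of a checking-filter stage: validate parsed PDB ATOM
# fields before database insertion. Rules are illustrative only.
from dataclasses import dataclass

@dataclass
class AtomRecord:
    serial: int       # atom serial number
    name: str         # atom name, e.g. "CA"
    res_name: str     # residue name, e.g. "GLY"
    chain_id: str     # one-character chain identifier
    x: float
    y: float
    z: float

def check_atom(rec: AtomRecord) -> list[str]:
    """Return a list of curation errors for one ATOM record."""
    errors = []
    if rec.serial <= 0:
        errors.append(f"serial {rec.serial} must be positive")
    if len(rec.chain_id) != 1:
        errors.append(f"chain id {rec.chain_id!r} must be one character")
    if not all(abs(v) < 9999.0 for v in (rec.x, rec.y, rec.z)):
        errors.append("coordinates outside PDB fixed-column range")
    return errors

if __name__ == "__main__":
    bad = AtomRecord(0, "CA", "GLY", "AB", 1.0, 2.0, 3.0)
    print(check_atom(bad))  # two errors flagged before DB insert
```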
Davis, Allan Peter; Wiegers, Thomas C.; Murphy, Cynthia G.; Mattingly, Carolyn J.
2011-01-01
The Comparative Toxicogenomics Database (CTD) is a public resource that promotes understanding about the effects of environmental chemicals on human health. CTD biocurators read the scientific literature and convert free-text information into a structured format using official nomenclature, integrating third-party controlled vocabularies for chemicals, genes, diseases and organisms, and a novel controlled vocabulary for molecular interactions. Manual curation produces a robust, richly annotated dataset of highly accurate and detailed information. Currently, CTD describes over 349 000 molecular interactions between 6800 chemicals, 20 900 genes (for 330 organisms) and 4300 diseases that have been manually curated from over 25 400 peer-reviewed articles. These manually curated data are further integrated with other third-party data (e.g. Gene Ontology, KEGG and Reactome annotations) to generate a wealth of toxicogenomic relationships. Here, we describe our approach to manual curation, which uses a powerful and efficient paradigm involving mnemonic codes. This strategy allows biocurators to quickly capture detailed information from articles by generating simple statements using codes to represent the relationships between data types. The paradigm is versatile, expandable, and able to accommodate new data challenges that arise. We have incorporated this strategy into a web-based curation tool to further increase efficiency and productivity, implement quality control in real-time and accommodate biocurators working remotely. Database URL: http://ctd.mdibl.org PMID:21933848
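To make the mnemonic-code paradigm concrete, a curation statement can be a short coded string that software expands into a structured interaction record. The code vocabulary and delimiter below are invented for illustration; CTD's actual codes and syntax are defined by its curation manual.

```python
# Hedged sketch: expand a compact curator statement into a structured
# record. The codes ("exp-", etc.) and "|" delimiter are hypothetical.
ACTION_CODES = {
    "exp+": "increases expression",
    "exp-": "decreases expression",
    "act+": "increases activity",
}

def expand(statement: str) -> dict:
    """Expand e.g. 'aspirin|PTGS2|exp-' into a structured record."""
    chemical, gene, code = statement.split("|")
    return {"chemical": chemical, "gene": gene,
            "interaction": ACTION_CODES[code]}

print(expand("aspirin|PTGS2|exp-"))
# {'chemical': 'aspirin', 'gene': 'PTGS2', 'interaction': 'decreases expression'}
```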
STOP using just GO: a multi-ontology hypothesis generation tool for high throughput experimentation
2013-01-01
Background Gene Ontology (GO) enrichment analysis remains one of the most common methods for hypothesis generation from high-throughput datasets. However, researchers often wish to test hypotheses that fall outside of GO. Here, we developed and evaluated a tool for hypothesis generation from gene or protein lists using ontological concepts present in manually curated text that describes those genes and proteins. Results We developed the method Statistical Tracking of Ontological Phrases (STOP), which expands the realm of testable hypotheses in gene set enrichment analyses by integrating automated annotations of genes to terms from over 200 biomedical ontologies. While not as precise as manually curated terms, the additional enriched concepts have value when coupled with traditional enrichment analyses using curated terms. Conclusion Multiple ontologies have been developed for gene and protein annotation; by using a dataset of both manually curated GO terms and automatically recognized concepts from curated text, we can expand the realm of hypotheses that can be discovered. The web application STOP is available at http://mooneygroup.org/stop/. PMID:23409969
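The statistical core of enrichment analyses of this kind, whether over GO terms or STOP's ontology phrases, is typically a one-sided hypergeometric (Fisher) test: is a term annotated to more of the input genes than chance would predict? A minimal sketch:

```python
# Term-enrichment p-value under the hypergeometric null distribution.
from scipy.stats import hypergeom

def enrichment_p(total_genes: int, term_genes: int,
                 list_size: int, term_in_list: int) -> float:
    """P(X >= term_in_list) when drawing list_size genes from a pool of
    total_genes, of which term_genes carry the annotation."""
    return hypergeom.sf(term_in_list - 1, total_genes, term_genes, list_size)

# 20 of a 150-gene list carry a term annotated to 400 of 20,000 genes;
# expected by chance is ~3, so the tail probability is tiny (enriched).
print(enrichment_p(20_000, 400, 150, 20))
```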
CARD 2017: expansion and model-centric curation of the comprehensive antibiotic resistance database
Jia, Baofeng; Raphenya, Amogelang R.; Alcock, Brian; Waglechner, Nicholas; Guo, Peiyao; Tsang, Kara K.; Lago, Briony A.; Dave, Biren M.; Pereira, Sheldon; Sharma, Arjun N.; Doshi, Sachin; Courtot, Mélanie; Lo, Raymond; Williams, Laura E.; Frye, Jonathan G.; Elsayegh, Tariq; Sardar, Daim; Westman, Erin L.; Pawlowski, Andrew C.; Johnson, Timothy A.; Brinkman, Fiona S.L.; Wright, Gerard D.; McArthur, Andrew G.
2017-01-01
The Comprehensive Antibiotic Resistance Database (CARD; http://arpcard.mcmaster.ca) is a manually curated resource containing high quality reference data on the molecular basis of antimicrobial resistance (AMR), with an emphasis on the genes, proteins and mutations involved in AMR. CARD is ontologically structured, model centric, and spans the breadth of AMR drug classes and resistance mechanisms, including intrinsic, mutation-driven and acquired resistance. It is built upon the Antibiotic Resistance Ontology (ARO), a custom built, interconnected and hierarchical controlled vocabulary allowing advanced data sharing and organization. Its design allows the development of novel genome analysis tools, such as the Resistance Gene Identifier (RGI) for resistome prediction from raw genome sequence. Recent improvements include extensive curation of additional reference sequences and mutations, development of a unique Model Ontology and accompanying AMR detection models to power sequence analysis, new visualization tools, and expansion of the RGI for detection of emergent AMR threats. CARD curation is updated monthly based on an interplay of manual literature curation, computational text mining, and genome analysis. PMID:27789705
Sequencing Data Discovery and Integration for Earth System Science with MetaSeek
NASA Astrophysics Data System (ADS)
Hoarfrost, A.; Brown, N.; Arnosti, C.
2017-12-01
Microbial communities play a central role in biogeochemical cycles. Sequencing data resources from environmental sources have grown exponentially in recent years, and represent a singular opportunity to investigate microbial interactions with Earth system processes. Carrying out such meta-analyses depends on our ability to discover and curate sequencing data into large-scale integrated datasets. However, such integration efforts are currently challenging and time-consuming, with sequencing data scattered across multiple repositories and metadata that is not easily or comprehensively searchable. MetaSeek is a sequencing data discovery tool that integrates sequencing metadata from all the major data repositories, allowing the user to search and filter on datasets in a lightweight application with an intuitive, easy-to-use web-based interface. Users can save and share curated datasets, while other users can browse these data integrations or use them as a jumping off point for their own curation. Missing and/or erroneous metadata are inferred automatically where possible, and where not possible, users are prompted to contribute to the improvement of the sequencing metadata pool by correcting and amending metadata errors. Once an integrated dataset has been curated, users can follow simple instructions to download their raw data and quickly begin their investigations. In addition to the online interface, the MetaSeek database is easily queryable via an open API, further enabling users and facilitating integrations of MetaSeek with other data curation tools. This tool lowers the barriers to curation and integration of environmental sequencing data, clearing the path forward to illuminating the ecosystem-scale interactions between biological and abiotic processes.
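The abstract notes that the MetaSeek database is queryable via an open API. A hedged sketch of what such a client call could look like follows; the host, route and filter names here are invented placeholders, not MetaSeek's documented API.

```python
# Hypothetical sketch of a MetaSeek API client. The base URL, route,
# and filter parameter names are assumptions for illustration only.
import json
import urllib.parse
import urllib.request

BASE = "https://www.metaseek.cloud/api"  # assumed host

def search_datasets(filters: dict) -> list:
    """Query the (assumed) dataset search route with metadata filters."""
    query = urllib.parse.urlencode(filters)
    with urllib.request.urlopen(f"{BASE}/datasets?{query}") as resp:
        return json.load(resp)

# e.g. marine amplicon datasets (hypothetical filter names):
# results = search_datasets({"env_package": "water",
#                            "library_strategy": "AMPLICON"})
```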
Ozyurt, Ibrahim Burak; Grethe, Jeffrey S; Martone, Maryann E; Bandrowski, Anita E
2016-01-01
The NIF Registry, developed and maintained by the Neuroscience Information Framework, is a cooperative project aimed at cataloging research resources, e.g., software tools, databases and tissue banks, funded largely by governments and available as tools to research scientists. Although originally conceived for neuroscience, the NIF Registry has over the years broadened in scope to include research resources of general relevance to biomedical research, and it currently lists over 13K research resources. This broadening in scope to biomedical science led us to re-christen the NIF Registry platform as SciCrunch. The NIF/SciCrunch Registry has been cataloging the resource landscape since 2006; as such, it serves as a valuable dataset for tracking the breadth, fate and utilization of these resources. Our experience shows that research resources like databases are dynamic objects that can change location and scope over time. Although each record is entered manually and human-curated, the current size of the Registry requires tools that can aid curation efforts to keep content up to date, including when and where such resources are used. To address this challenge, we have developed an open source tool suite, collectively termed RDW (Resource Disambiguator for the Web). RDW is designed to help in the upkeep and curation of the Registry as well as in enhancing its content by automated extraction of resource candidates from the literature. The RDW toolkit includes a URL extractor for papers, a resource candidate screener, a resource URL change tracker and a resource content change tracker. Curators access these tools via a web-based user interface. Several strategies are used to optimize these tools, including supervised and unsupervised learning algorithms as well as statistical text analysis. The complete tool suite is used to enhance and maintain the resource Registry and to track the usage of individual resources through an innovative literature citation index honed for research resources. Here we present an overview of the Registry and show how the RDW tools are used in curation and usage tracking. PMID:26730820
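Of the RDW components, the URL change tracker is the easiest to sketch: fingerprint each registered resource page and flag records whose content hash changes between curation sweeps. This is a generic reimplementation for illustration, not the RDW code.

```python
# Hedged sketch of a resource URL change tracker: hash each resource
# page and report resources whose fingerprint differs from the last sweep.
import hashlib
import urllib.request

def page_fingerprint(url: str) -> str:
    """SHA-256 digest of the page body at `url`."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

def changed_resources(registry: dict[str, str]) -> list[str]:
    """`registry` maps resource URL -> hash recorded at the last sweep;
    returns URLs whose content has changed since then."""
    return [url for url, old_hash in registry.items()
            if page_fingerprint(url) != old_hash]
```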
Schmedes, Sarah E; King, Jonathan L; Budowle, Bruce
2015-01-01
Whole-genome data are invaluable for large-scale comparative genomic studies. Current sequencing technologies have made it feasible to sequence entire bacterial genomes with relative ease and speed at a substantially reduced cost per nucleotide, hence cost per genome. More than 3,000 bacterial genomes have been sequenced and are available at the finished status. Publicly available genomes can be readily downloaded; however, there are challenges in verifying the specific supporting data contained within the download and in identifying errors and inconsistencies that may be present within the organizational data content and metadata. AutoCurE, an automated tool for bacterial genome database curation in Excel, was developed to facilitate local database curation of the supporting data that accompany genomes downloaded from the National Center for Biotechnology Information. AutoCurE provides an automated approach to curating local genomic databases by flagging inconsistencies or errors, comparing the downloaded supporting data with the genome reports to verify genome names, RefSeq accession numbers, the presence of archaea, BioProject/UIDs, and sequence file descriptions. Flags are generated for nine metadata fields if there are inconsistencies between the downloaded genomes and the genome reports, or if erroneous or missing data are evident. AutoCurE is an easy-to-use tool for local database curation of large-scale genome data prior to downstream analyses.
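AutoCurE itself is an Excel tool, but the flagging logic it describes, comparing downloaded supporting data against the genome reports field by field, can be sketched in a few lines. The field names below are illustrative assumptions, not AutoCurE's nine actual fields.

```python
# Hedged sketch of AutoCurE-style flagging: compare a downloaded genome
# record with its NCBI genome-report entry and flag mismatches/missing
# values. Field names are hypothetical.
FIELDS = ["genome_name", "refseq_accession", "bioproject_uid"]

def flag_record(downloaded: dict, report: dict) -> list[str]:
    """Return one flag string per inconsistent or missing field."""
    flags = []
    for f in FIELDS:
        dl, rp = downloaded.get(f), report.get(f)
        if dl is None or rp is None:
            flags.append(f"{f}: missing value")
        elif dl != rp:
            flags.append(f"{f}: download has {dl!r}, report has {rp!r}")
    return flags

print(flag_record(
    {"genome_name": "E. coli K-12", "refseq_accession": "NC_000913.3"},
    {"genome_name": "E. coli K-12", "refseq_accession": "NC_000913.2",
     "bioproject_uid": "57779"}))
```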
ITEP: an integrated toolkit for exploration of microbial pan-genomes.
Benedict, Matthew N; Henriksen, James R; Metcalf, William W; Whitaker, Rachel J; Price, Nathan D
2014-01-03
Comparative genomics is a powerful approach for studying variation in physiological traits as well as the evolution and ecology of microorganisms. Recent technological advances have enabled sequencing large numbers of related genomes in a single project, requiring computational tools for their integrated analysis. In particular, accurate annotations and identification of gene presence and absence are critical for understanding and modeling the cellular physiology of newly sequenced genomes. Although many tools are available to compare the gene contents of related genomes, new tools are necessary to enable close examination and curation of protein families from large numbers of closely related organisms, to integrate curation with the analysis of gain and loss, and to generate metabolic networks linking the annotations to observed phenotypes. We have developed ITEP, an Integrated Toolkit for Exploration of microbial Pan-genomes, to curate protein families, compute similarities to externally defined domains, analyze gene gain and loss, and generate draft metabolic networks from one or more curated reference network reconstructions in groups of related microbial species whose combined core and variable genes constitute their "pan-genomes". The ITEP toolkit consists of: (1) a series of modular command-line scripts for identification, comparison, curation, and analysis of protein families and their distribution across many genomes; (2) a set of Python libraries for programmatic access to the same data; and (3) pre-packaged scripts to perform common analysis workflows on a collection of genomes. ITEP's capabilities include de novo protein family prediction, ortholog detection, analysis of functional domains, identification of core and variable genes and gene regions, sequence alignment and tree generation, annotation curation, and the integration of cross-genome analysis and metabolic networks for the study of metabolic network evolution. ITEP is a powerful, flexible toolkit for the generation and curation of protein families. Its modular design allows for straightforward extension as analysis methods and tools evolve. By integrating comparative genomics with the development of draft metabolic networks, ITEP harnesses the power of comparative genomics to build confidence in links between genotype and phenotype and helps disambiguate gene annotations when they are evaluated in both evolutionary and metabolic network contexts.
Wan, Min; Zhang, Wenhua; Tian, Yangli; Xu, Chanjuan; Xu, Tao; Liu, Jianfeng; Zhang, Rongying
2015-01-01
Endocytosis and postendocytic sorting of G-protein-coupled receptors (GPCRs) is important for the regulation of both their cell surface density and signaling profile. Unlike the mechanisms of clathrin-dependent endocytosis (CDE), the mechanisms underlying the control of GPCR signaling by clathrin-independent endocytosis (CIE) remain largely unknown. Among the muscarinic acetylcholine receptors (mAChRs), the M4 mAChR undergoes CDE and recycling, whereas the M2 mAChR is internalized through CIE and targeted to lysosomes. Here we investigated the endocytosis and postendocytic trafficking of the M2 mAChR based on a comparative analysis of the third cytoplasmic (i3) domain in M2 and M4 mAChRs. For the first time, we identified the sequence 374KKKPPPS380 as a sorting signal for the clathrin-independent internalization of the M2 mAChR. Switching 374KKKPPPS380 into the i3 loop of the M4 mAChR shifted that receptor into lysosomes through the CIE pathway, and therefore away from CDE and recycling. We also found another, previously unidentified sequence that guides CDE of the M2 mAChR, 361VARKIVKMTKQPA373, which is normally masked in the presence of the downstream sequence 374KKKPPPS380. Taken together, our data indicate that endocytosis and postendocytic sorting of GPCRs that undergo CIE can be sequence-dependent. PMID:26094760
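Locating the reported signals in a receptor sequence is a simple motif scan; the sketch below searches for KKKPPPS (residues 374-380) in a toy fragment spanning the two reported stretches. Only the string handling is shown here; the functional assignments are the paper's.

```python
# Scan a protein sequence for an exact sorting-signal motif and report
# 1-based match positions. The toy fragment concatenates the paper's
# reported residues 361-380; it is not a full receptor sequence.
import re

def find_motif(seq: str, motif: str = "KKKPPPS") -> list[int]:
    """Return 1-based start positions of `motif` in `seq`."""
    return [m.start() + 1 for m in re.finditer(motif, seq)]

toy_i3_fragment = "VARKIVKMTKQPAKKKPPPS"  # residues 361-380 as reported
print(find_motif(toy_i3_fragment))  # -> [14], i.e. motif begins at residue 374
```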
(Non-)symbolic magnitude processing in children with mathematical difficulties: A meta-analysis.
Schwenk, Christin; Sasanguie, Delphine; Kuhn, Jörg-Tobias; Kempe, Sophia; Doebler, Philipp; Holling, Heinz
2017-05-01
Symbolic and non-symbolic magnitude representations, measured by digit or dot comparison tasks, are assumed to underlie the development of arithmetic skills. The comparison distance effect (CDE) has been suggested as a hallmark of the preciseness of mental magnitude representations. It implies that two magnitudes are harder to discriminate when the numerical distance between them is small, and may therefore differ in children with mathematical difficulties (MD), i.e. low mathematical achievement or dyscalculia. However, empirical findings on the CDE in children with MD are heterogeneous, and only few studies assess both symbolic and non-symbolic skills. This meta-analysis therefore integrates 44 symbolic and 48 non-symbolic response time (RT) outcomes reported in nineteen studies (N=1630 subjects, aged 6-14 years). Independent of age, children with MD show significantly longer mean RTs than typically achieving controls, particularly on symbolic (Hedges' g=0.75; 95% CI [0.51; 0.99]), but to a significantly lower extent also on non-symbolic (g=0.24; 95% CI [0.13; 0.36]) tasks. However, no group differences were found for the CDE. Extending recent work, these meta-analytical findings on children with MD corroborate the diagnostic importance of magnitude comparison speed in symbolic tasks. By contrast, the validity of CDE measures in assessing MD is questioned. Copyright © 2017 Elsevier Ltd. All rights reserved.
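For readers unfamiliar with the effect-size metric: Hedges' g is Cohen's d with a small-sample bias correction. A sketch of its computation from group summary statistics follows; the numbers in the example are illustrative, not data from the meta-analysis.

```python
# Hedges' g from group means, SDs and sample sizes: pooled-SD Cohen's d
# scaled by the small-sample correction factor J.
import math

def hedges_g(m1: float, sd1: float, n1: int,
             m2: float, sd2: float, n2: int) -> float:
    df = n1 + n2 - 2
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (m1 - m2) / pooled_sd
    j = 1 - 3 / (4 * df - 1)  # bias correction for small samples
    return j * d

# MD group ~120 ms slower on a symbolic comparison task (made-up numbers):
print(hedges_g(980, 170, 40, 860, 150, 45))  # ~0.74, a medium-large effect
```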
Long-range dynamic polarization potentials for 11Be projectiles on 64Zn
NASA Astrophysics Data System (ADS)
So, W. Y.; Kim, K. S.; Choi, K. S.; Cheoun, Myung-Ki
2015-07-01
We investigate the effects of the long-range dynamic polarization (LRDP) potential, which consists of the Coulomb dipole excitation (CDE) potential and the long-range nuclear (LRN) potential, for the 11Be projectile on 64Zn. To study these effects, we perform a χ2 analysis of an optical model including the LRDP potential as well as a conventional short-range nuclear (SRN) potential. Based on this analysis, we argue that both the CDE and LRN potentials are essential to explaining the experimental values of PE, the ratio of the elastic scattering cross section to the Rutherford cross section. The Coulomb and nuclear parts of the LRDP potential are found to contribute to a strong absorption effect: the real parts of the CDE and LRN potentials lower the barrier, while their imaginary parts remove flux from the elastic channel in the 11Be+64Zn system. Finally, we extract the total reaction cross section σR, including the inelastic, breakup, and fusion cross sections. The contribution of inelastic scattering via the first excited state at εx = 0.32 MeV (1/2-) is found to be relatively large and cannot be ignored. In addition, our results agree quite well with the experimental breakup reaction cross section when a fairly large radius parameter is used.
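The χ2 analysis referred to is the standard goodness-of-fit minimization between measured and calculated elastic-scattering observables. In generic form (a conventional definition, with symbols not taken from the paper):

```latex
\chi^2(\mathbf{p}) \;=\; \sum_{i=1}^{N}
  \left( \frac{\sigma_i^{\mathrm{exp}} - \sigma_i^{\mathrm{th}}(\mathbf{p})}
              {\Delta\sigma_i^{\mathrm{exp}}} \right)^{\!2}
```

where σi^exp are the measured values with uncertainties Δσi^exp, σi^th are the optical-model predictions, and the parameter vector p (here the SRN, CDE and LRN potential parameters) is adjusted to minimize χ2.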
Transcriptional regulation of Saccharomyces cerevisiae CYS3 encoding cystathionine γ-lyase
Hiraishi, Hiroyuki; Miyake, Tsuyoshi
2008-01-01
In studying the regulation of GSH11, the structural gene of the high-affinity glutathione transporter (GSH-P1) in Saccharomyces cerevisiae, a cis-acting cysteine-responsive element, CCGCCACAC (CCG motif), was detected. Like GSH-P1, the cystathionine γ-lyase encoded by CYS3 is induced by sulfur starvation and repressed by the addition of cysteine to the growth medium. We detected a CCG motif (−311 to −303) and a CGC motif (CGCCACAC; −193 to −186), which is one base shorter than the CCG motif, in the 5′-upstream region of CYS3. One copy of centromere determining element 1, CDE1 (TCACGTGA; −217 to −210), which is responsible for regulation of the sulfate assimilation pathway genes, was also detected. We tested the roles of these three elements in the regulation of CYS3. Using a lacZ-reporter assay system, we found that the CCG/CGC motif is required for activation of CYS3, as well as for its repression by cysteine. In contrast, the CDE1 motif was responsible only for activation of CYS3. We also found that two transcription factors, Met4 and VDE, are responsible for activation of CYS3 through the CCG/CGC and CDE1 motifs. These observations suggest a dual regulation of CYS3 by factors that interact with the CDE1 motif and the CCG/CGC motifs. PMID:18317767
Use of Semantic Technology to Create Curated Data Albums
NASA Technical Reports Server (NTRS)
Ramachandran, Rahul; Kulkarni, Ajinkya; Li, Xiang; Sainju, Roshan; Bakare, Rohan; Basyal, Sabin
2014-01-01
One of the continuing challenges in any Earth science investigation is the discovery and access of useful science content from the increasingly large volumes of Earth science data and related information available online. Current Earth science data systems are designed with the assumption that researchers access data primarily by instrument or geophysical parameter. Those who know exactly which data sets they need can obtain the specific files using these systems. However, when researchers are interested in studying an event of research interest, they must manually assemble a variety of relevant data sets by searching the different distributed data systems. Consequently, there is a need to design and build specialized search and discovery tools in Earth science that can filter through large volumes of distributed online data and information and aggregate only the relevant resources needed to support climatology and case studies. This paper presents a specialized search and discovery tool that automatically creates curated Data Albums. The tool was designed to enable key elements of the search process such as dynamic interaction and sense-making. It supports dynamic interaction via different modes of interactivity and visual presentation of information. The compilation of information and data into a Data Album is analogous to a shoebox within the sense-making framework. The tool automates most of the tedious information- and data-gathering tasks for researchers. Data curation by the tool is achieved via an ontology-based relevancy ranking algorithm that filters out non-relevant information and data. The curation enables better search results compared with the simple keyword searches provided by existing data systems in Earth science.
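As a rough sketch of the kind of ontology-based relevancy ranking described (the actual algorithm is not given in the abstract; the concepts, weights and threshold below are invented), a candidate resource can be scored by weighted ontology-concept hits and filtered against a threshold:

```python
# Hedged sketch: score a candidate document by weighted hits of ontology
# concepts and filter out low-scoring (non-relevant) candidates.
# Concept weights and threshold are hypothetical.
CONCEPT_WEIGHTS = {
    "hurricane": 3.0,
    "storm surge": 2.5,
    "precipitation": 1.5,
    "sea surface temperature": 1.0,
}

def relevancy(doc_text: str, threshold: float = 3.0):
    """Return the relevancy score, or None if the document is filtered out."""
    text = doc_text.lower()
    score = sum(w for concept, w in CONCEPT_WEIGHTS.items() if concept in text)
    return score if score >= threshold else None

print(relevancy("Hurricane Ida produced extreme storm surge ..."))  # 5.5
print(relevancy("Annual precipitation climatology over ..."))        # None
```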
Patel, Ashokkumar A; Gilbertson, John R; Parwani, Anil V; Dhir, Rajiv; Datta, Milton W; Gupta, Rajnish; Berman, Jules J; Melamed, Jonathan; Kajdacsy-Balla, Andre; Orenstein, Jan; Becich, Michael J
2006-05-05
Advances in molecular biology and growing requirements from biomarker validation studies have generated a need for tissue banks to provide quality-controlled tissue samples with standardized clinical annotation. The NCI Cooperative Prostate Cancer Tissue Resource (CPCTR) is a distributed tissue bank that comprises four academic centers and provides thousands of clinically annotated prostate cancer specimens to researchers. Here we describe the CPCTR information management system architecture, common data element (CDE) development, query interfaces, data curation, and quality control. Data managers review the medical records to collect and continuously update information for the 145 clinical, pathological and inventorial CDEs that the Resource maintains for each case. An Access-based data entry tool provides de-identification and a standard communication mechanism between each group and a central CPCTR database. Standardized automated quality control audits have been implemented. Centrally, an Oracle database has web interfaces allowing multiple user-types, including the general public, to mine de-identified information from all of the sites with three levels of specificity and granularity as well as to request tissues through a formal letter of intent. Since July 2003, CPCTR has offered over 6,000 cases (38,000 blocks) of highly characterized prostate cancer biospecimens, including several tissue microarrays (TMA). The Resource developed a website with interfaces for the general public as well as researchers and internal members. These user groups have utilized the web-tools for public query of summary data on the cases that were available, to prepare requests, and to receive tissues. As of December 2005, the Resource received over 130 tissue requests, of which 45 have been reviewed, approved and filled. Additionally, the Resource implemented the TMA Data Exchange Specification in its TMA program and created a computer program for calculating PSA recurrence. Building a biorepository infrastructure that meets today's research needs involves time and input of many individuals from diverse disciplines. The CPCTR can provide large volumes of carefully annotated prostate tissue for research initiatives such as Specialized Programs of Research Excellence (SPOREs) and for biomarker validation studies and its experience can help development of collaborative, large scale, virtual tissue banks in other organ systems.
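The abstract mentions a computer program for calculating PSA recurrence but does not give its rule set. As a hedged sketch only: a common definition of biochemical recurrence after prostatectomy is a confirmed (two consecutive) PSA of at least 0.2 ng/mL, which could be coded as follows; the cutoff and confirmation rule are assumptions, not the CPCTR's published algorithm.

```python
# Hedged sketch of a PSA biochemical-recurrence rule: first index at
# which two consecutive PSA values reach the cutoff. Cutoff assumed.
def psa_recurrence(psa_series: list[float], cutoff: float = 0.2):
    """Return index of the first of two consecutive PSA values >= cutoff,
    or None if no recurrence is detected."""
    for i in range(len(psa_series) - 1):
        if psa_series[i] >= cutoff and psa_series[i + 1] >= cutoff:
            return i
    return None

print(psa_recurrence([0.02, 0.04, 0.1, 0.3, 0.5]))  # -> 3 (0.3 then 0.5)
```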
NASA Astrophysics Data System (ADS)
le Roux, J.; Baker, A.; Caltagirone, S.; Bugbee, K.
2017-12-01
The Common Metadata Repository (CMR) is a high-performance, high-quality repository for Earth science metadata records, and serves as the primary way to search NASA's growing 17.5 petabytes of Earth science data holdings. Released in 2015, CMR has the capability to support several different metadata standards already being utilized by NASA's combined network of Earth science data providers, or Distributed Active Archive Centers (DAACs). The Analysis and Review of CMR (ARC) Team located at Marshall Space Flight Center is working to improve the quality of records already in CMR with the goal of making records optimal for search and discovery. This effort entails a combination of automated and manual review, where each NASA record in CMR is checked for completeness, accuracy, and consistency. This effort is highly collaborative in nature, requiring communication and transparency of findings amongst NASA personnel, DAACs, the CMR team and other metadata curation teams. Through the evolution of this project it has become apparent that there is a need to document and report findings, as well as track metadata improvements in a more efficient manner. The ARC team has collaborated with Element 84 in order to develop a metadata curation tool to meet these needs. In this presentation, we will provide an overview of this metadata curation tool and its current capabilities. Challenges and future plans for the tool will also be discussed.
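The automated half of such a completeness, accuracy and consistency review can be sketched as a field-presence check over each metadata record. The required-field names below loosely follow NASA's UMM-style collection metadata and are assumptions for illustration.

```python
# Hedged sketch of an automated completeness check for a CMR metadata
# record. Field names are illustrative, not the ARC team's actual rules.
REQUIRED = ["ShortName", "Abstract", "TemporalExtents",
            "SpatialExtent", "RelatedUrls"]

def completeness_report(record: dict) -> dict[str, str]:
    """Mark each required field OK or MISSING for curator review."""
    report = {}
    for key in REQUIRED:
        value = record.get(key)
        report[key] = "MISSING" if value in (None, "", [], {}) else "OK"
    return report

print(completeness_report({"ShortName": "MOD021KM", "Abstract": "..."}))
# {'ShortName': 'OK', 'Abstract': 'OK', 'TemporalExtents': 'MISSING', ...}
```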
OntoBrowser: a collaborative tool for curation of ontologies by subject matter experts.
Ravagli, Carlo; Pognan, Francois; Marc, Philippe
2017-01-01
The lack of controlled terminology and ontology usage leads to incomplete search results and poor interoperability between databases. One of the major underlying challenges of data integration is curating data to adhere to controlled terminologies and/or ontologies. Finding subject matter experts with the time and skills required to perform data curation is often problematic. In addition, existing tools are not designed for continuous data integration and collaborative curation. This results in time-consuming curation workflows that often become unsustainable. The primary objective of OntoBrowser is to provide an easy-to-use online collaborative solution for subject matter experts to map reported terms to preferred ontology (or code list) terms and to facilitate ontology evolution. Additional features include web service access to data, visualization of ontologies in hierarchical/graph format and a peer review/approval workflow with alerting. The source code is freely available under the Apache v2.0 license; source code and installation instructions are available at http://opensource.nibr.com. The software is designed to run on a Java EE application server and store data in a relational database. Contact: philippe.marc@novartis.com. © The Author 2016. Published by Oxford University Press. PMID:27605099
MaizeGDB: New tools and resource
USDA-ARS's Scientific Manuscript database
MaizeGDB, the USDA-ARS genetics and genomics database, is a highly curated, community-oriented informatics service to researchers focused on the crop plant and model organism Zea mays. MaizeGDB facilitates maize research by curating, integrating, and maintaining a database that serves as the central...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krock, L.P.; Navalta, R.; Myhre, L.G.
An open bilayer ground-crew chemical defense ensemble (CDE) was proposed to reduce the thermal burden during vapor-only exposure periods. This study compared the thermal-stress profile of the proposed ensemble with that produced by the currently employed closed CDE. Four subjects, alternating ensembles on separate days, walked on a treadmill in an environmental chamber at 5.3 km/h (3.3 mph) and 2% grade (an energy expenditure of 350 kcal/h), with alternating work/rest periods to allow significant recovery. Mean total sweat production was lower (1.38 vs. 2.50 liters) and percent sweat evaporation greater (65.7% vs. 30.0%) in the prototype ensemble than in the closed CDE. The prototype ensemble provided greater heat dissipation and allowed more efficient sweat evaporation, which had the double benefit of reducing heat storage and limiting dehydration.
Control of head lice with a coconut-derived emulsion shampoo.
Connolly, M; Stafford, K A; Coles, G C; Kennedy, C T C; Downs, A M R
2009-01-01
To evaluate a novel coconut-derived emulsion (CDE) shampoo against head lice infestation in children. A school trial in which pupils were treated on days 0 and 7 and checked on days 8 and 15, and a family trial in which the product was applied by parents three times in 2 weeks or used as a cosmetic shampoo, with checks on days 14 and 70. UK schools in Bristol and Weston-super-Mare and families in North Somerset. Numbers of children free from infestation after treatment. In the school trial, percentage cures at day 8 were 14% (permethrin, n=7) and 61% (CDE, n=37). In the family trial, where all family members were treated, the cure rate was 96% (n=28); when the shampoo was subsequently used as a cosmetic shampoo, only 1 of 12 children became re-infested after 10 weeks. CDE shampoo is a novel, effective method of controlling head lice, and its use after treatment as a cosmetic shampoo can aid in reducing re-infestation.
Oh, Lawrence J; Nguyen, Chu Luan; Wong, Eugene; Wang, Samuel S Y; Francis, Ian C
2017-01-01
To evaluate surgical outcomes (SOs) and visual outcomes (VOs) in cataract surgery comparing the Centurion® phacoemulsification system (CPS) with the Infiniti® phacoemulsification system (IPS). Prospective, consecutive study in a single-site private practice. A total of 412 patients underwent cataract surgery with either the CPS using the 30-degree balanced® tip (n=207) or the IPS using the 30-degree Kelman® tip (n=205). Intraoperative and postoperative outcomes were documented prospectively up to one-month follow-up. Nuclear sclerosis (NS) grade, cumulative dissipated energy (CDE), preoperative corrected distance visual acuity (CDVA), and CDVA at one month were recorded. CDE was 13.50% less in the whole CPS subcohort compared with the whole IPS subcohort. In eyes with NS grade III or greater, CDE was 28.87% less with the CPS (n=70) compared with the IPS (n=44) (P=0.010). Surgical complications were not statistically different between the two subcohorts (P=0.083), but in the one case of vitreous loss using the CPS, CDVA of 6/4 was achieved at one month. The mean CDVAs (VOs) at one month for NS grade III and above cataracts were -0.17 logMAR (6/4.5) in the CPS and -0.15 logMAR (6/4.5) in the IPS subcohorts, respectively (P=0.033). CDE is 28.87% less, and VOs are significantly improved, in denser cataracts with the CPS compared with the IPS. The authors recommend the CPS for cases with denser nuclei. PMID:29181313
Anastasilakis, Konstantinos; Mourgela, Anna; Symeonidis, Chrysanthos; Dimitrakos, Stavros A; Ekonomidis, Panayiotis; Tsinopoulos, Ioannis
2015-01-01
To study postoperative macular thickness fluctuations measured by spectral-domain optical coherence tomography (SD-OCT) and to investigate a potential correlation among macular edema (ME) incidence, cumulative dissipated energy (CDE) released during phacoemulsification, and vitreoretinal interface status. This is a prospective, cross-sectional study of 106 cataract patients with no macular disorder who underwent phacoemulsification. Best-corrected visual acuity measurement, slit-lamp examination, OCT scans were performed preoperatively and 30 and 90 days postoperatively. The intraoperative parameters measured were CDE and total phacoemulsification time. The SD-OCT parameters assessed were central subfield thickness (CST), cube average thickness (CAT), cube macular volume, vitreoretinal interface status, and presence of cystoid or diffuse ME. Four patients (3.8%) developed subclinical ME. Regarding ME, there was no significant difference between patients with presence or absence of posterior vitreous detachment (chi-square, p = 0.57), although 75% of ME cases were observed in patients with attached posterior vitreous. With regard to comparison between eyes with and without subclinical CME incidence, CDE (p = 0.05), phacoemulsification time (p = 0.001), CST at month 1 (p = 0.002), cube macular volume at month 1 (p = 0.039), and CAT at month 1 (p = 0.050) were significantly higher in the subclinical CME group. This study provides evidence that OCT macular thickness parameters increase significantly at first and third month postoperatively and that the incidence of pseudophakic ME can be affected by CDE.
Lu, Zhiyong
2012-01-01
Today’s biomedical research has become heavily dependent on access to the biological knowledge encoded in expert curated biological databases. As the volume of biological literature grows rapidly, it becomes increasingly difficult for biocurators to keep up with the literature because manual curation is an expensive and time-consuming endeavour. Past research has suggested that computer-assisted curation can improve efficiency, but few text-mining systems have been formally evaluated in this regard. Through participation in the interactive text-mining track of the BioCreative 2012 workshop, we developed PubTator, a PubMed-like system that assists with two specific human curation tasks: document triage and bioconcept annotation. On the basis of evaluation results from two external user groups, we find that the accuracy of PubTator-assisted curation is comparable with that of manual curation and that PubTator can significantly increase human curatorial speed. These encouraging findings warrant further investigation with a larger number of publications to be annotated. Database URL: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/PubTator/ PMID:23160414
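For readers who want to try computer-assisted annotation, PubTator results can be retrieved programmatically. The sketch below uses the BioC JSON export route as I recall it from PubTator's documentation; treat the exact URL as an assumption (the service has since evolved into PubTator3).

```python
# Hedged sketch of a PubTator export call; the endpoint path reflects
# the documented BioC JSON route as remembered and may have changed.
import urllib.request

def fetch_annotations(pmids: list[str]) -> str:
    """Fetch PubTator bioconcept annotations for PMIDs as BioC JSON."""
    url = ("https://www.ncbi.nlm.nih.gov/research/pubtator-api/"
           "publications/export/biocjson?pmids=" + ",".join(pmids))
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")

# print(fetch_annotations(["23160414"]))  # annotations for the PubTator paper
```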
Curating and Nudging in Virtual CLIL Environments
ERIC Educational Resources Information Center
Nielsen, Helle Lykke
2014-01-01
Foreign language teachers can benefit substantially from the notions of curation and nudging when scaffolding CLIL activities on the internet. This article shows how these principles can be integrated into CLILstore, a free multimedia-rich learning tool with seamless access to online dictionaries, and presents feedback from first and second year…
Radiographic outcomes among South African coal miners.
Naidoo, Rajen N; Robins, Thomas G; Solomon, A; White, Neil; Franzblau, Alfred
2004-10-01
This study, the first to document the prevalence of pneumoconiosis among a living South African coal mining cohort, describes dose-response relationships between coal workers' pneumoconiosis and respirable dust exposure, and relationships between pneumoconiosis and both lung function deterioration and respiratory symptoms. A total of 684 current miners and 188 ex-miners from three bituminous-coal mines in Mpumalanga, South Africa, were studied. Chest radiographs were read according to the International Labour Organization (ILO) classification by two experienced readers, one an accredited National Institute for Occupational Safety and Health (NIOSH) "B" reader. Interviews were conducted to assess symptoms, work histories (also obtained from company records), smoking, and other risk factors. Spirometry was performed by trained technicians. Cumulative respirable dust exposure (CDE) estimates were constructed from historical company-collected sampling and researcher-collected personal dust measurements. Kappa statistics were used to compare the radiographic readings of the two readers, and an average profusion score was used in the analysis of the outcomes of interest. Because of possible confounding by employment status, most analyses were stratified by current and ex-miner status. The overall prevalence of pneumoconiosis was low (2%-4%). The degree of agreement between the two readers for profusion was moderate to high (kappa=0.58). A significant association (P<0.001) and trend (P<0.001) were seen for pneumoconiosis with increasing categories of CDE among current miners only. A significant (P<0.0001) additional 58 mg-years/m3 of CDE was seen among those with pneumoconiosis compared with those without. CDE contributed to a statistically significant 0.19% and 0.11% greater decline in percent predicted forced expiratory volume in one second (FEV1) and forced vital capacity (FVC), respectively, among current miners with pneumoconiosis than among those without. Logistic regression models showed no significant relationships between pneumoconiosis and symptoms. The overall prevalence of pneumoconiosis, although significantly associated with CDE, was low. The presence of pneumoconiosis is associated with meaningful health effects, including deterioration in lung function. Intervention measures that control exposure are indicated to reduce these functional effects.
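Given the mg-years/m3 units reported, cumulative respirable dust exposure is the usual sum over a miner's job history of concentration times duration; a standard formulation (not quoted from the paper):

```latex
\mathrm{CDE} \;=\; \sum_{j=1}^{J} c_j \, t_j
```

where c_j is the mean respirable dust concentration (mg/m3) assigned to job j and t_j the years spent in that job.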
Helvacioglu, Firat; Yeter, Celal; Tunc, Zeki; Sencan, Sadik
2013-08-01
To compare the safety and efficacy of Ozil Intelligent Phaco torsional microcoaxial phacoemulsification surgeries performed with 12-degree and 22-degree bent tips using the Infiniti Vision System. Maltepe University School of Medicine Department of Ophthalmology, Istanbul, Turkey. Comparative case series. Eyes were assigned to 2.2 mm microcoaxial phacoemulsification using the torsional mode with a 22-degree bent tip (Group 1) or a 12-degree bent tip (Group 2). The primary outcome measures were ultrasound time (UST), cumulative dissipated energy (CDE), longitudinal and torsional ultrasound (US) amplitudes, mean surgical time, mean volume of balanced salt solution used, and surgical complications. Both groups included 45 eyes. The mean UST, CDE, longitudinal US amplitude, and torsional US amplitude were 65 ± 27.23 (SD) seconds, 11.53 ± 6.99, 0.22 ± 0.26, and 42.86 ± 15.64, respectively, in Group 1 and 84 ± 45.04 seconds, 16.68 ± 10.66, 0.48 ± 0.68, and 46.27 ± 14.74, respectively, in Group 2. The mean UST, CDE, and longitudinal amplitude were significantly lower in Group 1 (P=.003, P=.008, and P=.022, respectively). The mean volume of balanced salt solution used was 73.33 ± 28.58 cc in Group 1 and 82.08 ± 26.21 cc in Group 2 (P=.134). Torsional phacoemulsification performed with 22-degree bent tips provided more effective lens removal than with 12-degree bent tips, with a lower UST and CDE. Copyright © 2013 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Allton, J. H.; Burkett, P. J.
2011-01-01
NASA Johnson Space Center operates clean curation facilities for Apollo lunar, Antarctic meteorite, stratospheric cosmic dust, Stardust comet and Genesis solar wind samples. Each of these collections is curated separately due to unique requirements. The purpose of this abstract is to highlight the technical tensions between providing particulate cleanliness and molecular cleanliness, illustrated using data from curation laboratories. Strict control of three components is required for curating samples cleanly: a clean environment; clean containers and tools that touch samples; and the use of non-shedding materials of cleanable chemistry and smooth surface finish. This abstract focuses on environmental cleanliness and the technical tension between achieving particulate and molecular cleanliness. An environment in which a sample is manipulated or stored can be a room, an enclosed glovebox (or robotic isolation chamber) or an individual sample container.
Directly e-mailing authors of newly published papers encourages community curation
Bunt, Stephanie M.; Grumbling, Gary B.; Field, Helen I.; Marygold, Steven J.; Brown, Nicholas H.; Millburn, Gillian H.
2012-01-01
Much of the data within Model Organism Databases (MODs) comes from manual curation of the primary research literature. Given limited funding and an increasing density of published material, a significant challenge facing all MODs is how to efficiently and effectively prioritize the most relevant research papers for detailed curation. Here, we report recent improvements to the triaging process used by FlyBase. We describe an automated method to directly e-mail corresponding authors of new papers, requesting that they list the genes studied and indicate (‘flag’) the types of data described in the paper using an online tool. Based on the author-assigned flags, papers are then prioritized for detailed curation and channelled to appropriate curator teams for full data extraction. The overall response rate has been 44% and the flagging of data types by authors is sufficiently accurate for effective prioritization of papers. In summary, we have established a sustainable community curation program, with the result that FlyBase curators now spend less time triaging and can devote more effort to the specialized task of detailed data extraction. Database URL: http://flybase.org/ PMID:22554788
Leon, Pia; Umari, Ingrid; Mangogna, Alessandro; Zanei, Andrea; Tognetto, Daniele
2016-01-01
To evaluate and compare the intraoperative parameters and postoperative outcomes of the torsional and longitudinal modes of phacoemulsification. Pertinent studies were identified by a computerized MEDLINE search from January 2002 to September 2013. The Meta-analysis is composed of two parts. In the first part, the intraoperative parameters were considered: ultrasound time (UST) and cumulative dissipated energy (CDE). The intraoperative values were also considered separately for two categories (moderate and hard cataract groups) depending on the nuclear opacity grade. In the second part of the study, the postoperative outcomes, such as the best corrected visual acuity (BCVA) and the endothelial cell loss (ECL), were taken into consideration. The UST and CDE values proved statistically significant in support of the torsional mode for both the moderate and hard cataract groups. The analysis of BCVA did not show a statistically significant difference between the two surgical modalities. The ECL count was statistically significant in support of the torsional mode (P<0.001). The Meta-analysis shows the superiority of the torsional mode for intraoperative parameters (UST, CDE) and postoperative ECL outcomes.
Leon, Pia; Umari, Ingrid; Mangogna, Alessandro; Zanei, Andrea; Tognetto, Daniele
2016-01-01
AIM To evaluate and compare the intraoperative parameters and postoperative outcomes of the torsional and longitudinal modes of phacoemulsification. METHODS Pertinent studies were identified by a computerized MEDLINE search from January 2002 to September 2013. The Meta-analysis is composed of two parts. In the first part, the intraoperative parameters were considered: ultrasound time (UST) and cumulative dissipated energy (CDE). The intraoperative values were also considered separately for two categories (moderate and hard cataract groups) depending on the nuclear opacity grade. In the second part of the study, the postoperative outcomes, such as the best corrected visual acuity (BCVA) and the endothelial cell loss (ECL), were taken into consideration. RESULTS The UST and CDE values proved statistically significant in support of the torsional mode for both the moderate and hard cataract groups. The analysis of BCVA did not show a statistically significant difference between the two surgical modalities. The ECL count was statistically significant in support of the torsional mode (P<0.001). CONCLUSION The Meta-analysis shows the superiority of the torsional mode for intraoperative parameters (UST, CDE) and postoperative ECL outcomes. PMID:27366694
Initial Flight Tests of the NASA F-15B Propulsion Flight Test Fixture
NASA Technical Reports Server (NTRS)
Palumbo, Nathan; Moes, Timothy R.; Vachon, M. Jake
2002-01-01
Flights of the F-15B/Propulsion Flight Test Fixture (PFTF) with a Cone Drag Experiment (CDE) attached have been accomplished at NASA Dryden Flight Research Center. Mounted underneath the fuselage of an F-15B airplane, the PFTF provides volume for experiment systems and attachment points for propulsion experiments. A unique feature of the PFTF is the incorporation of a six-degree-of-freedom force balance. The force balance mounts between the PFTF and experiment and measures three forces and moments. The CDE has been attached to the force balance for envelope expansion flights. This experiment spatially and inertially simulates a large propulsion test article. This report briefly describes the F-15B airplane, the PFTF, and the force balance. A detailed description of the CDE is provided. Force-balance ground testing and stiffness modifications are described. Flight profiles and selected flight data from the envelope expansion flights are provided and discussed, including force-balance data, the internal PFTF thermal and vibration environment, a handling qualities assessment, and performance capabilities of the F-15B airplane with the PFTF installed.
Integrating text mining into the MGI biocuration workflow
Dowell, K.G.; McAndrews-Hill, M.S.; Hill, D.P.; Drabkin, H.J.; Blake, J.A.
2009-01-01
A major challenge for functional and comparative genomics resource development is the extraction of data from the biomedical literature. Although text mining for biological data is an active research field, few applications have been integrated into production literature curation systems such as those of the model organism databases (MODs). Not only are most available biological natural language (bioNLP) and information retrieval and extraction solutions difficult to adapt to existing MOD curation workflows, but many also have high error rates or are unable to process documents available in those formats preferred by scientific journals. In September 2008, Mouse Genome Informatics (MGI) at The Jackson Laboratory initiated a search for dictionary-based text mining tools that we could integrate into our biocuration workflow. MGI has rigorous document triage and annotation procedures designed to identify appropriate articles about mouse genetics and genome biology. We currently screen ∼1000 journal articles a month for Gene Ontology terms, gene mapping, gene expression, phenotype data and other key biological information. Although we do not foresee that curation tasks will ever be fully automated, we are eager to implement named entity recognition (NER) tools for gene tagging that can help streamline our curation workflow and simplify gene indexing tasks within the MGI system. Gene indexing is an MGI-specific curation function that involves identifying which mouse genes are being studied in an article, then associating the appropriate gene symbols with the article reference number in the MGI database. Here, we discuss our search process, performance metrics and success criteria, and how we identified a short list of potential text mining tools for further evaluation. We provide an overview of our pilot projects with NCBO's Open Biomedical Annotator and Fraunhofer SCAI's ProMiner. In doing so, we prove the potential for the further incorporation of semi-automated processes into the curation of the biomedical literature. PMID:20157492
Integrating text mining into the MGI biocuration workflow.
Dowell, K G; McAndrews-Hill, M S; Hill, D P; Drabkin, H J; Blake, J A
2009-01-01
A major challenge for functional and comparative genomics resource development is the extraction of data from the biomedical literature. Although text mining for biological data is an active research field, few applications have been integrated into production literature curation systems such as those of the model organism databases (MODs). Not only are most available biological natural language (bioNLP) and information retrieval and extraction solutions difficult to adapt to existing MOD curation workflows, but many also have high error rates or are unable to process documents available in those formats preferred by scientific journals. In September 2008, Mouse Genome Informatics (MGI) at The Jackson Laboratory initiated a search for dictionary-based text mining tools that we could integrate into our biocuration workflow. MGI has rigorous document triage and annotation procedures designed to identify appropriate articles about mouse genetics and genome biology. We currently screen approximately 1000 journal articles a month for Gene Ontology terms, gene mapping, gene expression, phenotype data and other key biological information. Although we do not foresee that curation tasks will ever be fully automated, we are eager to implement named entity recognition (NER) tools for gene tagging that can help streamline our curation workflow and simplify gene indexing tasks within the MGI system. Gene indexing is an MGI-specific curation function that involves identifying which mouse genes are being studied in an article, then associating the appropriate gene symbols with the article reference number in the MGI database. Here, we discuss our search process, performance metrics and success criteria, and how we identified a short list of potential text mining tools for further evaluation. We provide an overview of our pilot projects with NCBO's Open Biomedical Annotator and Fraunhofer SCAI's ProMiner. In doing so, we prove the potential for the further incorporation of semi-automated processes into the curation of the biomedical literature.
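To make the dictionary-based gene tagging discussed above concrete, here is a minimal Python sketch of the idea; the gene symbols and the abstract text are invented placeholders, and production tools such as ProMiner additionally handle synonyms, case variants and disambiguation.

import re

# Hypothetical mini-dictionary of mouse gene symbols; real systems load
# tens of thousands of symbols and synonyms from nomenclature files.
GENE_DICTIONARY = {"Pax6", "Shh", "Trp53", "Kit"}

def tag_genes(text, dictionary):
    # Tokenize on alphanumeric runs and keep tokens found in the dictionary.
    tokens = re.findall(r"[A-Za-z0-9]+", text)
    return sorted({t for t in tokens if t in dictionary})

abstract = "We show that Pax6 and Shh interact during mouse eye development."
print(tag_genes(abstract, GENE_DICTIONARY))  # ['Pax6', 'Shh']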
Rizzoli-Córdoba, Antonio; Delgado-Ginebra, Ismael
A screening test is an instrument whose primary function is to identify individuals with a probable disease among an apparently healthy population, establishing risk or suspicion of disease. Caution must be taken when using a screening tool in order to avoid unrealistic measurements that could delay an intervention for those who may benefit from it. Before introducing a screening test into clinical practice, it is necessary to verify that it possesses certain characteristics that make it genuinely useful. This "certification" process is called validation. The main objective of this paper is to describe the different steps that must be taken, from the identification of a need for early detection through the generation of a validated and reliable screening tool, using as an example the process for the modified version of the Child Development Evaluation Test (CDE or Prueba EDI) in Mexico. Copyright © 2015 Hospital Infantil de México Federico Gómez. Published by Masson Doyma México S.A. All rights reserved.
Overview of the interactive task in BioCreative V
Wang, Qinghua; S. Abdul, Shabbir; Almeida, Lara; Ananiadou, Sophia; Balderas-Martínez, Yalbi I.; Batista-Navarro, Riza; Campos, David; Chilton, Lucy; Chou, Hui-Jou; Contreras, Gabriela; Cooper, Laurel; Dai, Hong-Jie; Ferrell, Barbra; Fluck, Juliane; Gama-Castro, Socorro; George, Nancy; Gkoutos, Georgios; Irin, Afroza K.; Jensen, Lars J.; Jimenez, Silvia; Jue, Toni R.; Keseler, Ingrid; Madan, Sumit; Matos, Sérgio; McQuilton, Peter; Milacic, Marija; Mort, Matthew; Natarajan, Jeyakumar; Pafilis, Evangelos; Pereira, Emiliano; Rao, Shruti; Rinaldi, Fabio; Rothfels, Karen; Salgado, David; Silva, Raquel M.; Singh, Onkar; Stefancsik, Raymund; Su, Chu-Hsien; Subramani, Suresh; Tadepally, Hamsa D.; Tsaprouni, Loukia; Vasilevsky, Nicole; Wang, Xiaodong; Chatr-Aryamontri, Andrew; Laulederkind, Stanley J. F.; Matis-Mitchell, Sherri; McEntyre, Johanna; Orchard, Sandra; Pundir, Sangya; Rodriguez-Esteban, Raul; Van Auken, Kimberly; Lu, Zhiyong; Schaeffer, Mary; Wu, Cathy H.; Hirschman, Lynette; Arighi, Cecilia N.
2016-01-01
Fully automated text mining (TM) systems promote efficient literature searching, retrieval, and review but are not sufficient to produce ready-to-consume curated documents. These systems are not meant to replace biocurators, but instead to assist them in one or more literature curation steps. To do so, the user interface is an important aspect that needs to be considered for tool adoption. The BioCreative Interactive task (IAT) is a track designed for exploring user-system interactions, promoting development of useful TM tools, and providing a communication channel between the biocuration and the TM communities. In BioCreative V, the IAT track followed a format similar to previous interactive tracks, where the utility and usability of TM tools, as well as the generation of use cases, have been the focal points. The proposed curation tasks are user-centric and formally evaluated by biocurators. In BioCreative V IAT, seven TM systems and 43 biocurators participated. Two levels of user participation were offered to broaden curator involvement and obtain more feedback on usability aspects. The full level participation involved training on the system, curation of a set of documents with and without TM assistance, tracking of time-on-task, and completion of a user survey. The partial level participation was designed to focus on usability aspects of the interface and not the performance per se. In this case, biocurators navigated the system by performing pre-designed tasks and then were asked whether they were able to achieve the task and the level of difficulty in completing the task. In this manuscript, we describe the development of the interactive task, from planning to execution and discuss major findings for the systems tested. Database URL: http://www.biocreative.org PMID:27589961
Overview of the interactive task in BioCreative V
Wang, Qinghua; Abdul, Shabbir S.; Almeida, Lara; ...
2016-09-01
Fully automated text mining (TM) systems promote efficient literature searching, retrieval, and review but are not sufficient to produce ready-to-consume curated documents. These systems are not meant to replace biocurators, but instead to assist them in one or more literature curation steps. To do so, the user interface is an important aspect that needs to be considered for tool adoption. The BioCreative Interactive task (IAT) is a track designed for exploring user-system interactions, promoting development of useful TM tools, and providing a communication channel between the biocuration and the TM communities. In BioCreative V, the IAT track followed a format similar to previous interactive tracks, where the utility and usability of TM tools, as well as the generation of use cases, have been the focal points. The proposed curation tasks are user-centric and formally evaluated by biocurators. In BioCreative V IAT, seven TM systems and 43 biocurators participated. Two levels of user participation were offered to broaden curator involvement and obtain more feedback on usability aspects. The full level participation involved training on the system, curation of a set of documents with and without TM assistance, tracking of time-on-task, and completion of a user survey. The partial level participation was designed to focus on usability aspects of the interface and not the performance per se. In this case, biocurators navigated the system by performing pre-designed tasks and then were asked whether they were able to achieve the task and the level of difficulty in completing the task. In this manuscript, we describe the development of the interactive task, from planning to execution and discuss major findings for the systems tested.
Disentangling interacting dark energy cosmologies with the three-point correlation function
NASA Astrophysics Data System (ADS)
Moresco, Michele; Marulli, Federico; Baldi, Marco; Moscardini, Lauro; Cimatti, Andrea
2014-10-01
We investigate the possibility of constraining coupled dark energy (cDE) cosmologies using the three-point correlation function (3PCF). Making use of the CODECS N-body simulations, we study the statistical properties of cold dark matter (CDM) haloes for a variety of models, including a fiducial ΛCDM scenario and five models in which dark energy (DE) and CDM mutually interact. We measure both the halo 3PCF, ζ(θ), and the reduced 3PCF, Q(θ), at different scales (2 < r [h⁻¹ Mpc] < 40) and redshifts (0 ≤ z ≤ 2). In all cDE models considered in this work, Q(θ) appears flat at small scales (for all redshifts) and at low redshifts (for all scales), while it builds up the characteristic V-shape anisotropy at increasing redshifts and scales. With respect to the ΛCDM predictions, cDE models show lower (higher) values of the halo 3PCF for perpendicular (elongated) configurations. The effect is also scale-dependent, with differences between ΛCDM and cDE models that increase at large scales. We made use of these measurements to estimate the halo bias, which is in fair agreement with the one computed from the two-point correlation function (2PCF). The main advantage of using both the 2PCF and 3PCF is to break the bias-σ8 degeneracy. Moreover, we find that our bias estimates are approximately independent of the assumed strength of DE coupling. This study demonstrates the power of a higher order clustering analysis in discriminating between alternative cosmological scenarios, for both present and forthcoming galaxy surveys, such as the Baryon Oscillation Spectroscopic Survey and Euclid.
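For reference, the reduced 3PCF Q(θ) used above is conventionally defined in the hierarchical (Groth-Peebles) form from the connected 3PCF ζ and the 2PCF ξ, a standard definition rather than one specific to this paper:

Q(r_{12}, r_{13}, \theta) = \frac{\zeta(r_{12}, r_{13}, r_{23})}{\xi(r_{12})\,\xi(r_{13}) + \xi(r_{12})\,\xi(r_{23}) + \xi(r_{13})\,\xi(r_{23})},

with r_{23} fixed by the opening angle θ between the triangle sides r_{12} and r_{13}.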
Gonzalez-Salinas, Roberto; Garza-Leon, Manuel; Saenz-de-Viteri, Manuel; Solis-S, Juan C; Gulias-Cañizo, Rosario; Quiroz-Mercado, Hugo
2017-08-22
To compare the cumulative dissipated energy (CDE), aspiration time and estimated aspiration fluid utilized during phacoemulsification cataract surgery using two phacoemulsification systems. A total of 164 consecutive eyes of 164 patients undergoing cataract surgery, 82 in the active-fluidics group and 82 in the gravity-fluidics group, were enrolled in this study. Cataracts graded NII to NIII using LOCS II were included. Each subject was randomly assigned to one of the two platforms with a specific configuration: the active-fluidics Centurion® phacoemulsification system or the gravity-fluidics Infiniti® Vision System. CDE, aspiration time (AT) and the mean estimated aspiration fluid (EAF) were registered and compared. The mean age was 68.3 ± 9.8 years (range 57-92 years), with no significant difference between the two groups. A positive correlation between the CDE values obtained by the two platforms was verified (r = 0.271, R² = 0.073, P = 0.013). Correlations were also observed for the EAF (r = 0.334, R² = 0.112, P = 0.046) and AT values (r = 0.156, R² = 0.024, P = 0.161). A statistically significantly lower CDE count, aspiration time and estimated fluid were obtained using the active-fluidics configuration when compared to the gravity-fluidics configuration, by 19.29%, 12.10% and 9.29%, respectively (P = 0.001, P < 0.0001 and P = 0.001). The active-fluidics Centurion® phacoemulsification system achieved higher surgical efficiency than the gravity-fluidics Infiniti® IP system for NII and NIII cataracts.
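As a consistency check on the statistics just quoted, the reported R² values are simply the squares of the correlation coefficients:

R^2 = r^2:\quad 0.271^2 \approx 0.073, \qquad 0.334^2 \approx 0.112, \qquad 0.156^2 \approx 0.024.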
Role of Tellurite Resistance Operon in Filamentous Growth of Yersinia pestis in Macrophages.
Ponnusamy, Duraisamy; Clinkenbeard, Kenneth D
2015-01-01
Yersinia pestis initiates infection by parasitism of host macrophages. In response to macrophage infections, intracellular Y. pestis can assume a filamentous cellular morphology, which may mediate resistance to host cell innate immune responses. We previously observed the expression of Y. pestis tellurite resistance proteins TerD and TerE from the terZABCDE operon during macrophage infections. Others have observed a filamentous response associated with expression of the tellurite resistance operon in Escherichia coli exposed to tellurite. Therefore, in this study we examine the potential role of the Y. pestis tellurite resistance operon in filamentous cellular morphology during macrophage infections. In vitro treatment of Y. pestis culture with sodium tellurite (Na2TeO3) caused the bacterial cells to assume a filamentous phenotype similar to the filamentous phenotype observed during macrophage infections. A deletion mutant for genes terZAB abolished the filamentous morphologic response to tellurite exposure or intracellular parasitism, but without affecting tellurite resistance. However, a terZABCDE deletion mutant abolished both the filamentous morphologic response and tellurite resistance. Complementation of the terZABCDE deletion mutant with terCDE, but not terZAB, partially restored tellurite resistance. When the terZABCDE deletion mutant was complemented with terZAB or terCDE, Y. pestis exhibited filamentous morphology during macrophage infections as well as while these complemented genes were being expressed under an in vitro condition. Further, in E. coli, expression of Y. pestis terZAB, but not terCDE, conferred a filamentous phenotype. These findings support a role for Y. pestis terZAB in mediating the filamentous response phenotype, whereas terCDE confers tellurite resistance. Although the beneficial role of filamentous morphological responses by Y. pestis during macrophage infections is yet to be fully defined, it may be a bacterial adaptive strategy to macrophage-associated stresses.
On expert curation and scalability: UniProtKB/Swiss-Prot as a case study
Arighi, Cecilia N; Magrane, Michele; Bateman, Alex; Wei, Chih-Hsuan; Lu, Zhiyong; Boutet, Emmanuel; Bye-A-Jee, Hema; Famiglietti, Maria Livia; Roechert, Bernd; UniProt Consortium, The
2017-01-01
Motivation Biological knowledgebases, such as UniProtKB/Swiss-Prot, constitute an essential component of daily scientific research by offering distilled, summarized and computable knowledge extracted from the literature by expert curators. While knowledgebases play an increasingly important role in the scientific community, their ability to keep up with the growth of biomedical literature is under scrutiny. Using UniProtKB/Swiss-Prot as a case study, we address this concern via multiple literature triage approaches. Results With the assistance of the PubTator text-mining tool, we tagged more than 10 000 articles to assess the ratio of papers relevant for curation. We first show that curators read and evaluate many more papers than they curate, and that measuring the number of curated publications is insufficient to provide a complete picture, as demonstrated by the fact that 8000–10 000 papers are curated in UniProt each year while curators evaluate 50 000–70 000 papers per year. We show that 90% of the papers in PubMed are out of the scope of UniProt, that a maximum of 2–3% of the papers indexed in PubMed each year are relevant for UniProt curation, and that, despite appearances, expert curation in UniProt is scalable. Availability and implementation UniProt is freely available at http://www.uniprot.org/. Contact sylvain.poux@sib.swiss Supplementary information Supplementary data are available at Bioinformatics online. PMID:29036270
[Design and validation of a questionnaire for psychosocial nursing diagnosis in Primary Care].
Brito-Brito, Pedro Ruymán; Rodríguez-Álvarez, Cristobalina; Sierra-López, Antonio; Rodríguez-Gómez, José Ángel; Aguirre-Jaime, Armando
2012-01-01
To develop a valid, reliable and easy-to-use questionnaire for a psychosocial nursing diagnosis. The study was performed in two phases: first phase, questionnaire design and construction; second phase, validity and reliability tests. A bank of items was constructed using the NANDA classification as a theoretical framework. Each item was assigned a Likert scale or dichotomous response. The combination of responses to the items constituted the diagnostic rules to assign up to 28 labels. A group of experts carried out the validity test for content. Other validated scales were used as reference standards for the criterion validity tests. Forty-five nurses provided the questionnaire to the patients on three separate occasions over a period of three weeks, and the other validated scales only once to 188 randomly selected patients in Primary Care centres in Tenerife (Spain). Validity tests for construct confirmed the six dimensions of the questionnaire with 91% of total variance explained. Validity tests for criterion showed a specificity of 66%-100%, and showed high correlations with the reference scales when the questionnaire was assigning nursing diagnoses. Reliability tests showed agreement of 56%-91% (P<.001), and a 93% internal consistency. The Questionnaire for Psychosocial Nursing Diagnosis was called CdePS, and included 61 items. The CdePS is a valid, reliable and easy-to-use tool in Primary Care centres to improve the assigning of a psychosocial nursing diagnosis. Copyright © 2011 Elsevier España, S.L. All rights reserved.
Chandonia, John-Marc; Fox, Naomi K; Brenner, Steven E
2017-02-03
SCOPe (Structural Classification of Proteins-extended, http://scop.berkeley.edu) is a database of relationships between protein structures that extends the Structural Classification of Proteins (SCOP) database. SCOP is an expert-curated ordering of domains from the majority of proteins of known structure in a hierarchy according to structural and evolutionary relationships. SCOPe classifies the majority of protein structures released since SCOP development concluded in 2009, using a combination of manual curation and highly precise automated tools, aiming to have the same accuracy as fully hand-curated SCOP releases. SCOPe also incorporates and updates the ASTRAL compendium, which provides several databases and tools to aid in the analysis of the sequences and structures of proteins classified in SCOPe. SCOPe continues high-quality manual classification of new superfamilies, a key feature of SCOP. Artifacts such as expression tags are now separated into their own class, in order to distinguish them from the homology-based annotations in the remainder of the SCOPe hierarchy. SCOPe 2.06 contains 77,439 Protein Data Bank entries, double the 38,221 structures classified in SCOP. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
ClinVar Miner: demonstrating utility of a web-based tool for viewing and filtering ClinVar data.
Henrie, Alex; Hemphill, Sarah E; Ruiz-Schultz, Nicole; Cushman, Brandon; DiStefano, Marina T; Azzariti, Danielle; Harrison, Steven M; Rehm, Heidi L; Eilbeck, Karen
2018-05-23
ClinVar Miner is a web-based suite that utilizes the data held in the National Center for Biotechnology Information's ClinVar archive. The goal is to render the data more accessible to processes pertaining to conflict resolution of variant interpretation as well as tracking details of data submission and data management for detailed variant curation. Here we establish the use of these tools to address three separate use-cases and to perform analyses across submissions. We demonstrate that the ClinVar Miner tools are an effective means to browse and consolidate data for variant submitters, curation groups, and general oversight. These tools are also relevant to the variant interpretation community in general. This article is protected by copyright. All rights reserved.
Reengineering Workflow for Curation of DICOM Datasets.
Bennett, William; Smith, Kirk; Jarosz, Quasar; Nolan, Tracy; Bosch, Walter
2018-06-15
Reusable, publicly available data is a pillar of open science and rapid advancement of cancer imaging research. Sharing data from completed research studies not only saves research dollars required to collect data, but also helps ensure that studies are both replicable and reproducible. The Cancer Imaging Archive (TCIA) is a global shared repository for imaging data related to cancer. Ensuring the consistency, scientific utility, and anonymity of data stored in TCIA is of utmost importance. As the rate of submission to TCIA has been increasing, both in volume and complexity of DICOM objects stored, the process of curation of collections has become a bottleneck in acquisition of data. In order to increase the rate of curation of image sets, improve the quality of the curation, and better track the provenance of changes made to submitted DICOM image sets, a custom set of tools was developed, using novel methods for the analysis of DICOM data sets. These tools are written in the programming language Perl, use the open-source database PostgreSQL, make use of the Perl DICOM routines in the open-source package Posda, and incorporate DICOM diagnostic tools from other open-source packages, such as dicom3tools. These tools are referred to as the "Posda Tools." The Posda Tools are open source and available via git at https://github.com/UAMS-DBMI/PosdaTools. In this paper, we briefly describe the Posda Tools and discuss the novel methods employed by these tools to facilitate rapid analysis of DICOM data, including the following: (1) use of a database schema which is more permissive, and differently normalized from, traditional DICOM databases; (2) automatic bulk integrity checks; (3) bulk application of revisions to DICOM datasets, either through a web-based interface or via command-line executable Perl scripts; (4) tracking of all such edits in a revision tracker, with the ability to roll them back; (5) a UI for inspecting the results of such edits, to verify that they are what was intended; (6) identification of DICOM Studies, Series, and SOP instances using "nicknames" which are persistent and have well-defined scope, to make expression of reported DICOM errors easier to manage; and (7) rapid identification of potential duplicate DICOM datasets by pixel data; this can be used, e.g., to identify submission subjects which may relate to the same individual, without identifying the individual.
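As an illustration of point (7), byte-identical pixel data can be flagged by hashing each instance's PixelData attribute. The Python/pydicom sketch below is only a schematic of the idea (the Posda Tools themselves are written in Perl), and the file names are placeholders.

import hashlib
from collections import defaultdict

import pydicom  # third-party DICOM parser

def pixel_hash(path):
    # SHA-256 digest of the raw pixel data; files without pixel data
    # would need to be skipped in a real pipeline.
    ds = pydicom.dcmread(path)
    return hashlib.sha256(ds.PixelData).hexdigest()

def find_duplicates(paths):
    # Group files whose pixel data are byte-identical.
    groups = defaultdict(list)
    for p in paths:
        groups[pixel_hash(p)].append(p)
    return [g for g in groups.values() if len(g) > 1]

# Hypothetical usage:
# print(find_duplicates(["a.dcm", "b.dcm", "c.dcm"]))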
Computational analysis in support of the SSTO flowpath test
NASA Astrophysics Data System (ADS)
Duncan, Beverly S.; Trefny, Charles J.
1994-10-01
A synergistic approach of combining computational methods and experimental measurements is used in the analysis of a hypersonic inlet. There are four major focal points within this study which examine the boundary layer growth on a compression ramp upstream of the cowl lip of a scramjet inlet. Initially, the boundary layer growth on the NASP Concept Demonstrator Engine (CDE) is examined. The follow-up study determines the optimum diverter height required by the SSTO Flowpath test to best duplicate the CDE results. These flow field computations are then compared to the experimental measurements and the mass average Mach number is determined for this inlet.
Computational Analysis in Support of the SSTO Flowpath Test
NASA Technical Reports Server (NTRS)
Duncan, Beverly S.; Trefny, Charles J.
1994-01-01
A synergistic approach of combining computational methods and experimental measurements is used in the analysis of a hypersonic inlet. There are four major focal points within this study which examine the boundary layer growth on a compression ramp upstream of the cowl lip of a scramjet inlet. Initially, the boundary layer growth on the NASP Concept Demonstrator Engine (CDE) is examined. The follow-up study determines the optimum diverter height required by the SSTO Flowpath test to best duplicate the CDE results. These flow field computations are then compared to the experimental measurements and the mass average Mach number is determined for this inlet.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iouri I. Balachov; Takao Kobayashi; Francis Tanzella
2004-11-17
This work contributes to the design of safe and economical Generation-IV Super-Critical Water Reactors (SCWRs) by providing a basis for selecting structural materials to ensure the functionality of in-vessel components during the entire service life. During the second year of the project, we completed electrochemical characterization of the oxide film properties and investigation of crack initiation and propagation for candidate structural steels under supercritical conditions. We ranked candidate alloys against their susceptibility to environmentally assisted degradation based on the in situ data measured with an SRI-designed controlled distance electrochemistry (CDE) arrangement. A correlation between measurable oxide film properties and susceptibility of austenitic steels to environmentally assisted degradation was observed experimentally. One of the major practical results of the present work is the experimentally proven ability of the economical CDE technique to supply in situ data for ranking candidate structural materials for Generation-IV SCWRs. A potential use of the CDE arrangement developed at SRI for building in situ sensors monitoring water chemistry in the heat transport circuit of Generation-IV SCWRs was evaluated and proved to be feasible.
Chlorella vulgaris triggers apoptosis in hepatocarcinogenesis-induced rats
Mohd Azamai, Emey Suhana; Sulaiman, Suhaniza; Mohd Habib, Shafina Hanim; Looi, Mee Lee; Das, Srijit; Abdul Hamid, Nor Aini; Wan Ngah, Wan Zurinah; Mohd Yusof, Yasmin Anum
2009-01-01
Chlorella vulgaris (CV) has been reported to have antioxidant and anticancer properties. We evaluated the effect of CV on apoptotic regulator protein expression in liver cancer-induced rats. Male Wistar rats (200~250 g) were divided into eight groups: control group (normal diet), CDE group (choline deficient diet supplemented with ethionine in drinking water to induce hepatocarcinogenesis), CV groups with three different doses of CV (50, 150, and 300 mg/kg body weight), and CDE groups treated with different doses of CV (50, 150, and 300 mg/kg body weight). Rats were sacrificed at various weeks and liver tissues were embedded in paraffin blocks for immunohistochemistry studies. CV, at increasing doses, decreased the expression of the anti-apoptotic protein Bcl-2 but increased the expression of the pro-apoptotic protein caspase 8 in CDE rats, which was correlated with decreased hepatocyte proliferation and increased apoptosis as determined by bromodeoxyuridine (BrdU) labeling and terminal deoxynucleotidyl transferase-mediated dUTP nick-end labeling (TUNEL) assay, respectively. Our study shows that CV has a definite chemopreventive effect by inducing apoptosis via decreasing the expression of Bcl-2 and increasing the expression of caspase 8 in hepatocarcinogenesis-induced rats. PMID:19198018
Building an efficient curation workflow for the Arabidopsis literature corpus
Li, Donghui; Berardini, Tanya Z.; Muller, Robert J.; Huala, Eva
2012-01-01
TAIR (The Arabidopsis Information Resource) is the model organism database (MOD) for Arabidopsis thaliana, a model plant with a literature corpus of about 39 000 articles in PubMed, with over 4300 new articles added in 2011. We have developed a literature curation workflow incorporating both automated and manual elements to cope with this flood of new research articles. The current workflow can be divided into two phases: article selection and curation. Structured controlled vocabularies, such as the Gene Ontology and Plant Ontology are used to capture free text information in the literature as succinct ontology-based annotations suitable for the application of computational analysis methods. We also describe our curation platform and the use of text mining tools in our workflow. Database URL: www.arabidopsis.org PMID:23221298
Vallenet, David; Belda, Eugeni; Calteau, Alexandra; Cruveiller, Stéphane; Engelen, Stefan; Lajus, Aurélie; Le Fèvre, François; Longin, Cyrille; Mornico, Damien; Roche, David; Rouy, Zoé; Salvignol, Gregory; Scarpelli, Claude; Thil Smith, Adam Alexander; Weiman, Marion; Médigue, Claudine
2013-01-01
MicroScope is an integrated platform dedicated to both the methodical updating of microbial genome annotation and to comparative analysis. The resource provides data from completed and ongoing genome projects (automatic and expert annotations), together with data sources from post-genomic experiments (i.e. transcriptomics, mutant collections) allowing users to perfect and improve the understanding of gene functions. MicroScope (http://www.genoscope.cns.fr/agc/microscope) combines tools and graphical interfaces to analyse genomes and to perform the manual curation of gene annotations in a comparative context. Since its first publication in January 2006, the system (previously named MaGe for Magnifying Genomes) has been continuously extended both in terms of data content and analysis tools. The last update of MicroScope was published in 2009 in the Database journal. Today, the resource contains data for >1600 microbial genomes, of which ∼300 are manually curated and maintained by biologists (1200 personal accounts today). Expert annotations are continuously gathered in the MicroScope database (∼50 000 a year), contributing to the improvement of the quality of microbial genomes annotations. Improved data browsing and searching tools have been added, original tools useful in the context of expert annotation have been developed and integrated and the website has been significantly redesigned to be more user-friendly. Furthermore, in the context of the European project Microme (Framework Program 7 Collaborative Project), MicroScope is becoming a resource providing for the curation and analysis of both genomic and metabolic data. An increasing number of projects are related to the study of environmental bacterial (meta)genomes that are able to metabolize a large variety of chemical compounds that may be of high industrial interest. PMID:23193269
Text mining and expert curation to develop a database on psychiatric diseases and their genes
Gutiérrez-Sacristán, Alba; Bravo, Àlex; Portero-Tresserra, Marta; Valverde, Olga; Armario, Antonio; Blanco-Gandía, M.C.; Farré, Adriana; Fernández-Ibarrondo, Lierni; Fonseca, Francina; Giraldo, Jesús; Leis, Angela; Mané, Anna; Mayer, M.A.; Montagud-Romero, Sandra; Nadal, Roser; Ortiz, Jordi; Pavon, Francisco Javier; Perez, Ezequiel Jesús; Rodríguez-Arias, Marta; Serrano, Antonia; Torrens, Marta; Warnault, Vincent; Sanz, Ferran
2017-01-01
Psychiatric disorders constitute one of the main causes of disability worldwide. During the past years, considerable research has been conducted on the genetic architecture of such diseases, although little understanding of their etiology has been achieved. The difficulty to access up-to-date, relevant genotype-phenotype information has hampered the application of this wealth of knowledge to translational research and clinical practice in order to improve diagnosis and treatment of psychiatric patients. PsyGeNET (http://www.psygenet.org/) has been developed with the aim of supporting research on the genetic architecture of psychiatric diseases, by providing integrated and structured accessibility to their genotype–phenotype association data, together with analysis and visualization tools. In this article, we describe the protocol developed for the sustainable update of this knowledge resource. It includes the recruitment of a team of domain experts in order to perform the curation of the data extracted by text mining. Annotation guidelines and a web-based annotation tool were developed to support the curators' tasks. A curation workflow was designed including a pilot phase and two rounds of curation and analysis phases. Negative evidence from the literature on gene–disease associations (GDAs) was taken into account in the curation process. We report the results of the application of this workflow to the curation of GDAs for PsyGeNET, including the analysis of the inter-annotator agreement and suggest this model as a suitable approach for the sustainable development and update of knowledge resources. Database URL: http://www.psygenet.org PsyGeNET corpus: http://www.psygenet.org/ds/PsyGeNET/results/psygenetCorpus.tar PMID:29220439
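Inter-annotator agreement of the kind analysed above is commonly summarized with Cohen's kappa, κ = (p_o − p_e)/(1 − p_e). A minimal Python sketch for two curators making keep/discard decisions follows; the labels are invented for illustration.

from collections import Counter

def cohens_kappa(labels_a, labels_b):
    # Observed agreement minus chance agreement, normalized.
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

curator1 = ["keep", "keep", "discard", "keep", "discard"]
curator2 = ["keep", "discard", "discard", "keep", "discard"]
print(round(cohens_kappa(curator1, curator2), 2))  # 0.62 on this toy data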
Müller, H-M; Van Auken, K M; Li, Y; Sternberg, P W
2018-03-09
The biomedical literature continues to grow at a rapid pace, making the challenge of knowledge retrieval and extraction ever greater. Tools that provide a means to search and mine the full text of literature thus represent an important way by which the efficiency of these processes can be improved. We describe the next generation of the Textpresso information retrieval system, Textpresso Central (TPC). TPC builds on the strengths of the original system by expanding the full text corpus to include the PubMed Central Open Access Subset (PMC OA), as well as the WormBase C. elegans bibliography. In addition, TPC allows users to create a customized corpus by uploading and processing documents of their choosing. TPC is UIMA compliant, to facilitate compatibility with external processing modules, and takes advantage of Lucene indexing and search technology for efficient handling of millions of full text documents. Like Textpresso, TPC supports searches using keywords and/or categories (semantically related groups of terms), but to provide better context for interpreting and validating queries, search results may now be viewed as highlighted passages in the context of full text. To facilitate biocuration efforts, TPC also allows users to select text spans from the full text and annotate them, create customized curation forms for any data type, and send resulting annotations to external curation databases. As an example of such a curation form, we describe integration of TPC with the Noctua curation tool developed by the Gene Ontology (GO) Consortium. Textpresso Central is an online literature search and curation platform that enables biocurators and biomedical researchers to search and mine the full text of literature by integrating keyword and category searches with viewing search results in the context of the full text. It also allows users to create customized curation interfaces, use those interfaces to make annotations linked to supporting evidence statements, and then send those annotations to any database in the world. Textpresso Central URL: http://www.textpresso.org/tpc.
MET network in PubMed: a text-mined network visualization and curation system.
Dai, Hong-Jie; Su, Chu-Hsien; Lai, Po-Ting; Huang, Ming-Siang; Jonnagaddala, Jitendra; Rose Jue, Toni; Rao, Shruti; Chou, Hui-Jou; Milacic, Marija; Singh, Onkar; Syed-Abdul, Shabbir; Hsu, Wen-Lian
2016-01-01
Metastasis is the dissemination of a cancer/tumor from one organ to another, and it is the most dangerous stage during cancer progression, causing more than 90% of cancer deaths. Improving the understanding of the complicated cellular mechanisms underlying metastasis requires investigations of the signaling pathways. To this end, we developed a METastasis (MET) network visualization and curation tool to assist metastasis researchers retrieve network information of interest while browsing through the large volume of studies in PubMed. MET can recognize relations among genes, cancers, tissues and organs of metastasis mentioned in the literature through text-mining techniques, and then produce a visualization of all mined relations in a metastasis network. To facilitate the curation process, MET is developed as a browser extension that allows curators to review and edit concepts and relations related to metastasis directly in PubMed. PubMed users can also view the metastatic networks integrated from the large collection of research papers directly through MET. For the BioCreative 2015 interactive track (IAT), a curation task was proposed to curate metastatic networks among PubMed abstracts. Six curators participated in the proposed task and a post-IAT task, curating 963 unique metastatic relations from 174 PubMed abstracts using MET. Database URL: http://btm.tmu.edu.tw/metastasisway. © The Author(s) 2016. Published by Oxford University Press.
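A minimal sketch of how text-mined (source, relation, target) triples of this kind could be assembled into a network for visualization, using Python and the networkx library; the triples below are invented examples, not MET output.

import networkx as nx

# Hypothetical triples mined from abstracts.
relations = [
    ("breast cancer", "metastasizes_to", "bone"),
    ("breast cancer", "metastasizes_to", "lung"),
    ("lung cancer", "metastasizes_to", "brain"),
]

G = nx.DiGraph()
for source, relation, target in relations:
    G.add_edge(source, target, relation=relation)

print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
for u, v, data in G.edges(data=True):
    print(f"{u} -[{data['relation']}]-> {v}")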
NASA Astrophysics Data System (ADS)
de Buttet, Côme; Prevost, Emilie; Campo, Alain; Garnier, Philippe; Zoll, Stephane; Vallier, Laurent; Cunge, Gilles; Maury, Patrick; Massin, Thomas; Chhun, Sonarith
2017-03-01
Today, IC manufacturing faces many problems linked to the continuous down-scaling of printed structures. Some of these issues are related to wet processing, which is often used in the IC manufacturing flow for wafer cleaning, material etching and surface preparation. In the current work we summarize the limitations of wet processing for the next nodes, such as metallic contamination, wafer charging, corrosion and pattern collapse. As a replacement, we promote isotropic chemical dry etching (CDE), which is expected to avoid the above drawbacks. Etching of Si3N4 layers was evaluated in order to demonstrate the merit of this technique.
Goldweber, Scott; Theodore, Jamal; Torcivia-Rodriguez, John; Simonyan, Vahan; Mazumder, Raja
2017-01-01
Services such as Facebook, Amazon, and eBay were once solely accessed from stationary computers. These web services are now being used increasingly on mobile devices. We acknowledge this new reality by providing users a way to access publications and a curated cancer mutation database on their mobile device with daily automated updates. http://hive.biochemistry.gwu.edu/tools/HivePubcast
VIOLIN: vaccine investigation and online information network.
Xiang, Zuoshuang; Todd, Thomas; Ku, Kim P; Kovacic, Bethany L; Larson, Charles B; Chen, Fang; Hodges, Andrew P; Tian, Yuying; Olenzek, Elizabeth A; Zhao, Boyang; Colby, Lesley A; Rush, Howard G; Gilsdorf, Janet R; Jourdian, George W; He, Yongqun
2008-01-01
Vaccines are among the most efficacious and cost-effective tools for reducing morbidity and mortality caused by infectious diseases. The vaccine investigation and online information network (VIOLIN) is a web-based central resource, allowing easy curation, comparison and analysis of vaccine-related research data across various human pathogens (e.g. Haemophilus influenzae, human immunodeficiency virus (HIV) and Plasmodium falciparum) of medical importance and across humans, other natural hosts and laboratory animals. Vaccine-related peer-reviewed literature data have been downloaded into the database from PubMed and are searchable through various literature search programs. Vaccine data are also annotated, edited and submitted to the database through a web-based interactive system that integrates efficient computational literature mining and accurate manual curation. Curated information includes general microbial pathogenesis and host protective immunity, vaccine preparation and characteristics, stimulated host responses after vaccination and protection efficacy after challenge. Vaccine-related pathogen and host genes are also annotated and available for searching through customized BLAST programs. All VIOLIN data are available for download in an eXtensible Markup Language (XML)-based data exchange format. VIOLIN is expected to become a centralized source of vaccine information and to provide investigators in basic and clinical sciences with curated data and bioinformatics tools for vaccine research and development. VIOLIN is publicly available at http://www.violinet.org.
The BioGRID interaction database: 2013 update.
Chatr-Aryamontri, Andrew; Breitkreutz, Bobby-Joe; Heinicke, Sven; Boucher, Lorrie; Winter, Andrew; Stark, Chris; Nixon, Julie; Ramage, Lindsay; Kolas, Nadine; O'Donnell, Lara; Reguly, Teresa; Breitkreutz, Ashton; Sellam, Adnane; Chen, Daici; Chang, Christie; Rust, Jennifer; Livstone, Michael; Oughtred, Rose; Dolinski, Kara; Tyers, Mike
2013-01-01
The Biological General Repository for Interaction Datasets (BioGRID: http://thebiogrid.org) is an open access archive of genetic and protein interactions that are curated from the primary biomedical literature for all major model organism species. As of September 2012, BioGRID houses more than 500 000 manually annotated interactions from more than 30 model organisms. BioGRID maintains complete curation coverage of the literature for the budding yeast Saccharomyces cerevisiae, the fission yeast Schizosaccharomyces pombe and the model plant Arabidopsis thaliana. A number of themed curation projects in areas of biomedical importance are also supported. BioGRID has established collaborations and/or shares data records for the annotation of interactions and phenotypes with most major model organism databases, including Saccharomyces Genome Database, PomBase, WormBase, FlyBase and The Arabidopsis Information Resource. BioGRID also actively engages with the text-mining community to benchmark and deploy automated tools to expedite curation workflows. BioGRID data are freely accessible through both a user-defined interactive interface and in batch downloads in a wide variety of formats, including PSI-MI2.5 and tab-delimited files. BioGRID records can also be interrogated and analyzed with a series of new bioinformatics tools, which include a post-translational modification viewer, a graphical viewer, a REST service and a Cytoscape plugin.
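As a sketch of consuming the tab-delimited downloads mentioned above, the following Python reads interaction records by header name. The file name and column labels are assumptions based on the general shape of BioGRID exports; verify them against the headers of the release you actually download.

import csv

def load_interactions(path):
    # Yield (symbol_a, symbol_b, experimental_system) tuples.
    with open(path, newline="") as handle:
        reader = csv.DictReader(handle, delimiter="\t")
        for row in reader:
            # Column names are assumed, not guaranteed.
            yield (row["Official Symbol Interactor A"],
                   row["Official Symbol Interactor B"],
                   row["Experimental System"])

# Hypothetical usage:
# for a, b, system in load_interactions("BIOGRID-ALL.tab2.txt"):
#     print(a, b, system)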
Overview of the interactive task in BioCreative V.
Wang, Qinghua; S Abdul, Shabbir; Almeida, Lara; Ananiadou, Sophia; Balderas-Martínez, Yalbi I; Batista-Navarro, Riza; Campos, David; Chilton, Lucy; Chou, Hui-Jou; Contreras, Gabriela; Cooper, Laurel; Dai, Hong-Jie; Ferrell, Barbra; Fluck, Juliane; Gama-Castro, Socorro; George, Nancy; Gkoutos, Georgios; Irin, Afroza K; Jensen, Lars J; Jimenez, Silvia; Jue, Toni R; Keseler, Ingrid; Madan, Sumit; Matos, Sérgio; McQuilton, Peter; Milacic, Marija; Mort, Matthew; Natarajan, Jeyakumar; Pafilis, Evangelos; Pereira, Emiliano; Rao, Shruti; Rinaldi, Fabio; Rothfels, Karen; Salgado, David; Silva, Raquel M; Singh, Onkar; Stefancsik, Raymund; Su, Chu-Hsien; Subramani, Suresh; Tadepally, Hamsa D; Tsaprouni, Loukia; Vasilevsky, Nicole; Wang, Xiaodong; Chatr-Aryamontri, Andrew; Laulederkind, Stanley J F; Matis-Mitchell, Sherri; McEntyre, Johanna; Orchard, Sandra; Pundir, Sangya; Rodriguez-Esteban, Raul; Van Auken, Kimberly; Lu, Zhiyong; Schaeffer, Mary; Wu, Cathy H; Hirschman, Lynette; Arighi, Cecilia N
2016-01-01
Fully automated text mining (TM) systems promote efficient literature searching, retrieval, and review but are not sufficient to produce ready-to-consume curated documents. These systems are not meant to replace biocurators, but instead to assist them in one or more literature curation steps. To do so, the user interface is an important aspect that needs to be considered for tool adoption. The BioCreative Interactive task (IAT) is a track designed for exploring user-system interactions, promoting development of useful TM tools, and providing a communication channel between the biocuration and the TM communities. In BioCreative V, the IAT track followed a format similar to previous interactive tracks, where the utility and usability of TM tools, as well as the generation of use cases, have been the focal points. The proposed curation tasks are user-centric and formally evaluated by biocurators. In BioCreative V IAT, seven TM systems and 43 biocurators participated. Two levels of user participation were offered to broaden curator involvement and obtain more feedback on usability aspects. The full level participation involved training on the system, curation of a set of documents with and without TM assistance, tracking of time-on-task, and completion of a user survey. The partial level participation was designed to focus on usability aspects of the interface and not the performance per se. In this case, biocurators navigated the system by performing pre-designed tasks and then were asked whether they were able to achieve the task and the level of difficulty in completing the task. In this manuscript, we describe the development of the interactive task, from planning to execution and discuss major findings for the systems tested. Database URL: http://www.biocreative.org. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the US.
Using social media to support small group learning.
Cole, Duncan; Rengasamy, Emma; Batchelor, Shafqat; Pope, Charles; Riley, Stephen; Cunningham, Anne Marie
2017-11-10
Medical curricula are increasingly using small group learning and less didactic lecture-based teaching. This creates new challenges and opportunities in how students are best supported with information technology. We explored how university-supported and external social media could support collaborative small group working on our new undergraduate medical curriculum. We made available a curation platform (Scoop.it) and a wiki within our virtual learning environment as part of year 1 Case-Based Learning, and did not discourage the use of other tools such as Facebook. We undertook student surveys to capture perceptions of the tools and information on how they were used, and employed software user metrics to explore the extent to which they were used during the year. Student groups developed a preferred way of working early in the course. Most groups used Facebook to facilitate communication within the group, and to host documents and notes. There were more barriers to using the wiki and curation platform, although some groups did make extensive use of them. Staff engagement was variable, with some tutors reviewing the content posted on the wiki and curation platform in face-to-face sessions, but not outside these times. A small number of staff posted resources and reviewed student posts on the curation platform. Optimum use of these tools depends on sufficient training of both staff and students, and an opportunity to practice using them, with ongoing support. The platforms can all support collaborative learning, and may help develop digital literacy, critical appraisal skills, and awareness of wider health issues in society.
Sahu, P K; Das, G K; Malik, Aman; Biakthangi, Laura
2015-01-01
The purpose was to study dry eye following phacoemulsification surgery and analyze its relation to associated intra-operative risk factors. A prospective observational study was carried out on 100 eyes of 100 patients without preoperative dry eye. Schirmer's Test I, tear meniscus height, tear break-up time, and lissamine green staining of cornea and conjunctiva were performed preoperatively and at 5 days, 10 days, 1-month, and 2 months after phacoemulsification surgery, along with the assessment of subjective symptoms, using the dry eye questionnaire. The correlations between these values and the operating microscope light exposure time along with the cumulative dissipated energy (CDE) were investigated. There was a significant deterioration of all dry eye test values following phacoemulsification surgery along with an increase in subjective symptoms. These values started improving after 1-month postoperatively, but preoperative levels were not achieved till 2 months after surgery. Correlations of dry eye test values were noted with the operating microscope light exposure time and CDE, but they were not significant. Phacoemulsification surgery is capable of inducing dry eye, and patients should be informed accordingly prior to surgery. The clinician should also be cognizant that increased CDE can induce dry eyes even in eyes that were healthy preoperatively. In addition, intraoperative exposure to the microscopic light should be minimized.
Non-dispersive carrier transport in molecularly doped polymers and the convection-diffusion equation
NASA Astrophysics Data System (ADS)
Tyutnev, A. P.; Parris, P. E.; Saenko, V. S.
2015-08-01
We reinvestigate the applicability of the concept of trap-free carrier transport in molecularly doped polymers and the possibility of realistically describing time-of-flight (TOF) current transients in these materials using the classical convection-diffusion equation (CDE). The problem is treated as rigorously as possible using boundary conditions appropriate to conventional time of flight experiments. Two types of pulsed carrier generation are considered. In addition to the traditional case of surface excitation, we also consider the case where carrier generation is spatially uniform. In our analysis, the front electrode is treated as a reflecting boundary, while the counter electrode is assumed to act either as a neutral contact (not disturbing the current flow) or as an absorbing boundary at which the carrier concentration vanishes. As expected, at low fields transient currents exhibit unusual behavior, as diffusion currents overwhelm drift currents to such an extent that it becomes impossible to determine transit times (and hence, carrier mobilities). At high fields, computed transients are more like those typically observed, with well-defined plateaus and sharp transit times. Careful analysis, however, reveals that the non-dispersive picture, and predictions of the CDE contradict both experiment and existing disorder-based theories in important ways, and that the CDE should be applied rather cautiously, and even then only for engineering purposes.
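For reference, the classical CDE invoked above, for carrier density n(x,t) with mobility μ, diffusion coefficient D and applied field F, takes the standard drift-diffusion form, with the transit time marking the end of the TOF plateau for a sample of thickness L:

\frac{\partial n}{\partial t} = D\,\frac{\partial^2 n}{\partial x^2} - \mu F\,\frac{\partial n}{\partial x}, \qquad t_{\mathrm{tr}} = \frac{L}{\mu F}.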
Regulation of the Min Cell Division Inhibition Complex by the Rcs Phosphorelay in Proteus mirabilis.
Howery, Kristen E; Clemmer, Katy M; Şimşek, Emrah; Kim, Minsu; Rather, Philip N
2015-08-01
A key regulator of swarming in Proteus mirabilis is the Rcs phosphorelay, which represses flhDC, encoding the master flagellar regulator FlhD4C2. Mutants in rcsB, the response regulator in the Rcs phosphorelay, hyperswarm on solid agar and differentiate into swarmer cells in liquid, demonstrating that this system also influences the expression of genes central to differentiation. To gain a further understanding of RcsB-regulated genes involved in swarmer cell differentiation, transcriptome sequencing (RNA-Seq) was used to examine the RcsB regulon. Among the 133 genes identified, minC and minD, encoding cell division inhibitors, were identified as RcsB-activated genes. A third gene, minE, was shown to be part of an operon with minCD. To examine minCDE regulation, the min promoter was identified by 5' rapid amplification of cDNA ends (5'-RACE), and both transcriptional lacZ fusions and quantitative real-time reverse transcriptase (qRT) PCR were used to confirm that the minCDE operon was RcsB activated. Purified RcsB was capable of directly binding the minC promoter region. To determine the role of RcsB-mediated activation of minCDE in swarmer cell differentiation, a polar minC mutation was constructed. This mutant formed minicells during growth in liquid, produced shortened swarmer cells during differentiation, and exhibited decreased swarming motility. This work describes the regulation and role of the MinCDE cell division system in P. mirabilis swarming and swarmer cell elongation. Prior to this study, the mechanisms that inhibit cell division and allow swarmer cell elongation were unknown. In addition, this work outlines for the first time the RcsB regulon in P. mirabilis. Taken together, the data presented in this study begin to address how P. mirabilis elongates upon contact with a solid surface. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
Nora: A Vocabulary Discovery Tool for Concept Extraction.
Divita, Guy; Carter, Marjorie E; Durgahee, B S Begum; Pettey, Warren E; Redd, Andrew; Samore, Matthew H; Gundlapalli, Adi V
2015-01-01
Coverage of terms in domain-specific terminologies and ontologies is often limited in controlled medical vocabularies. Creating and augmenting such terminologies is resource intensive. We developed Nora as an interactive tool to discover terminology from text corpora; the output can then be employed to refine and enhance natural language processing-based concept extraction tasks. Nora provides a visualization of chains of words foraged from word frequency indexes built from a text corpus. Domain experts direct and curate chains that contain relevant terms, which are further curated to identify lexical variants. A test of Nora expanded a domain lexicon of homelessness and related psychosocial factors by 38%, yielding an additional 10% of extracted concepts.
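As a rough illustration of chain foraging, assuming nothing about Nora's internals: the sketch below builds a next-word frequency index from a corpus and grows candidate term chains from a seed. The corpus, seed, and parameters are invented, and in Nora the chains are directed and curated interactively by domain experts rather than grown automatically.

```python
# Toy chain foraging from a word-frequency index (invented corpus and seed).
from collections import Counter, defaultdict

def build_index(sentences):
    """Index of next-word frequencies: nxt[w] counts the words that follow w."""
    nxt = defaultdict(Counter)
    for s in sentences:
        toks = s.lower().split()
        for a, b in zip(toks, toks[1:]):
            nxt[a][b] += 1
    return nxt

def forage(nxt, seed, depth=2, width=2):
    """Grow candidate term chains from a seed via frequent continuations."""
    chains = [[seed]]
    for _ in range(depth):
        grown = []
        for chain in chains:
            for word, _count in nxt[chain[-1]].most_common(width):
                grown.append(chain + [word])
        chains = grown or chains        # stop growing chains that dead-end
    return [" ".join(c) for c in chains]

corpus = ["veteran reports unstable housing situation",
          "patient reports unstable housing and food insecurity"]
print(forage(build_index(corpus), "unstable", depth=2, width=1))
# -> ['unstable housing situation']
```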
Curation of US Martian Meteorites Collected in Antarctica
NASA Technical Reports Server (NTRS)
Lindstrom, M.; Satterwhite, C.; Allton, J.; Stansbury, E.
1998-01-01
To date the ANSMET field team has collected five martian meteorites (see below) in Antarctica and returned them for curation at the Johnson Space Center (JSC) Meteorite Processing Laboratory (MPL). The meteorites were collected with the clean procedures used by ANSMET in collecting all meteorites: they were handled with JSC-cleaned tools, packaged in clean bags, and shipped frozen to JSC. The five martian meteorites vary significantly in size (12-7942 g) and rock type (basalts, lherzolites, and orthopyroxenite). Detailed descriptions are provided in the Mars Meteorite Compendium, which describes classification, curation, and research results. A table gives the names, classifications, and original and curatorial masses of the martian meteorites. The MPL and measures for contamination control are described.
NASA Astrophysics Data System (ADS)
Hou, C. Y.; Dattore, R.; Peng, G. S.
2014-12-01
The National Center for Atmospheric Research's Global Climate Four-Dimensional Data Assimilation (CFDDA) Hourly 40km Reanalysis dataset is a dynamically downscaled dataset with high temporal and spatial resolution. The dataset contains three-dimensional hourly analyses in netCDF format for the global atmospheric state from 1985 to 2005 on a 40km horizontal grid (0.4° grid increment) with 28 vertical levels, providing good representation of local forcing and diurnal variation of processes in the planetary boundary layer. This project aimed to make the dataset publicly available, accessible, and usable in order to provide a unique resource to allow and promote studies of new climate characteristics. When the curation project started, it had been five years since the data files were generated. Also, although the Principal Investigator (PI) had generated a user document at the end of the project in 2009, the document had not been maintained. Furthermore, the PI had moved to a new institution, and the remaining team members were reassigned to other projects. These factors made data curation especially challenging in the areas of verifying data quality, harvesting metadata descriptions, and documenting provenance information. As a result, the project's curation process found that: (1) the data curator's skill and knowledge helped inform decisions, such as file format, structure, and workflow documentation, that had a significant, positive impact on the ease of the dataset's management and long-term preservation; (2) use of data curation tools, such as the Data Curation Profiles Toolkit's guidelines, revealed important information for promoting the data's usability and enhancing preservation planning; and (3) involving data curators during each stage of the data curation life cycle, instead of only at the end, could improve the efficiency of the curation process. Overall, the project showed that proper resources invested in the curation process would give datasets the best chance to fulfill their potential to help with new climate pattern discovery.
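A sketch of programmatic access to such a netCDF product, assuming the xarray library; the file name and variable names here are hypothetical placeholders, since the actual CFDDA naming must come from the dataset's documentation.

```python
import xarray as xr

# Hypothetical file and variable names; the real CFDDA naming conventions must
# be taken from the dataset's user documentation at the NCAR archive.
ds = xr.open_dataset("cfdda_1985010100.nc")
print(ds.data_vars)                                # inspect available fields

# e.g. lowest-model-level temperature near a chosen latitude/longitude
t_low = ds["T"].isel(lev=0).sel(lat=40.0, lon=255.0, method="nearest")
print(float(t_low.mean()))
```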
McNeil, Leslie Klis; Reich, Claudia; Aziz, Ramy K; Bartels, Daniela; Cohoon, Matthew; Disz, Terry; Edwards, Robert A; Gerdes, Svetlana; Hwang, Kaitlyn; Kubal, Michael; Margaryan, Gohar Rem; Meyer, Folker; Mihalo, William; Olsen, Gary J; Olson, Robert; Osterman, Andrei; Paarmann, Daniel; Paczian, Tobias; Parrello, Bruce; Pusch, Gordon D; Rodionov, Dmitry A; Shi, Xinghua; Vassieva, Olga; Vonstein, Veronika; Zagnitko, Olga; Xia, Fangfang; Zinner, Jenifer; Overbeek, Ross; Stevens, Rick
2007-01-01
The National Microbial Pathogen Data Resource (NMPDR) (http://www.nmpdr.org) is a National Institute of Allergy and Infectious Diseases (NIAID)-funded Bioinformatics Resource Center that supports research in selected Category B pathogens. NMPDR contains the complete genomes of approximately 50 strains of pathogenic bacteria that are the focus of our curators, as well as >400 other genomes that provide a broad context for comparative analysis across the three phylogenetic Domains. NMPDR integrates complete, public genomes with expertly curated biological subsystems to provide the most consistent genome annotations. Subsystems are sets of functional roles related by a biologically meaningful organizing principle, which are built over large collections of genomes; they provide researchers with consistent functional assignments in a biologically structured context. Investigators can browse subsystems and reactions to develop accurate reconstructions of the metabolic networks of any sequenced organism. NMPDR provides a comprehensive bioinformatics platform, with tools and viewers for genome analysis. Results of precomputed gene clustering analyses can be retrieved in tabular or graphic format with one-click tools. NMPDR tools include Signature Genes, which finds the set of genes in common or that differentiates two groups of organisms. Essentiality data collated from genome-wide studies have been curated. Drug target identification and high-throughput, in silico, compound screening are in development.
Wu, Honghan; Oellrich, Anika; Girges, Christine; de Bono, Bernard; Hubbard, Tim J P; Dobson, Richard J B
2017-01-01
Neurodegenerative disorders such as Parkinson's and Alzheimer's disease are devastating and costly illnesses, a source of major global burden. In order to provide successful interventions for patients and reduce costs, both causes and pathological processes need to be understood. The ApiNATOMY project aims to contribute to our understanding of neurodegenerative disorders by manually curating and abstracting data from the vast body of literature amassed on these illnesses. As curation is labour-intensive, we aimed to speed up the process by automatically highlighting those parts of the PDF document of primary importance to the curator. Using techniques similar to those of summarisation, we developed an algorithm that relies on linguistic, semantic and spatial features. Employing this algorithm on a test set manually corrected for tool imprecision, we achieved a macro F1-measure of 0.51, which is an increase of 132% compared to the best bag-of-words baseline model. A user-based evaluation was also conducted to assess the usefulness of the methodology on 40 unseen publications, which reveals that in 85% of cases all highlighted sentences are relevant to the curation task and in about 65% of the cases, the highlights are sufficient to support the knowledge curation task without needing to consult the full text. In conclusion, we believe that these are promising results for a step in automating the recognition of curation-relevant sentences. Refining our approach to pre-digest papers will lead to faster processing and cost reduction in the curation process. https://github.com/KHP-Informatics/NapEasy. © The Author(s) 2017. Published by Oxford University Press.
Oellrich, Anika; Girges, Christine; de Bono, Bernard; Hubbard, Tim J.P.; Dobson, Richard J.B.
2017-01-01
Neurodegenerative disorders such as Parkinson’s and Alzheimer’s disease are devastating and costly illnesses, a source of major global burden. In order to provide successful interventions for patients and reduce costs, both causes and pathological processes need to be understood. The ApiNATOMY project aims to contribute to our understanding of neurodegenerative disorders by manually curating and abstracting data from the vast body of literature amassed on these illnesses. As curation is labour-intensive, we aimed to speed up the process by automatically highlighting those parts of the PDF document of primary importance to the curator. Using techniques similar to those of summarisation, we developed an algorithm that relies on linguistic, semantic and spatial features. Employing this algorithm on a test set manually corrected for tool imprecision, we achieved a macro F1-measure of 0.51, which is an increase of 132% compared to the best bag-of-words baseline model. A user-based evaluation was also conducted to assess the usefulness of the methodology on 40 unseen publications, which reveals that in 85% of cases all highlighted sentences are relevant to the curation task and in about 65% of the cases, the highlights are sufficient to support the knowledge curation task without needing to consult the full text. In conclusion, we believe that these are promising results for a step in automating the recognition of curation-relevant sentences. Refining our approach to pre-digest papers will lead to faster processing and cost reduction in the curation process. Database URL: https://github.com/KHP-Informatics/NapEasy PMID:28365743
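The macro F1-measure quoted in both records is the unweighted mean of per-class F1 scores. A minimal sketch of the computation, with invented labels for the highlight/skip decision:

```python
def macro_f1(gold, pred, labels):
    """Unweighted mean of per-class F1 scores (macro-averaging)."""
    scores = []
    for c in labels:
        tp = sum(1 for g, p in zip(gold, pred) if g == c and p == c)
        fp = sum(1 for g, p in zip(gold, pred) if g != c and p == c)
        fn = sum(1 for g, p in zip(gold, pred) if g == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)

gold = ["highlight", "skip", "highlight", "skip"]      # invented reference labels
pred = ["highlight", "highlight", "highlight", "skip"]
print(round(macro_f1(gold, pred, ["highlight", "skip"]), 3))   # 0.733
```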
Curation of inhibitor-target data: process and impact on pathway analysis.
Devidas, Sreenivas
2009-01-01
The past decade has seen a significant emergence in the availability and use of pathway analysis tools. The workflow supported by most pathway analysis tools is limited to either of the following: (a) a network of genes based on the input data set, or (b) the resultant network filtered down by a few criteria such as (but not limited to) (i) disease association of the genes in the network; (ii) targets known to be the target of one or more launched drugs; (iii) targets known to be the target of one or more compounds in clinical trials; and (iv) targets reasonably known to be potential candidate or clinical biomarkers. Almost all the tools in use today are biased towards the biological side and contain little, if any, information on the chemical inhibitors associated with the components of a given biological network. The limitations are as follows: (1) the number of inhibitors that have been published or patented is probably several-fold (likely greater than 10-fold) larger than the number of published protein-protein interactions, and curation of such data is both expensive and time consuming, which could impact ROI significantly; (2) the non-standardization of protein and gene names makes mapping far from straightforward; (3) the number of patented and published inhibitors across target classes increases by over a million per year, so keeping the databases current becomes a monumental problem; and (4) product architectures require modification to accommodate chemistry-related content. GVK Bio has, over the past 7 years, curated the compound-target data that is necessary for the addition of such compound-centric workflows. This chapter focuses on identification, curation and utility of such data.
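Limitation (2), the non-standardization of protein and gene names, is commonly handled with synonym mapping. A toy sketch, with an invented synonym table standing in for resources such as HGNC or UniProt:

```python
# Toy normalization of gene/protein synonyms to canonical identifiers.
# The synonym table is invented; a real pipeline would draw on resources
# such as HGNC or UniProt rather than a hand-written dictionary.
SYNONYMS = {
    "erbb1": "EGFR", "her1": "EGFR", "egfr": "EGFR",
    "p53": "TP53", "tp53": "TP53", "trp53": "TP53",
}

def canonical_target(name: str) -> str:
    key = name.strip().lower().replace("-", "")
    return SYNONYMS.get(key, name.strip().upper())   # fall back to the raw name

assert canonical_target("ErbB-1") == "EGFR"
assert canonical_target("p53") == "TP53"
```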
Curating NASA's Past, Present, and Future Astromaterial Sample Collections
NASA Technical Reports Server (NTRS)
Zeigler, R. A.; Allton, J. H.; Evans, C. A.; Fries, M. D.; McCubbin, F. M.; Nakamura-Messenger, K.; Righter, K.; Zolensky, M.; Stansbery, E. K.
2016-01-01
The Astromaterials Acquisition and Curation Office at NASA Johnson Space Center (hereafter JSC curation) is responsible for curating all of NASA's extraterrestrial samples. JSC presently curates 9 different astromaterials collections in seven different clean-room suites: (1) Apollo Samples (ISO (International Standards Organization) class 6 + 7); (2) Antarctic Meteorites (ISO 6 + 7); (3) Cosmic Dust Particles (ISO 5); (4) Microparticle Impact Collection (ISO 7; formerly called Space-Exposed Hardware); (5) Genesis Solar Wind Atoms (ISO 4); (6) Stardust Comet Particles (ISO 5); (7) Stardust Interstellar Particles (ISO 5); (8) Hayabusa Asteroid Particles (ISO 5); (9) OSIRIS-REx Spacecraft Coupons and Witness Plates (ISO 7). Additional cleanrooms are currently being planned to house samples from two new collections, Hayabusa 2 (2021) and OSIRIS-REx (2023). In addition to the labs that house the samples, we maintain a wide variety of infrastructure facilities required to support the clean rooms: HEPA-filtered air-handling systems, ultrapure dry gaseous nitrogen systems, an ultrapure water system, and cleaning facilities to provide clean tools and equipment for the labs. We also have sample preparation facilities for making thin sections, microtome sections, and even focused ion-beam sections. We routinely monitor the cleanliness of our clean rooms and infrastructure systems, including measurements of inorganic or organic contamination, weekly airborne particle counts, compositional and isotopic monitoring of liquid N2 deliveries, and daily UPW system monitoring. In addition to the physical maintenance of the samples, we track within our databases the current and ever changing characteristics (weight, location, etc.) of more than 250,000 individually numbered samples across our various collections, as well as more than 100,000 images, and countless "analog" records that document the processing history of each individual sample. JSC Curation is co-located with JSC's Astromaterials Research Office, which houses a world-class suite of analytical instrumentation and scientists. We leverage these labs and personnel to better curate the samples. Part of the curation process is planning for the future, and we refer to these planning efforts as "advanced curation". Advanced Curation is tasked with developing procedures, technology, and data sets necessary for curating new types of collections as envisioned by NASA exploration goals. We are (and have been) planning for future curation, including cold curation, extended curation of ices and volatiles, curation of samples with special chemical considerations such as perchlorate-rich samples, and curation of organically- and biologically-sensitive samples.
CulTO: An Ontology-Based Annotation Tool for Data Curation in Cultural Heritage
NASA Astrophysics Data System (ADS)
Garozzo, R.; Murabito, F.; Santagati, C.; Pino, C.; Spampinato, C.
2017-08-01
This paper proposes CulTO, a software tool relying on a computational ontology for Cultural Heritage domain modelling, with a specific focus on religious historical buildings, for supporting cultural heritage experts in their investigations. It is specifically designed to support annotation, automatic indexing, classification, and curation of photographic data and text documents of historical buildings. CulTO also serves as a useful tool for Historical Building Information Modeling (H-BIM) by enabling semantic 3D data modeling and further enrichment with non-geometrical information of historical buildings through the inclusion of new concepts about historical documents, images, decay or deformation evidence, as well as decorative elements into BIM platforms. CulTO is the result of a joint research effort between the Laboratory of Surveying and Architectural Photogrammetry "Luigi Andreozzi" and the PeRCeiVe Lab (Pattern Recognition and Computer Vision Lab) of the University of Catania.
Pathway Tools version 13.0: integrated software for pathway/genome informatics and systems biology
Paley, Suzanne M.; Krummenacker, Markus; Latendresse, Mario; Dale, Joseph M.; Lee, Thomas J.; Kaipa, Pallavi; Gilham, Fred; Spaulding, Aaron; Popescu, Liviu; Altman, Tomer; Paulsen, Ian; Keseler, Ingrid M.; Caspi, Ron
2010-01-01
Pathway Tools is a production-quality software environment for creating a type of model-organism database called a Pathway/Genome Database (PGDB). A PGDB such as EcoCyc integrates the evolving understanding of the genes, proteins, metabolic network and regulatory network of an organism. This article provides an overview of Pathway Tools capabilities. The software performs multiple computational inferences including prediction of metabolic pathways, prediction of metabolic pathway hole fillers and prediction of operons. It enables interactive editing of PGDBs by DB curators. It supports web publishing of PGDBs, and provides a large number of query and visualization tools. The software also supports comparative analyses of PGDBs, and provides several systems biology analyses of PGDBs including reachability analysis of metabolic networks, and interactive tracing of metabolites through a metabolic network. More than 800 PGDBs have been created using Pathway Tools by scientists around the world, many of which are curated DBs for important model organisms. Those PGDBs can be exchanged using a peer-to-peer DB sharing system called the PGDB Registry. PMID:19955237
Configuration Management File Manager Developed for Numerical Propulsion System Simulation
NASA Technical Reports Server (NTRS)
Follen, Gregory J.
1997-01-01
One of the objectives of the High Performance Computing and Communication Project's (HPCCP) Numerical Propulsion System Simulation (NPSS) is to provide a common and consistent way to manage applications, data, and engine simulations. The NPSS Configuration Management (CM) File Manager integrated with the Common Desktop Environment (CDE) window management system provides a common look and feel for the configuration management of data, applications, and engine simulations for U.S. engine companies. In addition, CM File Manager provides tools to manage a simulation. Features include managing input files, output files, textual notes, and any other material normally associated with simulation. The CM File Manager includes a generic configuration management Application Program Interface (API) that can be adapted for the configuration management repositories of any U.S. engine company.
Mulcahey, M J; Vogel, L C; Sheikh, M; Arango-Lasprilla, J C; Augutis, M; Garner, E; Hagen, E M; Jakeman, L B; Kelly, E; Martin, R; Odenkirchen, J; Scheel-Sailer, A; Schottler, J; Taylor, H; Thielen, C C; Zebracki, K
2017-04-01
In 2014, the adult spinal cord injury (SCI) common data element (CDE) recommendations were made available. The objective of this study was to review the National Institute of Neurological Disorders and Stroke (NINDS) adult SCI CDEs for relevance to children and youth with SCI. International. The pediatric working group consisted of international members with varied fields of expertise related to pediatric SCI. The group convened biweekly meetings for 6 months in 2015. All of the adult SCI CDEs were reviewed, evaluated and modified/created for four age groups: 0-5 years, 6-12 years, 13-15 years and 16-18 years. Whenever possible, results of published research studies were used to guide recommendations. In the absence of empirical support, grey literature and international content expert consensus were garnered. Existing pediatric NINDS CDEs and new CDEs were developed in areas where adult recommendations were not appropriate. After internal working group review of domain recommendations, these pediatric CDEs were vetted during a public review from November through December 2015. Version 1.0 of the pediatric SCI CDEs was posted in February 2016. The pediatric SCI CDEs are incorporated directly into the NINDS SCI CDE sets and can be found at https://commondataelements.ninds.nih.gov.
A pivotal role of BEX1 in liver progenitor cell expansion in mice.
Gu, Yuting; Wei, Weiting; Cheng, Yiji; Wan, Bing; Ding, Xinyuan; Wang, Hui; Zhang, Yanyun; Jin, Min
2018-06-15
The activation and expansion of bipotent liver progenitor cells (LPCs) are indispensable for liver regeneration after severe or chronic liver injury. However, the underlying molecular mechanisms regulating LPCs and LPC-mediated liver regeneration remain elusive. Hepatic brain-expressed X-linked 1 (BEX1) expression was evaluated using microarray screening, real-time polymerase chain reaction, immunoblotting and immunofluorescence. LPC activation and liver injury were studied following a choline-deficient, ethionine-supplemented (CDE) diet in wild-type (WT) and Bex1-/- mice. Proliferation, apoptosis, colony formation and hepatic differentiation were examined in LPCs from WT and Bex1-/- mice. Peroxisome proliferator-activated receptor gamma was detected in Bex1-deficient LPCs and mouse livers, and was silenced to analyse the expansion of LPCs from WT and Bex1-/- mice. Hepatic BEX1 expression was increased during CDE diet-induced liver injury and was highly elevated primarily in LPCs. Bex1-/- mice fed a CDE diet displayed impaired LPC expansion and liver regeneration. Bex1 deficiency inhibited LPC proliferation and enhanced LPC apoptosis in vitro. Additionally, Bex1 deficiency inhibited the colony formation of LPCs but had no effect on their hepatic differentiation. Mechanistically, BEX1 inhibited peroxisome proliferator-activated receptor gamma to promote LPC expansion. Our findings indicate that BEX1 plays a pivotal role in LPC activation and expansion during liver regeneration, potentially providing novel targets for liver regeneration and chronic liver disease therapies.
McQuilton, Peter; Gonzalez-Beltran, Alejandra; Rocca-Serra, Philippe; Thurston, Milo; Lister, Allyson; Maguire, Eamonn; Sansone, Susanna-Assunta
2016-01-01
BioSharing (http://www.biosharing.org) is a manually curated, searchable portal of three linked registries. These resources cover standards (terminologies, formats and models, and reporting guidelines), databases, and data policies in the life sciences, broadly encompassing the biological, environmental and biomedical sciences. Launched in 2011 and built by the same core team as the successful MIBBI portal, BioSharing harnesses community curation to collate and cross-reference resources across the life sciences from around the world. BioSharing makes these resources findable and accessible (the core of the FAIR principle). Every record is designed to be interlinked, providing a detailed description not only of the resource itself, but also of its relations with other life science infrastructures. Serving a variety of stakeholders, BioSharing cultivates a growing community, to which it offers diverse benefits. It is a resource for funding bodies and journal publishers to navigate the metadata landscape of the biological sciences; an educational resource for librarians and information advisors; a publicising platform for standard and database developers/curators; and a research tool for bench and computer scientists to plan their work. BioSharing is working with an increasing number of journals and other registries, for example linking standards and databases to training material and tools. Driven by an international Advisory Board, the BioSharing user-base has grown by over 40% (by unique IP address) in the last year, thanks to successful engagement with researchers, publishers, librarians, developers and other stakeholders via several routes, including a joint RDA/Force11 working group and a collaboration with the International Society for Biocuration. In this article, we describe BioSharing, with a particular focus on community-led curation. Database URL: https://www.biosharing.org. © The Author(s) 2016. Published by Oxford University Press.
NASA Technical Reports Server (NTRS)
Snead, C. J.; McCubbin, F. M.; Nakamura-Messenger, K.; Righter, K.
2018-01-01
The Astromaterials Acquisition and Curation office at NASA Johnson Space Center has established an Advanced Curation program that is tasked with developing procedures, technologies, and data sets necessary for the curation of future astromaterials collections as envisioned by NASA exploration goals. One particular objective of the Advanced Curation program is the development of new methods for the collection, storage, handling and characterization of small (less than 100 micrometers) particles. Astromaterials Curation currently maintains four small particle collections: Cosmic Dust that has been collected in Earth's stratosphere by ER2 and WB-57 aircraft, Comet 81P/Wild 2 dust returned by NASA's Stardust spacecraft, interstellar dust that was returned by Stardust, and asteroid Itokawa particles that were returned by JAXA's Hayabusa spacecraft. NASA Curation is currently preparing for the anticipated return of two new astromaterials collections: asteroid Ryugu regolith to be collected by the Hayabusa2 spacecraft in 2021 (samples will be provided by JAXA as part of an international agreement), and asteroid Bennu regolith to be collected by the OSIRIS-REx spacecraft and returned in 2023. A substantial portion of these returned samples is expected to consist of small particle components, and mission requirements necessitate the development of new processing tools and methods in order to maximize the scientific yield from these valuable acquisitions. Here we describe initial progress towards the development of applicable sample handling methods for the successful curation of future small particle collections.
Goblirsch, Brandon; Kurker, Richard C.; Streit, Bennett R.; Wilmot, Carrie M.; DuBois, Jennifer L.
2011-01-01
Heme proteins are extremely diverse, widespread, and versatile biocatalysts, sensors, and molecular transporters. The chlorite dismutase family of hemoproteins received its name due to the ability of the first-isolated members to detoxify anthropogenic ClO2−, a function believed to have evolved only in the last few decades. Family members have since been found in fifteen bacterial and archaeal genera, suggesting ancient roots. A structure- and sequence-based examination of the family is presented, in which key sequence and structural motifs are identified and possible functions for family proteins are proposed. Newly identified structural homologies moreover demonstrate clear connections to two other large, ancient, and functionally mysterious protein families. We propose calling them collectively the CDE superfamily of heme proteins. PMID:21354424
Rocca-Serra, Philippe; Brandizi, Marco; Maguire, Eamonn; Sklyar, Nataliya; Taylor, Chris; Begley, Kimberly; Field, Dawn; Harris, Stephen; Hide, Winston; Hofmann, Oliver; Neumann, Steffen; Sterk, Peter; Tong, Weida; Sansone, Susanna-Assunta
2010-01-01
Summary: The first open source software suite for experimentalists and curators that (i) assists in the annotation and local management of experimental metadata from high-throughput studies employing one or a combination of omics and other technologies; (ii) empowers users to uptake community-defined checklists and ontologies; and (iii) facilitates submission to international public repositories. Availability and Implementation: Software, documentation, case studies and implementations at http://www.isa-tools.org. Contact: isatools@googlegroups.com PMID:20679334
Standards-based curation of a decade-old digital repository dataset of molecular information.
Harvey, Matthew J; Mason, Nicholas J; McLean, Andrew; Murray-Rust, Peter; Rzepa, Henry S; Stewart, James J P
2015-01-01
We report the curation of 158,122 molecular geometries derived from the NCI set of reference molecules, together with associated properties computed using the MOPAC semi-empirical quantum mechanical method, originally deposited in 2005 into the Cambridge DSpace repository as a data collection. The procedures involved in the curation included annotation of the original data using new MOPAC methods, updating the syntax of the CML documents used to express the data to ensure schema conformance, and adding new metadata describing the entries, together with an XML schema transformation to map the metadata schema to that used by the DataCite organisation. We have adopted a granularity model in which a DataCite persistent identifier (DOI) is created for each individual molecule to enable data discovery and data metrics at this level using DataCite tools. We recommend that the future research data management (RDM) of the scientific and chemical data components associated with journal articles (the "supporting information") should be conducted in a manner that facilitates automatic periodic curation. Graphical abstract: Standards and metadata-based curation of a decade-old digital repository dataset of molecular information.
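The per-molecule granularity model amounts to minting one DataCite-style metadata record per entry. A minimal sketch, using the DataCite test prefix (10.5072) and a deliberately reduced field set; a real kernel record carries more mandatory elements:

```python
# Skeletal DataCite-style record for one molecule entry. The 10.5072 prefix is
# the DataCite test prefix, and the field set is deliberately reduced.
import xml.etree.ElementTree as ET

def datacite_record(doi, title, creator, year):
    root = ET.Element("resource", xmlns="http://datacite.org/schema/kernel-4")
    ident = ET.SubElement(root, "identifier", identifierType="DOI")
    ident.text = doi
    creators = ET.SubElement(root, "creators")
    ET.SubElement(ET.SubElement(creators, "creator"), "creatorName").text = creator
    titles = ET.SubElement(root, "titles")
    ET.SubElement(titles, "title").text = title
    ET.SubElement(root, "publicationYear").text = str(year)
    return ET.tostring(root, encoding="unicode")

# one persistent identifier per molecule, matching the granularity model above
print(datacite_record("10.5072/example.mol.000001",
                      "MOPAC-derived geometry for NCI molecule 1",
                      "Example Repository", 2015))
```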
Quality Assurance of Cancer Study Common Data Elements Using A Post-Coordination Approach
Jiang, Guoqian; Solbrig, Harold R.; Prud’hommeaux, Eric; Tao, Cui; Weng, Chunhua; Chute, Christopher G.
2015-01-01
Domain-specific common data elements (CDEs) are emerging as an effective approach to standards-based clinical research data storage and retrieval. A limiting factor, however, is the lack of robust automated quality assurance (QA) tools for the CDEs in clinical study domains. The objectives of the present study are to prototype and evaluate a QA tool for the study of cancer CDEs using a post-coordination approach. The study starts by integrating the NCI caDSR CDEs and The Cancer Genome Atlas (TCGA) data dictionaries in a single Resource Description Framework (RDF) data store. We designed a compositional expression pattern based on the Data Element Concept model structure informed by ISO/IEC 11179, and developed a transformation tool that converts the pattern-based compositional expressions into the Web Ontology Language (OWL) syntax. Invoking reasoning and explanation services, we tested the system utilizing the CDEs extracted from two TCGA clinical cancer study domains. The system could automatically identify duplicate CDEs, and detect CDE modeling errors. In conclusion, compositional expressions not only enable reuse of existing ontology codes to define new domain concepts, but also provide an automated mechanism for QA of terminological annotations for CDEs. PMID:26958201
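The duplicate-detection idea can be illustrated even without a reasoner: canonicalize each CDE's post-coordinated composition (primary concept plus qualifier concepts) and flag collisions. The set-based sketch below uses invented concept codes, not real NCI Thesaurus identifiers, and captures only the simplest equivalences; the system described above uses OWL reasoning, which also catches subsumption relations and modeling errors.

```python
# Set-based duplicate check over post-coordinated compositions.
from collections import defaultdict

cdes = {
    "CDE-0001": ("C_TUMOR", {"C_SIZE", "C_BASELINE"}),
    "CDE-0002": ("C_TUMOR", {"C_BASELINE", "C_SIZE"}),   # same meaning, reordered
    "CDE-0003": ("C_TUMOR", {"C_SIZE", "C_FOLLOWUP"}),
}

index = defaultdict(list)
for cde_id, (primary, qualifiers) in cdes.items():
    key = (primary, tuple(sorted(qualifiers)))           # canonical composition
    index[key].append(cde_id)

duplicates = [ids for ids in index.values() if len(ids) > 1]
print(duplicates)   # [['CDE-0001', 'CDE-0002']]
```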
Tjan-Heijnen, V C; Postmus, P E; Ardizzoni, A; Manegold, C H; Burghouts, J; van Meerbeeck, J; Gans, S; Mollers, M; Buchholz, E; Biesma, B; Legrand, C; Debruyne, C; Giaccone, G
2001-10-01
CDE (cyclophosphamide, doxorubicin, etoposide) is one of the standard chemotherapy regimens in the treatment of small-cell lung cancer (SCLC), with myelosuppression as dose-limiting toxicity. In this trial the impact of prophylactic antibiotics on the incidence of febrile leucopenia (FL) during chemotherapy for SCLC was evaluated. Patients with chemo-naïve SCLC were randomized to standard-dose CDE (C 1,000 mg/m2 day 1, D 45 mg/m2 day 1, E 100 mg/m2 days 1-3, i.v., q 3 weeks, x5) or to intensified CDE chemotherapy (125% dose, q 2 weeks, x4, with filgrastim 5 microg/kg/day days 4-13) to assess the impact on survival (n = 240 patients). Patients were also randomized to prophylactic antibiotics (ciprofloxacin 750 mg plus roxithromycin 150 mg, b.i.d., days 4-13) or to placebo in a 2 x 2 factorial design (first 163 patients). This manuscript focuses on the antibiotics question. The incidence of FL during the first cycle was 25% of patients in the placebo and 11% in the antibiotics arm (P = 0.010; 1-sided), with an overall incidence through all cycles of 43% vs. 24%, respectively (P = 0.007; 1-sided). There were fewer Gram-positive (12 vs. 4), Gram-negative (20 vs. 5) and clinically documented (38 vs. 15) infections in the antibiotics arm. The use of therapeutic antibiotics was reduced (P = 0.013; 1-sided), with fewer hospitalizations due to FL (31 vs. 17 patients, P = 0.013; 1-sided). However, the overall number of days of hospitalization was not reduced (P = 0.05; 1-sided). The number of infectious deaths was nil in the antibiotics vs. five (6%) in the placebo arm (P = 0.022; 2-sided). Prophylactic ciprofloxacin plus roxithromycin during CDE chemotherapy reduced the incidence of FL, the number of infections, the use of therapeutic antibiotics and hospitalizations due to FL by approximately 50%, with a reduced number of infectious deaths. For patients with a similar risk of FL, the prophylactic use of antibiotics should be considered.
Clinical study using a new phacoemulsification system with surgical intraocular pressure control.
Solomon, Kerry D; Lorente, Ramón; Fanney, Doug; Cionni, Robert J
2016-04-01
To compare cumulative dissipated energy (CDE), aspiration fluid used, and aspiration time during phacoemulsification cataract extraction using 2 surgical configurations. Two clinical sites in the United States and 1 in Spain. Prospective randomized clinical case series. For each patient, the first eye having surgery was randomized to the active-fluidics configuration (Centurion Vision System with Active Fluidics, 0.9 mm 45-degree Intrepid Balanced tip, and 0.9 mm Intrepid Ultra infusion sleeve) or the gravity-fluidics configuration (Infiniti Vision System with gravity fluidics, 0.9 mm 45-degree Mini-Flared Kelman tip, and 0.9 mm Ultra infusion sleeve). Second-eye surgery was completed within 14 days after first-eye surgery using the alternate configuration. The CDE, aspiration fluid used, and aspiration time were compared between configurations, and adverse events were summarized. Patient demographics and cataract characteristics were similar between configurations (100 per group). The CDE was significantly lower with the active-fluidics configuration than with the gravity-fluidics configuration (mean ± standard error, 4.32 ± 0.28 percent-seconds) (P < .001). The active-fluidics configuration used significantly less aspiration fluid than the gravity-fluidics configuration (mean 46.56 ± 1.39 mL versus 52.68 ± 1.40 mL) (P < .001) and required significantly shorter aspiration time (mean 151.9 ± 4.1 seconds versus 167.6 ± 4.1 seconds) (P < .001). No serious ocular adverse events related to the study devices or device deficiencies were observed. Significantly less CDE, aspiration fluid used, and aspiration time were observed with the active-fluidics configuration than with the gravity-fluidics configuration, showing improved surgical efficiency. Drs. Solomon and Cionni are consultants to Alcon Research, Ltd., and received compensation for conduct of the study. Dr. Lorente received compensation for clinical work in the study. Mr. Fanney is an employee of Alcon Research, Ltd. Copyright © 2016 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Grimberg, Hagit; Zaltsman, Ilona; Lupu-Meiri, Monica; Gershengorn, Marvin C; Oron, Yoram
1999-01-01
C335Stop is a constitutively active mutant of the TRH receptor (TRH-R). To investigate the mechanism of the decreased responsiveness of C335Stop TRH-R, we studied cellular Ca2+ concentrations ([Ca2+]i) in AtT20 cells stably transfected with C335Stop TRH-R cDNA, or Ca2+-activated chloride currents in Xenopus laevis oocytes expressing this mutant receptor after injection of cRNA. The competitive TRH-R binding antagonist, chlordiazepoxide (CDE), was used as an inverse agonist to study the contribution of constitutive activity to desensitization. Acute treatment with CDE resulted in a rapid (within minutes) decrease in [Ca2+]i and an increase in the response amplitude to TRH, with no measurable change in receptor density. Conversely, removal of chronically administered CDE caused a rapid increase in [Ca2+]i and a decrease in TRH response amplitude. CDE abolished heterologous desensitization induced by C335Stop TRH-R on muscarinic m1-receptor (m1-R) co-expressed in Xenopus oocytes. Chelation of extracellular calcium with EGTA caused a rapid decrease in [Ca2+]i and a concomitant increase in the response to TRH in AtT20 cells expressing C335Stop TRH-Rs. Chelerythrine, a specific inhibitor of protein kinase C (PKC), reversed the heterologous desensitization of the response to acetylcholine (ACh). The phosphoserine/phosphothreonine phosphatase inhibitor, okadaic acid, abolished the effect of chelerythrine. Down-regulation of PKC by chronic exposure to phorbol 12-myristate 13-acetate (PMA) or acute inhibition with chelerythrine caused a partial resensitization of the response to TRH. Western analysis indicated that the α subtype of protein kinase C was down-regulated in cells expressing C335Stop TRH-Rs. Following a 5 min exposure to PMA, the residual αPKC translocated to the particulate fraction. We propose that cells expressing the constitutively active mutant TRH-R rapidly desensitize their response, utilizing a mechanism mediated by an increase in [Ca2+]i and PKC. PMID:10204996
Managing biological networks by using text mining and computer-aided curation
NASA Astrophysics Data System (ADS)
Yu, Seok Jong; Cho, Yongseong; Lee, Min-Ho; Lim, Jongtae; Yoo, Jaesoo
2015-11-01
In order to understand a biological mechanism in a cell, a researcher should collect a huge number of protein interactions, with experimental data from experiments and the literature. Text mining systems that extract biological interactions from papers have been used to construct biological networks for a few decades. Even though text mining of the literature is necessary to construct a biological network, few systems with a text mining tool are available for biologists who want to construct their own biological networks. We have developed a biological network construction system called BioKnowledge Viewer that can generate a biological interaction network by using a text mining tool and biological taggers. It also provides Boolean simulation software so that users can simulate the models built with the text mining tool. A user can download PubMed articles and construct a biological network by using the Multi-level Knowledge Emergence Model (KMEM), MetaMap, and A Biomedical Named Entity Recognizer (ABNER) as text mining tools. To evaluate the system, we constructed an aging-related biological network consisting of 9,415 nodes (genes) by using manual curation. With network analysis, we found that several genes, including JNK, AP-1, and BCL-2, were highly connected in the aging biological network. We provide a semi-automatic curation environment so that users can obtain a graph database for managing text mining results that are generated in the server system and can navigate the network with BioKnowledge Viewer, which is freely available at http://bioknowledgeviewer.kisti.re.kr.
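Once interactions are extracted, hub detection of the kind reported above (JNK, AP-1, BCL-2) reduces to degree analysis on the graph. A minimal sketch assuming the networkx library, with an invented edge list standing in for the text-mining output:

```python
# Hub detection on a mined interaction network; the edge list is an invented
# stand-in for the text-mining pipeline's output.
import networkx as nx

edges = [("JNK", "AP-1"), ("JNK", "BCL-2"), ("AP-1", "BCL-2"),
         ("JNK", "FOXO3"), ("SIRT1", "FOXO3")]
g = nx.Graph(edges)

hubs = sorted(g.degree, key=lambda kv: kv[1], reverse=True)[:3]
print(hubs)   # [('JNK', 3), ('AP-1', 2), ('BCL-2', 2)]
```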
PomBase: a comprehensive online resource for fission yeast
Wood, Valerie; Harris, Midori A.; McDowall, Mark D.; Rutherford, Kim; Vaughan, Brendan W.; Staines, Daniel M.; Aslett, Martin; Lock, Antonia; Bähler, Jürg; Kersey, Paul J.; Oliver, Stephen G.
2012-01-01
PomBase (www.pombase.org) is a new model organism database established to provide access to comprehensive, accurate, and up-to-date molecular data and biological information for the fission yeast Schizosaccharomyces pombe to effectively support both exploratory and hypothesis-driven research. PomBase encompasses annotation of genomic sequence and features, comprehensive manual literature curation and genome-wide data sets, and supports sophisticated user-defined queries. The implementation of PomBase integrates a Chado relational database that houses manually curated data with Ensembl software that supports sequence-based annotation and web access. PomBase will provide user-friendly tools to promote curation by experts within the fission yeast community. This will make a key contribution to shaping its content and ensuring its comprehensiveness and long-term relevance. PMID:22039153
Powers, Christina M; Hoover, Mark D; Harper, Stacey L
2015-01-01
Summary: The Nanomaterial Data Curation Initiative (NDCI), a project of the National Cancer Informatics Program Nanotechnology Working Group (NCIP NanoWG), explores the critical aspect of data curation within the development of informatics approaches to understanding nanomaterial behavior. Data repositories and tools for integrating and interrogating complex nanomaterial datasets are gaining widespread interest, with multiple projects now appearing in the US and the EU. Even in these early stages of development, a single common aspect shared across all nanoinformatics resources is that data must be curated into them. Through exploration of sub-topics related to all activities necessary to enable, execute, and improve the curation process, the NDCI will provide a substantive analysis of nanomaterial data curation itself, as well as a platform for multiple other important discussions to advance the field of nanoinformatics. This article outlines the NDCI project and lays the foundation for a series of papers on nanomaterial data curation. The NDCI purpose is to: 1) present and evaluate the current state of nanomaterial data curation across the field on multiple specific data curation topics, 2) propose ways to leverage and advance progress for both individual efforts and the nanomaterial data community as a whole, and 3) provide opportunities for similar publication series on the details of the interactive needs and workflows of data customers, data creators, and data analysts. Initial responses from stakeholder liaisons throughout the nanoinformatics community reveal a shared view that it will be critical to focus on integration of datasets with specific orientation toward the purposes for which the individual resources were created, as well as the purpose for integrating multiple resources. Early acknowledgement and undertaking of complex topics such as uncertainty, reproducibility, and interoperability is proposed as an important path to addressing key challenges within the nanomaterial community, such as reducing collateral negative impacts and decreasing the time from development to market for this new class of technologies. PMID:26425427
West facade of clubhouse. Showing first and second floor loggias ...
West facade of clubhouse. Showing first and second floor loggias - Clubhouse Verandah and Citation statue in foreground: CD-E. - Hialeah Park Race Track, East Fourth Avenue, Hialeah, Miami-Dade County, FL
Guillot, Adrien; Gasmi, Imène; Brouillet, Arthur; Ait-Ahmed, Yeni; Calderaro, Julien; Ruiz, Isaac; Gao, Bin; Lotersztajn, Sophie; Pawlotsky, Jean-Michel; Lafdil, Fouad
2018-03-01
Liver progenitor cells (LPCs)/ductular reactions (DRs) are associated with inflammation and implicated in the pathogenesis of chronic liver diseases. However, how inflammation regulates LPCs/DRs remains largely unknown. Identification of inflammatory processes that involve LPC activation and expansion represents a key step in understanding the pathogenesis of liver diseases. In the current study, we found that diverse types of chronic liver diseases are associated with elevation of infiltrated interleukin (IL)-17-positive (IL-17+) cells and cytokeratin 19-positive (CK19+) LPCs, and both cell types colocalized and their numbers positively correlated with each other. The role of IL-17 in the induction of LPCs was examined in a mouse model fed a choline-deficient and ethionine-supplemented (CDE) diet. Feeding of wild-type mice with the CDE diet markedly elevated CK19+Ki67+ proliferating LPCs and hepatic inflammation. Disruption of the IL-17 gene or the IL-27 receptor, alpha subunit (WSX-1) gene abolished CDE diet-induced LPC expansion and inflammation. In vitro treatment with IL-17 promoted proliferation of bipotential murine oval liver cells (a liver progenitor cell line) and markedly up-regulated IL-27 expression in macrophages. Treatment with IL-27 favored the differentiation of bipotential murine oval liver cells and freshly isolated LPCs into hepatocytes. Conclusion: The current data provide evidence for a collaborative role between IL-17 and IL-27 in promoting LPC expansion and differentiation, respectively, thereby contributing to liver regeneration. (Hepatology Communications 2018;2:329-343).
Biering-Sørensen, Fin; Alai, Sherita; Anderson, Kim; Charlifue, Susan; Chen, Yuying; DeVivo, Michael; Flanders, Adam E.; Jones, Linda; Kleitman, Naomi; Lans, Aria; Noonan, Vanessa K.; Odenkirchen, Joanne; Steeves, John; Tansey, Keith; Widerström-Noga, Eva; Jakeman, Lyn B.
2015-01-01
Objective To develop a comprehensive set of common data elements (CDEs), data definitions, case report forms and guidelines for use in spinal cord injury (SCI) clinical research, as part of the CDE project at the National Institute of Neurological Disorders and Stroke (NINDS) of the USA National Institutes of Health. Setting International working groups. Methods Nine working groups composed of international experts reviewed existing CDEs and instruments, created new elements when needed, and provided recommendations for SCI clinical research. The project was carried out in collaboration with and cross-referenced to development of the International Spinal Cord Society (ISCoS) International SCI Data Sets. The recommendations were compiled, subjected to internal review, and posted online for external public comment. The final version was reviewed by all working groups and the NINDS CDE team prior to release. Results The NINDS SCI CDEs and supporting documents are publicly available on the NINDS CDE website and the ISCoS website. The CDEs span the continuum of SCI care and the full range of domains of the International Classification of Functioning, Disability and Health. Conclusions Widespread use of common data elements can facilitate SCI clinical research and trial design, data sharing, and retrospective analyses. Continued international collaboration will enable consistent data collection and reporting, and will help ensure that the data elements are updated, reviewed and broadcast as additional evidence is obtained. PMID:25665542
2014-04-01
In recent years, the strategy of early activation of cardiac surgery patients has been widely adopted abroad. Essential components of this approach are early termination of postoperative mechanical ventilation with tracheal extubation, shortening of the intensive care stay to one day, and discharge from hospital after five days. Because the hospitalization period is reduced, the costs of care fall significantly. The goal of this study was to analyze the anesthesia methods that permit early extubation and activation after cardiac surgery. Data from the anesthesia and postoperative records of 270 patients were analyzed. We conclude that the anesthesia methods applied provide adequate protection from surgical stress and allow a shorter duration of postoperative mechanical ventilation and early activation of patients without compromising their safety. It was also shown that the choice of anesthetic agent does not influence the pace of postoperative activation. This study supports the advisability of early activation of patients after heart operations and regards it as a tool for optimizing cardiac surgical care.
Classification of Chemical Compounds to Support Complex Queries in a Pathway Database
Weidemann, Andreas; Kania, Renate; Peiss, Christian; Rojas, Isabel
2004-01-01
Data quality in biological databases has become a topic of great discussion. To provide high-quality data and to deal with the vast amount of biochemical data, annotators and curators need to be supported by software that carries out part of their work in a (semi-)automatic manner. The detection of errors and inconsistencies is a task that requires the knowledge of domain experts, so in most cases it is done manually, making it very expensive and time-consuming. This paper presents two tools to partially support the curation of data on biochemical pathways. The first tool enables the automatic classification of chemical compounds based on their respective SMILES strings. Such classification allows the querying and visualization of biochemical reactions at different levels of abstraction, according to the level of detail at which the reaction participants are described. Chemical compounds can be classified in a flexible manner based on different criteria. The second tool supports the process of data curation by facilitating the detection of compounds that are recorded as different but are actually the same. This is also used to identify similar reactions and, in turn, pathways. PMID:18629066
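A common concrete tactic for detecting compounds that are recorded as different but are actually the same is canonicalization of their SMILES strings. A sketch of this idea, assuming the RDKit library (the paper does not specify its implementation):

```python
# Duplicate detection via canonical SMILES, assuming the RDKit library.
from collections import defaultdict
from rdkit import Chem

smiles = ["c1ccccc1O",   # phenol, aromatic form
          "Oc1ccccc1",   # phenol again, written differently
          "CCO"]         # ethanol

by_canonical = defaultdict(list)
for s in smiles:
    mol = Chem.MolFromSmiles(s)
    if mol is not None:                          # skip unparseable entries
        by_canonical[Chem.MolToSmiles(mol)].append(s)

dupes = {k: v for k, v in by_canonical.items() if len(v) > 1}
print(dupes)   # both phenol spellings collapse to a single canonical key
```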
ETHNOS: A versatile electronic tool for the development and curation of national genetic databases
2010-01-01
National and ethnic mutation databases (NEMDBs) are emerging online repositories, recording extensive information about the described genetic heterogeneity of an ethnic group or population. These resources facilitate the provision of genetic services and provide a comprehensive list of genomic variations among different populations. As such, they enhance awareness of the various genetic disorders. Here, we describe the features of the ETHNOS software, a simple but versatile tool based on a flat-file database that is specifically designed for the development and curation of NEMDBs. ETHNOS is a freely available software which runs more than half of the NEMDBs currently available. Given the emerging need for NEMDB in genetic testing services and the fact that ETHNOS is the only off-the-shelf software available for NEMDB development and curation, its adoption in subsequent NEMDB development would contribute towards data content uniformity, unlike the diverse contents and quality of the available gene (locus)-specific databases. Finally, we allude to the potential applications of NEMDBs, not only as worldwide central allele frequency repositories, but also, and most importantly, as data warehouses of individual-level genomic data, hence allowing for a comprehensive ethnicity-specific documentation of genomic variation. PMID:20650823
ETHNOS: A versatile electronic tool for the development and curation of national genetic databases.
van Baal, Sjozef; Zlotogora, Joël; Lagoumintzis, George; Gkantouna, Vassiliki; Tzimas, Ioannis; Poulas, Konstantinos; Tsakalidis, Athanassios; Romeo, Giovanni; Patrinos, George P
2010-06-01
National and ethnic mutation databases (NEMDBs) are emerging online repositories, recording extensive information about the described genetic heterogeneity of an ethnic group or population. These resources facilitate the provision of genetic services and provide a comprehensive list of genomic variations among different populations. As such, they enhance awareness of the various genetic disorders. Here, we describe the features of the ETHNOS software, a simple but versatile tool based on a flat-file database that is specifically designed for the development and curation of NEMDBs. ETHNOS is a freely available software which runs more than half of the NEMDBs currently available. Given the emerging need for NEMDB in genetic testing services and the fact that ETHNOS is the only off-the-shelf software available for NEMDB development and curation, its adoption in subsequent NEMDB development would contribute towards data content uniformity, unlike the diverse contents and quality of the available gene (locus)-specific databases. Finally, we allude to the potential applications of NEMDBs, not only as worldwide central allele frequency repositories, but also, and most importantly, as data warehouses of individual-level genomic data, hence allowing for a comprehensive ethnicity-specific documentation of genomic variation.
Phylesystem: a git-based data store for community-curated phylogenetic estimates.
McTavish, Emily Jane; Hinchliff, Cody E; Allman, James F; Brown, Joseph W; Cranston, Karen A; Holder, Mark T; Rees, Jonathan A; Smith, Stephen A
2015-09-01
Phylogenetic estimates from published studies can be archived using general platforms like Dryad (Vision, 2010) or TreeBASE (Sanderson et al., 1994). Such services fulfill a crucial role in ensuring transparency and reproducibility in phylogenetic research. However, digital tree data files often require some editing (e.g. rerooting) to improve the accuracy and reusability of the phylogenetic statements. Furthermore, establishing the mapping between tip labels used in a tree and taxa in a single common taxonomy dramatically improves the ability of other researchers to reuse phylogenetic estimates. As the process of curating a published phylogenetic estimate is not error-free, retaining a full record of the provenance of edits to a tree is crucial for openness, allowing editors to receive credit for their work and making errors introduced during curation easier to correct. Here, we report the development of software infrastructure to support the open curation of phylogenetic data by the community of biologists. The backend of the system provides an interface for the standard database operations of creating, reading, updating and deleting records by making commits to a git repository. The record of the history of edits to a tree is preserved by git's version control features. Hosting this data store on GitHub (http://github.com/) provides open access to the data store using tools familiar to many developers. We have deployed a server running the 'phylesystem-api', which wraps the interactions with git and GitHub. The Open Tree of Life project has also developed and deployed a JavaScript application that uses the phylesystem-api and other web services to enable input and curation of published phylogenetic statements. Source code for the web service layer is available at https://github.com/OpenTreeOfLife/phylesystem-api. The data store can be cloned from: https://github.com/OpenTreeOfLife/phylesystem. A web application that uses the phylesystem web services is deployed at http://tree.opentreeoflife.org/curator. Code for that tool is available from https://github.com/OpenTreeOfLife/opentree. mtholder@gmail.com. © The Author 2015. Published by Oxford University Press.
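The core "CRUD as git commits" pattern is easy to sketch. The snippet below is illustrative and not the phylesystem-api's actual code: it writes a study document into a local clone and records the edit as a commit, so provenance and editor credit come from git history. The repository path, study ID, and commit message are invented.

```python
# Illustrative only: store one study document and record the edit as a commit.
# Assumes an existing local git clone; paths, IDs and message are invented.
import json
import subprocess
from pathlib import Path

def save_study(repo: Path, study_id: str, doc: dict, author: str, msg: str):
    path = repo / "study" / f"{study_id}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(doc, indent=2, sort_keys=True))
    subprocess.run(["git", "-C", str(repo), "add", str(path)], check=True)
    subprocess.run(["git", "-C", str(repo), "commit",
                    "--author", author, "-m", msg], check=True)

save_study(Path("phylesystem-clone"), "ot_1001",
           {"nexml": {"treesById": {}}},
           "Jane Curator <jane@example.org>",
           "reroot tree tr1 on the outgroup")
```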
Aggregation Tool to Create Curated Data Albums to Support Disaster Recovery and Response
NASA Astrophysics Data System (ADS)
Ramachandran, R.; Kulkarni, A.; Maskey, M.; Li, X.; Flynn, S.
2014-12-01
Economic losses due to natural hazards are estimated to be around 6-10 billion dollars annually for the U.S., and this number keeps increasing every year. This increase has been attributed to population growth and migration to more hazard-prone locations. As this trend continues, in concert with shifts in weather patterns caused by climate change, it is anticipated that losses associated with natural disasters will keep growing substantially. One of the challenges disaster response and recovery analysts face is to quickly find, access, and utilize the vast variety of relevant geospatial data collected by different federal agencies. Analysts are often familiar with a limited set of specific datasets and are unaware of, or unfamiliar with, a large quantity of other useful resources. Finding airborne or satellite data relevant to a natural disaster often requires a time-consuming search through web pages and data archives. The search process could be made much more efficient and productive if a tool could go beyond a typical search engine and provide not just links to web sites but actual links to specific data relevant to the natural disaster, parse unstructured reports for useful information nuggets, and gather other related reports, summaries, news stories, and images. This presentation will describe a semantic aggregation tool developed to address a similar problem for Earth science researchers. This tool provides automated curation and creates "Data Albums" to support case studies. The generated "Data Albums" are compiled collections of information related to a specific science topic or event, containing links to relevant data files (granules) from different instruments; tools and services for visualization and analysis; information about the event contained in news reports; and images or videos to supplement research analysis. An ontology-based relevancy-ranking algorithm drives the curation of relevant data sets for a given event. This tool is now being used to generate a catalog of case studies focusing on hurricanes and severe storms.
Feldmesser, Ester; Rosenwasser, Shilo; Vardi, Assaf; Ben-Dor, Shifra
2014-02-22
The advent of Next Generation Sequencing technologies and corresponding bioinformatics tools allows the definition of transcriptomes in non-model organisms. Non-model organisms are of great ecological and biotechnological significance, and consequently the understanding of their unique metabolic pathways is essential. Several methods that integrate de novo assembly with genome-based assembly have been proposed. Yet, there are many open challenges in defining genes, particularly where genomes are not available or incomplete. Despite the large number of transcriptome assemblies that have been performed, quality control of the transcript-building process, particularly at the protein level, is rarely, if ever, performed. To test and improve the quality of the automated transcriptome reconstruction, we used manually defined and curated genes, several of them experimentally validated. Several approaches to transcript construction were utilized, based on the available data: a draft genome, high-quality RNAseq reads, and ESTs. In order to maximize the contribution of the various data, we integrated methods including de novo and genome-based assembly, as well as EST clustering. After each step, a set of manually curated genes was used for quality assessment of the transcripts. The interplay between the automated pipeline and the quality control indicated which additional processes were required to improve the transcriptome reconstruction. We discovered that Emiliania huxleyi has a very high percentage of non-canonical splice junctions and relatively high rates of intron retention, which caused unique issues with the currently available tools. While individual tools missed genes and artificially joined overlapping transcripts, combining the results of several tools improved the completeness and quality considerably. The final collection, created from the integration of several quality-control and improvement rounds, was compared to the manually defined set at both the DNA and protein levels, and resulted in an improvement of 20% over any of the read-based approaches alone. To the best of our knowledge, this is the first time that an automated transcript definition has been subjected to quality control using manually defined and curated genes and the process improved thereafter. We recommend using a set of manually curated genes to troubleshoot transcriptome reconstruction.
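The quality-control step against curated genes can be approximated with a short script. The sketch below is illustrative, not the authors' pipeline: it assumes the assembled transcripts were BLASTed (tabular output, -outfmt 6) against the curated gene set, and the coverage and identity thresholds are placeholders.

```python
import csv
from collections import defaultdict

def recovered_genes(blast_tsv, gene_lengths, min_pident=95.0, min_cov=0.8):
    """Curated genes covered by at least one assembled transcript.

    Expects BLAST tabular output with transcripts as queries and the curated
    gene set as subjects; `gene_lengths` maps each gene ID to its length.
    """
    best_cov = defaultdict(float)
    with open(blast_tsv) as fh:
        for row in csv.reader(fh, delimiter="\t"):
            subject, pident, aln_len = row[1], float(row[2]), int(row[3])
            if pident >= min_pident:
                cov = aln_len / gene_lengths[subject]
                best_cov[subject] = max(best_cov[subject], cov)
    return {g for g, c in best_cov.items() if c >= min_cov}

# Completeness of one assembly = len(recovered) / len(curated gene set);
# curated genes left unrecovered flag missed, fused, or fragmented models.
```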
Data Albums: An Event Driven Search, Aggregation and Curation Tool for Earth Science
NASA Technical Reports Server (NTRS)
Ramachandran, Rahul; Kulkarni, Ajinkya; Maskey, Manil; Bakare, Rohan; Basyal, Sabin; Li, Xiang; Flynn, Shannon
2014-01-01
One of the largest continuing challenges in any Earth science investigation is the discovery and access of useful science content from the increasingly large volumes of Earth science data and related information available. Approaches used in Earth science research, such as case study analysis and climatology studies, involve discovering and gathering diverse data sets and information to support the research goals. Research based on case studies involves a detailed description of specific weather events, using data from different sources to characterize the physical processes in play for a specific event. Climatology-based research tends to focus on the representativeness of a given event by studying the characteristics and distribution of a large number of events. This allows researchers to generalize characteristics such as spatio-temporal distribution, intensity, annual cycle, and duration. Gathering relevant data and information for case studies and climatology analysis is both tedious and time consuming. Current Earth science data systems are designed with the assumption that researchers access data primarily by instrument or geophysical parameter. Researchers who know exactly which datasets they need can obtain the specific files using these systems. However, researchers interested in studying a significant event must manually assemble a variety of relevant datasets by searching the different distributed data systems. In such cases, the search process needs to be organized around the event rather than around observing instruments. In addition, the existing data systems assume users have sufficient knowledge of the domain vocabulary to effectively utilize their catalogs. These systems do not support new or interdisciplinary researchers who may be unfamiliar with the domain terminology. This paper presents a specialized search, aggregation and curation tool for Earth science that addresses these challenges. The tool automatically creates curated "Data Albums": aggregated collections of information related to a specific science topic or event, containing links to relevant data files (granules) from different instruments; tools and services for visualization and analysis; and information about the event contained in news reports, images or videos to supplement research analysis. Curation in the tool is driven by an ontology-based relevancy-ranking algorithm that filters out non-relevant information and data.
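To make the album concept concrete, here is a minimal sketch of what one generated album aggregates; the field names are assumptions for illustration, not the tool's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DataAlbum:
    """One curated album for a single event (hypothetical schema)."""
    event: str                 # e.g. "Hurricane Sandy"
    start: str                 # ISO 8601 bounds of the event
    end: str
    bbox: Tuple[float, float, float, float]             # west, south, east, north
    granules: List[str] = field(default_factory=list)   # links to data files
    tools: List[str] = field(default_factory=list)      # visualization/analysis services
    reports: List[str] = field(default_factory=list)    # news stories and summaries
    media: List[str] = field(default_factory=list)      # supporting images and videos
```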
Making Interoperability Easier with the NASA Metadata Management Tool
NASA Astrophysics Data System (ADS)
Shum, D.; Reese, M.; Pilone, D.; Mitchell, A. E.
2016-12-01
ISO 19115 has enabled interoperability amongst tools, yet many users find it hard to build ISO metadata for their collections because the standard can be large and overly flexible for their needs. The Metadata Management Tool (MMT), part of NASA's Earth Observing System Data and Information System (EOSDIS), offers users a modern, easy-to-use, browser-based tool to develop ISO-compliant metadata. Through a simplified UI experience, metadata curators can create and edit collections without any understanding of the complex ISO 19115 format, while still generating compliant metadata. The MMT is also able to assess the completeness of collection-level metadata by evaluating it against a variety of metadata standards, providing users with clear guidance on how to change their metadata to improve its quality and compliance. It is based on NASA's Unified Metadata Model for Collections (UMM-C), a simpler metadata model that maps cleanly to ISO 19115, allowing metadata authors and curators to meet ISO compliance requirements faster and more accurately. The MMT and UMM-C have been developed in an agile fashion, with recurring end-user tests and reviews to continually refine the tool, the model and the ISO mappings. This process allows continual improvement and evolution to meet the community's needs.
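The completeness-assessment idea reduces to checking a record against a required-element list and turning the gaps into guidance. A toy sketch follows; the field names are placeholders, not the actual UMM-C element names.

```python
# Placeholder field names; the real UMM-C model defines its own required set.
REQUIRED_FIELDS = ["ShortName", "Version", "Abstract", "TemporalExtents",
                   "SpatialExtent", "DataCenters", "Platforms"]

def assess_completeness(record: dict) -> dict:
    """Score a collection record and tell the curator what to fix."""
    missing = [f for f in REQUIRED_FIELDS if not record.get(f)]
    return {
        "score": round(1 - len(missing) / len(REQUIRED_FIELDS), 2),
        "guidance": [f"Provide a value for '{f}'." for f in missing],
    }

print(assess_completeness({"ShortName": "EXAMPLE_L2", "Version": "1"}))
```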
miRiaD: A Text Mining Tool for Detecting Associations of microRNAs with Diseases.
Gupta, Samir; Ross, Karen E; Tudor, Catalina O; Wu, Cathy H; Schmidt, Carl J; Vijay-Shanker, K
2016-04-29
MicroRNAs are increasingly being appreciated as critical players in human diseases, and questions concerning the role of microRNAs arise in many areas of biomedical research. There are several manually curated databases of microRNA-disease associations gathered from the biomedical literature; however, it is difficult for curators of these databases to keep up with the explosion of publications in the microRNA-disease field. Moreover, automated literature mining tools that assist manual curation of microRNA-disease associations currently capture only one microRNA property (expression) in the context of one disease (cancer). Thus, there is a clear need to develop more sophisticated automated literature mining tools that capture a variety of microRNA properties and relations in the context of multiple diseases, to provide researchers with fast access to the most recent published information and to streamline and accelerate manual curation. We have developed miRiaD (microRNAs in association with Disease), a text-mining tool that automatically extracts associations between microRNAs and diseases from the literature. These associations are often not stated directly, and the intermediate relations are often highly informative for the biomedical researcher. Thus, miRiaD extracts the miR-disease pairs together with an explanation for their association. We also developed a procedure that assigns scores to sentences, marking their informativeness, based on the microRNA-disease relation observed within the sentence. miRiaD was applied to the entire Medline corpus, identifying 8301 PMIDs with miR-disease associations. These abstracts and the miR-disease associations are available for browsing at http://biotm.cis.udel.edu/miRiaD. We evaluated the recall and precision of miRiaD with respect to information of high interest to public microRNA-disease database curators (expression and target gene associations), obtaining a recall of 88.46-90.78%. When we expanded the evaluation to include sentences with a wide range of microRNA-disease information that may be of interest to biomedical researchers, miRiaD also performed very well, with an F-score of 89.4%. The informativeness ranking of sentences was evaluated in terms of nDCG (0.977) and correlation metrics (0.678-0.727) when compared to an annotator's ranked list. miRiaD, a high-performance system that can capture a wide variety of microRNA-disease related information, extends beyond the scope of existing microRNA-disease resources. It can be incorporated into manual curation pipelines and serve as a resource for biomedical researchers interested in the role of microRNAs in disease. In our ongoing work we are developing an improved miRiaD web interface that will facilitate complex queries about microRNA-disease relationships, such as "In what diseases does microRNA regulation of apoptosis play a role?" or "Is there overlap in the sets of genes targeted by microRNAs in different types of dementia?"
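The evaluation metrics quoted here are standard and easy to reproduce. The sketch below computes an F-score and an nDCG of the kind reported; the precision/recall pair in the example is illustrative, since the abstract does not state the exact values behind the 89.4% F-score.

```python
import math

def f_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (F1)."""
    return 2 * precision * recall / (precision + recall)

def ndcg(ranking: list, relevance: dict) -> float:
    """Normalized discounted cumulative gain of a ranked list of sentence
    IDs, given gold relevance grades for each ID."""
    def dcg(grades):
        return sum(g / math.log2(i + 2) for i, g in enumerate(grades))
    ideal = sorted(relevance.values(), reverse=True)
    return dcg([relevance[x] for x in ranking]) / dcg(ideal)

# An illustrative precision/recall pair yielding an F-score near 89.4%:
print(round(100 * f_score(0.888, 0.900), 1))                    # -> 89.4
print(round(ndcg(["s1", "s3", "s2"], {"s1": 3, "s2": 2, "s3": 0}), 3))
```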
77 FR 59705 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-28
... to investors in the form of a tax credit, which is expected to stimulate investment in new private capital in low income communities. Applicants must be a CDE to apply for allocation. Affected Public...
Névéol, Aurélie; Wilbur, W. John; Lu, Zhiyong
2012-01-01
High-throughput experiments and bioinformatics techniques are creating an exploding volume of data that are becoming overwhelming to keep track of for biologists and researchers who need to access, analyze and process existing data. Much of the available data are being deposited in specialized databases, such as the Gene Expression Omnibus (GEO) for microarrays or the Protein Data Bank (PDB) for protein structures and coordinates. Data sets are also being described by their authors in publications archived in literature databases such as MEDLINE and PubMed Central. Currently, the curation of links between biological databases and the literature mainly relies on manual labour, which makes it a time-consuming and daunting task. Herein, we analysed the current state of link curation between GEO, PDB and MEDLINE. We found that the link curation is heterogeneous depending on the sources and databases involved, and that overlap between sources is low, <50% for PDB and GEO. Furthermore, we showed that text-mining tools can automatically provide valuable evidence to help curators broaden the scope of articles and database entries that they review. As a result, we made recommendations to improve the coverage of curated links, as well as the consistency of information available from different databases while maintaining high-quality curation. Database URLs: http://www.ncbi.nlm.nih.gov/PubMed, http://www.ncbi.nlm.nih.gov/geo/, http://www.rcsb.org/pdb/ PMID:22685160
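The overlap analysis reported here reduces to set arithmetic over curated links. A minimal sketch follows, using obviously fabricated identifiers; Jaccard similarity is one reasonable choice of overlap measure, not necessarily the one used in the study.

```python
# Each curated source is modeled as a set of (accession, PMID) link pairs;
# the identifiers below are fabricated placeholders.
def overlap(links_a: set, links_b: set) -> float:
    """Fraction of link pairs shared by the two sources (Jaccard)."""
    union = links_a | links_b
    return len(links_a & links_b) / len(union) if union else 0.0

source_a = {("GSE-0001", "PMID-1"), ("GSE-0002", "PMID-2"), ("GSE-0003", "PMID-3")}
source_b = {("GSE-0002", "PMID-2"), ("GSE-0004", "PMID-4")}
print(f"overlap: {overlap(source_a, source_b):.0%}")  # 25% -- low, as in the study
```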
Curating and Preserving the Big Canopy Database System: an Active Curation Approach using SEAD
NASA Astrophysics Data System (ADS)
Myers, J.; Cushing, J. B.; Lynn, P.; Weiner, N.; Ovchinnikova, A.; Nadkarni, N.; McIntosh, A.
2015-12-01
Modern research is increasingly dependent upon highly heterogeneous data and on the associated cyberinfrastructure developed to organize, analyze, and visualize that data. However, due to the complexity and custom nature of such combined data-software systems, it can be very challenging to curate and preserve them for the long term at reasonable cost and in a way that retains their scientific value. In this presentation, we describe how this challenge was met in preserving the Big Canopy Database (CanopyDB) system using an agile approach and leveraging the Sustainable Environment - Actionable Data (SEAD) DataNet project's hosted data services. The CanopyDB system was developed over more than a decade at Evergreen State College to address the needs of forest canopy researchers. It is an early yet sophisticated exemplar of the type of system that has become common in biological research and science in general, including multiple relational databases for different experiments, a custom database generation tool used to create them, an image repository, and desktop and web tools to access, analyze, and visualize this data. SEAD provides secure project spaces with a semantic content abstraction (typed content with arbitrary RDF metadata statements and relationships to other content), combined with a standards-based curation and publication pipeline resulting in packaged research objects with Digital Object Identifiers. Using SEAD, our cross-project team was able to incrementally ingest CanopyDB components (images, datasets, software source code, documentation, executables, and virtualized services) and to iteratively define and extend the metadata and relationships needed to document them. We believe that both the process, and the richness of the resultant standards-based (OAI-ORE) preservation object, hold lessons for the development of best-practice solutions for preserving scientific data in association with the tools and services needed to derive value from it.
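The standards-based (OAI-ORE) preservation object mentioned above can be sketched with rdflib. The identifiers and file names below are invented for illustration and do not reflect SEAD's actual packaging pipeline; only the ORE vocabulary URI is real.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

ORE = Namespace("http://www.openarchives.org/ore/terms/")

g = Graph()
g.bind("ore", ORE)
agg = URIRef("https://example.org/canopydb/aggregation")   # placeholder IDs
g.add((agg, RDF.type, ORE.Aggregation))
g.add((agg, DCTERMS.title, Literal("CanopyDB preservation object")))
for name in ["images.tar", "schema.sql", "source-code.zip", "services-vm.ova"]:
    g.add((agg, ORE.aggregates, URIRef(f"https://example.org/canopydb/{name}")))

print(g.serialize(format="turtle"))
```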
Whiffin, Nicola; Walsh, Roddy; Govind, Risha; Edwards, Matthew; Ahmad, Mian; Zhang, Xiaolei; Tayal, Upasana; Buchan, Rachel; Midwinter, William; Wilk, Alicja E; Najgebauer, Hanna; Francis, Catherine; Wilkinson, Sam; Monk, Thomas; Brett, Laura; O'Regan, Declan P; Prasad, Sanjay K; Morris-Rosendahl, Deborah J; Barton, Paul J R; Edwards, Elizabeth; Ware, James S; Cook, Stuart A
2018-01-25
Purpose: Internationally adopted variant interpretation guidelines from the American College of Medical Genetics and Genomics (ACMG) are generic and require disease-specific refinement. Here we developed CardioClassifier (http://www.cardioclassifier.org), a semiautomated decision-support tool for inherited cardiac conditions (ICCs). Methods: CardioClassifier integrates data retrieved from multiple sources with user-input case-specific information, through an interactive interface, to support variant interpretation. Combining disease- and gene-specific knowledge with variant observations in large cohorts of cases and controls, we refined 14 computational ACMG criteria and created three ICC-specific rules. Results: We benchmarked CardioClassifier on 57 expertly curated variants and show full retrieval of all computational data, concordantly activating 87.3% of rules. A generic annotation tool identified fewer than half as many clinically actionable variants (64/219 vs. 156/219, Fisher's P = 1.1 × 10^-18), with important false positives, illustrating the critical importance of disease- and gene-specific annotations. CardioClassifier identified putatively disease-causing variants in 33.7% of 327 cardiomyopathy cases, comparable with leading ICC laboratories. Through addition of manually curated data, variants found in over 40% of cardiomyopathy cases are fully annotated, without requiring additional user-input data. Conclusion: CardioClassifier is an ICC-specific decision-support tool that integrates expertly curated computational annotations with case-specific data to generate fast, reproducible, and interactive variant pathogenicity reports, according to best-practice guidelines. GENETICS in MEDICINE advance online publication, 25 January 2018; doi:10.1038/gim.2017.258.
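The quoted Fisher's exact test is easy to reproduce from the reported counts, assuming both tools were benchmarked on the same 219 variants.

```python
from scipy.stats import fisher_exact

# 2x2 contingency table from the reported counts:
#                    actionable   not actionable
# generic tool            64           155
# CardioClassifier       156            63
table = [[64, 219 - 64], [156, 219 - 156]]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.4f}, P = {p_value:.2e}")  # P on the order of 1e-18
```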
A Relevancy Algorithm for Curating Earth Science Data Around Phenomenon
NASA Technical Reports Server (NTRS)
Maskey, Manil; Ramachandran, Rahul; Li, Xiang; Weigel, Amanda; Bugbee, Kaylin; Gatlin, Patrick; Miller, J. J.
2017-01-01
Earth science data are being collected for various science needs and applications, processed using different algorithms at multiple resolutions and coverages, and then archived at different archiving centers for distribution and stewardship, causing difficulty in data discovery. Curation, which typically occurs in museums, art galleries, and libraries, is traditionally defined as the process of collecting and organizing information around a common subject matter or a topic of interest. Curating data sets around topics or areas of interest addresses some of the data discovery needs in the field of Earth science, especially for unanticipated users of data. This paper describes a methodology to automate search and selection of data around specific phenomena. Different components of the methodology, including the assumptions, the process, and the relevancy-ranking algorithm, are described. The paper makes two unique contributions to improving data search and discovery capabilities. First, the paper describes a novel methodology developed for automatically curating data around a topic using Earth science metadata records. Second, the methodology has been implemented as a standalone web service that is utilized to augment search and usability of data in a variety of tools.
A relevancy algorithm for curating earth science data around phenomenon
NASA Astrophysics Data System (ADS)
Maskey, Manil; Ramachandran, Rahul; Li, Xiang; Weigel, Amanda; Bugbee, Kaylin; Gatlin, Patrick; Miller, J. J.
2017-09-01
Earth science data are being collected for various science needs and applications, processed using different algorithms at multiple resolutions and coverages, and then archived at different archiving centers for distribution and stewardship, causing difficulty in data discovery. Curation, which typically occurs in museums, art galleries, and libraries, is traditionally defined as the process of collecting and organizing information around a common subject matter or a topic of interest. Curating data sets around topics or areas of interest addresses some of the data discovery needs in the field of Earth science, especially for unanticipated users of data. This paper describes a methodology to automate search and selection of data around specific phenomena. Different components of the methodology, including the assumptions, the process, and the relevancy-ranking algorithm, are described. The paper makes two unique contributions to improving data search and discovery capabilities. First, the paper describes a novel methodology developed for automatically curating data around a topic using Earth science metadata records. Second, the methodology has been implemented as a stand-alone web service that is utilized to augment search and usability of data in a variety of tools.
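A minimal sketch of relevancy ranking in the spirit of this methodology follows. The term weights and metadata records are invented; the published algorithm derives weights from the ontology's relations to the phenomenon of interest rather than hard-coding them.

```python
# Hypothetical ontology-derived term weights for one phenomenon.
HURRICANE_TERMS = {"hurricane": 1.0, "tropical cyclone": 1.0,
                   "precipitation": 0.6, "sea surface temperature": 0.5,
                   "wind speed": 0.5}

def relevancy(metadata_text: str, term_weights: dict) -> float:
    """Sum the weights of the ontology terms found in a metadata record."""
    text = metadata_text.lower()
    return sum(w for term, w in term_weights.items() if term in text)

records = {
    "Precipitation L3": "Gridded precipitation estimates from passive microwave",
    "SST retrievals": "Daily sea surface temperature over the global ocean",
    "Aerosol index": "Daily aerosol index over land",
}
ranked = sorted(records, key=lambda r: relevancy(records[r], HURRICANE_TERMS),
                reverse=True)
print(ranked)  # datasets most relevant to hurricanes come first
```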
WormBase 2014: new views of curated biology
Harris, Todd W.; Baran, Joachim; Bieri, Tamberlyn; Cabunoc, Abigail; Chan, Juancarlos; Chen, Wen J.; Davis, Paul; Done, James; Grove, Christian; Howe, Kevin; Kishore, Ranjana; Lee, Raymond; Li, Yuling; Muller, Hans-Michael; Nakamura, Cecilia; Ozersky, Philip; Paulini, Michael; Raciti, Daniela; Schindelman, Gary; Tuli, Mary Ann; Auken, Kimberly Van; Wang, Daniel; Wang, Xiaodong; Williams, Gary; Wong, J. D.; Yook, Karen; Schedl, Tim; Hodgkin, Jonathan; Berriman, Matthew; Kersey, Paul; Spieth, John; Stein, Lincoln; Sternberg, Paul W.
2014-01-01
WormBase (http://www.wormbase.org/) is a highly curated resource dedicated to supporting research using the model organism Caenorhabditis elegans. With an electronic history predating the World Wide Web, WormBase contains information ranging from the sequence and phenotype of individual alleles to genome-wide studies generated using next-generation sequencing technologies. In recent years, we have expanded the contents to include data on additional nematodes of agricultural and medical significance, bringing the knowledge of C. elegans to bear on these systems and providing support for underserved research communities. Manual curation of the primary literature remains a central focus of the WormBase project, providing users with reliable, up-to-date and highly cross-linked information. In this update, we describe efforts to organize the original atomized and highly contextualized curated data into integrated syntheses of discrete biological topics. Next, we discuss our experiences coping with the vast increase in available genome sequences made possible through next-generation sequencing platforms. Finally, we describe some of the features and tools of the new WormBase Web site that help users better find and explore data of interest. PMID:24194605
NASA Astrophysics Data System (ADS)
Hedstrom, M. L.; Kumar, P.; Myers, J.; Plale, B. A.
2012-12-01
In data science, the most common sequence of steps for data curation are to 1) curate data, 2) enable data discovery, and 3) provide for data reuse. The Sustainable Environments - Actionable Data (SEAD) project, funded through NSF's DataNet program, is creating an environment for sustainability scientists to discover data first, reuse data next, and curate data though an on-going process that we call Active and Social Curation. For active curation we are developing tools and services that support data discovery, data management, and data enhancement for the community while the data is still being used actively for research. We are creating an Active Content Repository, using drop box, semantic web technologies, and a Flickr-like interface for researchers to "drop" data into a repository where it will be replicated and minimally discoverable. For social curation, we are deploying a social networking tool, VIVO, which will allow researchers to discover data-publications-people (e.g. expertise) through a route that can start at any of those entry points. The other dimension of social curation is developing mechanisms to open data for community input, for example, using ranking and commenting mechanisms for data sets and a community-sourcing capability to add tags, clean up and validate data sets. SEAD's strategies and services are aimed at the sustainability science community, which faces numerous challenges including discovery of useful data, cleaning noisy observational data, synthesizing data of different types, defining appropriate models, managing and preserving their research data, and conveying holistic results to colleagues, students, decision makers, and the public. Sustainability researchers make significant use of centrally managed data from satellites and national sensor networks, national scientific and statistical agencies, and data archives. At the same time, locally collected data and custom derived data products that combine observations and measurements from local, national, and global sources are critical resources that have disproportionately high value relative to their size. Sustainability science includes a diverse and growing community of domain scientists, policy makers, private sector investors, green manufacturers, citizen scientists, and informed consumers. These communities need actionable data in order to assess the impacts of alternate scenarios, evaluate the cost-benefit tradeoffs of different solutions, and defend their recommendations and decisions. SEAD's goal is to extend its services to other communities in the "long tail" that may benefit from new approaches to infrastructure development which take into account the social and economic characteristics of diverse and dispersed data producers and consumers. For example, one barrier to data reuse is the difficulty of discovering data that might be valuable for a particular study, model, or decision. Making data minimally discoverable saves the community time expended on futile searches and creates a market, of sorts, for the data. Creating very low barriers to entry to a network where data can be discovered and acted upon vastly reduces this disincentive to sharing data. SEAD's approach allows communities to make small incremental improvements in data curation based on their own priorities and needs.
Altermann, Eric; Lu, Jingli; McCulloch, Alan
2017-01-01
Expert-curated annotation remains one of the critical steps in achieving a reliable, biologically relevant annotation. Here we announce the release of GAMOLA2, a user-friendly and comprehensive software package to process, annotate and curate draft and complete bacterial, archaeal, and viral genomes. GAMOLA2 is a wrapper tool that combines gene model determination, functional Blast, COG, Pfam, and TIGRfam analyses with structural predictions, including detection of tRNAs, rRNA genes, non-coding RNAs, signal protein cleavage sites, transmembrane helices, CRISPR repeats and vector sequence contaminations. GAMOLA2 has already been validated in a wide range of bacterial and archaeal genomes, and its modular concept allows easy addition of further functionality in future releases. A modified and adapted version of the Artemis Genome Viewer (Sanger Institute) has been developed to leverage the additional features and underlying information provided by the GAMOLA2 analysis, and is part of the software distribution. In addition to genome annotations, GAMOLA2 features, among others, supplemental modules that assist in the creation of custom Blast databases, annotation transfers between genome versions, and the preparation of Genbank files for submission via the NCBI Sequin tool. GAMOLA2 is intended to be run under a Linux environment, whereas the subsequent visualization and manual curation in Artemis is mobile and platform independent. The development of GAMOLA2 is ongoing and community driven. New functionality can easily be added upon user requests, ensuring that GAMOLA2 provides information relevant to microbiologists. The software is available free of charge for academic use. PMID:28386247
Cloning, Sequencing, and Characterization of the SdeAB Multidrug Efflux Pump of Serratia marcescens
Kumar, Ayush; Worobec, Elizabeth A.
2005-01-01
Serratia marcescens is an important nosocomial agent known for causing various infections in immunocompromised individuals. Resistance of this organism to a broad spectrum of antibiotics makes the treatment of infections very difficult. This study was undertaken to identify multidrug resistance efflux pumps in S. marcescens. Three mutant strains of S. marcescens were isolated in vitro by the serial passaging of a wild-type strain in culture medium supplemented with ciprofloxacin, norfloxacin, or ofloxacin. Fluoroquinolone accumulation assays were performed to detect the presence of a proton gradient-dependent efflux mechanism. Two of the mutant strains were found to be effluxing norfloxacin, ciprofloxacin, and ofloxacin, while the third was found to efflux only ofloxacin. A genomic library of S. marcescens wild-type strain UOC-67 was constructed and screened for RND pump-encoding genes by using DNA probes for two putative RND pump-encoding genes. Two different loci were identified: sdeAB, encoding an MFP and an RND pump, and sdeCDE, encoding an MFP and two different RND pumps. Northern blot analysis revealed overexpression of sdeB in two mutant strains effluxing fluoroquinolones. Analysis of the sdeAB and sdeCDE loci in Escherichia coli strain AG102MB, deficient in the RND pump (AcrB), revealed that gene products of sdeAB are responsible for the efflux of a diverse range of substrates that includes ciprofloxacin, norfloxacin, ofloxacin, chloramphenicol, sodium dodecyl sulfate, ethidium bromide, and n-hexane, while those of sdeCDE did not result in any change in susceptibilities to any of these agents. PMID:15793131
Astromaterials Acquisition and Curation Office (KT) Overview
NASA Technical Reports Server (NTRS)
Allen, Carlton
2014-01-01
The Astromaterials Acquisition and Curation Office has the unique responsibility to curate NASA's extraterrestrial samples - from past and forthcoming missions - into the indefinite future. Currently, curation includes documentation, preservation, physical security, preparation, and distribution of samples from the Moon, asteroids, comets, the solar wind, and the planet Mars. Each of these sample sets has a unique history and comes from a unique environment. The curation laboratories and procedures developed over 40 years have proven both necessary and sufficient to serve the evolving needs of a worldwide research community. A new generation of sample return missions to destinations across the solar system is being planned and proposed. The curators are developing the tools and techniques to meet the challenges of these new samples. Extraterrestrial samples pose unique curation requirements. These samples were formed and exist under conditions strikingly different from those on the Earth's surface. Terrestrial contamination would destroy much of the scientific significance of extraterrestrial materials. To preserve the research value of these precious samples, contamination must be minimized, understood, and documented. In addition, the samples must be preserved - as far as possible - from physical and chemical alteration. The elaborate curation facilities at JSC were designed and constructed, and have been operated for many years, to keep sample contamination and alteration to a minimum. Currently, JSC curates seven collections of extraterrestrial samples: (a) lunar rocks and soils collected by the Apollo astronauts, (b) meteorites collected on dedicated expeditions to Antarctica, (c) cosmic dust collected by high-altitude NASA aircraft, (d) solar wind atoms collected by the Genesis spacecraft, (e) comet particles collected by the Stardust spacecraft, (f) interstellar dust particles collected by the Stardust spacecraft, and (g) asteroid soil particles collected by the Japan Aerospace Exploration Agency (JAXA) Hayabusa spacecraft. Each of these sample sets has a unique history and comes from a unique environment. We have developed specialized laboratories and practices over many years to preserve and protect the samples, not only for current research but for studies that may be carried out in the indefinite future.
Patel, Yoshita; Bahlhorn, Hannah; Zafar, Saniya; Zwetchkenbaum, Samuel; Eisbruch, Avraham; Murdoch-Kinch, Carol Anne
2012-07-01
Oral complications of radiation therapy for head and neck cancer (HNC) are associated with a significant decline in oral health-related quality of life (OHQOL). The dentist, working with the radiation oncologist and the rest of the health care team, plays an important role in the prevention and management of these complications, but patients do not always receive care consistent with current guidelines. This study investigated barriers to recommended care. There is variability in knowledge and practice among dentists and radiation oncologists regarding the dental management of patients treated with head and neck radiotherapy (HNRT), and inadequate communication and collaboration between members of the patient's health care team contribute to inconsistencies in the application of clinical care guidelines. There is interest in, and a need for, continuing dental education (CDE) and continuing medical education (CME) on this topic. A questionnaire was developed to assess dentists' knowledge and practice regarding the dental management of HNC patients and their interest in CDE on this topic. All members of the Michigan Dental Association (MDA) with email addresses were asked to complete the survey online, and a random sample of MDA members without email addresses was invited to complete a paper version of the same survey. All Michigan members of the American Society for Radiation Oncology (ASTRO) were invited to complete an online version of the survey modified for radiation oncologists. The response rate was 47.9% for dentists and 22.3% for radiation oncologists. Of the dentists who responded, 81% reported that a major barrier to providing dental treatment before radiotherapy was a lack of time between the initial dental consultation and the start of radiation; inadequate communication between health care providers was blamed most frequently for this. Ten percent of the dentists and 25% of the radiation oncologists reported that they did not treat HNC patients because they lacked adequate training, and 55% of dental respondents said that they did not feel adequately trained in dental school to treat patients who have had head and neck radiation therapy. Most respondents (radiation oncologists 67%; dentists 72%) were interested in CDE and CME on this topic. These results suggest a need for CDE and CME for Michigan dentists and radiation oncologists on the oral management of HNC patients. Improved training and communication between health professionals could improve patient outcomes and lead to more consistent application of clinical care guidelines.
ERIC Educational Resources Information Center
ExpandED Schools, 2014
2014-01-01
This guide is a list of tools that can be used in continued implementation of strong programming powered by Social and Emotional Learning (SEL) competencies. This curated resource pulls from across the landscape of policy, research and practice, with a description of each tool gathered directly from its website.
Wang, Shur-Jen; Laulederkind, Stanley J F; Hayman, G Thomas; Petri, Victoria; Smith, Jennifer R; Tutaj, Marek; Nigam, Rajni; Dwinell, Melinda R; Shimoyama, Mary
2016-08-01
Cardiovascular diseases are complex diseases caused by a combination of genetic and environmental factors. To facilitate progress in complex disease research, the Rat Genome Database (RGD) provides the community with a disease portal where genome objects and biological data related to cardiovascular diseases are systematically organized. The purpose of this study is to present biocuration at RGD, including disease, genetic, and pathway data. The RGD curation team uses controlled vocabularies/ontologies to organize data curated from the published literature or imported from disease and pathway databases. These organized annotations are associated with genes, strains, and quantitative trait loci (QTLs), thus linking functional annotations to genome objects. Screen shots from the web pages are used to demonstrate the organization of annotations at RGD. The human cardiovascular disease genes identified by annotations were grouped according to data sources and their annotation profiles were compared by in-house tools and other enrichment tools available to the public. The analysis results show that the imported cardiovascular disease genes from ClinVar and OMIM are functionally different from the RGD manually curated genes in terms of pathway and Gene Ontology annotations. The inclusion of disease genes from other databases enriches the collection of disease genes not only in quantity but also in quality. Copyright © 2016 the American Physiological Society.
AtomPy: an open atomic-data curation environment
NASA Astrophysics Data System (ADS)
Bautista, Manuel; Mendoza, Claudio; Boswell, Josiah S; Ajoku, Chukwuemeka
2014-06-01
We present a cloud-computing environment for atomic data curation, networking among atomic data providers and users, teaching-and-learning, and interfacing with spectral modeling software. The system is based on Google-Drive Sheets, Pandas (Python Data Analysis Library) DataFrames, and IPython Notebooks for open community-driven curation of atomic data for scientific and technological applications. The atomic model for each ionic species is contained in a multi-sheet Google-Drive workbook, where the atomic parameters from all known public sources are progressively stored. Metadata (provenance, community discussion, etc.) accompanying every entry in the database are stored through Notebooks. Education tools on the physics of atomic processes as well as their relevance to plasma and spectral modeling are based on IPython Notebooks that integrate written material, images, videos, and active computer-tool workflows. Data processing workflows and collaborative software developments are encouraged and managed through the GitHub social network. Relevant issues this platform intends to address are: (i) data quality by allowing open access to both data producers and users in order to attain completeness, accuracy, consistency, provenance and currentness; (ii) comparisons of different datasets to facilitate accuracy assessment; (iii) downloading to local data structures (i.e. Pandas DataFrames) for further manipulation and analysis by prospective users; and (iv) data preservation by avoiding the discard of outdated sets.
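Pulling a published Google-Drive worksheet into a Pandas DataFrame, as AtomPy's design implies, can be done through the sheet's CSV export URL. A sketch follows; the sheet ID and the column names are placeholders, not a real AtomPy workbook.

```python
import pandas as pd

# SHEET_ID is a placeholder; a published sheet's CSV export URL loads
# directly into a DataFrame without any Google API credentials.
SHEET_ID = "YOUR_SHEET_ID"
GID = "0"  # worksheet (tab) within the workbook
url = (f"https://docs.google.com/spreadsheets/d/{SHEET_ID}"
       f"/export?format=csv&gid={GID}")

levels = pd.read_csv(url)   # e.g. columns: config, term, energy, source, comment
print(levels.head())
```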
Data Curation for the Exploitation of Large Earth Observation Products Databases - The MEA system
NASA Astrophysics Data System (ADS)
Mantovani, Simone; Natali, Stefano; Barboni, Damiano; Cavicchi, Mario; Della Vecchia, Andrea
2014-05-01
National space agencies, under the umbrella of the European Space Agency, are working intensively on solutions for handling Big Data and for managing and exploiting the related knowledge (metadata, software tools and services). The continuously increasing amount of long-term and historic data held in EO facilities in the form of online datasets and archives, the incoming satellite observation platforms that will generate an impressive amount of new data, and the new EU approach to data distribution policy make it necessary to address technologies for the long-term management of these data sets, including their consolidation, preservation, distribution, continuation and curation across multiple missions. The management of long EO data time series from continuing or historic missions - with more than 20 years of data available already today - requires technical solutions and technologies which differ considerably from those exploited by existing systems. Several tools, both open source and commercial, already provide technologies to handle data and metadata preparation, access and visualization via OGC standard interfaces. This study describes the Multi-sensor Evolution Analysis (MEA) system and the data curation concept as approached and implemented within the ASIM and EarthServer projects, funded by the European Space Agency and the European Commission, respectively.
BioSurfDB: knowledge and algorithms to support biosurfactants and biodegradation studies
Oliveira, Jorge S.; Araújo, Wydemberg; Lopes Sales, Ana Isabela; de Brito Guerra, Alaine; da Silva Araújo, Sinara Carla; de Vasconcelos, Ana Tereza Ribeiro; Agnez-Lima, Lucymara F.; Freitas, Ana Teresa
2015-01-01
Crude oil extraction, transportation and use provoke the contamination of countless ecosystems. Therefore, bioremediation through surfactant mobilization or biodegradation is an important subject, both economically and environmentally. Bioremediation research received a great boost from recent advances in metagenomics, which enabled the sequencing of uncultured microorganisms and provided new insights into surfactant-producing and/or oil-degrading bacteria. Many research studies are making available genomic data from unknown organisms obtained from metagenomic analysis of oil-contaminated environmental samples. These new datasets demand new tools and data repositories tailored to bioremediation data analysis. This work presents BioSurfDB, www.biosurfdb.org, a curated relational information system integrating data from: (i) metagenomes; (ii) organisms; (iii) biodegradation-relevant genes, proteins and their metabolic pathways; (iv) bioremediation experiment results, with specific pollutant treatment efficiencies by surfactant-producing organisms; and (v) a curated list of biosurfactants, grouped by producing organism, surfactant name, class and reference. The main goal of this repository is to gather information on the characterization of biological compounds and mechanisms involved in biosurfactant production and/or biodegradation and to make it available in a curated way, associated with a number of computational tools to support studies of genomic and metagenomic data. Database URL: www.biosurfdb.org PMID:25833955
Saccharomyces genome database informs human biology
Skrzypek, Marek S; Nash, Robert S; Wong, Edith D; MacPherson, Kevin A; Karra, Kalpana; Binkley, Gail; Simison, Matt; Miyasato, Stuart R
2018-01-01
The Saccharomyces Genome Database (SGD; http://www.yeastgenome.org) is an expertly curated database of literature-derived functional information for the model organism budding yeast, Saccharomyces cerevisiae. SGD constantly strives to synergize new types of experimental data and bioinformatics predictions with existing data, and to organize them into a comprehensive and up-to-date information resource. The primary mission of SGD is to facilitate research into the biology of yeast and to provide this wealth of information to advance, in many ways, research on other organisms, even those as evolutionarily distant as humans. To build such a bridge between biological kingdoms, SGD is curating data regarding yeast-human complementation, in which a human gene can successfully replace the function of a yeast gene, and/or vice versa. These data are manually curated from published literature, made available for download, and incorporated into a variety of analysis tools provided by SGD. PMID:29140510
TREATING HEMOGLOBINOPATHIES USING GENE CORRECTION APPROACHES: PROMISES AND CHALLENGES
Cottle, Renee N.; Lee, Ciaran M.; Bao, Gang
2016-01-01
Hemoglobinopathies are genetic disorders caused by aberrant hemoglobin expression or structure changes, resulting in severe mortality and health disparities worldwide. Sickle cell disease (SCD) and β-thalassemia, the most common forms of hemoglobinopathies, are typically treated using transfusions and pharmacological agents. Allogeneic hematopoietic stem cell transplantation is the only curative therapy, but has limited clinical applicability. Although gene therapy approaches have been proposed based on the insertion and forced expression of wild-type or anti-sickling β-globin variants, safety concerns may impede their clinical application. A novel curative approach is nuclease-based gene correction, which involves the application of precision genome editing tools to correct the disease-causing mutation. This review describes the development and potential application of gene therapy and precision genome editing approaches for treating SCD and β-thalassemia. The opportunities and challenges in advancing a curative therapy for hemoglobinopathies are also discussed. PMID:27314256
ECOTOX knowledgebase: New tools for data visualization and database interoperability
The ECOTOXicology knowledgebase (ECOTOX) is a comprehensive, curated database that summarizes toxicology data from single chemical exposure studies to terrestrial and aquatic organisms. The ECOTOX Knowledgebase provides risk assessors and researchers consistent information on toxi...
Urban, Martin; Cuzick, Alayne; Rutherford, Kim; Irvine, Alistair; Pedro, Helder; Pant, Rashmi; Sadanadan, Vidyendra; Khamari, Lokanath; Billal, Santoshkumar; Mohanty, Sagar; Hammond-Kosack, Kim E.
2017-01-01
The pathogen–host interactions database (PHI-base) is available at www.phi-base.org. PHI-base contains expertly curated molecular and biological information on genes proven to affect the outcome of pathogen–host interactions reported in peer-reviewed research articles. In addition, literature that indicates specific gene alterations that did not affect the disease interaction phenotype is curated to provide complete datasets for comparative purposes. Viruses are not included. Here we describe a revised PHI-base Version 4 data platform with improved search, filtering and extended data display functions. A PHIB-BLAST search function is provided, along with a link to PHI-Canto, a tool for authors to directly curate their own published data into PHI-base. The new release of PHI-base Version 4.2 (October 2016) has an increased data content containing information from 2219 manually curated references. The data provide information on 4460 genes from 264 pathogens tested on 176 hosts in 8046 interactions. Prokaryotic and eukaryotic pathogens are represented in almost equal numbers. Approximately 70% of the host species are plants; the remaining 30% are other species of medical and/or environmental importance. Additional data types included in PHI-base 4 are the direct targets of pathogen effector proteins in experimental and natural host organisms. The curation problems encountered and the future directions of the PHI-base project are briefly discussed. PMID:27915230
IMG ER: a system for microbial genome annotation expert review and curation.
Markowitz, Victor M; Mavromatis, Konstantinos; Ivanova, Natalia N; Chen, I-Min A; Chu, Ken; Kyrpides, Nikos C
2009-09-01
A rapidly increasing number of microbial genomes are sequenced by organizations worldwide and are eventually included into various public genome data resources. The quality of the annotations depends largely on the original dataset providers, with erroneous or incomplete annotations often carried over into the public resources and difficult to correct. We have developed an Expert Review (ER) version of the Integrated Microbial Genomes (IMG) system, with the goal of supporting systematic and efficient revision of microbial genome annotations. IMG ER provides tools for the review and curation of annotations of both new and publicly available microbial genomes within IMG's rich integrated genome framework. New genome datasets are included into IMG ER prior to their public release either with their native annotations or with annotations generated by IMG ER's annotation pipeline. IMG ER tools allow addressing annotation problems detected with IMG's comparative analysis tools, such as genes missed by gene prediction pipelines or genes without an associated function. Over the past year, IMG ER was used for improving the annotations of about 150 microbial genomes.
ECOTOX Knowledgebase: New tools for data visualization and database interoperability -Poster
The ECOTOXicology knowledgebase (ECOTOX) is a comprehensive, curated database that summarizes toxicology data from single chemical exposure studies to terrestrial and aquatic organisms. The ECOTOX Knowledgebase provides risk assessors and researchers consistent information on tox...
Plant Reactome: a resource for plant pathways and comparative analysis
Naithani, Sushma; Preece, Justin; D'Eustachio, Peter; Gupta, Parul; Amarasinghe, Vindhya; Dharmawardhana, Palitha D.; Wu, Guanming; Fabregat, Antonio; Elser, Justin L.; Weiser, Joel; Keays, Maria; Fuentes, Alfonso Munoz-Pomer; Petryszak, Robert; Stein, Lincoln D.; Ware, Doreen; Jaiswal, Pankaj
2017-01-01
Plant Reactome (http://plantreactome.gramene.org/) is a free, open-source, curated plant pathway database portal, provided as part of the Gramene project. The database provides intuitive bioinformatics tools for the visualization, analysis and interpretation of pathway knowledge to support genome annotation, genome analysis, modeling, systems biology, basic research and education. Plant Reactome employs the structural framework of a plant cell to show metabolic, transport, genetic, developmental and signaling pathways. We manually curate molecular details of pathways in these domains for reference species Oryza sativa (rice) supported by published literature and annotation of well-characterized genes. Two hundred twenty-two rice pathways, 1025 reactions associated with 1173 proteins, 907 small molecules and 256 literature references have been curated to date. These reference annotations were used to project pathways for 62 model, crop and evolutionarily significant plant species based on gene homology. Database users can search and browse various components of the database, visualize curated baseline expression of pathway-associated genes provided by the Expression Atlas and upload and analyze their Omics datasets. The database also offers data access via Application Programming Interfaces (APIs) and in various standardized pathway formats, such as SBML and BioPAX. PMID:27799469
Guidelines for the functional annotation of microRNAs using the Gene Ontology
D'Eustachio, Peter; Smith, Jennifer R.; Zampetaki, Anna
2016-01-01
MicroRNA regulation of developmental and cellular processes is a relatively new field of study, and the available research data have not been organized to enable its inclusion in pathway and network analysis tools. The association of gene products with terms from the Gene Ontology is an effective method to analyze functional data, but until recently there has been no substantial effort dedicated to applying Gene Ontology terms to microRNAs. Consequently, when performing functional analysis of microRNA data sets, researchers have had to rely instead on the functional annotations associated with the genes encoding microRNA targets. In consultation with experts in the field of microRNA research, we have created comprehensive recommendations for the Gene Ontology curation of microRNAs. This curation manual will enable provision of a high-quality, reliable set of functional annotations for the advancement of microRNA research. Here we describe the key aspects of the work, including development of the Gene Ontology to represent this data, standards for describing the data, and guidelines to support curators making these annotations. The full microRNA curation guidelines are available on the GO Consortium wiki (http://wiki.geneontology.org/index.php/MicroRNA_GO_annotation_manual). PMID:26917558
ERIC Educational Resources Information Center
Eklund, Brandon; Prat-Resina, Xavier
2014-01-01
ChemEd X Data is an open web tool that collects and curates physical and chemical data of hundreds of substances. This tool allows students to navigate, select, and graphically represent data such as boiling and melting points, enthalpies of combustion, and heat capacities for hundreds of molecules. By doing so, students can independently identify…
ABO, Rhesus, and Kell Antigens, Alleles, and Haplotypes in West Bengal, India
Basu, Debapriya; Datta, Suvro Sankha; Montemayor, Celina; Bhattacharya, Prasun; Mukherjee, Krishnendu; Flegel, Willy A.
2018-01-01
Background: Few studies have documented the blood group antigens in the population of eastern India. Frequencies of some common alleles and haplotypes were unknown. We describe phenotype, allele, and haplotype frequencies in the state of West Bengal, India. Methods: We tested 1,528 blood donors at the Medical College Hospital, Kolkata. The common antigens of the ABO, Rhesus, and Kell blood group systems were determined by standard serologic methods in tubes. Allele and haplotype frequencies were calculated with an iterative method that yielded maximum-likelihood estimates under the assumption of a Hardy-Weinberg equilibrium. Results: The prevalence of the ABO antigens was B (34%), O (32%), A (25%), and AB (9%), with ABO allele frequencies of O = 0.567, A = 0.189, and B = 0.244. The D antigen (RH1) was observed in 96.6% of the blood donors, with RH haplotype frequencies such as CDe = 0.688809, cde = 0.16983 and CdE = 0.000654. The K antigen (K1) was observed in 12 donors (0.79%), with KEL allele frequencies of K = 0.004 and k = 0.996. Conclusions: For the Bengali population living in the south of West Bengal, we established the frequencies of the major clinically relevant antigens in the ABO, Rhesus, and Kell blood group systems and derived estimates for the underlying ABO and KEL alleles and RH haplotypes. Such blood donor screening will improve the availability of compatible red cell units for transfusion. Our approach, using widely available routine methods, can readily be applied in other regions where a sufficient supply of blood typed for the Rh and K antigens is lacking. PMID:29593462
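For ABO, the "iterative method that yielded maximum-likelihood estimates under the assumption of a Hardy-Weinberg equilibrium" is classically the gene-counting (EM) algorithm. A sketch follows, fed with phenotype counts that match the reported percentages of the 1,528 donors; it reproduces allele frequencies close to the published ones.

```python
def abo_allele_frequencies(n_A, n_B, n_AB, n_O, iterations=100):
    """Gene-counting (EM) maximum-likelihood estimates of ABO allele
    frequencies under Hardy-Weinberg equilibrium, from phenotype counts."""
    n = n_A + n_B + n_AB + n_O
    p, q, r = 1 / 3, 1 / 3, 1 / 3          # freq(A), freq(B), freq(O)
    for _ in range(iterations):
        # E-step: apportion ambiguous phenotypes among their genotypes
        n_AA = n_A * p * p / (p * p + 2 * p * r)
        n_AO = n_A - n_AA
        n_BB = n_B * q * q / (q * q + 2 * q * r)
        n_BO = n_B - n_BB
        # M-step: re-estimate allele frequencies by counting alleles
        p = (2 * n_AA + n_AO + n_AB) / (2 * n)
        q = (2 * n_BB + n_BO + n_AB) / (2 * n)
        r = (2 * n_O + n_AO + n_BO) / (2 * n)
    return p, q, r

# Counts approximating the reported percentages of the 1,528 donors
p, q, r = abo_allele_frequencies(n_A=382, n_B=520, n_AB=138, n_O=488)
print(f"A = {p:.3f}, B = {q:.3f}, O = {r:.3f}")  # near 0.189, 0.244, 0.567
```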
Effect of lineage-specific metabolic traits of Lactobacillus reuteri on sourdough microbial ecology.
Lin, Xiaoxi B; Gänzle, Michael G
2014-09-01
This study determined the effects of specific metabolic traits of Lactobacillus reuteri on its competitiveness in sourdoughs. The competitiveness of lactobacilli in sourdough generally depends on their growth rate; acid resistance additionally contributes to competitiveness in sourdoughs with long fermentation times. Glycerol metabolism via glycerol dehydratase (gupCDE) accelerates growth by regenerating reduced cofactors; glutamate metabolism via glutamate decarboxylase (gadB) increases acid resistance by generating a proton motive force. Glycerol and glutamate metabolism are lineage-specific traits in L. reuteri; therefore, this study employed glycerol dehydratase-positive sourdough isolates of human-adapted L. reuteri lineage I, glutamate decarboxylase-positive strains of rodent-adapted L. reuteri lineage II, as well as mutants with deletions in gadB or gupCDE. The competitiveness of the strains was quantified by inoculating wheat and sorghum sourdoughs with defined strains, followed by propagation of the doughs with a 10% inoculum and 12-h or 72-h fermentation cycles. Lineage I L. reuteri strains dominated sourdoughs propagated with 12-h fermentation cycles; lineage II L. reuteri strains dominated sourdoughs propagated with 72-h fermentation cycles. L. reuteri 100-23ΔgadB was outcompeted by its wild-type strain in sourdoughs fermented with 72-h fermentation cycles; L. reuteri FUA3400ΔgupCDE was outcompeted by its wild-type strain in sourdoughs fermented with both 12-h and 72-h fermentation cycles. Competition experiments with isogenic pairs of strains resulted in a constant rate of displacement of the less competitive mutant strain. In conclusion, lineage-specific traits of L. reuteri determine the competitiveness of this species in sourdough fermentations. Copyright © 2014, American Society for Microbiology. All Rights Reserved.
Pore-scale dynamics of salt transport and distribution in drying porous media
NASA Astrophysics Data System (ADS)
Shokri, Nima
2014-01-01
Understanding the physics of water evaporation from saline porous media is important in many natural and engineering applications, such as the durability of building materials, the preservation of monuments, water quality, and mineral-fluid interactions. We applied synchrotron x-ray micro-tomography to investigate the pore-scale dynamics of dissolved salt distribution in a three-dimensional drying saline porous medium, using a cylindrical plastic column (15 mm in height and 8 mm in diameter) packed with sand particles saturated with CaI2 solution (5% concentration by mass), with a spatial and temporal resolution of 12 μm and 30 min, respectively. Each time the drying sand column was imaged, two images were recorded at distinct synchrotron x-ray energies immediately above and below the K-edge of iodine. Taking the difference between pixel gray values enabled us to delineate the spatial and temporal distribution of CaI2 concentration at the pore scale. Results indicate that during the early stages of evaporation, air preferentially invades large pores at the surface, while finer pores remain saturated and connected to the wet zone at the bottom via capillary-induced liquid flow, acting as evaporating spots. Consequently, the salt concentration increases preferentially in the finer pores where evaporation occurs. Higher salt concentrations were observed close to the evaporating surface, indicating a convection-driven process. The measured salt profiles were used to evaluate the numerical solution of the convection-diffusion equation (CDE). Results show that the macro-scale CDE captures the overall trend of the measured salt profiles but fails to reproduce their exact slope. Our results offer new insight into the physics of salt transport and its complex dynamics in drying porous media, and establish synchrotron x-ray tomography as an effective tool for investigating salt transport in porous media at high spatial and temporal resolution.
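The macro-scale CDE evaluated here is the one-dimensional convection-diffusion equation for salt concentration, dC/dt = D d2C/dz2 - v dC/dz. A minimal explicit finite-difference sketch follows; the parameter values, boundary conditions, and grid are illustrative assumptions, not those used in the study.

```python
import numpy as np

# Explicit finite-difference sketch of the 1D macro-scale CDE,
#   dC/dt = D d2C/dz2 - v dC/dz,
# for salt carried by capillary flow toward an evaporating surface.
D = 1e-9                          # salt diffusion coefficient (m^2/s), assumed
v = 5e-8                          # mean upward pore velocity (m/s), assumed
L, nz = 0.015, 151                # 15 mm tall column, grid points
dz = L / (nz - 1)
dt = 0.4 * dz ** 2 / D            # stable time step for explicit diffusion
C = np.ones(nz)                   # normalized initial concentration

for _ in range(20000):            # roughly 22 h of simulated drying
    adv = -v * (C[2:] - C[:-2]) / (2 * dz)
    dif = D * (C[2:] - 2 * C[1:-1] + C[:-2]) / dz ** 2
    C[1:-1] += dt * (adv + dif)
    C[0] = C[1]                          # zero-flux bottom boundary
    C[-1] = C[-2] / (1 - v * dz / D)     # zero net salt flux at the surface

print(f"surface/bulk enrichment: {C[-1] / C[0]:.2f}")
```

The surface boundary condition encodes the observation driving the abstract's "convection-driven" profile: water leaves at the surface but salt does not, so advection toward the surface must be balanced by back-diffusion (vC = D dC/dz there).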
Apollo Lunar Sample Integration into Google Moon: A New Approach to Digitization
NASA Technical Reports Server (NTRS)
Dawson, Melissa D.; Todd, Nancy S.; Lofgren, Gary E.
2011-01-01
The Google Moon Apollo Lunar Sample Data Integration project is part of a larger, LASER-funded, 4-year lunar rock photo restoration project by NASA's Acquisition and Curation Office [1]. The objective of this project is to enhance the Apollo mission data already available on Google Moon with information about the lunar samples collected during the Apollo missions. To this end, we have combined rock sample data from various sources, including Curation databases, mission documentation, and lunar sample catalogs, with newly available digital photography of rock samples to create a user-friendly, interactive tool for learning about the Apollo Moon samples.
Lunar Processing Cabinet 2.0: Retrofitting Gloveboxes into the 21st Century
NASA Technical Reports Server (NTRS)
Calaway, M. J.
2015-01-01
In 2014, the Apollo 16 Lunar Processing Glovebox (cabinet 38) in the Lunar Curation Laboratory at NASA JSC received an upgrade that included new technology interfaces. A Jacobs Technology Innovation Project provided the primary resources to retrofit this glovebox for the 21st century. The NASA Astromaterials Acquisition & Curation Office continues its more than 40-year heritage of preserving lunar materials for future scientific studies in state-of-the-art facilities. This enhancement has not only modernized the contamination controls but also provides new innovative tools for processing and characterizing lunar samples, and supports real-time exchange of sample images and information with the scientific community throughout the world.
ERIC Educational Resources Information Center
Haga, Enoch J.
1971-01-01
The Certificate in Data Education (Basic) examination is designed to certify that successful candidates are academically proficient in those principles and concepts of automation, computing, and data processing (including social and user implications) which are usually taught in basic introductory courses at the college or university level. (CK)
NASA Technical Reports Server (NTRS)
Wang, Dunyou
2003-01-01
A time-dependent wave-packet approach is presented for the quantum dynamics study of the AB+CDE reaction system for zero total angular momentum. A seven-degree-of-freedom calculation is employed to study the chemical reaction H2 + C2H → H + C2H2 by treating C2H as a linear molecule. Initial-state-selected reaction probabilities are presented for various initial ro-vibrational states. This study shows that vibrational excitation of H2 enhances the reaction probability, whereas excitation of C2H has only a small effect on the reactivity. An integral cross section is also reported for the initial ground states of H2 and C2H. The theoretical and experimental results agree very well when the calculated seven-dimensional results are adjusted to account for the lower transition-state barrier heights found in recent ab initio calculations.
Update on Thales flexure bearing coolers and drive electronics
NASA Astrophysics Data System (ADS)
Willems, D.; Benschop, T.; v. d. Groep, W.; Mullié, J.; v. d. Weijden, H.; Tops, M.
2009-05-01
Thales Cryogenics has a long background in delivering cryogenic coolers with an MTTF far above 20,000 h for military, civil, and space programs. Developments in these markets have required continuous updates of the flexure bearing cooler portfolio for new and emerging applications. The cooling requirements of new applications influence not only the size of the compressor, the cold finger, and the cooling technology used, but also the integration and control of the cooler in the application. Thales Cryogenics has developed compact Cooler Drive Electronics (CDE) based on DSP technology that can drive linear flexure bearing coolers with extreme temperature stability and provide additional diagnostics inside the CDE. This CDE has a wide range of applications and can be modified to specific customer requirements. The latest developments in flexure bearing cooler technology are presented for both Stirling and pulse tube coolers, together with the relation between the most important recent detector requirements and the solutions available at the cryocooler level.
NASA Technical Reports Server (NTRS)
Blumenfeld, E. H.; Evans, C. A.; Oshel, E. R.; Liddle, D. A.; Beaulieu, K.; Zeigler, R. A.; Hanna, R. D.; Ketcham, R. A.
2016-01-01
New technologies make possible the advancement of documentation and visualization practices that can enhance conservation and curation protocols for NASA's Astromaterials Collections. With increasing demands for accessibility to updated comprehensive data, and with new sample return missions on the horizon, it is of primary importance to develop new standards for contemporary documentation and visualization methodologies. Our interdisciplinary team has expertise in the fields of heritage conservation practices, professional photography, photogrammetry, imaging science, application engineering, data curation, geoscience, and astromaterials curation. Our objective is to create virtual 3D reconstructions of Apollo Lunar and Antarctic Meteorite samples that are a fusion of two state-of-the-art data sets: the interior view of the sample by collecting Micro-XCT data and the exterior view of the sample by collecting high-resolution precision photography data. These new data provide researchers an information-rich visualization of both compositional and textural information prior to any physical sub-sampling. Since January 2013 we have developed a process that resulted in the successful creation of the first image-based 3D reconstruction of an Apollo Lunar Sample correlated to a 3D reconstruction of the same sample's Micro-XCT data, illustrating that this technique is both operationally possible and functionally beneficial. In May of 2016 we began a 3-year research period during which we aim to produce Virtual Astromaterials Samples for 60 high-priority Apollo Lunar and Antarctic Meteorite samples and serve them on NASA's Astromaterials Acquisition and Curation website. Our research demonstrates that research-grade Virtual Astromaterials Samples are beneficial in preserving for posterity a precise 3D reconstruction of the sample prior to sub-sampling, which greatly improves documentation practices, provides unique and novel visualization of the sample's interior and exterior features, offers scientists a preliminary research tool for targeted sub-sample requests, and additionally is a visually engaging interactive tool for bringing astromaterials science to the public.
PathNER: a tool for systematic identification of biological pathway mentions in the literature
2013-01-01
Background Biological pathways are central to many biomedical studies and are frequently discussed in the literature. Several curated databases have been established to collate the knowledge of molecular processes constituting pathways. Yet, there has been little focus on enabling systematic detection of pathway mentions in the literature. Results We developed a tool, named PathNER (Pathway Named Entity Recognition), for the systematic identification of pathway mentions in the literature. PathNER is based on soft dictionary matching and rules, with the dictionary generated from public pathway databases. The rules utilise general pathway-specific keywords, syntactic information and gene/protein mentions. Detection results from both components are merged. On a gold-standard corpus, PathNER achieved an F1-score of 84%. To illustrate its potential, we applied PathNER on a collection of articles related to Alzheimer's disease to identify associated pathways, highlighting cases that can complement an existing manually curated knowledgebase. Conclusions In contrast to existing text-mining efforts that target the automatic reconstruction of pathway details from molecular interactions mentioned in the literature, PathNER focuses on identifying specific named pathway mentions. These mentions can be used to support large-scale curation and pathway-related systems biology applications, as demonstrated in the example of Alzheimer's disease. PathNER is implemented in Java and made freely available online at http://sourceforge.net/projects/pathner/. PMID:24555844
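As a rough illustration of the two components PathNER merges, the sketch below pairs an exact dictionary lookup with a keyword-based rule; PathNER's actual soft dictionary matching and syntactic rules are considerably more sophisticated, and the dictionary entries here are placeholders.

```python
import re

# Toy version of the two components PathNER combines: dictionary lookup
# against names drawn from pathway databases, plus a rule keyed on
# pathway-specific words. Real soft matching is far more forgiving.
dictionary = {"mapk signaling pathway", "wnt signaling pathway",
              "apoptosis"}  # placeholder entries, e.g. from KEGG/Reactome
rule = re.compile(r"\b[\w-]+(?:\ssignal(?:ing|ling))?\spathway\b",
                  re.IGNORECASE)

def find_pathway_mentions(sentence):
    hits = {m.group(0) for m in rule.finditer(sentence)}      # rule component
    hits |= {d for d in dictionary if d in sentence.lower()}  # dictionary component
    return hits  # results from both components are merged, as in PathNER

print(find_pathway_mentions(
    "Amyloid processing engages the Wnt signalling pathway in neurons."))
```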
Tools and data services registry: a community effort to document bioinformatics resources
Ison, Jon; Rapacki, Kristoffer; Ménager, Hervé; Kalaš, Matúš; Rydza, Emil; Chmura, Piotr; Anthon, Christian; Beard, Niall; Berka, Karel; Bolser, Dan; Booth, Tim; Bretaudeau, Anthony; Brezovsky, Jan; Casadio, Rita; Cesareni, Gianni; Coppens, Frederik; Cornell, Michael; Cuccuru, Gianmauro; Davidsen, Kristian; Vedova, Gianluca Della; Dogan, Tunca; Doppelt-Azeroual, Olivia; Emery, Laura; Gasteiger, Elisabeth; Gatter, Thomas; Goldberg, Tatyana; Grosjean, Marie; Grüning, Björn; Helmer-Citterich, Manuela; Ienasescu, Hans; Ioannidis, Vassilios; Jespersen, Martin Closter; Jimenez, Rafael; Juty, Nick; Juvan, Peter; Koch, Maximilian; Laibe, Camille; Li, Jing-Woei; Licata, Luana; Mareuil, Fabien; Mičetić, Ivan; Friborg, Rune Møllegaard; Moretti, Sebastien; Morris, Chris; Möller, Steffen; Nenadic, Aleksandra; Peterson, Hedi; Profiti, Giuseppe; Rice, Peter; Romano, Paolo; Roncaglia, Paola; Saidi, Rabie; Schafferhans, Andrea; Schwämmle, Veit; Smith, Callum; Sperotto, Maria Maddalena; Stockinger, Heinz; Vařeková, Radka Svobodová; Tosatto, Silvio C.E.; de la Torre, Victor; Uva, Paolo; Via, Allegra; Yachdav, Guy; Zambelli, Federico; Vriend, Gert; Rost, Burkhard; Parkinson, Helen; Løngreen, Peter; Brunak, Søren
2016-01-01
Life sciences are yielding huge data sets that underpin scientific discoveries fundamental to improvement in human health, agriculture and the environment. In support of these discoveries, a plethora of databases and tools are deployed, in technically complex and diverse implementations, across a spectrum of scientific disciplines. The corpus of documentation of these resources is fragmented across the Web, with much redundancy, and has lacked a common standard of information. The outcome is that scientists must often struggle to find, understand, compare and use the best resources for the task at hand. Here we present a community-driven curation effort, supported by ELIXIR—the European infrastructure for biological information—that aspires to a comprehensive and consistent registry of information about bioinformatics resources. The sustainable upkeep of this Tools and Data Services Registry is assured by a curation effort driven by and tailored to local needs, and shared amongst a network of engaged partners. As of November 2015, the registry includes 1785 resources, with depositions from 126 individual registrations including 52 institutional providers and 74 individuals. With community support, the registry can become a standard for dissemination of information about bioinformatics resources: we welcome everyone to join us in this common endeavour. The registry is freely available at https://bio.tools. PMID:26538599
Urban, Martin; Cuzick, Alayne; Rutherford, Kim; Irvine, Alistair; Pedro, Helder; Pant, Rashmi; Sadanadan, Vidyendra; Khamari, Lokanath; Billal, Santoshkumar; Mohanty, Sagar; Hammond-Kosack, Kim E
2017-01-04
The pathogen-host interactions database (PHI-base) is available at www.phi-base.org PHI-base contains expertly curated molecular and biological information on genes proven to affect the outcome of pathogen-host interactions reported in peer reviewed research articles. In addition, literature that indicates specific gene alterations that did not affect the disease interaction phenotype are curated to provide complete datasets for comparative purposes. Viruses are not included. Here we describe a revised PHI-base Version 4 data platform with improved search, filtering and extended data display functions. A PHIB-BLAST search function is provided and a link to PHI-Canto, a tool for authors to directly curate their own published data into PHI-base. The new release of PHI-base Version 4.2 (October 2016) has an increased data content containing information from 2219 manually curated references. The data provide information on 4460 genes from 264 pathogens tested on 176 hosts in 8046 interactions. Prokaryotic and eukaryotic pathogens are represented in almost equal numbers. Host species belong ∼70% to plants and 30% to other species of medical and/or environmental importance. Additional data types included into PHI-base 4 are the direct targets of pathogen effector proteins in experimental and natural host organisms. The curation problems encountered and the future directions of the PHI-base project are briefly discussed. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Collaborative biocuration--text-mining development task for document prioritization for curation.
Wiegers, Thomas C; Davis, Allan Peter; Mattingly, Carolyn J
2012-01-01
The Critical Assessment of Information Extraction systems in Biology (BioCreAtIvE) challenge evaluation is a community-wide effort for evaluating text mining and information extraction systems for the biological domain. The 'BioCreative Workshop 2012' subcommittee identified three areas, or tracks, that comprised independent but complementary aspects of data curation in which they sought community input: literature triage (Track I); curation workflow (Track II); and text mining/natural language processing (NLP) systems (Track III). Track I participants were invited to develop tools or systems that would effectively triage and prioritize articles for curation and present results in a prototype web interface. Training and test datasets were derived from the Comparative Toxicogenomics Database (CTD; http://ctdbase.org) and consisted of manuscripts from which chemical-gene-disease data were manually curated. A total of seven groups participated in Track I. For the triage component, the effectiveness of participant systems was measured by aggregate gene, disease and chemical 'named-entity recognition' (NER) across articles; the effectiveness of 'information retrieval' (IR) was also measured based on 'mean average precision' (MAP). Top recall scores for gene, disease and chemical NER were 49, 65 and 82%, respectively; the top MAP score was 80%. Each participating group also developed a prototype web interface; these interfaces were evaluated based on functionality and ease of use by CTD's biocuration project manager. In this article, we present a detailed description of the challenge and a summary of the results.
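The "mean average precision" used to score the retrieval component has a standard definition; a minimal sketch follows, assuming binary relevance judgments and one ranked list per triage query.

```python
def average_precision(ranked_ids, relevant_ids):
    """Average precision for one query: mean of precision@k taken at
    each rank k where a relevant document appears."""
    hits, precisions = 0, []
    for k, doc in enumerate(ranked_ids, start=1):
        if doc in relevant_ids:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / max(len(relevant_ids), 1)

def mean_average_precision(runs):
    """runs: list of (ranked_ids, relevant_ids) pairs, one per query."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

# One toy triage run: two of the top four articles are curation-relevant
print(mean_average_precision([(["d3", "d1", "d7", "d2"], {"d1", "d2"})]))
# -> 0.5 (precision 1/2 at rank 2 and 2/4 at rank 4, averaged)
```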
NASA Astrophysics Data System (ADS)
Peng, G.; Austin, M.
2017-12-01
Identification and prioritization of targeted user community needs are often not considered until after data have been created and archived. Gaps in data curation and documentation during the data production and delivery phases limit the broad utility of data, particularly for decision makers. Expert understanding and knowledge of a particular dataset are often required as part of the data and metadata curation process to establish the credibility of the data and support informed decision-making. To enhance curation practices, content from NOAA's Observing System Integrated Assessment (NOSIA) Value Tree and NOAA's Data Catalog/Digital Object Identifier (DOI) projects (collection-level metadata) has been integrated with Data/Stewardship Maturity Matrices (data and stewardship quality information) focused on assessment of user community needs. The result is a set of user-focused, evidence-based decision-making tools created by NOAA's National Environmental Satellite, Data, and Information Service (NESDIS) through identification and assessment of data content gaps related to scientific knowledge and application to key areas of societal benefit. Enabling user feedback from the beginning of data creation through archiving allows users to determine whether the quality and value of the data are fit for purpose. Data gap assessment and prioritization are presented in a user-friendly way, using the data stewardship maturity matrices as a measure of data management quality. These decision-maker tools encourage data producers and data providers/stewards to consider users' needs before data creation and dissemination, resulting in user-driven data requirements and an increased return on investment. A use case linking NOAA observations to societal benefit will be used to demonstrate the value of these tools.
Soto, Axel J; Zerva, Chrysoula; Batista-Navarro, Riza; Ananiadou, Sophia
2018-04-15
Pathway models are valuable resources that help us understand the various mechanisms underpinning complex biological processes. Their curation is typically carried out through manual inspection of published scientific literature to find information relevant to a model, which is a laborious and knowledge-intensive task. Furthermore, models curated manually cannot be easily updated and maintained with new evidence extracted from the literature without automated support. We have developed LitPathExplorer, a visual text analytics tool that integrates advanced text mining, semi-supervised learning and interactive visualization, to facilitate the exploration and analysis of pathway models using statements (i.e. events) extracted automatically from the literature and organized according to levels of confidence. LitPathExplorer supports pathway modellers and curators alike by: (i) extracting events from the literature that corroborate existing models with evidence; (ii) discovering new events which can update models; and (iii) providing a confidence value for each event that is automatically computed based on linguistic features and article metadata. Our evaluation of event extraction showed a precision of 89% and a recall of 71%. Evaluation of our confidence measure, when used for ranking sampled events, showed an average precision ranging between 61 and 73%, which can be improved to 95% when the user is involved in the semi-supervised learning process. Qualitative evaluation using pair analytics based on the feedback of three domain experts confirmed the utility of our tool within the context of pathway model exploration. LitPathExplorer is available at http://nactem.ac.uk/LitPathExplorer_BI/. sophia.ananiadou@manchester.ac.uk. Supplementary data are available at Bioinformatics online.
Pafilis, Evangelos; Buttigieg, Pier Luigi; Ferrell, Barbra; Pereira, Emiliano; Schnetzer, Julia; Arvanitidis, Christos; Jensen, Lars Juhl
2016-01-01
The microbial and molecular ecology research communities have made substantial progress on developing standards for annotating samples with environment metadata. However, sample manual annotation is a highly labor intensive process and requires familiarity with the terminologies used. We have therefore developed an interactive annotation tool, EXTRACT, which helps curators identify and extract standard-compliant terms for annotation of metagenomic records and other samples. Behind its web-based user interface, the system combines published methods for named entity recognition of environment, organism, tissue and disease terms. The evaluators in the BioCreative V Interactive Annotation Task found the system to be intuitive, useful, well documented and sufficiently accurate to be helpful in spotting relevant text passages and extracting organism and environment terms. Comparison of fully manual and text-mining-assisted curation revealed that EXTRACT speeds up annotation by 15-25% and helps curators to detect terms that would otherwise have been missed. Database URL: https://extract.hcmr.gr/. © The Author(s) 2016. Published by Oxford University Press.
BioModels: expanding horizons to include more modelling approaches and formats
Nguyen, Tung V N; Graesslin, Martin; Hälke, Robert; Ali, Raza; Schramm, Jochen; Wimalaratne, Sarala M; Kothamachu, Varun B; Rodriguez, Nicolas; Swat, Maciej J; Eils, Jurgen; Eils, Roland; Laibe, Camille; Chelliah, Vijayalakshmi
2018-01-01
BioModels serves as a central repository of mathematical models representing biological processes. It offers a platform to make mathematical models easily shareable across the systems modelling community, thereby supporting model reuse. To facilitate hosting a broader range of model formats derived from diverse modelling approaches and tools, a new infrastructure for BioModels has been developed that is available at http://www.ebi.ac.uk/biomodels. This new system allows submitting and sharing of a wide range of models with improved support for formats other than SBML. It also offers a version-control backed environment in which authors and curators can work collaboratively to curate models. This article summarises the features available in the current system and discusses the potential benefit they offer to the users over the previous system. In summary, the new portal broadens the scope of models accepted in BioModels and supports collaborative model curation which is crucial for model reproducibility and sharing. PMID:29106614
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karcher, Sandra; Willighagen, Egon L.; Rumble, John
Many groups within the broad field of nanoinformatics are already developing data repositories and analytical tools driven by their individual organizational goals. Integrating these data resources across disciplines and with non-nanotechnology resources can support multiple objectives by enabling the reuse of the same information. Integration can also serve as the impetus for novel scientific discoveries by providing the framework to support deeper data analyses. This article discusses current data integration practices in nanoinformatics and in comparable mature fields, and nanotechnology-specific challenges impacting data integration. Based on results from a nanoinformatics-community-wide survey, recommendations for achieving integration of existing operational nanotechnology resources are presented. Nanotechnology-specific data integration challenges, if effectively resolved, can foster the application and validation of nanotechnology within and across disciplines. This paper is one of a series of articles by the Nanomaterial Data Curation Initiative that address data issues such as data curation workflows, data completeness and quality, curator responsibilities, and metadata.
Saccharomyces genome database informs human biology.
Skrzypek, Marek S; Nash, Robert S; Wong, Edith D; MacPherson, Kevin A; Hellerstedt, Sage T; Engel, Stacia R; Karra, Kalpana; Weng, Shuai; Sheppard, Travis K; Binkley, Gail; Simison, Matt; Miyasato, Stuart R; Cherry, J Michael
2018-01-04
The Saccharomyces Genome Database (SGD; http://www.yeastgenome.org) is an expertly curated database of literature-derived functional information for the model organism budding yeast, Saccharomyces cerevisiae. SGD constantly strives to synergize new types of experimental data and bioinformatics predictions with existing data, and to organize them into a comprehensive and up-to-date information resource. The primary mission of SGD is to facilitate research into the biology of yeast and to provide this wealth of information to advance, in many ways, research on other organisms, even those as evolutionarily distant as humans. To build such a bridge between biological kingdoms, SGD is curating data regarding yeast-human complementation, in which a human gene can successfully replace the function of a yeast gene, and/or vice versa. These data are manually curated from published literature, made available for download, and incorporated into a variety of analysis tools provided by SGD. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
75 FR 11993 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-12
... before April 12, 2010 to be assured of consideration. Community Development Financial Institutions (CDFI... related to Community Development Entity (CDE)/New Markets Tax Credit material events, as well as Community Development Financial Institutions (CDFI) material events in a single form. The form will provide a more...
The Biomolecular Interaction Network Database and related tools 2005 update
Alfarano, C.; Andrade, C. E.; Anthony, K.; Bahroos, N.; Bajec, M.; Bantoft, K.; Betel, D.; Bobechko, B.; Boutilier, K.; Burgess, E.; Buzadzija, K.; Cavero, R.; D'Abreo, C.; Donaldson, I.; Dorairajoo, D.; Dumontier, M. J.; Dumontier, M. R.; Earles, V.; Farrall, R.; Feldman, H.; Garderman, E.; Gong, Y.; Gonzaga, R.; Grytsan, V.; Gryz, E.; Gu, V.; Haldorsen, E.; Halupa, A.; Haw, R.; Hrvojic, A.; Hurrell, L.; Isserlin, R.; Jack, F.; Juma, F.; Khan, A.; Kon, T.; Konopinsky, S.; Le, V.; Lee, E.; Ling, S.; Magidin, M.; Moniakis, J.; Montojo, J.; Moore, S.; Muskat, B.; Ng, I.; Paraiso, J. P.; Parker, B.; Pintilie, G.; Pirone, R.; Salama, J. J.; Sgro, S.; Shan, T.; Shu, Y.; Siew, J.; Skinner, D.; Snyder, K.; Stasiuk, R.; Strumpf, D.; Tuekam, B.; Tao, S.; Wang, Z.; White, M.; Willis, R.; Wolting, C.; Wong, S.; Wrong, A.; Xin, C.; Yao, R.; Yates, B.; Zhang, S.; Zheng, K.; Pawson, T.; Ouellette, B. F. F.; Hogue, C. W. V.
2005-01-01
The Biomolecular Interaction Network Database (BIND) (http://bind.ca) archives biomolecular interaction, reaction, complex and pathway information. Our aim is to curate the details about molecular interactions that arise from published experimental research and to provide this information, as well as tools to enable data analysis, freely to researchers worldwide. BIND data are curated into a comprehensive, machine-readable archive of computable information that provides users with methods to discover interactions and molecular mechanisms. BIND has worked to develop new methods for visualization that amplify the underlying annotation of genes and proteins to facilitate the study of molecular interaction networks. BIND has maintained an open database policy since its inception in 1999. Data growth has proceeded at a tremendous rate, approaching 100 000 records. New services provided include a new BIND Query and Submission interface, a Simple Object Access Protocol (SOAP) service and the Small Molecule Interaction Database (http://smid.blueprint.org) that allows users to determine probable small molecule binding sites of new sequences and examine conserved binding residues. PMID:15608229
Plant Reactome: a resource for plant pathways and comparative analysis.
Naithani, Sushma; Preece, Justin; D'Eustachio, Peter; Gupta, Parul; Amarasinghe, Vindhya; Dharmawardhana, Palitha D; Wu, Guanming; Fabregat, Antonio; Elser, Justin L; Weiser, Joel; Keays, Maria; Fuentes, Alfonso Munoz-Pomer; Petryszak, Robert; Stein, Lincoln D; Ware, Doreen; Jaiswal, Pankaj
2017-01-04
Plant Reactome (http://plantreactome.gramene.org/) is a free, open-source, curated plant pathway database portal, provided as part of the Gramene project. The database provides intuitive bioinformatics tools for the visualization, analysis and interpretation of pathway knowledge to support genome annotation, genome analysis, modeling, systems biology, basic research and education. Plant Reactome employs the structural framework of a plant cell to show metabolic, transport, genetic, developmental and signaling pathways. We manually curate molecular details of pathways in these domains for reference species Oryza sativa (rice) supported by published literature and annotation of well-characterized genes. Two hundred twenty-two rice pathways, 1025 reactions associated with 1173 proteins, 907 small molecules and 256 literature references have been curated to date. These reference annotations were used to project pathways for 62 model, crop and evolutionarily significant plant species based on gene homology. Database users can search and browse various components of the database, visualize curated baseline expression of pathway-associated genes provided by the Expression Atlas and upload and analyze their Omics datasets. The database also offers data access via Application Programming Interfaces (APIs) and in various standardized pathway formats, such as SBML and BioPAX. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Mycofier: a new machine learning-based classifier for fungal ITS sequences.
Delgado-Serrano, Luisa; Restrepo, Silvia; Bustos, Jose Ricardo; Zambrano, Maria Mercedes; Anzola, Juan Manuel
2016-08-11
The taxonomic and phylogenetic classification of fungi based on sequence analysis of the ITS1 genomic region has become a crucial component of fungal ecology and diversity studies. To date, there has been no accurate alignment-free classification tool for fungal ITS1 sequences suitable for large environmental surveys. This study describes the development of a machine learning-based classifier for the taxonomic assignment of fungal ITS1 sequences at the genus level. A fungal ITS1 sequence database was built using curated data, and training and test sets were generated from it. A Naïve Bayesian classifier was built using features from the primary sequence, achieving an accuracy of 87% in classification at the genus level. The final model was based on a Naïve Bayes algorithm using ITS1 sequences from 510 fungal genera. This classifier, named Mycofier, provides classification accuracy similar to BLASTN, but it is alignment-independent and more efficient, and its underlying database contains curated data, addressing the lack of an accurate classification tool for large volumes of fungal ITS1 sequence data. The software and source code for Mycofier are freely available at https://github.com/ldelgado-serrano/mycofier.git .
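The abstract does not specify the exact sequence features used, so the sketch below shows only the general scheme of a Naïve Bayes genus classifier over k-mer counts, using scikit-learn; the sequences, labels, and k value are fabricated stand-ins, not Mycofier's actual features or training data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy Naive Bayes genus classifier over k-mer counts, the general scheme
# a tool like Mycofier follows; sequences and labels are fabricated.
def kmers(seq, k=5):
    """Decompose a sequence into overlapping k-mers, space-separated."""
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

train_seqs = ["ACGTGCATGCATGCAATTGC", "TTGCAACGTACGTACGTTAA",
              "ACGTGCATGCATGCAATAGC", "TTGCAACGTACGAACGTTAA"]
train_genus = ["Fusarium", "Aspergillus", "Fusarium", "Aspergillus"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit([kmers(s) for s in train_seqs], train_genus)
print(model.predict([kmers("ACGTGCATGCATGCAATTGA")]))  # expected: ['Fusarium']
```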
Hamilton, John P; Neeno-Eckwall, Eric C; Adhikari, Bishwo N; Perna, Nicole T; Tisserat, Ned; Leach, Jan E; Lévesque, C André; Buell, C Robin
2011-01-01
The Comprehensive Phytopathogen Genomics Resource (CPGR) provides a web-based portal for plant pathologists and diagnosticians to view the genome and transcriptome sequence status of 806 bacterial, fungal, oomycete, nematode, viral and viroid plant pathogens. Tools are available to search and analyze annotated genome sequences of 74 bacterial, fungal and oomycete pathogens. Oomycete and fungal genomes are obtained directly from GenBank, whereas bacterial genome sequences are downloaded from the A Systematic Annotation Package (ASAP) database that provides curation of genomes using comparative approaches. Curated lists of bacterial genes relevant to pathogenicity and avirulence are also provided. The Plant Pathogen Transcript Assemblies Database provides annotated assemblies of the transcribed regions of 82 eukaryotic genomes from publicly available single-pass expressed sequence tags. Data-mining tools are provided along with tools to create candidate diagnostic markers, an emerging use for genomic sequence data in plant pathology. The Plant Pathogen Ribosomal DNA (rDNA) database is a resource for pathogens that lack genome or transcriptome data sets and contains 131 755 rDNA sequences from GenBank for 17 613 species identified as plant pathogens and related genera. Database URL: http://cpgr.plantbiology.msu.edu.
Simplified Metadata Curation via the Metadata Management Tool
NASA Astrophysics Data System (ADS)
Shum, D.; Pilone, D.
2015-12-01
The Metadata Management Tool (MMT) is the newest capability developed as part of NASA Earth Observing System Data and Information System's (EOSDIS) efforts to simplify metadata creation and improve metadata quality. The MMT was developed via an agile methodology, taking into account inputs from GCMD's science coordinators and other end-users. In its initial release, the MMT uses the Unified Metadata Model for Collections (UMM-C) to allow metadata providers to easily create and update collection records in the ISO-19115 format. Through a simplified UI experience, metadata curators can create and edit collections without full knowledge of the NASA Best Practices implementation of ISO-19115 format, while still generating compliant metadata. More experienced users are also able to access raw metadata to build more complex records as needed. In future releases, the MMT will build upon recent work done in the community to assess metadata quality and compliance with a variety of standards through application of metadata rubrics. The tool will provide users with clear guidance as to how to easily change their metadata in order to improve their quality and compliance. Through these features, the MMT allows data providers to create and maintain compliant and high quality metadata in a short amount of time.
78 FR 32302 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-29
... burden estimate, or any other aspect of the information collection, including suggestion for reducing the... capture information related to Community Development Entity (CDE)/New Markets Tax Credit material events... indicate, through a series of specific questions, whether or not the event will have an impact on areas of...
This presentation will highlight known challenges with the production of high quality chemical databases and outline recent efforts made to address these challenges. Specific examples will be provided illustrating these challenges within the U.S. Environmental Protection Agency ...
Xenbase: Core features, data acquisition, and data processing.
James-Zorn, Christina; Ponferrada, Virgillio G; Burns, Kevin A; Fortriede, Joshua D; Lotay, Vaneet S; Liu, Yu; Brad Karpinka, J; Karimi, Kamran; Zorn, Aaron M; Vize, Peter D
2015-08-01
Xenbase, the Xenopus model organism database (www.xenbase.org), is a cloud-based, web-accessible resource that integrates the diverse genomic and biological data from Xenopus research. Xenopus frogs are one of the major vertebrate animal models used for biomedical research, and Xenbase is the central repository for the enormous amount of data generated using this model tetrapod. The goal of Xenbase is to accelerate discovery by enabling investigators to make novel connections between molecular pathways in Xenopus and human disease. Our relational database and user-friendly interface make these data easy to query and allows investigators to quickly interrogate and link different data types in ways that would otherwise be difficult, time consuming, or impossible. Xenbase also enhances the value of these data through high-quality gene expression curation and data integration, by providing bioinformatics tools optimized for Xenopus experiments, and by linking Xenopus data to other model organisms and to human data. Xenbase draws in data via pipelines that download data, parse the content, and save them into appropriate files and database tables. Furthermore, Xenbase makes these data accessible to the broader biomedical community by continually providing annotated data updates to organizations such as NCBI, UniProtKB, and Ensembl. Here, we describe our bioinformatics, genome-browsing tools, data acquisition and sharing, our community submitted and literature curation pipelines, text-mining support, gene page features, and the curation of gene nomenclature and gene models. © 2015 Wiley Periodicals, Inc.
Lacy-Jones, Kristin; Hayward, Philip; Andrews, Steve; Gledhill, Ian; McAllister, Mark; Abrahamsson, Bertil; Rostami-Hodjegan, Amin; Pepin, Xavier
2017-03-01
The OrBiTo IMI project was designed to improve the understanding and modelling of how drugs are absorbed. To achieve this, 13 pharmaceutical companies agreed to share biopharmaceutics drug properties and performance data, provided they were able to hide certain aspects of their datasets if required. These data were then used in simulations to test the performance of three in silico Physiologically Based Pharmacokinetic (PBPK) tools. A unique database system was designed and implemented to store the drug data. The database system was unique in that it could make different sections of a dataset visible or hidden depending on the stage of the project. Users were also given the option to hide identifying API attributes, to help prevent identification of project members from previously published data. This was achieved by applying blinding strategies to data parameters and adopting a unique numbering system. An anonymous communication tool was proposed for exchanging comments about the data, enabling its curation and evolution. This paper describes the strategy adopted for numbering and blinding the data, the tools developed to gather and search the data, and the tools used for communicating around the data, with the aim of publicising the approach for other pre-competitive research between organisations. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
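The blinding strategy described (a unique numbering system decoupled from identifying API attributes) can be illustrated with a small sketch; the function names and scheme below are hypothetical, not the OrBiTo system's actual implementation.

```python
import secrets

# Hypothetical illustration of a blinded numbering scheme: each company's
# compound receives a random, non-reversible study code, and any attributes
# flagged as identifying are withheld from the shared view.
_registry = {}

def blinded_record(company, compound, attrs, hidden_fields):
    # setdefault keeps the code stable across repeated submissions
    code = _registry.setdefault((company, compound),
                                "API-" + secrets.token_hex(3).upper())
    shared = {k: v for k, v in attrs.items() if k not in hidden_fields}
    return code, shared

code, shared = blinded_record(
    "CompanyA", "cmpd-17",
    {"logP": 2.1, "dose_mg": 50, "trade_name": "Examplin"},
    hidden_fields={"trade_name"})     # identifying attribute withheld
print(code, shared)  # e.g. API-3F9A2C {'logP': 2.1, 'dose_mg': 50}
```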
Jimeno Yepes, Antonio; Verspoor, Karin
2014-01-01
As the cost of genomic sequencing continues to fall, the amount of data being collected and studied for the purpose of understanding the genetic basis of disease is increasing dramatically. Much of the source information relevant to such efforts is available only from unstructured sources such as the scientific literature, and significant resources are expended in manually curating and structuring the information in the literature. As such, there have been a number of systems developed to target automatic extraction of mutations and other genetic variation from the literature using text mining tools. We have performed a broad survey of the existing publicly available tools for extraction of genetic variants from the scientific literature. We consider not just one tool but a number of different tools, individually and in combination, and apply the tools in two scenarios. First, they are compared in an intrinsic evaluation context, where the tools are tested for their ability to identify specific mentions of genetic variants in a corpus of manually annotated papers, the Variome corpus. Second, they are compared in an extrinsic evaluation context based on our previous study of text mining support for curation of the COSMIC and InSiGHT databases. Our results demonstrate that no single tool covers the full range of genetic variants mentioned in the literature. Rather, several tools have complementary coverage and can be used together effectively. In the intrinsic evaluation on the Variome corpus, the combined performance is above 0.95 in F-measure, while in the extrinsic evaluation the combined recall performance is above 0.71 for COSMIC and above 0.62 for InSiGHT, a substantial improvement over the performance of any individual tool. Based on the analysis of these results, we suggest several directions for the improvement of text mining tools for genetic variant extraction from the literature. PMID:25285203
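The finding that tools have complementary coverage motivates the simplest possible ensemble, a union of their outputs; the sketch below computes recall for two hypothetical tools and for their union, assuming each tool emits a set of normalized variant mentions.

```python
def recall(predicted, gold):
    """Fraction of gold-standard mentions recovered."""
    return len(predicted & gold) / len(gold)

# Toy outputs from two variant-extraction tools over the same abstracts;
# each misses mentions the other finds, so their union scores higher.
gold = {"c.35G>A", "p.V600E", "rs113488022", "p.G12D"}
tool_a = {"c.35G>A", "p.V600E"}
tool_b = {"p.V600E", "rs113488022"}

for name, pred in [("A", tool_a), ("B", tool_b), ("A+B", tool_a | tool_b)]:
    print(f"{name}: recall = {recall(pred, gold):.2f}")
# A: 0.50, B: 0.50, A+B: 0.75
```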
CottonGen: a genomics, genetics and breeding database for cotton research
USDA-ARS?s Scientific Manuscript database
CottonGen (http://www.cottongen.org) is a curated and integrated web-based relational database providing access to publicly available genomic, genetic and breeding data for cotton. CottonGen supersedes CottonDB and the Cotton Marker Database, with enhanced tools for easier data sharing, mining, vis...
The importance of data curation on QSAR Modeling - PHYSPROP open data as a case study. (QSAR 2016)
During the last few decades many QSAR models and tools have been developed at the US EPA, including the widely used EPISuite. During this period the arsenal of computational capabilities supporting cheminformatics has broadened dramatically with multiple software packages. These ...
Overview of the gene ontology task at BioCreative IV.
Mao, Yuqing; Van Auken, Kimberly; Li, Donghui; Arighi, Cecilia N; McQuilton, Peter; Hayman, G Thomas; Tweedie, Susan; Schaeffer, Mary L; Laulederkind, Stanley J F; Wang, Shur-Jen; Gobeill, Julien; Ruch, Patrick; Luu, Anh Tuan; Kim, Jung-Jae; Chiang, Jung-Hsien; Chen, Yu-De; Yang, Chia-Jung; Liu, Hongfang; Zhu, Dongqing; Li, Yanpeng; Yu, Hong; Emadzadeh, Ehsan; Gonzalez, Graciela; Chen, Jian-Ming; Dai, Hong-Jie; Lu, Zhiyong
2014-01-01
Gene ontology (GO) annotation is a common task among model organism databases (MODs) for capturing gene function data from journal articles. It is a time-consuming and labor-intensive task, and is thus often considered as one of the bottlenecks in literature curation. There is a growing need for semiautomated or fully automated GO curation techniques that will help database curators to rapidly and accurately identify gene function information in full-length articles. Despite multiple attempts in the past, few studies have proven to be useful with regard to assisting real-world GO curation. The shortage of sentence-level training data and opportunities for interaction between text-mining developers and GO curators has limited the advances in algorithm development and corresponding use in practical circumstances. To this end, we organized a text-mining challenge task for literature-based GO annotation in BioCreative IV. More specifically, we developed two subtasks: (i) to automatically locate text passages that contain GO-relevant information (a text retrieval task) and (ii) to automatically identify relevant GO terms for the genes in a given article (a concept-recognition task). With the support from five MODs, we provided teams with >4000 unique text passages that served as the basis for each GO annotation in our task data. Such evidence text information has long been recognized as critical for text-mining algorithm development but was never made available because of the high cost of curation. In total, seven teams participated in the challenge task. From the team results, we conclude that the state of the art in automatically mining GO terms from literature has improved over the past decade while much progress is still needed for computer-assisted GO curation. Future work should focus on addressing remaining technical challenges for improved performance of automatic GO concept recognition and incorporating practical benefits of text-mining tools into real-world GO annotation. http://www.biocreative.org/tasks/biocreative-iv/track-4-GO/. Published by Oxford University Press 2014. This work is written by US Government employees and is in the public domain in the US.
Gobeill, Julien; Pasche, Emilie; Vishnyakova, Dina; Ruch, Patrick
2013-01-01
The available curated data lag behind current biological knowledge contained in the literature. Text mining can assist biologists and curators to locate and access this knowledge, for instance by characterizing the functional profile of publications. Gene Ontology (GO) category assignment in free text already supports various applications, such as powering ontology-based search engines, finding curation-relevant articles (triage) or helping the curator to identify and encode functions. Popular text mining tools for GO classification rely on so-called thesaurus-based (or dictionary-based) approaches, which exploit similarities between the input text and GO terms themselves. But their effectiveness remains limited owing to the complex nature of GO terms, which rarely occur in text. In contrast, machine learning approaches exploit similarities between the input text and already curated instances contained in a knowledge base to infer a functional profile. GO Annotations (GOA) and MEDLINE make it possible to exploit a growing amount of curated abstracts (97 000 in November 2012) for populating this knowledge base. Our study compares a state-of-the-art thesaurus-based system with a machine learning system (based on a k-Nearest Neighbours algorithm) for the task of proposing a functional profile for unseen MEDLINE abstracts, and shows how resources and performances have evolved. Systems are evaluated on their ability to propose, for a given abstract, the GO terms (2.8 on average) used for curation in GOA. We show that since 2006, although a massive effort was put into adding synonyms in GO (+300%), the effectiveness of our thesaurus-based system has remained roughly constant, moving only from 0.28 to 0.31 in recall at 20 (R20). In contrast, thanks to the growth of its knowledge base, our machine learning system has steadily improved, rising from 0.38 in 2006 to 0.56 in 2012 for R20. Integrated into semi-automatic workflows or fully automatic pipelines, such systems are increasingly efficient at providing assistance to biologists. DATABASE URL: http://eagl.unige.ch/GOCat/
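A minimal sketch of the k-Nearest Neighbours idea described here: score GO terms by how strongly they annotate the most similar already-curated abstracts, then evaluate with recall at 20 (R20). The vectors and GO annotations below are toy stand-ins; GOCat's actual features and similarity measure are not specified in this summary.

```python
import numpy as np

# k-NN functional profiling in the spirit of GOCat: rank GO terms by the
# similarity-weighted votes of the k nearest curated abstracts.
def knn_go_profile(query_vec, curated_vecs, curated_go, k=3):
    sims = curated_vecs @ query_vec / (
        np.linalg.norm(curated_vecs, axis=1) * np.linalg.norm(query_vec))
    scores = {}
    for i in np.argsort(sims)[::-1][:k]:        # k most similar abstracts
        for go in curated_go[i]:
            scores[go] = scores.get(go, 0.0) + sims[i]
    return sorted(scores, key=scores.get, reverse=True)

def recall_at_20(ranked_terms, gold_terms):
    """R20: fraction of curated GO terms found in the top 20 proposals."""
    return len(set(ranked_terms[:20]) & gold_terms) / len(gold_terms)

rng = np.random.default_rng(0)
curated_vecs = rng.random((5, 8))               # toy abstract vectors
curated_go = [{"GO:0006915"}, {"GO:0006915", "GO:0008283"}, {"GO:0016055"},
              {"GO:0008283"}, {"GO:0016055", "GO:0006915"}]
ranked = knn_go_profile(rng.random(8), curated_vecs, curated_go)
print(ranked, recall_at_20(ranked, {"GO:0006915", "GO:0016055"}))
```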
Effects of Agricultural Sales CDE Modules on Content Knowledge and Argumentation Skill
ERIC Educational Resources Information Center
Sapp, Sarah B.; Thoron, Andrew C.
2014-01-01
The purpose of this study was to determine the effects of the type of training module on argumentation skill, student content knowledge achievement, and performance in an agricultural sales practicum completed by secondary school agriculture students. Current research has concluded that most students do not possess the academic or transferable…
Residence and Race: 1619 to 2019. CDE Working Paper 88-19.
ERIC Educational Resources Information Center
Taeuber, Karl E.
In the United States, late in the twentieth century, racial separation prevails in family life, playgrounds, churches, and local community activities. Segregation of housing is a key mechanism for maintaining the subordinate status of blacks. Housing policies and practices have been a leading cause of the nation's decaying central cities and…
ERIC Educational Resources Information Center
Morgan, A. Christian; Fuhrman, Nicholas E.; King, Diana L.; Flanders, Frank B.; Rudd, Rick D.
2013-01-01
Agricultural science programs have provided many opportunities for leadership education through classroom, supervised agricultural experience (SAE), and FFA Organization activities. Past studies have focused on leadership developed through activities such as career development events (CDE), SAE activities, FFA Organization conventions, and other…
Teacher Educators: Addressing the Needs of All Learners
ERIC Educational Resources Information Center
Cook, Ellen
2017-01-01
This qualitative dissertation examines how teacher preparation programs take up policy messages from two state agencies. These questions guided the study: (1) What are the messages about RTI and MTSS from the California Department of Education [CDE] and the California Commission on Teacher Credentialing [CCTC]; and (2) How are RTI and MTSS taken…
NASA Astrophysics Data System (ADS)
Shih, Yu-Ling; Le, Trung; Rothfield, Lawrence
2003-06-01
The MinCDE proteins of Escherichia coli are required for proper placement of the division septum at midcell. The site selection process requires the rapid oscillatory redistribution of the proteins from pole to pole. We report that the three Min proteins are organized into extended membrane-associated coiled structures that wind around the cell between the two poles. The pole-to-pole oscillation of the proteins reflects oscillatory changes in their distribution within the coiled structure. We also report that the E. coli MreB protein, which is required for maintaining the rod shape of the cell, also forms extended coiled structures, which are similar to the MreB structures that have previously been reported in Bacillus subtilis. The MreB and MinCDE coiled arrays do not appear identical. The results suggest that at least two functionally distinct cytoskeletal-like elements are present in E. coli and that structures of this type can undergo dynamic changes that play important roles in division site placement and possibly other aspects of the life of the cell.
Conditioned inhibition in the spatial domain.
Sansa, J; Rodrigo, T; Santamaría, J J; Manteiga, R D; Chamizo, V D
2009-10-01
Using a variation on the standard procedure of conditioned inhibition (Trials A+ and AX-), rats (Rattus norvegicus) in a circular pool were trained to find a hidden platform that was located in a specific spatial position in relation to 2 individual landmarks (Trials A → platform and B → platform; Experiments 1a and 1b) and to 2 configurations of landmarks (Trials ABC → platform and FGH → platform; Experiment 2a). The rats also underwent inhibitory trials (Experiment 1: Trials AZ → no platform; Experiment 2a: Trials CDE → no platform) interspersed with these excitatory trials. In both experiments, subsequent test trials without the platform showed both a summation effect and retardation of excitatory conditioning, and in Experiment 2a rats learned to avoid the CDE quadrant over the course of the experiment. Two further experiments established that these results could not be attributed to any difference in salience between the conditioned inhibitors and the control stimuli. All these results contribute to the growing body of evidence consistent with the idea that there is a general mechanism of learning that is associative in nature. PsycINFO Database Record (c) 2009 APA, all rights reserved.
[Computer-assisted phacoemulsification for hard cataracts].
Zemba, M; Papadatu, Adriana-Camelia; Sîrbu, Laura-Nicoleta; Avram, Corina
2012-01-01
To evaluate the efficiency of new torsional phacoemulsification software (Ozil IP system) in hard nucleus cataract extraction. Forty-five eyes with hard senile cataract (grade III and IV) underwent phacoemulsification performed by the same surgeon, using the same technique (stop and chop). The Infiniti (Alcon) platform was used, with Ozil IP software and a Kelman mini-flared 45-degree phaco tip. The nucleus was split into two halves; the first half was phacoemulsified with IP on (group 1) and the second half with IP off (group 2). For each group we measured the cumulative dissipated energy (CDE), the number of tip occlusions that required manual clearing, and the amount of BSS used. The mean CDE was the same in group 1 and group 2 (between 6.2 and 14.9). The incidence of occlusions that required manual clearing was lower in group 1 (5 times) than in group 2 (13 times). Group 2 used more BSS than group 1. The new torsional software (IP system) significantly decreased occlusion time and balanced salt solution use compared with standard torsional software, particularly with denser cataracts.
Effects of thermal blooming on systems comprised of tiled subapertures
NASA Astrophysics Data System (ADS)
Leakeas, Charles L.; Bartell, Richard J.; Krizo, Matthew J.; Fiorino, Steven T.; Cusumano, Salvatore J.; Whiteley, Matthew R.
2010-04-01
Laser weapon systems comprised of tiled subapertures are rapidly emerging in the directed energy community. The Air Force Institute of Technology Center for Directed Energy (AFIT/CDE), under sponsorship of the HEL Joint Technology Office, has developed performance models of such laser weapon system configurations consisting of tiled arrays of both slab and fiber subapertures. These performance models are based on the results of detailed wave-optics analyses conducted using WaveTrain. Previous performance model versions developed in this effort represent system characteristics such as subaperture shape, aperture fill factor, subaperture intensity profile, subaperture placement in the primary aperture, subaperture mutual coherence (piston), subaperture differential jitter (tilt), and the beam quality wave-front error associated with each subaperture. The current work is a prerequisite for the development of robust performance models of turbulence and thermal blooming effects for tiled systems. Emphasis is placed on low-altitude tactical scenarios. The enhanced performance model developed will be added to AFIT/CDE's HELEEOS parametric one-on-one engagement-level model via the Scaling for High Energy Laser and Relay Engagement (SHaRE) toolbox.
Callahan, Jill E; Munro, Cindy L; Kitten, Todd
2011-01-01
Streptococcus sanguinis is an important component of dental plaque and a leading cause of infective endocarditis. Genetic competence in S. sanguinis requires a quorum sensing system encoded by the early comCDE genes, as well as late genes controlled by the alternative sigma factor, ComX. Previous studies of Streptococcus pneumoniae and Streptococcus mutans have identified functions for the >100-gene com regulon in addition to DNA uptake, including virulence. We investigated this possibility in S. sanguinis. Strains deleted for the comCDE or comX master regulatory genes were created. Using a rabbit endocarditis model in conjunction with a variety of virulence assays, we determined that both mutants possessed infectivity equivalent to that of a virulent control strain, and that measures of disease were similar in rabbits infected with each strain. These results suggest that the com regulon is not required for S. sanguinis infective endocarditis virulence in this model. We propose that the different roles of the S. sanguinis, S. pneumoniae, and S. mutans com regulons in virulence can be understood in relation to the pathogenic mechanisms employed by each species.
Prakash, Aishwarya; Natarajan, Amarnath; Marky, Luis A.; Ouellette, Michel M.; Borgstahl, Gloria E. O.
2011-01-01
Replication protein A (RPA), a key player in DNA metabolism, has six single-stranded DNA (ssDNA)-binding domains (DBDs), A-F. SELEX experiments with DBDs C, D, and E retrieve a 20-nt G-quadruplex-forming sequence. Binding studies show that RPA-DE binds preferentially to the G-quadruplex DNA, a unique preference not observed with other RPA constructs. Circular dichroism experiments show that RPA-CDE-core can unfold the G-quadruplex while RPA-DE stabilizes it. Binding studies show that RPA-C binds pyrimidine- and purine-rich sequences similarly. This difference between RPA-C and RPA-DE binding was also indicated by the inability of RPA-CDE-core to unfold an oligonucleotide containing a TC-region 5′ to the G-quadruplex. Molecular modeling studies of RPA-DE and the telomere-binding proteins Pot1 and Stn1 reveal structural similarities between the proteins and illuminate potential DNA-binding sites for RPA-DE and Stn1. These data indicate that the DBDs of RPA have different ssDNA recognition properties. PMID:21772997
Argo: an integrative, interactive, text mining-based workbench supporting curation
Rak, Rafal; Rowley, Andrew; Black, William; Ananiadou, Sophia
2012-01-01
Curation of biomedical literature is often supported by the automatic analysis of textual content that generally involves a sequence of individual processing components. Text mining (TM) has been used to enhance the process of manual biocuration, but has focused on specific databases and tasks rather than on an environment that integrates TM tools into the curation pipeline, catering for a variety of tasks, types of information and applications. Processing components usually come from different sources and often lack interoperability. The well-established Unstructured Information Management Architecture is a framework that addresses interoperability by defining common data structures and interfaces. However, most of the efforts are targeted towards software developers and are not suitable for curators, or are otherwise inconvenient to use on a higher level of abstraction. To overcome these issues we introduce Argo, an interoperable, integrative, interactive and collaborative system for text analysis with a convenient graphical user interface to ease the development of processing workflows and boost productivity in labour-intensive manual curation. Robust, scalable text analytics follow a modular approach, adopting component modules for distinct levels of text analysis. The user interface is available entirely through a web browser, which saves the user from going through often complicated and platform-dependent installation procedures. Argo comes with a predefined set of processing components commonly used in text analysis, while giving users the ability to deposit their own components. The system accommodates various areas and levels of user expertise, from TM and computational linguistics to ontology-based curation. One of the key functionalities of Argo is its ability to seamlessly incorporate user-interactive components, such as manual annotation editors, into otherwise completely automatic pipelines. As a use case, we demonstrate the functionality of an in-built manual annotation editor that is well suited for in-text corpus annotation tasks. Database URL: http://www.nactem.ac.uk/Argo PMID:22434844
NASA Astrophysics Data System (ADS)
Radhakrishnan, A.; Balaji, V.; Schweitzer, R.; Nikonov, S.; O'Brien, K.; Vahlenkamp, H.; Burger, E. F.
2016-12-01
There are distinct phases in the development cycle of an Earth system model. During the model development phase, scientists make changes to code and parameters and require rapid access to results for evaluation. During the production phase, scientists may make an ensemble of runs with different settings, and produce large quantities of output that must be further analyzed and quality controlled for scientific papers and for submission to international projects such as the Climate Model Intercomparison Project (CMIP). During this phase, provenance is a key concern: being able to track back from outputs to inputs. We will discuss one of the paths taken at GFDL in delivering tools across this lifecycle, offering on-demand analysis of data by integrating the use of GFDL's in-house FRE-Curator, Unidata's THREDDS and NOAA PMEL's Live Access Servers (LAS). Experience over this lifecycle suggests that a major difficulty in developing analysis capabilities lies only partly in the scientific content; much of the effort is devoted to answering the questions "where is the data?" and "how do I get to it?". "FRE-Curator" is the name of a database-centric paradigm used at NOAA GFDL to ingest information about the model runs into an RDBMS (the Curator database). The components of FRE-Curator are integrated into the Flexible Runtime Environment workflow and can be invoked during climate model simulation. The front end to FRE-Curator, known as the Model Development Database Interface (MDBI), provides in-house web-based access to GFDL experiments: metadata, analysis output and more. In order to provide on-demand visualization, MDBI uses the Live Access Server, a highly configurable web server designed to provide flexible access to geo-referenced scientific data that makes use of OPeNDAP. Model output saved in GFDL's tape archive, the size of the database and experiments, and continuous model development with more dynamic configurations all add complexity and challenges to providing an on-demand visualization experience to GFDL users.
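As a rough illustration of the access pattern this architecture enables, the hedged sketch below opens a hypothetical OPeNDAP endpoint with xarray, so that data stay on the server until a slice is actually requested; the URL and variable name are placeholders, not actual GFDL services.

```python
# Lazy, server-side subsetting over OPeNDAP with xarray.
import xarray as xr

URL = "https://example.gfdl.noaa.gov/thredds/dodsC/experiment/atmos.nc"  # hypothetical
ds = xr.open_dataset(URL)     # lazy: only metadata are fetched here
t_mean = ds["t_surf"].sel(time=slice("2000-01", "2000-12")).mean("time")
print(t_mean)                 # triggers transfer of just the needed subset
```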
Singhal, Ayush; Simmons, Michael; Lu, Zhiyong
2016-11-01
The practice of precision medicine will ultimately require databases of genes and mutations for healthcare providers to reference in order to understand the clinical implications of each patient's genetic makeup. Although the highest quality databases require manual curation, text mining tools can facilitate the curation process, increasing accuracy, coverage, and productivity. However, to date there are no available text mining tools that offer high-accuracy extraction of disease-gene-variant triplets from biomedical literature. In this paper we propose a high-performance machine learning approach to automate the extraction of disease-gene-variant triplets from biomedical literature. Our approach is unique because we identify the genes and protein products associated with each mutation from not just the local text content, but from a global context as well (from the Internet and from all literature in PubMed). Our approach also incorporates protein sequence validation and disease association using a novel text-mining-based machine learning approach. We extract disease-gene-variant triplets from all abstracts in PubMed related to a set of ten important diseases (breast cancer, prostate cancer, pancreatic cancer, lung cancer, acute myeloid leukemia, Alzheimer's disease, hemochromatosis, age-related macular degeneration (AMD), diabetes mellitus, and cystic fibrosis). We then evaluate our approach in two ways: (1) a direct comparison with the state of the art using benchmark datasets; (2) a validation study comparing the results of our approach with entries in a popular human-curated database (UniProt) for each of the previously mentioned diseases. In the benchmark comparison, our full approach achieves a 28% relative improvement in F1-measure (from 0.62 to 0.79) over the state-of-the-art results. For the validation study with the UniProt Knowledgebase (KB), we present a thorough analysis of the results and errors. Across all diseases, our approach returned 272 triplets (disease-gene-variant) that overlapped with entries in UniProt and 5,384 triplets without overlap in UniProt. Analysis of the overlapping triplets and of a stratified sample of the non-overlapping triplets revealed accuracies of 93% and 80% for the respective categories (cumulative accuracy, 77%). We conclude that our process represents an important and broadly applicable improvement to the state of the art for curation of disease-gene-variant relationships.
Advanced Curation of Current and Future Extraterrestrial Samples
NASA Technical Reports Server (NTRS)
Allen, Carlton C.
2013-01-01
Curation of extraterrestrial samples is the critical interface between sample return missions and the international research community. Curation includes documentation, preservation, preparation, and distribution of samples. The current collections of extraterrestrial samples include: lunar rocks and soils collected by the Apollo astronauts; meteorites, including samples of asteroids, the Moon, and Mars; "cosmic dust" (asteroid and comet particles) collected by high-altitude aircraft; solar wind atoms collected by the Genesis spacecraft; comet particles collected by the Stardust spacecraft; interstellar dust collected by the Stardust spacecraft; and asteroid particles collected by the Hayabusa spacecraft. These samples were formed in environments strikingly different from that on Earth. Terrestrial contamination can destroy much of the scientific significance of many extraterrestrial materials. In order to preserve the research value of these precious samples, contamination must be minimized, understood, and documented. In addition, the samples must be preserved - as far as possible - from physical and chemical alteration. In 2011 NASA selected the OSIRIS-REx mission, designed to return samples from the primitive asteroid 1999 RQ36 (Bennu). JAXA will sample the C-class asteroid 1999 JU3 with the Hayabusa-2 mission. ESA is considering the near-Earth asteroid sample return mission Marco Polo-R. The Decadal Survey listed the first lander in a Mars sample return campaign as its highest priority flagship-class mission, with sample return from the South Pole-Aitken basin and from the surface of a comet among additional top priorities. The latest NASA budget proposal includes a mission to capture a 5-10 m asteroid and return it to the vicinity of the Moon as a target for future sampling. Samples, tools, containers, and contamination witness materials from any of these missions carry unique requirements for acquisition and curation. Some of these requirements represent significant advances over methods currently used. New analytical and screening techniques will increase the value of current sample collections. Improved web-based tools will make information on all samples more accessible to researchers and the public. Advanced curation of current and future extraterrestrial samples includes: contamination control (inorganic and organic); temperature of preservation (subfreezing and cryogenic); non-destructive preliminary examination (X-ray tomography, XRF mapping, Raman mapping); handling, sectioning, and transport of microscopic samples; special samples such as unopened lunar cores; and informatics (online catalogs and community-based characterization).
Earley, Kirsty; Livingstone, Daniel; Rea, Paul M
2017-01-01
Collection preservation is essential for the cultural status of any city. However, presenting a collection publicly risks damage. Recently this drawback has been overcome by digital curation. Described here is a method of digitisation using photogrammetry and virtual reality software. Items were selected from the Royal College of Physicians and Surgeons of Glasgow archives, and implemented into an online learning module for the Open University. Images were processed via Agisoft Photoscan, Autodesk Memento, and Garden Gnome Object 2VR. Although problems arose due to specularity, 2VR digital models were developed for online viewing. Future research must minimise the difficulty of digitising specular objects.
Ponce-de-León, Miguel; Montero, Francisco; Peretó, Juli
2013-10-31
Metabolic reconstruction is the computational process that aims to elucidate the network of metabolites interconnected through reactions catalyzed by activities assigned to one or more genes. Reconstructed models may contain inconsistencies that appear as gap metabolites and blocked reactions. Although automatic methods for solving this problem have been developed previously, there are many situations where manual curation is still needed. We introduce a general definition of gap metabolite that allows its detection in a straightforward manner. Moreover, we propose a method for the detection of Unconnected Modules, defined as isolated sets of blocked reactions connected through gap metabolites. The method has been successfully applied to the curation of iCG238, the genome-scale metabolic model of the bacterium Blattabacterium cuenoti, an obligate endosymbiont of cockroaches. We found the proposed approach to be a valuable tool for the curation of genome-scale metabolic models. The outcome of its application to the genome-scale model B. cuenoti iCG238 is a more accurate model version, named B. cuenoti iMP240.
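A minimal sketch of dead-end (gap) metabolite detection over a stoichiometric matrix is shown below. It uses a common simplified definition (a metabolite that no reaction produces, or none consumes, blocks every reaction it takes part in), assumes all reactions are irreversible, and ignores exchange reactions; the matrix is a toy example, not the paper's exact formalism.

```python
# Dead-end metabolite detection in a toy stoichiometric matrix.
import numpy as np

# rows = metabolites A..D, columns = reactions R1..R3; S[i, j] > 0 means
# reaction j produces metabolite i, S[i, j] < 0 means it consumes it.
S = np.array([
    [-1,  0,  0],   # A: consumed only -> gap (no producing reaction)
    [ 1, -1,  0],   # B: produced by R1, consumed by R2 -> connected
    [ 0,  1, -1],   # C: produced by R2, consumed by R3 -> connected
    [ 0,  0,  1],   # D: produced only -> gap (no consuming reaction)
])

produced = (S > 0).any(axis=1)
consumed = (S < 0).any(axis=1)
gap_rows = np.where(~(produced & consumed))[0]
print("gap metabolite rows:", gap_rows)   # -> [0 3]
```

In a real model, boundary exchange reactions and reaction reversibility would rescue some of these candidates, which is part of why the paper's more general definition and the grouping into Unconnected Modules matter.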
Tse, L A; Yu, I T S; Leung, C C; Tam, W; Wong, T W
2007-01-01
Objectives To examine the exposure–response relationships between various indices of exposure to silica dust and the mortality from non‐malignant respiratory diseases (NMRDs) or chronic obstructive pulmonary diseases (COPDs) among a cohort of workers with silicosis in Hong Kong. Methods The concentrations of respirable silica dust were assigned to each industry and job task according to historical industrial hygiene measurements documented previously in Hong Kong. Exposure indices included cumulative dust exposure (CDE) and mean dust concentration (MDC). Penalised smoothing spline models were used as a preliminary step to detect outliers and guide further analyses. Multiple Cox's proportional hazard models were used to estimate the dust effects on the risk of mortality from NMRDs or COPDs after truncating the highest exposures. Results 371 of the 853 (43.49%) deaths occurring among 2789 workers with silicosis during 1981–99 were from NMRDs, and 101 (27.22%) NMRDs were COPDs. Multiple Cox's proportional hazard models showed that CDE (p = 0.009) and MDC (p<0.001) were significantly associated only with NMRD mortality. Subgroup analysis showed that deaths from NMRDs (p<0.01) and COPDs (p<0.05) were significantly associated with both CDE and MDC among underground caisson workers and among those ever employed in other occupations with high exposure to silica dust. No exposure–response relationship was observed for surface construction workers with low exposures. A clear upward trend for both NMRDs and COPDs mortality was found with increasing severity of radiological silicosis. Conclusion This study documented an exposure–response relationship between exposure to silica dust and the risk of death from NMRDs or COPDs among workers with silicosis, except for surface construction workers with low exposures. The risk of mortality from NMRDs increased significantly with the progression of International Labor Organization categories, independent of dust effects. PMID:16973737
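For readers unfamiliar with the modeling setup, the hedged sketch below fits a Cox proportional hazards model with CDE as a covariate using the lifelines library; the data frame, column names, and values are hypothetical placeholders, and the study's actual covariate set and exposure truncation rules are not reproduced.

```python
# Cox proportional hazards sketch with cumulative dust exposure (CDE).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "followup_years": [12.0, 8.5, 15.2, 6.1, 9.9, 14.0],   # time at risk (toy)
    "died_nmrd":      [1, 0, 1, 0, 1, 0],                  # NMRD death indicator
    "cde":            [35.0, 12.4, 60.2, 8.0, 22.5, 18.3], # cumulative dust exposure
    "age_at_entry":   [52, 47, 61, 44, 58, 50],
})

# A small penalizer stabilizes the fit on this tiny illustrative sample.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="followup_years", event_col="died_nmrd")
cph.print_summary()   # hazard ratio per unit CDE, adjusted for age at entry
```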
Broglio, Steven P; Kontos, Anthony P; Levin, Harvey; Schneider, Kathryn; Wilde, Elisabeth A; Cantu, Robert C; Feddermann-Demont, Nina; Fuller, Gordon; Gagnon, Isabelle; Gioia, Gerry; Giza, Christopher C; Griesbach, Grace Sophia; Leddy, John J; Lipton, Michael L; Mayer, Andrew; McAllister, Thomas; McCrea, Michael; McKenzie, Lara; Putukian, Margot; Signoretti, Stefano; Suskauer, Stacy J; Tamburro, Robert; Turner, Michael; Yeates, Keith Owen; Zemek, Roger; Ala'i, Sherita; Esterlitz, Joy; Gay, Katelyn; Bellgowan, Patrick S F; Joseph, Kristen
2018-05-02
Through a partnership with the National Institute of Neurological Disorders and Stroke (NINDS), National Institutes of Health (NIH), and Department of Defense (DoD), the development of Sport-Related Concussion (SRC) Common Data Elements (CDEs) was initiated. The aim of this collaboration was to increase the efficiency and effectiveness of clinical research studies and clinical treatment outcomes, increase data quality, facilitate data sharing across studies, reduce study start-up time, more effectively aggregate information into metadata results, and educate new clinical investigators. The SRC CDE Working Group consisted of 34 worldwide experts in concussion from varied fields of related expertise, divided into three Subgroups: Acute (<72 hours post-concussion), Subacute (3 days-3 months post-concussion) and Persistent/Chronic (>3 months post-concussion). To develop CDEs, the Subgroups reviewed various domains, and then selected from, refined, and added to existing CDEs, case report forms and field-tested data elements from national registries and funded research studies. Recommendations were posted to the NINDS CDE Website for Public Review from February 2017 to April 2017. Following an internal Working Group review of recommendations, along with consideration of comments received from the Public Review period, the first iteration (Version 1.0) of the NINDS SRC CDEs was completed in June 2017. The recommendations include Core and Supplemental - Highly Recommended CDEs for cognitive data elements and symptom checklists, as well as other outcomes and endpoints (e.g., vestibular, oculomotor, balance, anxiety, depression) and sample case report forms (e.g., injury reporting, demographics, concussion history) for domains typically included in clinical research studies. The NINDS SRC CDEs and supporting documents are publicly available on the NINDS CDE website https://www.commondataelements.ninds.nih.gov/. Widespread use of CDEs by researchers and clinicians will facilitate consistent SRC clinical research and trial design, data sharing, and metadata retrospective analysis.
Zeng, Wei-Ping; McFarland, Margaret M; Zhou, Baohua; Holtfreter, Silva; Flesher, Susan; Cheung, Ambrose; Mallick, Avishek
2017-02-01
TH2 responses are implicated in asthma pathobiology. Epidemiologic studies have found a positive association between asthma and exposure to staphylococcal enterotoxins. We used a mouse model of asthma to determine whether staphylococcal enterotoxins promote TH2 differentiation of allergen-specific CD4 conventional T (Tcon) cells and asthma by activating allergen-nonspecific regulatory T (Treg) cells to create a TH2-polarizing cytokine milieu. Ovalbumin (OVA)-specific, staphylococcal enterotoxin A (SEA)-nonreactive naive CD4 Tcon cells were cocultured with SEA-reactive allergen-nonspecific Treg or CD4 Tcon cells in the presence of OVA and SEA. The OVA-specific CD4 T cells were then analyzed for IL-13 and IFN-γ expression. SEA-activated Treg cells were analyzed for the expression of the TH2-polarizing cytokine IL-4 and the T-cell activation markers CD69 and CD62L. For asthma induction, mice were intratracheally sensitized with OVA or cat dander extract (CDE) alone or together with SEA and then challenged with OVA or CDE. Mice were also subject to transient Treg cell depletion before sensitization with OVA plus SEA. Asthma features and TH2 differentiation in these mice were analyzed. SEA-activated Treg cells induced IL-13 but suppressed IFN-γ expression in OVA-specific CD4 Tcon cells. SEA-activated Treg cells expressed IL-4, upregulated CD69, and downregulated CD62L. Sensitization with OVA plus SEA but not OVA alone induced asthma, and SEA exacerbated asthma induced by CDE. Depletion of Treg cells abolished these effects of SEA and IL-13 expression in OVA-specific T cells. SEA promoted TH2 responses of allergen-specific T cells and asthma pathogenesis by activating Treg cells. Copyright © 2016 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.
Iyer, Sapna; Park, Min-Jung; Moons, David; Kwan, Raymond; Liao, Jian; Liu, Li; Omary, M Bishr
2017-01-22
Acute pancreatitis has several underlying etiologies and results in consequences ranging from mild disease to complex multi-organ failure. The wide range of pathology suggests a genetic predisposition for progression. We compared the susceptibility to acute pancreatitis in BALB/c and FVB/N mice, coupled with proteomic analysis, in order to identify potential protein associations with pancreatitis progression. Pancreatitis was induced in BALB/c and FVB/N mice by administration of cerulein or by feeding a choline-deficient, ethionine-supplemented (CDE) diet. Histology and changes in serum amylase were examined. Proteome profiling in cerulein-treated mice was performed using two-dimensional difference gel electrophoresis (2D-DIGE) followed by mass spectrometry analysis and biochemical validation. Male and female FVB/N mice manifested more severe cerulein-induced pancreatitis than BALB/c mice, but both strains were similarly susceptible to CDE-induced pancreatitis. Few of the 2D-DIGE alterations were validated by immunoblotting. Clusterin was markedly up-regulated after cerulein-induced pancreatitis in FVB/N but less so in BALB/c mice. Pyrroline-5-carboxylate reductase (Pycr1), an enzyme involved in proline biosynthesis, had higher basal levels in FVB/N male and female mouse pancreata than in BALB/c pancreata, and was relatively more resistant to degradation in FVB/N pancreata. However, serum and pancreas tissue proline levels were similar in the two strains. FVB/N is more susceptible than BALB/c mice to cerulein-induced but not CDE-induced pancreatitis. Most of the 2D-DIGE alterations in the two strains likely relate to posttranslational modifications rather than to differences in protein levels. Clusterin levels increase dramatically in association with pancreatitis severity, while Pycr1 is higher in FVB/N than in BALB/c pancreata both basally and after induction of pancreatitis. Changes in proline metabolism may represent a novel potential genetic modifier in the context of pancreatitis. Published by Elsevier Inc.
[Preliminary evaluation of the femtosecond laser-assisted cataract surgery in 300 cases].
Wang, Yong; Bao, Xianyi; Zhou, Yanli; Xu, Rong; Peng, Tingting; Sun, Ming; Cao, Danmin; He, Ling
2015-09-01
To evaluate the clinical outcome of femtosecond laser-assisted cataract surgery (FLACS) in our first 300 cases. In this retrospective study, the study group comprised 300 cases (300 eyes) in which FLACS was performed. The control group comprised 300 cases (300 eyes) in which conventional phacoemulsification was performed. The steps of the FLACS included capsulotomy, lens fragmentation, corneal incisions, and the creation of incisions within the peripheral cornea to aid the correction of pre-existing astigmatism. After the FLACS, 2.2-mm coaxial micro-incision phacoemulsification and implantation of an intraocular lens were performed. The preoperative best corrected visual acuity (BCVA) and postoperative uncorrected visual acuity (UCVA), the cumulative dissipated energy (CDE) of the phacoemulsification, and the parameters of the FLACS, including the docking time, the suction time and the laser time, were recorded. The complications of the FLACS were analyzed. The FLACS was successfully completed in 99.33% of the cases. The docking time was (24.6 ± 16.8) sec, the suction time was (101.27 ± 20.09) sec, and the laser time was (23.3 ± 5.5) sec. The most common complications of the FLACS included suction break (7/300, 2.33%), subconjunctival hemorrhage (58/300, 19.33%), pupillary constriction (47/300, 15.67%), incision at a wrong site (13/300, 4.33%), anterior capsular tag (17/300, 5.67%), decentration of the capsulorhexis (11/300, 3.67%), failure to split the lens nucleus (5/300, 1.67%), and posterior capsular rupture (1/300, 0.33%). The CDE was 5.52 ± 5.18 in the FLACS group and 8.37 ± 7.91 in the traditional phaco group (P < 0.05). The UCVA was 0.12 ± 0.08 and 0.13 ± 0.11 at 1 month after the FLACS and traditional phaco, respectively (P > 0.05). Compared with conventional phacoemulsification surgery, FLACS can achieve lower CDE and better early postoperative visual acuity. Long-term effects remain to be investigated.
Fakhry, Mohamed A; Shazly, Malak I El
2011-01-01
Purpose To compare torsional versus combined torsional and conventional ultrasound modes in hard cataract surgery regarding ultrasound energy and time and effect on corneal endothelium. Settings Kasr El Aini hospital, Cairo University, and International Eye Hospital, Cairo, Egypt. Methodology Ninety-eight eyes of 63 patients were enrolled in this prospective comparative randomized masked clinical study. All eyes had nuclear cataracts of grades III and IV using the Lens Opacities Classification System III (LOCS III). Two groups were included, each having an equal number of eyes (49). The treatment for group A was combined torsional and conventional US mode phacoemulsification, and for group B torsional US mode phacoemulsification only. Pre- and post-operative assessments included best corrected visual acuity (BCVA), intraocular pressure (IOP), slit-lamp evaluation, and fundoscopic evaluation. Endothelial cell density (ECD) and central corneal thickness (CCT) were measured preoperatively, 1 day, 7 days, and 1 month postoperatively. All eyes were operated on using the Alcon Infiniti System (Alcon, Fort Worth, TX) with the quick chop technique. All eyes were implanted with AcrySof SA60AT (Alcon) intraocular lens (IOL). The main phaco outcome parameters included the mean ultrasound time (UST), the mean cumulative dissipated energy (CDE), and the percent of average torsional amplitude in position 3 (%TUSiP3). Results Improvement in BCVA was statistically significant in both groups (P < 0.001). Comparing UST and CDE for both groups revealed results favoring the pure torsional group (P = 0.002 and P < 0.001 for UST; P = 0.058 and P = 0.009 for CDE). As for %TUSiP3, readings were higher for the pure torsional group (P = 0.03 and P = 0.01). All changes of CCT, and ECD over time were found statistically significant using one-way ANOVA testing (P < 0.001). Conclusion Both modes are safe in hard cataract surgery, however the pure torsional mode showed less US energy used. PMID:21792288
Zhou, Yihui; Yin, Ge; Asplund, Lillemor; Stewart, Kathryn; Rantakokko, Panu; Bignert, Anders; Ruokojärvi, Päivi; Kiviranta, Hannu; Qiu, Yanling; Ma, Zhijun; Bergman, Åke
2017-06-01
Polychlorinated dibenzo-p-dioxins (PCDDs) and polychlorinated dibenzofurans (PCDFs) are highly toxic to humans and wildlife. In the present study, PCDD/Fs were analyzed in the eggs of whiskered terns (Chlidonias hybrida) and genetically identified eggs of black-crowned night herons (Nycticorax nycticorax) sampled from two lakes in the Yangtze River Delta area, China. The median toxic equivalents (TEQs) of PCDD/Fs were 280 (range: 95-1500) and 400 (range: 220-1100) pg TEQ g⁻¹ lw (WHO, 1998 for birds) in the eggs of the black-crowned night heron and the whiskered tern, respectively. Compared to known sources, the concentrations of PCDDs relative to the sum of PCDD/Fs in bird eggs demonstrated a high abundance of octachlorodibenzo-p-dioxin (OCDD), 1,2,3,4,6,7,8-heptaCDD and 1,2,3,6,7,8-hexaCDD, indicating pentachlorophenol (PCP) and/or sodium pentachlorophenolate (Na-PCP) as significant sources of the PCDD/Fs. The presence of polychlorinated diphenyl ethers (PCDEs), and of hydroxylated and methoxylated polychlorinated diphenyl ethers (OH- and MeO-PCDEs, known impurities in PCP products), corroborates this hypothesis. Further, significant correlations were found between the predominant congeners CDE-206, 3'-OH-CDE-207 and 2'-MeO-CDE-206 and OCDD, indicating a common origin. Eggs from the two lakes are sometimes used for human consumption. The WHO health-based tolerable intake of PCDD/Fs is exceeded if eggs from the two lakes are consumed regularly on a weekly basis, particularly by children. The TEQs extensively exceed the maximum levels for PCDD/Fs in hen eggs and egg products set by EU legislation (2.5 pg TEQ g⁻¹ lw). The results suggest that immediate action should be taken to manage the contamination, and that further studies should evaluate the impacts of consuming eggs of wild birds in China. Likewise, studies on dioxins and other POPs in commonly consumed eggs around China need to be initiated. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Zhang, Xuesheng; Wang, Tantan; Gao, Lei; Feng, Mingbao; Qin, Li; Shi, Jiaqi; Cheng, Danru
2018-05-31
Polychlorinated diphenyl ethers (PCDEs) are typical halogenated aromatic pollutants that have shown various toxicological effects on organisms. However, the contamination status of PCDEs in the freshwater lakes of China remains poorly researched. In this study, the levels of 15 congeners of PCDEs in the sediments, suspended particulate matter (SPM) and water of Chaohu Lake were determined. The results showed that the concentrations of total PCDEs (ΣPCDEs) ranged over 0.279-2.474 ng g⁻¹ dry weight (d.w.) in the sediments, 0.331-2.013 ng g⁻¹ d.w. in the SPM and 0.351-2.021 ng L⁻¹ in the water. The most abundant congeners found in the sediments, SPM and water were 3,3',4,4'-tetra-CDE, deca-CDE and 2,4,6-tri-CDE, with average contributive ratios of 17.36%, 15.48% and 20.63%, respectively. The medium and higher chlorinated PCDEs (e.g., penta- and deca-CDEs) were the dominant congeners in the sediments and SPM. The percentages of lower chlorinated PCDEs (e.g., tri-CDEs) in the water were higher than those in the sediments. The combined input of ΣPCDEs from the eight main tributaries to Chaohu Lake was estimated at 6.94 kg y⁻¹. Strong linear correlations between the concentrations of ΣPCDEs and the organic carbon (OC) contents in the three sample types from Chaohu Lake suggested that OC could substantially influence the distribution of PCDEs in the lake. In addition, the calculated average organic-carbon-normalized partition coefficients (log Koc) of the 15 PCDEs between water and SPM were in the range of 4.55-5.45 mL g⁻¹. This study confirmed that Chaohu Lake is contaminated by PCDEs. Copyright © 2018 Elsevier Ltd. All rights reserved.
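The organic-carbon-normalized partition coefficient reported here follows the standard definition Koc = Kd / fOC, where Kd is the SPM-water distribution coefficient. The short sketch below works through the arithmetic on illustrative values, not measurements from the study:

```python
# Worked log Koc example between water and suspended particulate matter (SPM).
import math

c_spm = 1.2       # PCDE concentration in SPM, ng per g d.w. (assumed)
c_water = 0.8e-3  # PCDE concentration in water, ng per mL (assumed)
f_oc = 0.04       # organic carbon fraction of the SPM (assumed)

kd = c_spm / c_water   # solid-water distribution coefficient, mL/g
koc = kd / f_oc        # normalize by the organic carbon fraction
print(f"log Koc = {math.log10(koc):.2f}")  # 4.57, within the reported 4.55-5.45
```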
GMODWeb: a web framework for the generic model organism database
O'Connor, Brian D; Day, Allen; Cain, Scott; Arnaiz, Olivier; Sperling, Linda; Stein, Lincoln D
2008-01-01
The Generic Model Organism Database (GMOD) initiative provides species-agnostic data models and software tools for representing curated model organism data. Here we describe GMODWeb, a GMOD project designed to speed the development of model organism database (MOD) websites. Sites created with GMODWeb provide integration with other GMOD tools and allow users to browse and search through a variety of data types. GMODWeb was built using the open source Turnkey web framework and is available from the GMOD project website. PMID:18570664
ASDC Advances in the Utilization of Microservices and Hybrid Cloud Environments
NASA Astrophysics Data System (ADS)
Baskin, W. E.; Herbert, A.; Mazaika, A.; Walter, J.
2017-12-01
The Atmospheric Science Data Center (ASDC) is transitioning many of its software tools and applications to standalone microservices deployable in a hybrid cloud, offering benefits such as scalability and efficient environment management. This presentation features several projects the ASDC staff have implemented leveraging the OpenShift Container Application Platform and the OpenStack hybrid cloud environment, focusing on key tools and techniques applied to: Earth science data processing; spatial-temporal metadata generation, validation, repair, and curation; and archived data discovery, visualization, and access.
Apollo: a sequence annotation editor
Lewis, SE; Searle, SMJ; Harris, N; Gibson, M; Iyer, V; Richter, J; Wiel, C; Bayraktaroglu, L; Birney, E; Crosby, MA; Kaminker, JS; Matthews, BB; Prochnik, SE; Smith, CD; Tupy, JL; Rubin, GM; Misra, S; Mungall, CJ; Clamp, ME
2002-01-01
The well-established inaccuracy of purely computational methods for annotating genome sequences necessitates an interactive tool to allow biological experts to refine these approximations by viewing and independently evaluating the data supporting each annotation. Apollo was developed to meet this need, enabling curators to inspect genome annotations closely and edit them. FlyBase biologists successfully used Apollo to annotate the Drosophila melanogaster genome and it is increasingly being used as a starting point for the development of customized annotation editing tools for other genome projects. PMID:12537571
Freytag, Saskia; Burgess, Rosemary; Oliver, Karen L; Bahlo, Melanie
2017-06-08
The pathogenesis of neurological and mental health disorders often involves multiple genes, complex interactions, as well as brain- and development-specific biological mechanisms. These characteristics make identification of disease genes for such disorders challenging, as conventional prioritisation tools are not specifically tailored to deal with the complexity of the human brain. Thus, we developed a novel web application, brain-coX, that offers gene prioritisation with accompanying visualisations based on seven gene expression datasets in the post-mortem human brain, the largest such resource ever assembled. We tested whether our tool can correctly prioritise known genes from 37 brain-specific KEGG pathways and 17 psychiatric conditions. We achieved average sensitivity of nearly 50%, at the same time reaching a specificity of approximately 75%. We also compared brain-coX's performance to that of its main competitors, Endeavour and ToppGene, focusing on the ability to discover novel associations. Using a subset of the curated SFARI autism gene collection we show that brain-coX's prioritisations are most similar to SFARI's own curated gene classifications. brain-coX is the first prioritisation and visualisation web-tool targeted to the human brain and can be freely accessed via http://shiny.bioinf.wehi.edu.au/freytag.s/ .
Maccari, Giuseppe; Robinson, James; Ballingall, Keith; Guethlein, Lisbeth A.; Grimholt, Unni; Kaufman, Jim; Ho, Chak-Sum; de Groot, Natasja G.; Flicek, Paul; Bontrop, Ronald E.; Hammond, John A.; Marsh, Steven G. E.
2017-01-01
The IPD-MHC Database project (http://www.ebi.ac.uk/ipd/mhc/) collects and expertly curates sequences of the major histocompatibility complex from non-human species and provides the infrastructure and tools to enable accurate analysis. Since the first release of the database in 2003, IPD-MHC has grown and currently hosts a number of specific sections, with more than 7000 alleles from 70 species, including non-human primates, canines, felines, equids, ovids, suids, bovins, salmonids and murids. These sequences are expertly curated and made publicly available through an open access website. The IPD-MHC Database is a key resource in its field, and this has led to an average of 1500 unique visitors and more than 5000 viewed pages per month. As the database has grown in size and complexity, it has created a number of challenges in maintaining and organizing information, particularly the need to standardize nomenclature and taxonomic classification, while incorporating new allele submissions. Here, we describe the latest database release, the IPD-MHC 2.0 and discuss planned developments. This release incorporates sequence updates and new tools that enhance database queries and improve the submission procedure by utilizing common tools that are able to handle the varied requirements of each MHC-group. PMID:27899604
Integrative Functional Genomics for Systems Genetics in GeneWeaver.org.
Bubier, Jason A; Langston, Michael A; Baker, Erich J; Chesler, Elissa J
2017-01-01
The abundance of existing functional genomics studies permits an integrative approach to interpreting and resolving the results of diverse systems genetics studies. However, a major challenge lies in assembling and harmonizing heterogeneous data sets across species for facile comparison to the positional candidate genes and coexpression networks that come from systems genetics studies. GeneWeaver is an online database and suite of tools at www.geneweaver.org that allows for fast aggregation and analysis of gene-set-centric data. GeneWeaver contains curated experimental data together with resource-level data such as GO annotations, MP annotations, and KEGG pathways, along with persistent stores of user-entered data sets. These can be entered directly into GeneWeaver or transferred from widely used resources such as GeneNetwork.org. Data are analyzed using statistical tools and advanced graph algorithms to discover new relations, prioritize candidate genes, and generate functional hypotheses. Here we use GeneWeaver to find genes common to multiple gene sets, prioritize candidate genes from a quantitative trait locus, and characterize a set of differentially expressed genes. Coupling a large multispecies repository of curated and empirical functional genomics data to fast computational tools allows for the rapid integrative analysis of heterogeneous data for interpreting and extrapolating systems genetics results.
Jiang, Xiangying; Ringwald, Martin; Blake, Judith; Shatkay, Hagit
2017-01-01
The Gene Expression Database (GXD) is a comprehensive online database within the Mouse Genome Informatics resource, aiming to provide available information about endogenous gene expression during mouse development. The information stems primarily from many thousands of biomedical publications that database curators must go through and read. Given the very large number of biomedical papers published each year, automatic document classification plays an important role in biomedical research. Specifically, an effective and efficient document classifier is needed for supporting the GXD annotation workflow. We present here an effective yet relatively simple classification scheme, which uses readily available tools while employing feature selection, aiming to assist curators in identifying publications relevant to GXD. We examine the performance of our method over a large manually curated dataset, consisting of more than 25 000 PubMed abstracts, of which about half are curated as relevant to GXD while the other half as irrelevant to GXD. In addition to text from title-and-abstract, we also consider image captions, an important information source that we integrate into our method. We apply a captions-based classifier to a subset of about 3300 documents, for which the full text of the curated articles is available. The results demonstrate that our proposed approach is robust and effectively addresses the GXD document classification. Moreover, using information obtained from image captions clearly improves performance, compared to title and abstract alone, affirming the utility of image captions as a substantial evidence source for automatically determining the relevance of biomedical publications to a specific subject area. www.informatics.jax.org. © The Author(s) 2017. Published by Oxford University Press.
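As an illustration of the kind of pipeline described (the paper's exact features and learner are not reproduced here), the sketch below combines TF-IDF text features, chi-square feature selection, and a linear classifier in scikit-learn; the documents and labels are toy placeholders, not GXD training data.

```python
# Relevance classification sketch: TF-IDF + feature selection + linear model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

docs = [
    "in situ hybridization shows Shh expression in the embryonic neural tube",
    "a survey of hospital readmission rates in urban clinics",
]
labels = [1, 0]   # 1 = relevant to GXD, 0 = irrelevant (toy labels)

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("select", SelectKBest(chi2, k=10)),           # keep the top features
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(docs, labels)
print(clf.predict(["lacZ reporter expression in the embryonic mouse limb"]))
```

In the same spirit as the paper, text from image captions can simply be concatenated with the title and abstract before vectorization, giving the classifier access to that additional evidence source.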
Construction and curation of a large ecotoxicological dataset for the EcoTTC
The Ecological Threshold for Toxicological Concern, or ecoTTC, has been proposed as a natural next step to the well-known human safety TTC concept. The ecoTTC is particularly suited for use as an early screening tool in the risk assessment process, in situations where chemical h...
Alex Wiedenhoeft
2014-01-01
Wood is perhaps the quintessential material used in most human cultures. In prehistoric times, it was employed, either directly or indirectly, to provide most human needs: warmth, shelter, and nearly all manner of tools suitable for procuring food, water and security (Beeckman, 2003). In modern times it continues to play important roles in human cultures (Radkau, 2012...
An overview of the biocreative 2012 workshop track III: Interactive text mining task
USDA-ARS?s Scientific Manuscript database
An important question is how to make use of text mining to enhance the biocuration workflow. A number of groups have developed tools for text mining from a computer science/linguistics perspective and there are many initiatives to curate some aspect of biology from the literature. In some cases the ...
Ocean Drilling Program: TAMRF Administrative Services: Meeting, Travel, and Port-Call Information
All ODP meeting and port-call activities are complete.
Exploring Curation as a Core Competency in Digital and Media Literacy Education
ERIC Educational Resources Information Center
Mihailidis, Paul; Cohen, James N.
2013-01-01
In today's hypermedia landscape, youth and young adults are increasingly using social media platforms, online aggregators and mobile applications for daily information use. Communication educators, armed with a host of free, easy-to-use online tools, have the ability to create dynamic approaches to teaching and learning about information and…
MetaRNA-Seq: An Interactive Tool to Browse and Annotate Metadata from RNA-Seq Studies.
Kumar, Pankaj; Halama, Anna; Hayat, Shahina; Billing, Anja M; Gupta, Manish; Yousri, Noha A; Smith, Gregory M; Suhre, Karsten
2015-01-01
The number of RNA-Seq studies has grown in recent years. The design of RNA-Seq studies varies from very simple (e.g., two-condition case-control) to very complicated (e.g., time series involving multiple samples at each time point with separate drug treatments). Most of these publicly available RNA-Seq studies are deposited in NCBI databases, but their metadata are scattered throughout four different databases: Sequence Read Archive (SRA), BioSample, BioProject, and Gene Expression Omnibus (GEO). Although the NCBI web interface is able to provide all of the metadata information, it often requires significant effort to retrieve study- or project-level information by traversing multiple hyperlinks and pages. Moreover, project- and study-level metadata lack manual or automatic curation by categories, such as disease type, time series, case-control, or replicate type, which are vital to comprehending any RNA-Seq study. Here we describe "MetaRNA-Seq," a new tool for interactively browsing, searching, and annotating RNA-Seq metadata with the capability of semiautomatic curation at the study level.
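The scattering of metadata across NCBI databases can be seen directly from the E-utilities API. The hedged sketch below queries SRA and pulls record summaries; the endpoints are NCBI's documented public E-utilities, but the query term is hypothetical and response fields may vary by record.

```python
# Retrieving study-level RNA-Seq metadata through NCBI E-utilities.
import requests

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

# 1. Search SRA for matching RNA-Seq records (hypothetical query term).
ids = requests.get(f"{BASE}/esearch.fcgi", params={
    "db": "sra",
    "term": "RNA-Seq[Strategy] AND Homo sapiens[Organism]",
    "retmax": 5,
    "retmode": "json",
}).json()["esearchresult"]["idlist"]

# 2. Fetch summaries; experiment metadata arrive as embedded XML strings.
summary = requests.get(f"{BASE}/esummary.fcgi", params={
    "db": "sra", "id": ",".join(ids), "retmode": "json",
}).json()["result"]
for uid in ids:
    print(uid, summary[uid].get("expxml", "")[:80])  # truncated preview
```

Linking each SRA record back to its BioProject, BioSample, and GEO entries requires further elink calls per record, which is exactly the traversal burden the tool is designed to remove.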
PhytoPath: an integrative resource for plant pathogen genomics.
Pedro, Helder; Maheswari, Uma; Urban, Martin; Irvine, Alistair George; Cuzick, Alayne; McDowall, Mark D; Staines, Daniel M; Kulesha, Eugene; Hammond-Kosack, Kim Elizabeth; Kersey, Paul Julian
2016-01-04
PhytoPath (www.phytopathdb.org) is a resource for genomic and phenotypic data from plant pathogen species that integrates phenotypic data for genes from PHI-base, an expertly curated catalog of genes with experimentally verified pathogenicity, with the Ensembl tools for data visualization and analysis. The resource is focused on fungal, protist (oomycete) and bacterial plant pathogens whose genomes have been sequenced and annotated. Genes with associated PHI-base data can be easily identified across all plant pathogen species using a BioMart-based query tool and visualized in their genomic context on the Ensembl genome browser. The PhytoPath resource contains data for 135 genomic sequences from 87 plant pathogen species, and 1364 genes curated for their role in pathogenicity and as targets for chemical intervention. Support for community annotation of gene models is provided using the WebApollo online gene editor, and we are working with interested communities to improve reference annotation for selected species. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
A Genome-Scale Metabolic Reconstruction of Mycoplasma genitalium, iPS189
Suthers, Patrick F.; Dasika, Madhukar S.; Kumar, Vinay Satish; Denisov, Gennady; Glass, John I.; Maranas, Costas D.
2009-01-01
With a genome size of ∼580 kb and approximately 480 protein coding regions, Mycoplasma genitalium is one of the smallest known self-replicating organisms and, additionally, has extremely fastidious nutrient requirements. The reduced genomic content of M. genitalium has led researchers to suggest that the molecular assembly contained in this organism may be a close approximation to the minimal set of genes required for bacterial growth. Here, we introduce a systematic approach for the construction and curation of a genome-scale in silico metabolic model for M. genitalium. Key challenges included estimation of biomass composition, handling of enzymes with broad specificities, and the lack of a defined medium. Computational tools were subsequently employed to identify and resolve connectivity gaps in the model as well as growth prediction inconsistencies with gene essentiality experimental data. The curated model, M. genitalium iPS189 (262 reactions, 274 metabolites), is 87% accurate in recapitulating in vivo gene essentiality results for M. genitalium. Approaches and tools described herein provide a roadmap for the automated construction of in silico metabolic models of other organisms. PMID:19214212
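An in silico gene-essentiality check of the kind used to score the model can be sketched with COBRApy (this is an illustration, not the authors' pipeline); the model file name and gene identifiers below are hypothetical placeholders.

```python
# Scoring a genome-scale model against gene-essentiality data with COBRApy.
import pandas as pd
import cobra
from cobra.flux_analysis import single_gene_deletion

model = cobra.io.read_sbml_model("iPS189.xml")     # hypothetical file name
observed_essential = {"MG_023", "MG_111"}           # placeholder gene IDs

results = single_gene_deletion(model)               # one knockout per gene
predicted_essential = set()
for _, row in results.iterrows():
    # Call a gene essential in silico if its knockout abolishes growth.
    if pd.isna(row["growth"]) or row["growth"] < 1e-6:
        predicted_essential.update(row["ids"])

recovered = predicted_essential & observed_essential
print(f"{len(recovered)} of {len(observed_essential)} observed essentials recovered")
```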
OntoCheck: verifying ontology naming conventions and metadata completeness in Protégé 4.
Schober, Daniel; Tudose, Ilinca; Svatek, Vojtech; Boeker, Martin
2012-09-21
Although policy providers have outlined minimal metadata guidelines and naming conventions, ontologies of today still display inter- and intra-ontology heterogeneities in class labelling schemes and metadata completeness. This fact is at least partially due to missing or inappropriate tools. Software support can ease this situation and contribute to overall ontology consistency and quality by helping to enforce such conventions. We provide a plugin for the Protégé ontology editor to allow for easy checks of compliance with ontology naming conventions and metadata completeness, as well as curation in case of found violations. In a requirement analysis, derived from a prior standardization approach carried out within the OBO Foundry, we investigate the capabilities needed for software tools to check, curate and maintain class naming conventions. A Protégé tab plugin was implemented accordingly using the Protégé 4.1 libraries. The plugin was tested on six different ontologies. Based on these test results, the plugin was refined, in part by integrating new functionality. The new Protégé plugin, OntoCheck, allows for ontology tests to be carried out on OWL ontologies. In particular, the OntoCheck plugin helps to clean up an ontology with regard to lexical heterogeneity, i.e. enforcing naming conventions and metadata completeness, meeting most of the requirements outlined for such a tool. Found test violations can be corrected to foster consistency in entity naming and meta-annotation within an artefact. Once specified, check constraints like name patterns can be stored and exchanged for later re-use. Here we describe a first version of the software, illustrate its capabilities and use within running ontology development efforts, and briefly outline improvements resulting from its application. Further, we discuss OntoCheck's capabilities in the context of related tools and highlight potential future expansions. The OntoCheck plugin facilitates labelling error detection and curation, contributing to lexical quality assurance in OWL ontologies. Ultimately, we hope this Protégé extension will ease ontology alignments as well as lexical post-processing of annotated data and hence can increase overall secondary data usage by humans and computers.
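The core of such a label check is simple to sketch. The snippet below tests class labels against one hypothetical convention (lowercase, space-separated words); a real check would read the labels from the OWL file and support configurable, exchangeable patterns, as OntoCheck does.

```python
# Minimal naming-convention check over a list of class labels.
import re

LABEL_PATTERN = re.compile(r"^[a-z][a-z0-9]*( [a-z0-9]+)*$")

labels = ["cell membrane", "Golgi apparatus", "mitochondrial_matrix"]
for label in labels:
    if not LABEL_PATTERN.match(label):
        # "Golgi apparatus" is flagged too, which shows why real tools let
        # curators configure exceptions such as proper nouns.
        print(f"violates naming convention: {label!r}")
```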
Single Mothers, the Underclass, and Social Policy. CDE Working Paper 88-30.
ERIC Educational Resources Information Center
McLanahan, Sara; Garfinkel, Irv
Although the vast majority of single mothers do not fit the description of an underclass, there is a small group of predominantly black single mothers concentrated in northern urban ghettos that is persistently weakly attached to the labor force, socially isolated, and reproducing itself. Although welfare programs are necessary for those who are…
ERIC Educational Resources Information Center
Powers, Kristin M.; Hagans-Murillo, Kristi S.; Restori, Alberto F.
2004-01-01
In this article, major laws, regulations, court cases, policies and practices related to intelligence testing of African American students in California are reviewed. A California Department of Education (CDE) ban on intelligence testing of African American students for the purpose of determining special education eligibility is in effect and…
Children Designing & Engineering: Contextual Learning Units in Primary Design and Technology
ERIC Educational Resources Information Center
Hutchinson, Patricia
2002-01-01
The Children Designing & Engineering (CD&E) Project at the College of New Jersey is a collaborative effort of the College's Center for Design and Technology and the New Jersey Chamber of Commerce. The Project, funded by the National Science Foundation (NSF), has been charged to develop instructional materials for grades K-5. The twelve…
Guidebook for the California Healthy Kids Survey. Part I: Administration. 2004-2005 Edition
ERIC Educational Resources Information Center
Austin, G.; Duerr, M.
2004-01-01
The California Healthy Kids Survey (CHKS) is a comprehensive youth health risk and resilience data collection service for local education agencies (LEAs) sponsored by the California Department of Education (CDE). It is an easily customized, comprehensive self-report youth survey for grades 5-12. It assesses all major areas of health-related risk…
NASA Astrophysics Data System (ADS)
Kreft, J.
2015-12-01
I work to build systems that make environmental data more accessible and usable for others, a role that I love and one that, ten years ago, I would not have guessed I would play. I transitioned from conducting pure research to learning more about data curation and information science, and eventually to combining knowledge of both the research and data science worlds in my current position at the U.S. Geological Survey Center for Integrated Data Analytics (USGS CIDA). At the USGS, I primarily work on the Water Quality Portal, an interagency tool providing high-performance, standards-driven access to water quality data, and the USGS Publications Warehouse, which plays a key and ever-expanding role in providing access to USGS publications and their associated data sets. Both projects require an overarching focus on building services that make science data more visible and accessible to users, and on listening to the needs of the research scientists who both collect and use the data, in order to improve the tools whose development I guide. Concepts that I learned at the University of Illinois at Urbana-Champaign Graduate School of Library and Information Science Data Curation Education Program were critical to a successful transition from the research world to the data science world. Data curation and data science are playing an ever-larger role in surmounting current and future data challenges at the USGS, and the need for people with interests in both research and data science will continue to grow.
Pore-scale dynamics of salt transport in drying porous media
NASA Astrophysics Data System (ADS)
Shokri, N.
2013-12-01
Understanding the physics of water evaporation from saline porous media is important in many hydrological processes such as land-atmosphere interactions, water management, vegetation, soil salinity, and mineral-fluid interactions. We applied synchrotron x-ray micro-tomography to investigate the pore-scale dynamics of dissolved salt distribution in a three-dimensional drying saline porous medium, using a cylindrical plastic column (15 mm in height and 8 mm in diameter) packed with sand particles saturated with CaI2 solution (5% concentration by mass), at a spatial and temporal resolution of 12 microns and 30 min, respectively. Each time the drying sand column was imaged, two different images were recorded using distinct synchrotron X-ray energies immediately above (33.2690 keV) and below (33.0690 keV) the K-edge value of iodine (33.1694 keV). Taking the difference between pixel gray values enabled us to delineate the spatial and temporal distribution of CaI2 concentration at pore scale. The experiment was continued for 12 hours. Results indicate that during early stages of evaporation, air preferentially invades large pores at the surface while finer pores remain saturated and connected to the wet zone at the bottom via capillary-induced liquid flow. Consequently, the salt concentration increases preferentially in finer pores where evaporation occurs. The Peclet number (describing the competition between convection and diffusion) was greater than one in our experiment, resulting in higher salt concentrations closer to the evaporation surface and indicating a convection-driven process. The obtained salt profiles were used to evaluate the numerical solution of the convection-diffusion equation (CDE). Results show that the macro-scale CDE captures the overall trend of the measured salt profiles but fails to reproduce their exact slope. Our results shed new light on the physics of salt transport and its complex dynamics in drying porous media and establish synchrotron x-ray micro-tomography as an effective tool to investigate the dynamics of dissolved salt transport in porous media with high spatial and temporal resolutions.
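The macro-scale CDE evaluation mentioned above amounts to solving dC/dt = D*d2C/dz2 - v*dC/dz along the column. The following minimal finite-difference sketch, with illustrative (not experimental) parameter values, shows how a convection-dominated Peclet number produces salt accumulation toward the evaporating surface.

```python
import numpy as np

L, n = 0.015, 150              # column height (m) and number of grid points
dz = L / (n - 1)
D = 1.0e-9                     # salt diffusivity (m^2/s), assumed value
v = 2.0e-7                     # upward convective velocity (m/s), assumed value
dt = 0.2 * dz**2 / D           # explicit time step within the diffusive limit

peclet = v * L / D             # Pe > 1: convection-dominated transport
C = np.full(n, 0.05)           # initial 5% (by mass) salt concentration

for _ in range(int(12 * 3600 / dt)):        # 12 hours of drying
    conv = -v * (C[1:-1] - C[:-2]) / dz     # upwind convection term
    diff = D * (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dz**2
    C[1:-1] += dt * (conv + diff)
    C[0] = C[1]                             # zero-gradient at the column bottom
    C[-1] = C[-2] / (1.0 - dz * v / D)      # salt retained at the evaporating surface

print(f"Peclet number: {peclet:.1f}; surface concentration: {C[-1]:.3f}")
```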
Evaluating the Interdisciplinary Discoverability of Data
NASA Astrophysics Data System (ADS)
Gordon, S.; Habermann, T.
2017-12-01
Documentation needs are similar across communities. Communities tend to agree on many of the basic concepts necessary for discovery. Shared concepts such as a title or a description of the data exist in most metadata dialects. Many dialects have been designed and recommendations implemented to create metadata valuable for data discovery. These implementations can create barriers to discovering the right data. How can we ensure that the documentation we curate will be discoverable and understandable by researchers outside of our own disciplines and organizations? Since communities tend to use and understand many of the same documentation concepts, the barriers to interdisciplinary discovery are caused by differences in implementation. Thus, tools and methods designed for the conceptual layer, which evaluate records for documentation concepts regardless of dialect, can be effective in identifying opportunities for improvement and providing guidance. The Metadata Evaluation Web Service combined with a Jupyter Notebook interface allows a user to gather insight about a collection of records with respect to different communities' conceptual recommendations. It accomplishes this via data visualizations and provides links to implementation-specific guidance on the ESIP Wiki for each recommendation applied to the collection. By utilizing these curation tools as part of an iterative process, the data's impact can be increased by making it discoverable to the wider scientific and research community. Due to the conceptual focus of the methods and tools used, they can be utilized by any community or organization regardless of their documentation dialect or tools.
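A conceptual-layer evaluation of this kind reduces, at its core, to checking each record for the presence of shared discovery concepts rather than dialect-specific fields. The sketch below assumes a toy record structure and concept list; it is not the actual Metadata Evaluation Web Service.

```python
# Illustrative discovery concepts shared across metadata dialects.
DISCOVERY_CONCEPTS = ["title", "abstract", "keywords", "temporal_extent",
                      "spatial_extent", "contact", "distribution_url"]

def evaluate(records):
    """Return, for each concept, the fraction of records documenting it."""
    counts = {c: 0 for c in DISCOVERY_CONCEPTS}
    for rec in records:
        for concept in DISCOVERY_CONCEPTS:
            if rec.get(concept):  # concept present with a non-empty value
                counts[concept] += 1
    return {c: counts[c] / len(records) for c in DISCOVERY_CONCEPTS}

collection = [
    {"title": "Sea surface temperature", "abstract": "Daily SST fields...",
     "contact": "data@example.org"},
    {"title": "Soil moisture network", "keywords": ["soil", "moisture"]},
]
for concept, completeness in evaluate(collection).items():
    print(f"{concept:16s} {completeness:.0%}")
```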
A crystallographic perspective on sharing data and knowledge
NASA Astrophysics Data System (ADS)
Bruno, Ian J.; Groom, Colin R.
2014-10-01
The crystallographic community is in many ways an exemplar of the benefits and practices of sharing data. Since the inception of the technique, virtually every published crystal structure has been made available to others. This has been achieved through the establishment of several specialist data centres, including the Cambridge Crystallographic Data Centre, which produces the Cambridge Structural Database. Containing curated structures of small organic molecules, some containing a metal, the database has been produced for almost 50 years. This has required the development of complex informatics tools and an environment allowing expert human curation. As importantly, a financial model has evolved which has, to date, ensured the sustainability of the resource. However, the opportunities afforded by technological changes and changing attitudes to sharing data make it an opportune moment to review current practices.
Li, Zhao; Li, Jin; Yu, Peng
2018-01-01
Abstract Metadata curation has become increasingly important for biological discovery and biomedical research because a large amount of heterogeneous biological data is currently freely available. To facilitate efficient metadata curation, we developed an easy-to-use web-based curation application, GEOMetaCuration, for curating the metadata of Gene Expression Omnibus datasets. It can eliminate mechanical operations that consume precious curation time and can help coordinate curation efforts among multiple curators. It improves the curation process by introducing various features that are critical to metadata curation, such as a back-end curation management system and a curator-friendly front-end. The application is based on a commonly used web development framework of Python/Django and is open-sourced under the GNU General Public License V3. GEOMetaCuration is expected to benefit the biocuration community and to contribute to computational generation of biological insights using large-scale biological data. An example use case can be found at the demo website: http://geometacuration.yubiolab.org. Database URL: https://bitbucket.com/yubiolab/GEOMetaCuration PMID:29688376
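As a rough illustration of the back-end curation management such a Python/Django application needs, the model below tracks which curator owns which GEO series and its curation status. The schema is hypothetical, an assumption for illustration rather than GEOMetaCuration's actual data model.

```python
from django.db import models


class CurationAssignment(models.Model):
    """Tracks which curator owns a GEO series and the curation status."""

    STATUS_CHOICES = [("pending", "Pending"), ("in_progress", "In progress"),
                      ("done", "Curated"), ("flagged", "Needs review")]

    geo_accession = models.CharField(max_length=20)  # e.g. "GSE12345"
    curator = models.ForeignKey("auth.User", on_delete=models.CASCADE)
    status = models.CharField(max_length=20, choices=STATUS_CHOICES,
                              default="pending")
    updated = models.DateTimeField(auto_now=True)

    class Meta:
        unique_together = ("geo_accession", "curator")
```

A uniqueness constraint of this kind is one simple way to coordinate curation efforts among multiple curators, as the abstract describes.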
Script towards Research 2.0: The Influence of Digital and Online Tools in Academic Research
ERIC Educational Resources Information Center
Grosseck, Gabriela; Bran, Ramona
2016-01-01
The new Internet technologies have infiltrated in a stunning way the academic environment, both at individual and at institutional level. Therefore, more and more teachers have started educational blogs, librarians are active on Twitter, other educational actors curate web content, students post on Instagram or Flickr, and university departments…
MaizeGDB update: New tools, data, and interface for the maize model organism database
USDA-ARS?s Scientific Manuscript database
MaizeGDB is a highly curated, community-oriented database and informatics service to researchers focused on the crop plant and model organism Zea mays ssp. mays. Although some form of the maize community database has existed over the last 25 years, there have only been two major releases. In 1991, ...
Toward Lower Organic Environments in Astromaterial Sample Curation for Diverse Collections
NASA Technical Reports Server (NTRS)
Allton, J. H.; Allen, C. C.; Burkett, P. J.; Calaway, M. J.; Oehler, D. Z.
2012-01-01
Great interest was taken during the frenzied pace of the Apollo lunar sample return in achieving and monitoring organic cleanliness. Yet the first mission resulted in higher organic contamination of samples than desired, although improvements were accomplished by Apollo 12 [1]. Quarantine complicated the goal of achieving organic cleanliness by requiring negative-pressure glovebox containment environments, proximity of animal, plant and microbial organic sources, and use of organic sterilants in protocols. A special low-organic laboratory was set up at the University of California Berkeley (UCB) to cleanly subdivide a subset of samples [2, 3, 4]. Nevertheless, the basic approach of handling rocks and regolith inside a positive-pressure stainless steel glovebox and restricting the tool and container materials allowed in the gloveboxes was established by the last Apollo sample return. In the last 40 years, the collections have grown to encompass Antarctic meteorites, Cosmic Dust, Genesis solar wind, Stardust comet grains and Hayabusa asteroid grains. Each of these collections has unique curation requirements for organic contamination monitoring and control. Here we describe some changes allowed by improved technology or driven by changes in environmental regulations and the economy, concluding with comments on organic witness wafers. Future sample return missions (OSIRIS-REx; Mars; comets) will require extremely low levels of organic contamination in spacecraft collection and thus similarly low levels in curation. JSC Curation is undertaking a program to document organic baseline levels in current operations and devise ways to reduce those levels.
Children's Representation of Symbolic and Nonsymbolic Magnitude Examined with the Priming Paradigm
ERIC Educational Resources Information Center
Defever, Emmy; Sasanguie, Delphine; Gebuis, Titia; Reynvoet, Bert
2011-01-01
How people process and represent magnitude has often been studied using number comparison tasks. From the results of these tasks, a comparison distance effect (CDE) is generated, showing that it is easier to discriminate two numbers that are numerically further apart (e.g., 2 and 8) compared with numerically closer numbers (e.g., 6 and 8).…
Analytical Concept: Development of a Multinational Information Strategy
2008-10-31
[Report front matter; only table-of-contents and abstract fragments are recoverable: 1.4.3 Training and Mentoring/Coaching; 1.5 Analysis Requirements. The priority focus areas will become subject to experimentation in a number of consecutive phases of the 2008 Major Integrating Event (MIE). Within the MNE 5 CD&E program, the work focused on supporting concept validation in the 2008 MIE. The Analytical Concept outlines processes and…]
CAP4K Teacher Tour: Aligning State-Level Support with Classroom-Level Needs
ERIC Educational Resources Information Center
Colorado Department of Education, 2009
2009-01-01
In January 2009, the Colorado Department of Education (CDE) and the Colorado Education Association (CEA) initiated a 13-city teacher tour to engage teachers in a statewide discussion about CAP4K, its relevance to practice, its impact on teaching and learning and the kind of help that teachers would find useful for classroom implementation. Between…
Callahan, Jill E.; Munro, Cindy L.; Kitten, Todd
2011-01-01
Streptococcus sanguinis is an important component of dental plaque and a leading cause of infective endocarditis. Genetic competence in S. sanguinis requires a quorum sensing system encoded by the early comCDE genes, as well as late genes controlled by the alternative sigma factor, ComX. Previous studies of Streptococcus pneumoniae and Streptococcus mutans have identified functions for the >100-gene com regulon in addition to DNA uptake, including virulence. We investigated this possibility in S. sanguinis. Strains deleted for the comCDE or comX master regulatory genes were created. Using a rabbit endocarditis model in conjunction with a variety of virulence assays, we determined that both mutants possessed infectivity equivalent to that of a virulent control strain, and that measures of disease were similar in rabbits infected with each strain. These results suggest that the com regulon is not required for S. sanguinis infective endocarditis virulence in this model. We propose that the different roles of the S. sanguinis, S. pneumoniae, and S. mutans com regulons in virulence can be understood in relation to the pathogenic mechanisms employed by each species. PMID:22039480
A Comparison of Three Different Thick Epinucleus Removal Techniques in Cataract Surgery.
Hwang, Ho Sik; Lim, Byung-Su; Kim, Man Soo; Kim, Eun Chul
2017-01-01
To compare the outcomes of cataract surgery performed with three different epinucleus removal techniques (safe boat, infusion/aspiration (I/A) cannulas, and phacoemulsification tip). Ninety eyes with thick adhesive epinuclei were randomly subdivided into three groups according to epinucleus removal technique: epinucleus floating (safe boat) technique, 30 patients; I/A tip, 30 patients; and phaco tip, 30 patients. Intraoperative measurements included ultrasound time (UST), mean cumulative dissipated ultrasound energy (CDE), and balanced salt solution (BSS) use. Clinical measurements were made preoperatively and at one day, one month, and two months postoperatively, including best corrected visual acuity (BCVA), central corneal thickness (CCT), and endothelial cell count (ECC). Intraoperative measurements showed significantly less UST, CDE, and BSS use in the safe boat group than in the phaco tip group (p < 0.05). The percentage of endothelial cell loss in the safe boat group was significantly lower than that in the phaco tip group at two months post-cataract surgery (p < 0.05). The safe boat technique is a safer and more effective epinucleus removal technique than the phaco tip technique in cases with a thick epinucleus.
SABIO-RK: an updated resource for manually curated biochemical reaction kinetics
Rey, Maja; Weidemann, Andreas; Kania, Renate; Müller, Wolfgang
2018-01-01
Abstract SABIO-RK (http://sabiork.h-its.org/) is a manually curated database containing data about biochemical reactions and their reaction kinetics. The data are primarily extracted from the scientific literature and stored in a relational database. The content comprises both naturally occurring and alternatively measured biochemical reactions and is not restricted to any organism class. The data are made available to the public through a web-based search interface and through web services for programmatic access. In this update we describe major improvements and extensions of SABIO-RK since our last publication in the database issue of Nucleic Acids Research (2012). (i) The website has been completely revised and (ii) now also allows free-text search for kinetics data. (iii) Additional interlinkages with other databases in our field have been established, enabling users to gain comprehensive knowledge about the properties of enzymes and kinetics beyond SABIO-RK directly. (iv) Vice versa, direct access to SABIO-RK data has been implemented in several systems biology tools and workflows. (v) At the request of our experimental users, the data can now additionally be exported in spreadsheet formats. (vi) The newly established SABIO-RK Curation Service makes it possible to respond to specific data requirements. PMID:29092055
Updated regulation curation model at the Saccharomyces Genome Database
Engel, Stacia R; Skrzypek, Marek S; Hellerstedt, Sage T; Wong, Edith D; Nash, Robert S; Weng, Shuai; Binkley, Gail; Sheppard, Travis K; Karra, Kalpana; Cherry, J Michael
2018-01-01
Abstract The Saccharomyces Genome Database (SGD) provides comprehensive, integrated biological information for the budding yeast Saccharomyces cerevisiae, along with search and analysis tools to explore these data, enabling the discovery of functional relationships between sequence and gene products in fungi and higher organisms. We have recently expanded our data model for regulation curation to address regulation at the protein level in addition to transcription, and are presenting the expanded data on the ‘Regulation’ pages at SGD. These pages include a summary describing the context under which the regulator acts, manually curated and high-throughput annotations showing the regulatory relationships for that gene and a graphical visualization of its regulatory network and connected networks. For genes whose products regulate other genes or proteins, the Regulation page includes Gene Ontology enrichment analysis of the biological processes in which those targets participate. For DNA-binding transcription factors, we also provide other information relevant to their regulatory function, such as DNA binding site motifs and protein domains. As with other data types at SGD, all regulatory relationships and accompanying data are available through YeastMine, SGD’s data warehouse based on InterMine. Database URL: http://www.yeastgenome.org PMID:29688362
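Since YeastMine is an InterMine instance, the regulation data can be queried programmatically with the InterMine Python client. The service URL below is YeastMine's documented endpoint; the specific view and constraint paths are illustrative assumptions and may differ from the live data model.

```python
from intermine.webservice import Service

# Documented YeastMine web service endpoint.
service = Service("https://yeastmine.yeastgenome.org/yeastmine/service")

query = service.new_query("Gene")
# The view and constraint paths below are assumptions for illustration.
query.add_view("symbol", "regulationEvents.regulator.symbol")
query.add_constraint("symbol", "=", "GAL1")

for row in query.rows():
    print(row["symbol"], "is regulated by",
          row["regulationEvents.regulator.symbol"])
```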
NASA Astrophysics Data System (ADS)
Pignol, C.; Arnaud, F.; Godinho, E.; Galabertier, B.; Caillo, A.; Billy, I.; Augustin, L.; Calzas, M.; Rousseau, D. D.; Crosta, X.
2016-12-01
Managing scientific data is probably one of the most challenging issues in modern science. In the paleosciences the question is made even more sensitive by the need to preserve and manage high-value, fragile geological samples: cores. Large international scientific programs, such as IODP or ICDP, have led intense efforts to solve this problem and proposed detailed, high-standard workflows and dataflows for core handling and curation. However, many paleoscience results derive from small-scale research programs in which data and sample management is too often handled only locally - when it is handled at all. In this paper we present a national effort led in France to develop an integrated system to curate ice and sediment cores. Under the umbrella of the national excellence equipment program CLIMCOR, we launched a reflection on core curation and the management of associated fieldwork data. Our aim was to conserve all fieldwork data in an integrated cyber-environment which will evolve toward laboratory-acquired data storage in the near future. To do so, we worked in close collaboration with field operators as well as laboratory core curators in order to propose user-oriented solutions. The national core curating initiative proposes a single web portal in which all teams can store their fieldwork data. This portal is used as a national hub to attribute IGSNs. For legacy samples, this requires the establishment of a dedicated core list with associated metadata. For forthcoming core data, however, we developed a mobile application to capture technical and scientific data directly in the field. This application is linked to a unique coring-tools library and is adapted to most coring devices (gravity, drilling, percussion etc.), including multiple-section and multiple-hole coring operations. These field data can be uploaded automatically to the national portal, but also referenced through international standards (IGSN and INSPIRE) and displayed in international portals (currently, NOAA's IMLGS). In this paper, we present the architecture of the integrated system, future perspectives and the approach we adopted to reach our goals. We also present our mobile application through didactic examples.
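To make the dataflow concrete, the sketch below shows the kind of per-section fieldwork record such a mobile application could capture before upload and IGSN attribution. All field names, values, and the upload workflow are hypothetical illustrations, not the CLIMCOR portal's actual schema or API.

```python
import json

# Hypothetical per-section fieldwork record; field names are illustrative.
core_section = {
    "igsn": None,                       # attributed by the national hub on upload
    "campaign": "LAKE-EXAMPLE-2016",    # hypothetical fieldwork campaign
    "coring_device": "gravity corer",   # chosen from the coring-tools library
    "hole": "A",
    "section": 2,
    "latitude": 45.8654,                # WGS84 coordinates, as INSPIRE expects
    "longitude": 6.1697,
    "section_length_cm": 98.5,
    "operator": "C. Pignol",
}

payload = json.dumps(core_section, indent=2)
print(payload)  # in production, POSTed to the national portal for registration
```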
NASA Astrophysics Data System (ADS)
Ramdeen, S.; Hangsterfer, A.; Stanley, V. L.
2017-12-01
There is growing enthusiasm for curation of physical samples in the Earth Science community (see sessions at AGU, GSA, ESIP). Multiple federally funded efforts aim to develop best practices for curation of physical samples; however, these efforts have not yet been consolidated. Harmonizing these concurrent efforts would enable the community as a whole to build the necessary tools and community standards to move forward together. Preliminary research indicate the various groups focused on this topic are working in isolation, and the development of standards needs to come from the broadest view of `community'. We will investigate the gaps between communities by collecting information about preservation policies and practices from curators, who can provide a diverse cross-section of the grand challenges to the overall community. We will look at existing reports and study results to identify example cases, then develop a survey to gather large scale data to reinforce or clarify the example cases. We will be targeting the various community groups which are working on similar issues, and use the survey to improve the visibility of developed best practices. Given that preservation and digital collection management for physical samples are both important and difficult at present (GMRWG, 2015; NRC, 2002), barriers to both need to be addressed in order to achieve open science goals for the entire community. To address these challenges, EarthCube's iSamples, a research coordination network established to advance discoverability, access, and curation of physical samples using cyberinfrastructure, has formed a working group to collect use cases to examine the breadth of earth scientists' work with physical samples. This research team includes curators of state survey and oceanographic geological collections, and a researcher from information science. In our presentation, we will share our research and the design of the proposed survey. Our goal is to engage the audience in a discussion on next steps towards building this community. References: The Geologic Materials Repository Working Group, 2015, USGS Circular 1410 National Research Council. 2002. Geoscience Data and Collections: National Resources in Peril.
Maccari, Giuseppe; Robinson, James; Ballingall, Keith; Guethlein, Lisbeth A; Grimholt, Unni; Kaufman, Jim; Ho, Chak-Sum; de Groot, Natasja G; Flicek, Paul; Bontrop, Ronald E; Hammond, John A; Marsh, Steven G E
2017-01-04
The IPD-MHC Database project (http://www.ebi.ac.uk/ipd/mhc/) collects and expertly curates sequences of the major histocompatibility complex from non-human species and provides the infrastructure and tools to enable accurate analysis. Since the first release of the database in 2003, IPD-MHC has grown and currently hosts a number of specific sections, with more than 7000 alleles from 70 species, including non-human primates, canines, felines, equids, ovids, suids, bovins, salmonids and murids. These sequences are expertly curated and made publicly available through an open access website. The IPD-MHC Database is a key resource in its field, and this has led to an average of 1500 unique visitors and more than 5000 viewed pages per month. As the database has grown in size and complexity, it has created a number of challenges in maintaining and organizing information, particularly the need to standardize nomenclature and taxonomic classification, while incorporating new allele submissions. Here, we describe the latest database release, the IPD-MHC 2.0 and discuss planned developments. This release incorporates sequence updates and new tools that enhance database queries and improve the submission procedure by utilizing common tools that are able to handle the varied requirements of each MHC-group. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Chen, I-Min A; Markowitz, Victor M; Palaniappan, Krishna; Szeto, Ernest; Chu, Ken; Huang, Jinghua; Ratner, Anna; Pillay, Manoj; Hadjithomas, Michalis; Huntemann, Marcel; Mikhailova, Natalia; Ovchinnikova, Galina; Ivanova, Natalia N; Kyrpides, Nikos C
2016-04-26
The exponential growth of genomic data from next generation technologies renders traditional manual expert curation efforts unsustainable. Many genomic systems have included community annotation tools to address the problem. Most of these systems adopted a "Wiki-based" approach to take advantage of existing wiki technologies, but encountered obstacles in issues such as usability, authorship recognition, information reliability and incentives for community participation. Here, we present a different approach, relying on a tightly integrated method rather than a "Wiki-based" one, to support community annotation and user collaboration in the Integrated Microbial Genomes (IMG) system. The IMG approach allows users to use the existing IMG data warehouse and analysis tools to add gene, pathway and biosynthetic cluster annotations, to analyze/reorganize contigs, genes and functions using workspace datasets, and to share private user annotations and workspace datasets with collaborators. We show that the annotation effort using IMG can be part of the research process, overcoming the user incentive and authorship recognition problems and thus fostering collaboration among domain experts. The usability and reliability issues are addressed by the integration of curated information and analysis tools in IMG, together with DOE Joint Genome Institute (JGI) expert review. By incorporating annotation operations into IMG, we provide an integrated environment for users to perform deeper and extended data analysis and annotation in a single system that can lead to publications and community knowledge sharing, as shown in the case studies.
Caspi, Ron; Altman, Tomer; Dale, Joseph M.; Dreher, Kate; Fulcher, Carol A.; Gilham, Fred; Kaipa, Pallavi; Karthikeyan, Athikkattuvalasu S.; Kothari, Anamika; Krummenacker, Markus; Latendresse, Mario; Mueller, Lukas A.; Paley, Suzanne; Popescu, Liviu; Pujar, Anuradha; Shearer, Alexander G.; Zhang, Peifen; Karp, Peter D.
2010-01-01
The MetaCyc database (MetaCyc.org) is a comprehensive and freely accessible resource for metabolic pathways and enzymes from all domains of life. The pathways in MetaCyc are experimentally determined, small-molecule metabolic pathways and are curated from the primary scientific literature. With more than 1400 pathways, MetaCyc is the largest collection of metabolic pathways currently available. Pathway reactions are linked to one or more well-characterized enzymes, and both pathways and enzymes are annotated with reviews, evidence codes, and literature citations. BioCyc (BioCyc.org) is a collection of more than 500 organism-specific Pathway/Genome Databases (PGDBs). Each BioCyc PGDB contains the full genome and predicted metabolic network of one organism. The network, which is predicted by the Pathway Tools software using MetaCyc as a reference, consists of metabolites, enzymes, reactions and metabolic pathways. BioCyc PGDBs also contain additional features, such as predicted operons, transport systems, and pathway hole-fillers. The BioCyc Web site offers several tools for the analysis of the PGDBs, including Omics Viewers that enable visualization of omics datasets on two different genome-scale diagrams and tools for comparative analysis. The BioCyc PGDBs generated by SRI are offered for adoption by any party interested in curation of metabolic, regulatory, and genome-related information about an organism. PMID:19850718
Geib, Scott M; Hall, Brian; Derego, Theodore; Bremer, Forest T; Cannoles, Kyle; Sim, Sheina B
2018-04-01
One of the most overlooked, yet critical, components of a whole genome sequencing (WGS) project is the submission and curation of the data to a genomic repository, most commonly the National Center for Biotechnology Information (NCBI). While large genome centers or genome groups have developed software tools for post-annotation assembly filtering, annotation, and conversion into the NCBI's annotation table format, these tools typically require back-end setup and connection to a Structured Query Language (SQL) database and/or some knowledge of programming (Perl, Python) to implement. With WGS becoming commonplace, genome sequencing projects are moving away from the genome centers and into the ecology or biology lab, where fewer resources are present to support the process of genome assembly curation. To fill this gap, we developed software to assess, filter, and transfer annotation and convert a draft genome assembly and annotation set into the NCBI annotation table (.tbl) format, facilitating submission to the NCBI Genome Assembly database. This software has no dependencies, is compatible across platforms, and utilizes a simple command to perform a variety of simple and complex post-analysis, pre-NCBI submission WGS project tasks. The Genome Annotation Generator is a consistent and user-friendly bioinformatics tool that can be used to generate a .tbl file that is consistent with the NCBI submission pipeline. The Genome Annotation Generator achieves the goal of providing a publicly available tool that will facilitate the submission of annotated genome assemblies to the NCBI. It is useful for any individual researcher or research group that wishes to submit a genome assembly of their study system to the NCBI.
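For orientation, the toy sketch below shows the core of what such a conversion involves: emitting NCBI's five-column feature-table (.tbl) layout, with a ">Feature" header per sequence, tab-separated coordinates and feature keys, and tab-indented qualifiers. It is a simplification for illustration, not the Genome Annotation Generator itself, which also handles assembly filtering, annotation transfer, and coordinate fixing.

```python
# Toy features: (seqid, start, end, strand, feature type, locus_tag).
features = [
    ("contig1", 101, 1303, "+", "gene", "ABC_0001"),
    ("contig1", 101, 1303, "+", "CDS", "ABC_0001"),
]

def to_tbl(features):
    lines, current_seq = [], None
    for seqid, start, end, strand, ftype, locus in features:
        if seqid != current_seq:            # one ">Feature" header per sequence
            lines.append(f">Feature {seqid}")
            current_seq = seqid
        if strand == "-":                   # minus strand: coordinates reversed
            start, end = end, start
        lines.append(f"{start}\t{end}\t{ftype}")
        lines.append(f"\t\t\tlocus_tag\t{locus}")  # tab-indented qualifier
    return "\n".join(lines)

print(to_tbl(features))
```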
Patient registries: useful tools for clinical research in myasthenia gravis.
Baggi, Fulvio; Mantegazza, Renato; Antozzi, Carlo; Sanders, Donald
2012-12-01
Clinical registries may facilitate research on myasthenia gravis (MG) in several ways: as a source of demographic, clinical, biological, and immunological data on large numbers of patients with this rare disease; as a source of referrals for clinical trials; and by allowing rapid identification of MG patients with specific features. Physician-derived registries have the added advantage of incorporating diagnostic and treatment data that may allow comparison of outcomes from different therapeutic approaches, which can be supplemented with patient self-reported data. We report the demographic analysis of MG patients in two large physician-derived registries, the Duke MG Patient Registry, at the Duke University Medical Center, and the INNCB MG Registry, at the Istituto Neurologico Carlo Besta, as a preliminary study to assess the consistency of the two data sets. These registries share a common structure, with an inner core of common data elements (CDE) that facilitate data analysis. The CDEs are concordant with the MG-specific CDEs developed under the National Institute of Neurological Disorders and Stroke Common Data Elements Project. © 2012 New York Academy of Sciences.
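The value of a shared inner core of CDEs is that records from different registries can be validated and pooled mechanically. The sketch below shows one way such validation could look; the three example elements and their rules are illustrative inventions, not the actual NINDS MG-specific CDE set.

```python
# Hypothetical CDE definitions: name, expected type, permissible values/range.
CDES = {
    "age_at_onset":   {"type": int, "min": 0, "max": 120},
    "mgfa_class":     {"type": str, "values": {"I", "II", "III", "IV", "V"}},
    "achr_ab_status": {"type": str, "values": {"positive", "negative", "unknown"}},
}

def validate(record):
    """Return a list of CDE violations for one registry record."""
    errors = []
    for name, rule in CDES.items():
        value = record.get(name)
        if value is None:
            errors.append(f"missing CDE: {name}")
        elif not isinstance(value, rule["type"]):
            errors.append(f"{name}: expected {rule['type'].__name__}")
        elif "values" in rule and value not in rule["values"]:
            errors.append(f"{name}: '{value}' not a permissible value")
        elif "min" in rule and not (rule["min"] <= value <= rule["max"]):
            errors.append(f"{name}: {value} out of range")
    return errors

print(validate({"age_at_onset": 34, "mgfa_class": "II",
                "achr_ab_status": "positive"}))  # -> [] (conformant record)
```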
NASA Astrophysics Data System (ADS)
Lawrence, B.; Bennett, V.; Callaghan, S.; Juckes, M. N.; Pepler, S.
2013-12-01
The UK Centre for Environmental Data Archival (CEDA) hosts a number of formal data centres, including the British Atmospheric Data Centre (BADC), and is a partner in a range of national and international data federations, including the InfraStructure for the European Network for Earth system Simulation, the Earth System Grid Federation, and the distributed IPCC Data Distribution Centres. The mission of CEDA is to formally curate data from, and facilitate the doing of, environmental science. The twin aims are symbiotic: data curation helps facilitate science, and facilitating science helps with data curation. Here we cover how CEDA delivers this strategy by established internal processes supplemented by short-term projects, supported by staff with a range of roles. We show how CEDA adds value to data in the curated archive, and how it supports science, and show examples of the aforementioned symbiosis. We begin by discussing curation: CEDA has the formal responsibility for curating the data products of atmospheric science and earth observation research funded by the UK Natural Environment Research Council (NERC). However, curation is not just about the provider community, the consumer communities matter too, and the consumers of these data cross the boundaries of science, including engineers, medics, as well as the gamut of the environmental sciences. There is a small, and growing cohort of non-science users. For both producers and consumers of data, information about data is crucial, and a range of CEDA staff have long worked on tools and techniques for creating, managing, and delivering metadata (as well as data). CEDA "science support" staff work with scientists to help them prepare and document data for curation. As one of a spectrum of activities, CEDA has worked on data Publication as a method of both adding value to some data, and rewarding the effort put into the production of quality datasets. As such, we see this activity as both a curation and a facilitation activity. A range of more focused facilitation activities are carried out, from providing a computing platform suitable for big-data analytics (the Joint Analysis System, JASMIN), to working on distributed data analysis (EXARCH), and the acquisition of third party data to support science and impact (e.g. in the context of the facility for Climate and Environmental Monitoring from Space, CEMS). We conclude by confronting the view of Parsons and Fox (2013) that metaphors such as Data Publication, Big Iron, Science Support etc are limiting, and suggest the CEDA experience is that these sorts of activities can and do co-exist, much as they conclude they should. However, we also believe that within co-existing metaphors, production systems need to be limited in their scope, even if they are on a road to a more joined up infrastructure. We shouldn't confuse what we can do now with what we might want to do in the future.
Changing the Curation Equation: A Data Lifecycle Approach to Lowering Costs and Increasing Value
NASA Astrophysics Data System (ADS)
Myers, J.; Hedstrom, M.; Plale, B. A.; Kumar, P.; McDonald, R.; Kooper, R.; Marini, L.; Kouper, I.; Chandrasekar, K.
2013-12-01
What if everything that researchers know about their data, and everything their applications know, were directly available to curators? What if all the information that data consumers discover and infer about data were also available? What if curation and preservation activities occurred incrementally, during research projects instead of after they end, and could be leveraged to make it easier to manage research data from the moment of its creation? These are questions that the Sustainable Environments - Actionable Data (SEAD) project, funded as part of the National Science Foundation's DataNet partnership, was designed to answer. Data curation is challenging, but it is made more difficult by the historical separation of data production, data use, and formal curation activities across organizations, locations, and applications, and across time. Modern computing and networking technologies allow a much different approach in which data and metadata can easily flow between these activities throughout the data lifecycle, and in which heterogeneous and evolving data and metadata can be managed. Sustainability research, SEAD's initial focus area, is a clear example of an area where the nature of the research (cross-disciplinary, integrating heterogeneous data from independent sources, small teams, rapid evolution of sensing and analysis techniques) and the barriers and costs inherent in traditional methods have limited adoption of existing curation tools and techniques, to the detriment of overall scientific progress. To explore these ideas and create a sustainable curation capability for communities such as sustainability research, the SEAD team has developed and is now deploying an interacting set of open source data services that demonstrate this approach. These services provide end-to-end support for management of data during research projects; publication of that data into long-term archives; and integration of it into community networks of publications, research center activities, and synthesis efforts. They build on a flexible 'semantic content management' architecture and incorporate notions of 'active' and 'social' curation - continuous, incremental curation activities performed by the data producers (active) and the community (social) that are motivated by a range of direct benefits. Examples include the use of metadata (tags) to allow generation of custom geospatial maps, automated metadata extraction to generate rich data pages for known formats, and the use of information about data authorship to allow automatic updates of personal and project research profiles when data is published. In this presentation, we describe the core capabilities of SEAD's services and their application in sustainability research. We also outline the key features of the SEAD architecture - the use of global semantic identifiers, extensible data and metadata models, web services to manage context shifts, scalable cloud storage - and highlight how this approach is particularly well suited to extension by independent third parties. We conclude with thoughts on how this approach can be applied to challenging issues such as exposing 'dark' data and reducing duplicate creation of derived data products, and can provide a new level of analytics for community analysis and coordination.
ERIC Educational Resources Information Center
Sweet, James A.
This paper reviews some of the issues and concerns that have prompted the growth of interest in the demography of the family. It also examines a number of aspects of the family and the household structure of racial and ethnic minorities. The five racial minorities discussed are blacks, Chinese, Japanese, Filipinos and American Indians. The…
ERIC Educational Resources Information Center
Austin, G.; Duerr, M.
2005-01-01
No Child Left Behind (NCLB) mandates that schools receiving federal Safe and Drug-Free Schools and Community (SDFSC) funds conduct an anonymous teacher survey of the incidence of, prevalence of, and attitudes related to drug use and violence. To meet this NCLB mandate, the California Department of Education (CDE) funded the development of the…
ERIC Educational Resources Information Center
Vincent, Jeffrey M.
2016-01-01
This study aims to inform the California Department of Education (CDE) in ensuring the standards contained in Title 5 appropriately promote the planning and design of healthy, safe and educationally suitable K-12 school facilities. The study gathers and analyzes K-12 facility standards in ten case study states across the country to understand…
ERIC Educational Resources Information Center
Chamberlin, James L.
2007-01-01
Over the past five years the Colorado Department of Education (CDE) has used the results of the Colorado Student Assessment Program (CSAP) to rate public school performance on the School Accountability Report (SAR). The public often considers the school ratings as indicative of the school's quality. There appears to be a lack of quantitative…
Household Composition Change and Economic Welfare Inequality: 1960 to 1980. CDE Working Paper 88-18.
ERIC Educational Resources Information Center
Wojtkiewicz, Roger A.
This paper considers the following dimensions of the change in household composition of the United States population between 1960 and 1980: (1) a decrease in the number of households headed by a couple; and (2) a decrease in the number of children per household. Examination of census figures from 1960, 1970, and 1980 reveals that blacks and whites…
1989-03-22
[Garbled front matter from a JPRS arms control report (JPRS-TAC-89-012, 22 March 1989). Recoverable fragments: "...to remove the armour which straitjacketed collaboration, understanding and detente on this continent"; table-of-contents entries including "USSR-Polish-GDR Military Exercise Announced Under CDE Accord [East Berlin ADN 8 Mar]", "Foreign Minister Mladenov Speaks at CFE/CSBM", "Two Tank Regiments Dissolved, Ceremonies Held [PAP 4 Mar]", and "General Staff Academy To Undergo Restructuring Under New Defensive..."]
ERIC Educational Resources Information Center
Vincent, Jeffrey M.
2016-01-01
The purpose of this study was to inform the California Department of Education (CDE) in ensuring the standards contained in Title 5 appropriately promote the planning and design of healthy, safe and educationally suitable K-12 school facilities. The study gathers and analyzes K-12 facility standards in other states across the country to understand…
Ruddell, Richard G; Knight, Belinda; Tirnitz-Parker, Janina E E; Akhurst, Barbara; Summerville, Lesa; Subramaniam, V Nathan; Olynyk, John K; Ramm, Grant A
2009-01-01
Lymphotoxin-beta (LTbeta) is a proinflammatory cytokine and a member of the tumor necrosis factor (TNF) superfamily known for its role in mediating lymph node development and homeostasis. Our recent studies suggest a role for LTbeta in mediating the pathogenesis of human chronic liver disease. We hypothesize that LTbeta co-ordinates the wound healing response in liver injury via direct effects on hepatic stellate cells. This study used the choline-deficient, ethionine-supplemented (CDE) dietary model of chronic liver injury, which induces inflammation, liver progenitor cell proliferation, and portal fibrosis, to assess (1) the cellular expression of LTbeta, and (2) the role of LTbeta receptor (LTbetaR) in mediating wound healing, in LTbetaR(-/-) versus wild-type mice. In addition, primary isolates of hepatic stellate cells were treated with LTbetaR-ligands LTbeta and LTbeta-related inducible ligand competing for glycoprotein D binding to herpesvirus entry mediator on T cells (LIGHT), and mediators of hepatic stellate cell function and fibrogenesis were assessed. LTbeta was localized to progenitor cells immediately adjacent to activated hepatic stellate cells in the periportal region of the liver in wild-type mice fed the CDE diet. LTbetaR(-/-) mice fed the CDE diet showed significantly reduced fibrosis and a dysregulated immune response. LTbetaR was demonstrated on isolated hepatic stellate cells, which when stimulated by LTbeta and LIGHT, activated the nuclear factor kappa B (NF-kappaB) signaling pathway. Neither LTbeta nor LIGHT had any effect on alpha-smooth muscle actin, tissue inhibitor of metalloproteinase 1, transforming growth factor beta, or procollagen alpha(1)(I) expression; however, leukocyte recruitment-associated factors intercellular adhesion molecule 1 and regulated upon activation T cells expressed and secreted (RANTES) were markedly up-regulated. RANTES caused the chemotaxis of a liver progenitor cell line expressing CCR5. This study suggests that LTbetaR on hepatic stellate cells may be involved in paracrine signaling with nearby LTbeta-expressing liver progenitor cells mediating recruitment of progenitor cells, hepatic stellate cells, and leukocytes required for wound healing and regeneration during chronic liver injury.
BioCreative III interactive task: an overview
2011-01-01
Background The BioCreative challenge evaluation is a community-wide effort for evaluating text mining and information extraction systems applied to the biological domain. The biocurator community, as an active user of biomedical literature, provides a diverse and engaged end user group for text mining tools. Earlier BioCreative challenges involved many text mining teams in developing basic capabilities relevant to biological curation, but they did not address the issues of system usage, insertion into the workflow and adoption by curators. Thus in BioCreative III (BC-III), the InterActive Task (IAT) was introduced to address the utility and usability of text mining tools for real-life biocuration tasks. To support the aims of the IAT in BC-III, involvement of both developers and end users was solicited, and the development of a user interface to address the tasks interactively was requested. Results A User Advisory Group (UAG) actively participated in the IAT design and assessment. The task focused on gene normalization (identifying gene mentions in the article and linking these genes to standard database identifiers), gene ranking based on the overall importance of each gene mentioned in the article, and gene-oriented document retrieval (identifying full text papers relevant to a selected gene). Six systems participated and all processed and displayed the same set of articles. The articles were selected based on content known to be problematic for curation, such as ambiguity of gene names, coverage of multiple genes and species, or introduction of a new gene name. Members of the UAG curated three articles for training and assessment purposes, and each member was assigned a system to review. A questionnaire related to the interface usability and task performance (as measured by precision and recall) was answered after systems were used to curate articles. Although the limited number of articles analyzed and users involved in the IAT experiment precluded rigorous quantitative analysis of the results, a qualitative analysis provided valuable insight into some of the problems encountered by users when using the systems. The overall assessment indicates that the system usability features appealed to most users, but the system performance was suboptimal (mainly due to low accuracy in gene normalization). Some of the issues included failure of species identification and gene name ambiguity in the gene normalization task leading to an extensive list of gene identifiers to review, which, in some cases, did not contain the relevant genes. The document retrieval suffered from the same shortfalls. The UAG favored achieving high performance (measured by precision and recall), but strongly recommended the addition of features that facilitate the identification of correct gene and its identifier, such as contextual information to assist in disambiguation. Discussion The IAT was an informative exercise that advanced the dialog between curators and developers and increased the appreciation of challenges faced by each group. A major conclusion was that the intended users should be actively involved in every phase of software development, and this will be strongly encouraged in future tasks. The IAT Task provides the first steps toward the definition of metrics and functional requirements that are necessary for designing a formal evaluation of interactive curation systems in the BioCreative IV challenge. PMID:22151968
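The precision and recall figures used to assess the gene normalization task follow the usual set-based definitions: predicted gene identifiers for an article are compared against the curator-assigned gold standard. A minimal sketch with made-up identifiers:

```python
def precision_recall(predicted, gold):
    """Set-based precision and recall for one article's gene identifiers."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)                    # correctly normalized genes
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

system_output = {"NCBIGene:7157", "NCBIGene:672", "NCBIGene:675", "NCBIGene:5290"}
curator_gold  = {"NCBIGene:7157", "NCBIGene:672", "NCBIGene:1017"}

p, r = precision_recall(system_output, curator_gold)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.50 recall=0.67
```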
Cataloging the biomedical world of pain through semi-automated curation of molecular interactions
Jamieson, Daniel G.; Roberts, Phoebe M.; Robertson, David L.; Sidders, Ben; Nenadic, Goran
2013-01-01
The vast collection of biomedical literature and its continued expansion has presented a number of challenges to researchers who require structured findings to stay abreast of and analyze molecular mechanisms relevant to their domain of interest. By structuring literature content into topic-specific machine-readable databases, the aggregate data from multiple articles can be used to infer trends that can be compared and contrasted with similar findings from topic-independent resources. Our study presents a generalized procedure for semi-automatically creating a custom topic-specific molecular interaction database through the use of text mining to assist manual curation. We apply the procedure to capture molecular events that underlie ‘pain’, a complex phenomenon with a large societal burden and unmet medical need. We describe how existing text mining solutions are used to build a pain-specific corpus, extract molecular events from it, add context to the extracted events and assess their relevance. The pain-specific corpus contains 765 692 documents from Medline and PubMed Central, from which we extracted 356 499 unique normalized molecular events, with 261 438 single protein events and 93 271 molecular interactions supplied by BioContext. Event chains are annotated with negation, speculation, anatomy, Gene Ontology terms, mutations, pain and disease relevance, which collectively provide detailed insight into how that event chain is associated with pain. The extracted relations are visualized in a wiki platform (wiki-pain.org) that enables efficient manual curation and exploration of the molecular mechanisms that underlie pain. Curation of 1500 grouped event chains ranked by pain relevance revealed 613 accurately extracted unique molecular interactions that in the future can be used to study the underlying mechanisms involved in pain. Our approach demonstrates that combining existing text mining tools with domain-specific terms and wiki-based visualization can facilitate rapid curation of molecular interactions to create a custom database. Database URL: ••• PMID:23707966
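The ranking step, grouping extracted event chains and ordering them by pain relevance before manual review, can be illustrated with a simple term-matching scorer. The scoring scheme and data below are assumptions for illustration, not the authors' pipeline, which draws on ontologies and richer contextual annotations.

```python
# Assumed pain vocabulary; the real system uses domain-specific term lists.
PAIN_TERMS = {"pain", "nociception", "hyperalgesia", "allodynia", "analgesic"}

def pain_relevance(event_chain):
    """Fraction of the chain's evidence sentences mentioning a pain term."""
    sentences = event_chain["evidence"]
    hits = sum(any(t in s.lower() for t in PAIN_TERMS) for s in sentences)
    return hits / len(sentences)

chains = [
    {"interaction": "TRPV1 -> CGRP release",
     "evidence": ["Capsaicin-evoked pain required TRPV1.",
                  "CGRP release followed receptor activation."]},
    {"interaction": "p53 -> MDM2",
     "evidence": ["MDM2 ubiquitinates p53 in tumour cells."]},
]

# Curators review the highest-scoring chains first.
for chain in sorted(chains, key=pain_relevance, reverse=True):
    print(f"{pain_relevance(chain):.2f}  {chain['interaction']}")
```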
The Principles for Successful Scientific Data Management Revisited
NASA Astrophysics Data System (ADS)
Walker, R. J.; King, T. A.; Joy, S. P.
2005-12-01
It has been 23 years since the National Research Council's Committee on Data Management and Computation (CODMAC) published its famous list of principles for successful scientific data management that have provided the framework for modern space science data management. CODMAC outlined seven principles: 1. Scientific Involvement in all aspects of space science missions. 2. Scientific Oversight of all scientific data-management activities. 3. Data Availability - Validated data should be made available to the scientific community in a timely manner. They should include appropriate ancillary data and complete documentation. 4. Facilities - A proper balance between cost and scientific productivity should be maintained. 5. Software - Transportable, well-documented software should be available to process and analyze the data. 6. Scientific Data Storage - The data should be preserved in retrievable form. 7. Data System Funding - Adequate data funding should be made available at the outset of missions and protected from overruns. In this paper we will review the lessons learned in trying to apply these principles to space-derived data. The Planetary Data System created the concept of data curation to carry out the CODMAC principles. Data curators are scientists and technologists who work directly with the mission scientists to create data products. The efficient application of the CODMAC principles requires that data curators and the mission team start early in a mission to plan for data access and archiving. To build the data products, the planetary discipline adopted data access and documentation standards and has adhered to them. The data curators and mission team work together to produce data products and make them available. However, even with early planning and agreement on standards, the needs of the science community frequently far exceed the available resources. This is especially true for smaller principal-investigator-run missions. We will argue that one way to make data systems for small missions more effective is for the data curators to provide software tools to help develop the mission data system.
Won, Young-Woong; Joo, Jungnam; Yun, Tak; Lee, Geon-Kook; Han, Ji-Youn; Kim, Heung Tae; Lee, Jin Soo; Kim, Moon Soo; Lee, Jong Mog; Lee, Hyun-Sung; Zo, Jae Ill; Kim, Sohee
2015-05-01
Development of brain metastasis results in a significant reduction in overall survival. However, there is no effective tool to predict brain metastasis in non-small cell lung cancer (NSCLC) patients. We conducted this study to develop a feasible nomogram that can predict metastasis to the brain as the first relapse site in patients with curatively resected NSCLC. A retrospective review of NSCLC patients who had received curative surgery at the National Cancer Center (Goyang, South Korea) between 2001 and 2008 was performed. We chose metastasis to the brain as the first relapse site after curative surgery as the primary endpoint of the study. A nomogram was modeled using logistic regression. Among 1218 patients, brain metastasis as the first relapse developed in 87 patients (7.14%) during the median follow-up of 43.6 months. Occurrence rates of brain metastasis were higher in patients with adenocarcinoma or those with a high pT and pN stage. Younger age appeared to be associated with brain metastasis, but this result was not statistically significant. The final prediction model included histology, smoking status, pT stage, and the interaction between adenocarcinoma and pN stage. The model showed fairly good discriminatory ability with a C-statistic of 69.3% and 69.8% for predicting brain metastasis within 2 years and 5 years, respectively. Internal validation using 2000 bootstrap samples resulted in C-statistics of 67.0% and 67.4%, which still indicated good discriminatory performance. The nomogram presented here provides the individual risk estimate of developing metastasis to the brain as the first relapse site in patients with NSCLC who have undergone curative surgery. Surveillance programs or preventive treatment strategies for brain metastasis could be established based on this nomogram. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
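The modelling recipe described, a logistic regression summarized by a C-statistic and internally validated with bootstrap resampling, can be sketched as follows on simulated data (not the study cohort). Note that the published analysis reports optimism-corrected C-statistics, of which the refit-and-rescore step below is only one ingredient.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1218                                    # cohort size from the abstract
X = rng.normal(size=(n, 4))                 # stand-ins for histology, smoking, pT, pN
logit = 0.8 * X[:, 0] + 0.5 * X[:, 3] - 2.6
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # roughly 7-8% event rate

model = LogisticRegression().fit(X, y)
# The C-statistic for a binary outcome equals the area under the ROC curve.
c_apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Internal validation: refit on bootstrap resamples, score on the original
# sample (the paper used 2000 resamples; 200 keeps the sketch quick).
c_boot = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    m = LogisticRegression().fit(X[idx], y[idx])
    c_boot.append(roc_auc_score(y, m.predict_proba(X)[:, 1]))

print(f"apparent C-statistic: {c_apparent:.3f}; bootstrap mean: {np.mean(c_boot):.3f}")
```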
Cataloging the biomedical world of pain through semi-automated curation of molecular interactions.
Jamieson, Daniel G; Roberts, Phoebe M; Robertson, David L; Sidders, Ben; Nenadic, Goran
2013-01-01
The vast collection of biomedical literature and its continued expansion has presented a number of challenges to researchers who require structured findings to stay abreast of and analyze molecular mechanisms relevant to their domain of interest. By structuring literature content into topic-specific machine-readable databases, the aggregate data from multiple articles can be used to infer trends that can be compared and contrasted with similar findings from topic-independent resources. Our study presents a generalized procedure for semi-automatically creating a custom topic-specific molecular interaction database through the use of text mining to assist manual curation. We apply the procedure to capture molecular events that underlie 'pain', a complex phenomenon with a large societal burden and unmet medical need. We describe how existing text mining solutions are used to build a pain-specific corpus, extract molecular events from it, add context to the extracted events and assess their relevance. The pain-specific corpus contains 765 692 documents from Medline and PubMed Central, from which we extracted 356 499 unique normalized molecular events, with 261 438 single protein events and 93 271 molecular interactions supplied by BioContext. Event chains are annotated with negation, speculation, anatomy, Gene Ontology terms, mutations, pain and disease relevance, which collectively provide detailed insight into how that event chain is associated with pain. The extracted relations are visualized in a wiki platform (wiki-pain.org) that enables efficient manual curation and exploration of the molecular mechanisms that underlie pain. Curation of 1500 grouped event chains ranked by pain relevance revealed 613 accurately extracted unique molecular interactions that in the future can be used to study the underlying mechanisms involved in pain. Our approach demonstrates that combining existing text mining tools with domain-specific terms and wiki-based visualization can facilitate rapid curation of molecular interactions to create a custom database. Database URL: •••
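The relevance-assessment step described above can be pictured with a toy ranking function. The pain terms, weights and example events below are invented for illustration and do not reflect the BioContext output format:

```python
# Toy relevance ranking: score extracted event chains against pain-specific
# terms and contextual annotations so curators review the best candidates first.
PAIN_TERMS = {"nociception", "hyperalgesia", "allodynia", "pain"}

def pain_relevance(event):
    """Score an extracted event chain for pain relevance (higher = review first)."""
    score = sum(term in event["sentence"].lower() for term in PAIN_TERMS)
    if event.get("anatomy") in {"dorsal root ganglion", "spinal cord"}:
        score += 2                 # pain-relevant anatomy boosts the score
    if event.get("negated"):
        score -= 1                 # negated findings are less useful
    return score

events = [
    {"sentence": "TRPV1 activation increases nociception",
     "anatomy": "dorsal root ganglion", "negated": False},
    {"sentence": "BRCA1 binds BARD1", "anatomy": None, "negated": False},
    {"sentence": "NGF did not induce hyperalgesia",
     "anatomy": "spinal cord", "negated": True},
]
for ev in sorted(events, key=pain_relevance, reverse=True):
    print(pain_relevance(ev), ev["sentence"])
```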
[Comparative analysis of child development screening tools designed and validated in Mexico].
Orcajo-Castelán, Rodrigo; Sidonio-Aguayo, Beatriz; Alcacio-Mendoza, Jorge Augusto; López-Díaz, Giovana Lucía
In recent years a number of child development screening tools have been developed in Mexico; however, their properties have not been compared. The objective of this review was to compare the reporting quality and risk of bias of the screening tools developed and validated in Mexico, in their published versions. A search was conducted in databases, gray literature and cross-references. The resulting tests were compared and analyzed using STARD, QUADAS and QUADAS-2 criteria. "Valoración Neuroconductual del Desarrollo del Lactante" (VANEDELA), "Evaluación del Desarrollo Infantil" (EDI; CDE in English), "Prueba de Tamiz del Neurodesarrollo Infantil" (PTNI), "Cartillas de Vigilancia para identificar alteraciones en el Desarrollo del Lactante" (CVDL) and "Indicadores de riesgo del Perfil de Conductas de Desarrollo" (INDIPCD-R) were included in the comparison. No test fulfilled all STARD items. The most complete in their methodological description were VANEDELA and EDI. The areas most lacking in data in the reports were recruitment and patient selection (VANEDELA, PTNI, CVDL, INDIPCD-R). In the QUADAS evaluation, all had some risk of bias, but serious concerns about risk of bias were raised by patient sampling and by the choice of gold standard in two tests (PTNI, INDIPCD-R). Child development screening tests created and validated in Mexico have variable reporting quality and risk of bias. The test with the best validation reporting quality is VANEDELA, and the one with the lowest risk of bias is EDI. Copyright © 2015 Hospital Infantil de México Federico Gómez. Published by Masson Doyma México S.A. All rights reserved.
OntoCheck: verifying ontology naming conventions and metadata completeness in Protégé 4
2012-01-01
Background Although policy providers have outlined minimal metadata guidelines and naming conventions, ontologies of today still display inter- and intra-ontology heterogeneities in class labelling schemes and metadata completeness. This fact is at least partially due to missing or inappropriate tools. Software support can ease this situation and contribute to overall ontology consistency and quality by helping to enforce such conventions. Objective We provide a plugin for the Protégé ontology editor to allow for easy checks on compliance with ontology naming conventions and metadata completeness, as well as curation of any violations found. Implementation In a requirement analysis, derived from a prior standardization approach carried out within the OBO Foundry, we investigate the capabilities software tools need in order to check, curate and maintain class naming conventions. A Protégé tab plugin was implemented accordingly using the Protégé 4.1 libraries. The plugin was tested on six different ontologies. Based on these test results, the plugin was refined, including through the integration of new functionalities. Results The new Protégé plugin, OntoCheck, allows ontology tests to be carried out on OWL ontologies. In particular, the OntoCheck plugin helps to clean up an ontology with regard to lexical heterogeneity, i.e. enforcing naming conventions and metadata completeness, meeting most of the requirements outlined for such a tool. Violations found by the tests can be corrected to foster consistency in entity naming and meta-annotation within an artefact. Once specified, check constraints like name patterns can be stored and exchanged for later re-use. Here we describe a first version of the software, illustrate its capabilities and use within running ontology development efforts, and briefly outline improvements resulting from its application. Further, we discuss OntoCheck's capabilities in the context of related tools and highlight potential future expansions. Conclusions The OntoCheck plugin facilitates labelling error detection and curation, contributing to lexical quality assurance in OWL ontologies. Ultimately, we hope this Protégé extension will ease ontology alignments as well as lexical post-processing of annotated data and hence can increase overall secondary data usage by humans and computers. PMID:23046606
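As a rough illustration of the kind of checks OntoCheck automates, the following standalone sketch uses the owlready2 library (not the Protégé plugin itself) to flag classes that lack labels or violate an assumed lowercase naming convention; the ontology file name and the label pattern are placeholders:

```python
# Sketch: flag missing labels and naming-convention violations in an OWL file.
import re
from owlready2 import get_ontology

LABEL_PATTERN = re.compile(r"^[a-z][a-z0-9 \-]*$")   # assumed naming convention

onto = get_ontology("file://example.owl").load()     # hypothetical ontology file
for cls in onto.classes():
    labels = list(cls.label)
    if not labels:
        print(f"MISSING LABEL: {cls.iri}")           # metadata completeness violation
    elif not any(LABEL_PATTERN.match(lab) for lab in labels):
        print(f"NAMING VIOLATION: {cls.iri} -> {labels}")
```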
Advanced Curation Preparation for Mars Sample Return and Cold Curation
NASA Technical Reports Server (NTRS)
Fries, M. D.; Harrington, A. D.; McCubbin, F. M.; Mitchell, J.; Regberg, A. B.; Snead, C.
2017-01-01
NASA Curation is tasked with the care and distribution of NASA's sample collections, such as the Apollo lunar samples and cometary material collected by the Stardust spacecraft. Curation is also mandated to perform Advanced Curation research and development, which includes improving the curation of existing collections as well as preparing for future sample return missions. Advanced Curation has identified a suite of technologies and techniques that will require attention ahead of Mars sample return (MSR) and missions with cold curation (CCur) requirements, perhaps including comet sample return missions.
[Management of pre-analytical nonconformities].
Berkane, Z; Dhondt, J L; Drouillard, I; Flourié, F; Giannoli, J M; Houlbert, C; Surgat, P; Szymanowicz, A
2010-12-01
The main nonconformities are enumerated to facilitate consensual codification. In each case, an action is defined: refusal to perform the examination with a request for a new sample, a request for information or correction, cancellation of results, or notification of the nurse or physician. Traceability of the curative, corrective and preventive actions is needed. Then, a methodology and indicators are proposed to assess nonconformities and to follow the quality improvements. The laboratory information system can be used instead of dedicated software. Tools for the follow-up of nonconformity scores are proposed. Finally, we propose an organization and tools allowing the management and control of the nonconformities occurring during the pre-examination phase.
Jung, Da Hyun; Lee, Yong Chan; Kim, Jie-Hyun; Lee, Sang Kil; Shin, Sung Kwan; Park, Jun Chul; Chung, Hyunsoo; Park, Jae Jun; Youn, Young Hoon; Park, Hyojin
2017-03-01
Endoscopic resection (ER) is accepted as a curative treatment option for selected cases of early gastric cancer (EGC). Although additional surgery is often recommended for patients who have undergone non-curative ER, clinicians are cautious when managing elderly patients with gastric cancer because of comorbid conditions. The aim of the study was to investigate clinical outcomes in elderly patients following non-curative ER with and without additive treatment. Subjects included 365 patients (>75 years old) who were diagnosed with EGC and underwent ER between 2007 and 2015. Clinical outcomes of three patient groups [curative ER (n = 246), non-curative ER with additive treatment (n = 37), non-curative ER without additive treatment (n = 82)] were compared. Among the patients who underwent non-curative ER with additive treatment, 28 received surgery, three received a repeat ER, and six underwent argon plasma coagulation. Patients who underwent non-curative ER alone were significantly older than those who underwent additive treatment. Overall 5-year survival rates in the curative ER, non-curative ER with treatment, and non-curative ER without treatment groups were 84%, 86%, and 69%, respectively. No significant difference in overall survival was found between patients in the curative ER and non-curative ER with additive treatment groups. The non-curative ER groups were categorized by lymph node metastasis risk factors to create a high-risk group that exhibited positive lymphovascular invasion (LVI) or deep submucosal invasion greater than SM2, and a low-risk group without risk factors. The overall 5-year survival rate was lowest (60%) in the high-risk group with non-curative ER and no additive treatment. Elderly patients who underwent non-curative ER with additive treatment showed better survival outcomes than those without treatment. Therefore, especially with LVI or deep submucosal invasion, additive treatment is recommended in patients undergoing non-curative ER, even if they are older than 75 years.
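For readers wanting to reproduce this kind of group-wise survival comparison from patient-level data, a minimal sketch with the lifelines package follows; the DataFrame, column names and follow-up values are placeholders, not the study data:

```python
# Sketch: group-wise Kaplan-Meier estimates and approximate 5-year survival.
import pandas as pd
from lifelines import KaplanMeierFitter

# Hypothetical patient-level data: follow-up months, death indicator, group.
df = pd.DataFrame({
    "months": [60, 24, 72, 48, 36, 80, 12, 66],
    "died":   [0,  1,  0,  1,  1,  0,  1,  0],
    "group":  ["curative", "non-curative+tx", "curative", "non-curative",
               "non-curative", "non-curative+tx", "non-curative", "curative"],
})

kmf = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    kmf.fit(sub["months"], event_observed=sub["died"], label=name)
    # survival probability at 60 months approximates the 5-year rate
    print(name, float(kmf.survival_function_at_times(60).iloc[0]))
```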
Curating NASA's Past, Present, and Future Extraterrestrial Sample Collections
NASA Technical Reports Server (NTRS)
McCubbin, F. M.; Allton, J. H.; Evans, C. A.; Fries, M. D.; Nakamura-Messenger, K.; Righter, K.; Zeigler, R. A.; Zolensky, M.; Stansbery, E. K.
2016-01-01
The Astromaterials Acquisition and Curation Office (henceforth referred to herein as NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10E "Curation of Extraterrestrial Materials", JSC is charged with "...curation of all extra-terrestrial material under NASA control, including future NASA missions." The Directive goes on to define Curation as including "...documentation, preservation, preparation, and distribution of samples for research, education, and public outreach." Here we describe some of the past, present, and future activities of the NASA Curation Office.
Curating the Web: Building a Google Custom Search Engine for the Arts
ERIC Educational Resources Information Center
Hennesy, Cody; Bowman, John
2008-01-01
Google's first foray onto the web made search simple and results relevant. With its Co-op platform, Google has taken another step toward dramatically increasing the relevancy of search results, further adapting the World Wide Web to local needs. Google Custom Search Engine, a tool on the Co-op platform, puts one in control of his or her own search…
ERIC Educational Resources Information Center
Pearce, Nick; Learmonth, Sarah
2013-01-01
This paper details a case study of using Pinterest as an educational resource in an introductory anthropology course. Its use was evaluated through the data provided by the platform itself and focus groups. This evaluation found that Pinterest was a popular and useful tool for developing curated multimedia resources to support students' learning.…
1990-07-30
disposable syringes and needles which the Bulgarian government will set up in the country in conjunction with Medical Stores. Speaking in his...Kam Limited—Plovdiv, a medical stores company in Bulgaria and Cde Vladimir Zlatrov, chairman of the Electroimpex that setting up of the project...discussing the problem of direct supplies that would eliminate middlemen. — ZANA. CZECHOSLOVAKIA CPCZ's Election Program 90CH0095A Prague RUDE PRA
JPRS Report Africa (Sub-Sahara)
1987-09-16
farmers, civilians, South African Police and medical services (see diagram). The system represents a breakthrough in communications for the province as...In an interview with Zana after he toured a number of remote areas in the district, Cde Musokotwane said the Party and its Government was...trucks with 3,400 bags of mealie meal had arrived from National Milling Company. — Zana. /13046 CSO: 3400/154 112 REACTION TO RURAL
Management of Biomedical Waste: An Exploratory Study.
Abhishek, K N; Suryavanshi, Harshal N; Sam, George; Chaithanya, K H; Punde, Prashant; Singh, S Swetha
2015-09-01
Dental operatories pose a threat due to the high chance of infection transmission to both the clinician and the patients. Hence, management of dental waste is of utmost importance, not only for the health of the dentist but also for people who may come into contact with these wastes directly or indirectly. The present study was conducted to assess the management of biomedical waste in private dental practice across 3 districts of Karnataka. The study population included 186 private practitioners in 3 districts of Karnataka (Coorg, Mysore, Hassan), South India. A pre-tested self-administered questionnaire was distributed to assess knowledge and practices regarding dental waste management. Descriptive statistics were used to summarize the results. Of the 186 study subjects, 71 (38%) were female and 115 (62%) were male. The largest number of participants belonged to the age group of 28-33 years (29%). Most participants held an undergraduate qualification (70%), and 90 (48%) had 0-5 years of experience. Chi-square analysis showed a highly significant association between participants who attended continuing dental education (CDE) programs and their dental waste management practices. Education with regard to waste management will help enhance practices. To fill this gap, CDE programs should be conducted to maintain the health of the community.
Xiong, Liping; Lan, Ganhui
2015-01-01
Sustained molecular oscillations are ubiquitous in biology. The obtained oscillatory patterns provide vital functions as timekeepers, pacemakers and spacemarkers. Models based on control theory have been introduced to explain how specific oscillatory behaviors stem from protein interaction feedbacks, whereas the energy dissipation through the oscillating processes and its role in the regulatory function remain unexplored. Here we developed a general framework to assess an oscillator’s regulation performance at different dissipation levels. Using the Escherichia coli MinCDE oscillator as a model system, we showed that a sufficient amount of energy dissipation is needed to switch on the oscillation, which is tightly coupled to the system’s regulatory performance. Once the dissipation level is beyond this threshold, unlike stationary regulators’ monotonic performance-to-cost relation, excess dissipation at certain steps in the oscillating process damages the oscillator’s regulatory performance. We further discovered that the chemical free energy from ATP hydrolysis has to be strategically assigned to the MinE-aided MinD release and the MinD immobilization steps for optimal performance, and a higher energy budget improves the robustness of the oscillator. These results unfold a novel mode by which living systems trade energy for regulatory function. PMID:26317492
Information-Seeking Behaviors of Dental Practitioners in Three Practice-Based Research Networks
Botello-Harbaum, Maria T.; Demko, Catherine A.; Curro, Frederick A.; Rindal, D. Brad; Collie, Damon; Gilbert, Gregg H.; Hilton, Thomas J.; Craig, Ronald G.; Wu, Juliann; Funkhouser, Ellen; Lehman, Maryann; McBride, Ruth; Thompson, Van; Lindblad, Anne
2013-01-01
Research on the information-seeking behaviors of dental practitioners is scarce. Knowledge of dentists’ information-seeking behaviors should advance the translational gap between clinical dental research and dental practice. A cross-sectional survey was conducted to examine the self-reported information-seeking behaviors of dentists in three dental practice-based research networks (PBRNs). A total of 950 dentists (65 percent response rate) completed the survey. Dental journals and continuing dental education (CDE) sources used and their influence on practice guidance were assessed. PBRN participation level and years since dental degree were measured. Full-participant dentists reported reading the Journal of the American Dental Association and General Dentistry more frequently than did their reference counterparts. Printed journals were preferred by most dentists. A lower proportion of full participants obtained their CDE credits at dental meetings compared to partial participants. Experienced dentists read other dental information sources more frequently than did less experienced dentists. Practitioners involved in a PBRN differed in their approaches to accessing information sources. Peer-reviewed sources were more frequently used by full participants and dentists with fifteen years of experience or more. Dental PBRNs potentially play a significant role in the dissemination of evidence-based information. This study found that specific educational sources might increase and disseminate knowledge among dentists. PMID:23382524
NASA Astrophysics Data System (ADS)
Hartmann, J. M.; Veillerot, M.; Prévitali, B.
2017-10-01
We have compared co-flow and cyclic deposition/etch (CDE) processes for the selective epitaxial growth of Si:P layers. High growth rates, relatively low resistivities and significant amounts of tensile strain (up to 10 nm min-1, 0.55 mOhm cm and a strain equivalent to 1.06% of substitutional C in Si:C layers) were obtained at 700 °C and 760 Torr with a co-flow approach and a SiH2Cl2 + PH3 + HCl chemistry. This approach was successfully used to thicken the source and drain regions of n-type fin-shaped field-effect transistors. Meanwhile, the (Si2H6 + PH3/HCl + GeH4) CDE process evaluated yielded even lower resistivities (typically 0.4 mOhm cm) at 600 °C and 80 Torr, at the cost, however, of the tensile strain, which was lost due to (i) the incorporation of Ge atoms (typically 1.5%) into the lattice during the selective etch steps and (ii) a reduction by a factor of two of the P atomic concentration in CDE layers compared to that in layers grown in a single step (5 × 1020 cm-3 compared to 1021 cm-3).
Heyob, Katie M; Blotevogel, Jens; Brooker, Michael; Evans, Morgan V; Lenhart, John J; Wright, Justin; Lamendella, Regina; Borch, Thomas; Mouser, Paula J
2017-12-05
Hydraulic fracturing fluids are injected into shales to extend fracture networks that enhance oil and natural gas production from unconventional reservoirs. Here we evaluated the biodegradability of three widely used nonionic polyglycol ether surfactants (alkyl ethoxylates (AEOs), nonylphenol ethoxylates (NPEOs), and polypropylene glycols (PPGs)) that function as weatherizers, emulsifiers, wetting agents, and corrosion inhibitors in injected fluids. Under anaerobic conditions, we observed complete removal of AEOs and NPEOs from solution within 3 weeks regardless of whether surfactants were part of a chemical mixture or amended as individual additives. Microbial enzymatic chain shortening was responsible for a shift in ethoxymer molecular weight distributions and the accumulation of the metabolite acetate. PPGs bioattenuated the slowest, producing sizable concentrations of acetone, an isomer of propionaldehyde. Surfactant chain shortening was coupled to an increased abundance of the diol dehydratase gene cluster (pduCDE) in Firmicutes metagenomes predicted from the 16S rRNA gene. The pduCDE enzymes are responsible for cleaving ethoxylate chain units into aldehydes before their fermentation into alcohols and carboxylic acids. These data provide new mechanistic insight into the environmental fate of hydraulic fracturing surfactants after accidental release through chain shortening and biotransformation, emphasizing the importance of compound structure disclosure for predicting biodegradation products.
PDS4 - Some Principles for Agile Data Curation
NASA Astrophysics Data System (ADS)
Hughes, J. S.; Crichton, D. J.; Hardman, S. H.; Joyner, R.; Algermissen, S.; Padams, J.
2015-12-01
PDS4, a research data management and curation system for NASA's Planetary Science Archive, was developed using principles that promote the characteristics of agile development. The result is an efficient system that produces better research data products while using fewer resources (time, effort, and money) and maximizes their usefulness for current and future scientists. The key principle is architectural. The PDS4 information architecture is developed and maintained independently of the infrastructure's process, application and technology architectures. The information architecture is based on an ontology-based information model developed to leverage best practices from standard reference models for digital archives, digital object registries, and metadata registries, and to capture domain knowledge from a panel of planetary science domain experts. The information model provides a sharable, stable, and formal set of information requirements for the system and is the primary source of information for configuring most system components, including the product registry, search engine, validation and display tools, and production pipelines. Multi-level governance is also provided for the effective management of informational elements at the common, discipline, and project levels. This presentation will describe the development principles, components, and uses of the information model and how an information model-driven architecture exhibits characteristics of agile curation, including early delivery, evolutionary development, adaptive planning, continuous improvement, and rapid and flexible response to change.
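One common pattern behind such model-driven configuration is validating each product label against a schema generated from the information model. The sketch below illustrates the idea with lxml and hypothetical file names; actual PDS4 validation relies on the official schemas and tools:

```python
# Sketch: validate a product label against a schema derived from the
# information model. File names are placeholders.
from lxml import etree

schema = etree.XMLSchema(etree.parse("PDS4_information_model.xsd"))  # hypothetical
label = etree.parse("product_label.xml")                             # hypothetical

if schema.validate(label):
    print("label conforms to the information model")
else:
    for error in schema.error_log:
        print(f"line {error.line}: {error.message}")
```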
CCDB: a curated database of genes involved in cervix cancer.
Agarwal, Subhash M; Raghav, Dhwani; Singh, Harinder; Raghava, G P S
2011-01-01
The Cervical Cancer gene DataBase (CCDB, http://crdd.osdd.net/raghava/ccdb) is a manually curated catalog of experimentally validated genes that are thought, or are known, to be involved in the different stages of cervical carcinogenesis. Despite the large population of women affected by this malignancy, until now no database has existed that catalogs information on genes associated with cervical cancer. Therefore, we have compiled 537 genes in CCDB that are linked with cervical cancer causation processes such as methylation, gene amplification, mutation, polymorphism and change in expression level, as evident from the published literature. Each record contains gene details such as architecture (exon-intron structure), location, function, sequences (mRNA/CDS/protein), ontology, interacting partners, homology to other eukaryotic genomes, structure and links to other public databases, thus augmenting CCDB with external data. Manually curated literature references are also provided to support the inclusion of each gene in the database and establish its association with cervix cancer. In addition, CCDB provides information on microRNAs altered in cervical cancer, as well as a search facility for querying, several browse options and an online tool for sequence similarity search, thereby providing researchers with easy access to the latest information on genes involved in cervix cancer.
ezTag: tagging biomedical concepts via interactive learning.
Kwon, Dongseop; Kim, Sun; Wei, Chih-Hsuan; Leaman, Robert; Lu, Zhiyong
2018-05-18
Recently, advanced text-mining techniques have been shown to speed up manual data curation by providing human annotators with automated pre-annotations generated by rules or machine learning models. Due to the limited training data available, however, current annotation systems primarily focus only on common concept types such as genes or diseases. To support annotating a wide variety of biological concepts with or without pre-existing training data, we developed ezTag, a web-based annotation tool that allows curators to perform annotation and provide training data with humans in the loop. ezTag supports both abstracts in PubMed and full-text articles in PubMed Central. It also provides lexicon-based concept tagging as well as the state-of-the-art pre-trained taggers such as TaggerOne, GNormPlus and tmVar. ezTag is freely available at http://eztag.bioqrator.org.
Recommendations resulting from the SPDS Community-Wide Workshop
NASA Technical Reports Server (NTRS)
1993-01-01
The Data Systems Panel identified three critical functionalities of a Space Physics Data System (SPDS): the delivery of self-documenting data, the existence of a matrix of translators between various standard formats (IDFS, CDF, netCDF, HDF, TENNIS, UCLA flat file, and FITS), and a network-based capability for browsing and examining inventory records for the system's data holdings. The recommendations resulting from the workshop include the philosophy, funding, and objectives of a SPDS. Access to quality data is seen as the most important objective by the Policy Panel, with curation and information about the data being integral parts of any accessible data set. The Data Issues Panel concluded that the SPDS can supply encouragement, guidelines, and ultimately provide a mechanism for financial support for data archiving, restoration, and curation. The Software Panel of the SPDS focused on defining the requirements and priorities for SPDS to support common data analysis and data visualization tools and packages.
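A single cell of the proposed translator matrix can be sketched as follows: reading one variable from a netCDF file and writing it out as a FITS image, using the netCDF4 and astropy packages. The file and variable names are placeholders:

```python
# Sketch: one netCDF-to-FITS translation cell of the format-translator matrix.
import numpy as np
import netCDF4
from astropy.io import fits

ds = netCDF4.Dataset("observations.nc")            # hypothetical input file
data = np.asarray(ds.variables["flux"][:])         # hypothetical variable name

hdu = fits.PrimaryHDU(data)
hdu.header["ORIGIN"] = "netCDF translation"        # record provenance in the header
hdu.writeto("observations.fits", overwrite=True)
ds.close()
```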
Chen, I-Min A.; Markowitz, Victor M.; Palaniappan, Krishna; ...
2016-04-26
Background: The exponential growth of genomic data from next generation technologies renders traditional manual expert curation effort unsustainable. Many genomic systems have included community annotation tools to address the problem. Most of these systems adopted a "Wiki-based" approach to take advantage of existing wiki technologies, but encountered obstacles in issues such as usability, authorship recognition, information reliability and incentive for community participation. Results: Here, we present a different approach, relying on a tightly integrated method rather than a "Wiki-based" method, to support community annotation and user collaboration in the Integrated Microbial Genomes (IMG) system. The IMG approach allows users to use the existing IMG data warehouse and analysis tools to add gene, pathway and biosynthetic cluster annotations, to analyze/reorganize contigs, genes and functions using workspace datasets, and to share private user annotations and workspace datasets with collaborators. We show that the annotation effort using IMG can be part of the research process to overcome the user incentive and authorship recognition problems, thus fostering collaboration among domain experts. The usability and reliability issues are addressed by the integration of curated information and analysis tools in IMG, together with DOE Joint Genome Institute (JGI) expert review. Conclusion: By incorporating annotation operations into IMG, we provide an integrated environment for users to perform deeper and extended data analysis and annotation in a single system that can lead to publications and community knowledge sharing, as shown in the case studies.
Annotation, submission and screening of repetitive elements in Repbase: RepbaseSubmitter and Censor.
Kohany, Oleksiy; Gentles, Andrew J; Hankus, Lukasz; Jurka, Jerzy
2006-10-25
Repbase is a reference database of eukaryotic repetitive DNA, which includes prototypic sequences of repeats and basic information described in annotations. Updating and maintenance of the database requires specialized tools, which we have created and made available for use with Repbase, and which may be useful as a template for other curated databases. We describe the software tools RepbaseSubmitter and Censor, which are designed to facilitate updating and screening the content of Repbase. RepbaseSubmitter is a java-based interface for formatting and annotating Repbase entries. It eliminates many common formatting errors, and automates actions such as calculation of sequence lengths and composition, thus facilitating curation of Repbase sequences. In addition, it has several features for predicting protein coding regions in sequences; searching and including Pubmed references in Repbase entries; and searching the NCBI taxonomy database for correct inclusion of species information and taxonomic position. Censor is a tool to rapidly identify repetitive elements by comparison to known repeats. It uses WU-BLAST for speed and sensitivity, and can conduct DNA-DNA, DNA-protein, or translated DNA-translated DNA searches of genomic sequence. Defragmented output includes a map of repeats present in the query sequence, with the options to report masked query sequence(s), repeat sequences found in the query, and alignments. Censor and RepbaseSubmitter are available as both web-based services and downloadable versions. They can be found at http://www.girinst.org/repbase/submission.html (RepbaseSubmitter) and http://www.girinst.org/censor/index.php (Censor).
Curating NASA's Extraterrestrial Samples - Past, Present, and Future
NASA Technical Reports Server (NTRS)
Allen, Carlton; Allton, Judith; Lofgren, Gary; Righter, Kevin; Zolensky, Michael
2011-01-01
Curation of extraterrestrial samples is the critical interface between sample return missions and the international research community. The Astromaterials Acquisition and Curation Office at the NASA Johnson Space Center (JSC) is responsible for curating NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10E "Curation of Extraterrestrial Materials", JSC is charged with ". . . curation of all extraterrestrial material under NASA control, including future NASA missions." The Directive goes on to define Curation as including "documentation, preservation, preparation, and distribution of samples for research, education, and public outreach."
Curating NASA's Extraterrestrial Samples - Past, Present, and Future
NASA Technical Reports Server (NTRS)
Allen, Carlton; Allton, Judith; Lofgren, Gary; Righter, Kevin; Zolensky, Michael
2010-01-01
Curation of extraterrestrial samples is the critical interface between sample return missions and the international research community. The Astromaterials Acquisition and Curation Office at the NASA Johnson Space Center (JSC) is responsible for curating NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10E "Curation of Extraterrestrial Materials," JSC is charged with ". . . curation of all extraterrestrial material under NASA control, including future NASA missions." The Directive goes on to define Curation as including documentation, preservation, preparation, and distribution of samples for research, education, and public outreach.
Curating NASA's Future Extraterrestrial Sample Collections: How Do We Achieve Maximum Proficiency?
NASA Technical Reports Server (NTRS)
McCubbin, Francis; Evans, Cynthia; Zeigler, Ryan; Allton, Judith; Fries, Marc; Righter, Kevin; Zolensky, Michael
2016-01-01
The Astromaterials Acquisition and Curation Office (henceforth referred to herein as NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10E "Curation of Extraterrestrial Materials", JSC is charged with "The curation of all extraterrestrial material under NASA control, including future NASA missions." The Directive goes on to define Curation as including "... documentation, preservation, preparation, and distribution of samples for research, education, and public outreach." Here we describe some of the ongoing efforts to ensure that the future activities of the NASA Curation Office are working towards a state of maximum proficiency.
Stoner, Marie Cd; Edwards, Jessie K; Miller, William C; Aiello, Allison E; Halpern, Carolyn T; Julien, Aimée; Rucinski, Katherine B; Selin, Amanda; Twine, Rhian; Hughes, James P; Wang, Jing; Agyei, Yaw; Gómez-Olivé, F Xavier; Wagner, Ryan G; Laeyendecker, Oliver; Macphail, Catherine; Kahn, Kathleen; Pettifor, Audrey
2018-05-22
Similar prior publications by the first author using the same data source include: Stoner M.C.D, Edwards J, Miller W, Aiello A, Halpern C, Selin A, Hughes J, Wang J, Laeyendecker O, Agyei Y, McPhail C, Kahn K, Pettifor A. (2017) The effect of schooling on incident HIV and HSV-2 infection in young South African women enrolled in HPTN 068. AIDS. 24;31(15):2127-213. PMC5599334. Stoner M.C.D, Edwards J, Miller W, Aiello A, Halpern C, Julien Suarez, Selin A, Hughes J, Wang J, McPhail C, Kahn K, Pettifor A. (2017) The effect of schooling on age-disparate relationships and number of sexual partners among young women in rural South Africa enrolled in HPTN 068. J Acquir Immune Defic Syndr. 76(5):e107-e114. PMC56801112. Abstract OBJECTIVE: School attendance prevents HIV and HSV-2 in adolescent girls and young women (AGYW), but the mechanisms that explain this relationship remain unclear. Our study assesses the extent to which characteristics of sex partners, namely partner age and number, mediate the relationship between attendance and risk of infection in AGYW in South Africa. We use longitudinal data from the HPTN 068 randomized controlled trial in rural South Africa, in which girls were enrolled in early adolescence and followed in the main trial for over three years. We examined older partners and number of partners as possible mediators. We use the parametric g-formula to estimate 4-year risk differences for the effect of school attendance on cumulative incidence of HIV/HSV-2 overall, and the controlled direct effect (CDE) for mediation. We examined mediation separately and jointly for the mediators of interest. We found that young women with high school attendance had a lower cumulative incidence of HIV compared to those with low attendance (risk difference = -1.6%). Partner age difference (CDE = -1.2%) and number of partners (CDE = -0.4%) mediated a large portion of this effect. In fact, when we accounted for the mediators jointly, the effect of schooling on HIV was almost entirely removed, showing full mediation (CDE = -0.3%). The same patterns were observed for the relationship between school attendance and cumulative incidence of HSV-2 infection. Increasing school attendance reduces the risk of acquiring HIV and HSV-2. Our results indicate the importance of school attendance in reducing partner number and partner age difference in this relationship. This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial License 4.0 (CC BY-NC), where it is permissible to download, share, remix, transform, and build upon the work provided it is properly cited. The work cannot be used commercially without permission from the journal.
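The controlled-direct-effect estimation described above can be sketched schematically with g-computation: fit an outcome model that includes exposure and mediator, then contrast predicted risks under high versus low attendance with the mediator held fixed. The data, variable names and coefficients below are synthetic placeholders, not the HPTN 068 analysis:

```python
# Schematic g-computation of a controlled direct effect (CDE).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "high_attendance": rng.integers(0, 2, n),
    "older_partner":   rng.integers(0, 2, n),   # mediator of interest
})
logit = -2.0 - 0.4 * df.high_attendance + 0.9 * df.older_partner
df["hiv"] = rng.random(n) < 1 / (1 + np.exp(-logit))   # simulated outcome

model = LogisticRegression().fit(df[["high_attendance", "older_partner"]], df["hiv"])

def risk(attendance, mediator):
    """Standardized predicted risk with exposure and mediator set for everyone."""
    X = pd.DataFrame({"high_attendance": [attendance] * n,
                      "older_partner":   [mediator] * n})
    return model.predict_proba(X)[:, 1].mean()

# CDE: effect of attendance with the mediator fixed at 0 (no older partner).
cde = risk(1, 0) - risk(0, 0)
print(f"controlled direct effect (risk difference): {cde:.3%}")
```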
Cole, Charles; Krampis, Konstantinos; Karagiannis, Konstantinos; Almeida, Jonas S; Faison, William J; Motwani, Mona; Wan, Quan; Golikov, Anton; Pan, Yang; Simonyan, Vahan; Mazumder, Raja
2014-01-27
Next-generation sequencing (NGS) technologies have resulted in petabytes of scattered data, decentralized in archives, databases and sometimes in isolated hard-disks which are inaccessible for browsing and analysis. It is expected that curated secondary databases will help organize some of this Big Data, thereby allowing users to better navigate, search and compute on it. To address the above challenge, we have implemented a NGS biocuration workflow and are analyzing short read sequences and associated metadata from cancer patients to better understand the human variome. Curation of variation and other related information from control (normal tissue) and case (tumor) samples will provide comprehensive background information that can be used in genomic medicine research and application studies. Our approach includes a CloudBioLinux Virtual Machine which is used upstream of an integrated High-performance Integrated Virtual Environment (HIVE) that encapsulates the Curated Short Read archive (CSR) and a proteome-wide variation effect analysis tool (SNVDis). As a proof-of-concept, we have curated and analyzed control and case breast cancer datasets from the NCI cancer genomics program - The Cancer Genome Atlas (TCGA). Our efforts include reviewing and recording in CSR available clinical information on patients, mapping of the reads to the reference followed by identification of non-synonymous Single Nucleotide Variations (nsSNVs), and integrating the data with tools that allow analysis of the effect of nsSNVs on the human proteome. Furthermore, we have also developed a novel phylogenetic analysis algorithm that uses SNV positions and can be used to classify the patient population. The workflow described here lays the foundation for analysis of short read sequence data to identify rare and novel SNVs that are not present in dbSNP and therefore provides a more comprehensive understanding of the human variome. Variation results for single genes as well as the entire study are available from the CSR website (http://hive.biochemistry.gwu.edu/dna.cgi?cmd=csr). Availability of thousands of sequenced samples from patients provides a rich repository of sequence information that can be utilized to identify individual-level SNVs and their effect on the human proteome beyond what the dbSNP database provides.
2014-01-01
Background Next-generation sequencing (NGS) technologies have resulted in petabytes of scattered data, decentralized in archives, databases and sometimes in isolated hard-disks which are inaccessible for browsing and analysis. It is expected that curated secondary databases will help organize some of this Big Data, thereby allowing users to better navigate, search and compute on it. Results To address the above challenge, we have implemented a NGS biocuration workflow and are analyzing short read sequences and associated metadata from cancer patients to better understand the human variome. Curation of variation and other related information from control (normal tissue) and case (tumor) samples will provide comprehensive background information that can be used in genomic medicine research and application studies. Our approach includes a CloudBioLinux Virtual Machine which is used upstream of an integrated High-performance Integrated Virtual Environment (HIVE) that encapsulates the Curated Short Read archive (CSR) and a proteome-wide variation effect analysis tool (SNVDis). As a proof-of-concept, we have curated and analyzed control and case breast cancer datasets from the NCI cancer genomics program - The Cancer Genome Atlas (TCGA). Our efforts include reviewing and recording in CSR available clinical information on patients, mapping of the reads to the reference followed by identification of non-synonymous Single Nucleotide Variations (nsSNVs), and integrating the data with tools that allow analysis of the effect of nsSNVs on the human proteome. Furthermore, we have also developed a novel phylogenetic analysis algorithm that uses SNV positions and can be used to classify the patient population. The workflow described here lays the foundation for analysis of short read sequence data to identify rare and novel SNVs that are not present in dbSNP and therefore provides a more comprehensive understanding of the human variome. Variation results for single genes as well as the entire study are available from the CSR website (http://hive.biochemistry.gwu.edu/dna.cgi?cmd=csr). Conclusions Availability of thousands of sequenced samples from patients provides a rich repository of sequence information that can be utilized to identify individual-level SNVs and their effect on the human proteome beyond what the dbSNP database provides. PMID:24467687
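The nsSNV identification step in this workflow reduces, at its core, to asking whether a nucleotide substitution changes the encoded amino acid. The self-contained sketch below uses the standard codon table; the example variants are invented:

```python
# Sketch: classify a single-nucleotide substitution as (non-)synonymous.
CODON_TABLE = {
    "TTT": "F", "TTC": "F", "TTA": "L", "TTG": "L",
    "CTT": "L", "CTC": "L", "CTA": "L", "CTG": "L",
    "ATT": "I", "ATC": "I", "ATA": "I", "ATG": "M",
    "GTT": "V", "GTC": "V", "GTA": "V", "GTG": "V",
    "TCT": "S", "TCC": "S", "TCA": "S", "TCG": "S",
    "CCT": "P", "CCC": "P", "CCA": "P", "CCG": "P",
    "ACT": "T", "ACC": "T", "ACA": "T", "ACG": "T",
    "GCT": "A", "GCC": "A", "GCA": "A", "GCG": "A",
    "TAT": "Y", "TAC": "Y", "TAA": "*", "TAG": "*",
    "CAT": "H", "CAC": "H", "CAA": "Q", "CAG": "Q",
    "AAT": "N", "AAC": "N", "AAA": "K", "AAG": "K",
    "GAT": "D", "GAC": "D", "GAA": "E", "GAG": "E",
    "TGT": "C", "TGC": "C", "TGA": "*", "TGG": "W",
    "CGT": "R", "CGC": "R", "CGA": "R", "CGG": "R",
    "AGT": "S", "AGC": "S", "AGA": "R", "AGG": "R",
    "GGT": "G", "GGC": "G", "GGA": "G", "GGG": "G",
}

def is_nonsynonymous(ref_codon, pos, alt_base):
    """True if substituting alt_base at codon position pos changes the amino acid."""
    alt_codon = ref_codon[:pos] + alt_base + ref_codon[pos + 1:]
    return CODON_TABLE[ref_codon] != CODON_TABLE[alt_codon]

print(is_nonsynonymous("GAG", 0, "A"))   # GAG(E) -> AAG(K): non-synonymous
print(is_nonsynonymous("CTG", 2, "A"))   # CTG(L) -> CTA(L): synonymous
```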
BiGG Models: A platform for integrating, standardizing and sharing genome-scale models
King, Zachary A.; Lu, Justin; Drager, Andreas; ...
2015-10-17
Genome-scale metabolic models are mathematically structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data.
BiGG Models: A platform for integrating, standardizing and sharing genome-scale models
King, Zachary A.; Lu, Justin; Dräger, Andreas; Miller, Philip; Federowicz, Stephen; Lerman, Joshua A.; Ebrahim, Ali; Palsson, Bernhard O.; Lewis, Nathan E.
2016-01-01
Genome-scale metabolic models are mathematically-structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data. PMID:26476456
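Programmatic access of the kind mentioned above can be sketched with a simple HTTP request; the /api/v2/models route and the response fields shown are assumptions that should be checked against the current BiGG API documentation:

```python
# Sketch: list a few models from the public BiGG API (routes/fields assumed).
import requests

resp = requests.get("http://bigg.ucsd.edu/api/v2/models", timeout=30)
resp.raise_for_status()
models = resp.json()["results"]          # assumed response envelope
for m in models[:5]:
    print(m["bigg_id"], "-", m.get("organism"))
```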
Gama-Castro, Socorro; Salgado, Heladia; Santos-Zavaleta, Alberto; Ledezma-Tejeida, Daniela; Muñiz-Rascado, Luis; García-Sotelo, Jair Santiago; Alquicira-Hernández, Kevin; Martínez-Flores, Irma; Pannier, Lucia; Castro-Mondragón, Jaime Abraham; Medina-Rivera, Alejandra; Solano-Lira, Hilda; Bonavides-Martínez, César; Pérez-Rueda, Ernesto; Alquicira-Hernández, Shirley; Porrón-Sotelo, Liliana; López-Fuentes, Alejandra; Hernández-Koutoucheva, Anastasia; Moral-Chávez, Víctor Del; Rinaldi, Fabio; Collado-Vides, Julio
2016-01-01
RegulonDB (http://regulondb.ccg.unam.mx) is one of the most useful and important resources on bacterial gene regulation, as it integrates the scattered scientific knowledge of the best-characterized organism, Escherichia coli K-12, in a database that organizes large amounts of data. Its electronic format enables researchers to compare their results with the legacy of previous knowledge and supports bioinformatics tools and model building. Here, we summarize our progress with RegulonDB since our last Nucleic Acids Research publication describing RegulonDB, in 2013. In addition to keeping curation up-to-date, we report a collection of 232 interactions with small RNAs affecting 192 genes, and the complete repertoire of 189 Elementary Genetic Sensory-Response units (GENSOR units), integrating the signal, regulatory interactions, and metabolic pathways they govern. These additions represent major progress to a higher level of understanding of regulated processes. We have updated the computationally predicted transcription factors, which total 304 (184 with experimental evidence and 120 from computational predictions); we updated our position-weight matrices and have included tools for clustering them in evolutionary families. We describe our semiautomatic strategy to accelerate curation, including datasets from high-throughput experiments, a novel coexpression distance to search for ‘neighborhood’ genes to known operons and regulons, and computational developments. PMID:26527724
Automating document classification for the Immune Epitope Database
Wang, Peng; Morgan, Alexander A; Zhang, Qing; Sette, Alessandro; Peters, Bjoern
2007-01-01
Background The Immune Epitope Database contains information on immune epitopes curated manually from the scientific literature. Like similar projects in other knowledge domains, significant effort is spent on identifying which articles are relevant for this purpose. Results Here we report our experience in automating this process using Naïve Bayes classifiers trained on 20,910 abstracts classified by domain experts. Improvements on the basic classifier performance were made by a) utilizing information stored in PubMed beyond the abstract itself, b) applying standard feature selection criteria and c) extracting domain-specific feature patterns that, for example, identify peptide sequences. We have implemented the classifier into the curation process, determining whether abstracts are clearly relevant, clearly irrelevant, or whether no certain classification can be made, in which case the abstracts are manually classified. Testing this classification scheme on an independent dataset, we achieve 95% sensitivity and specificity in the 51.1% of abstracts that were automatically classified. Conclusion By implementing text classification, we have sped up the reference selection process without sacrificing the sensitivity or specificity of the human expert classification. This study provides both practical recommendations for users of text classification tools and a large dataset which can serve as a benchmark for tool developers. PMID:17655769
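The three-way triage described above can be sketched as follows; the training snippets and the 0.9/0.1 probability thresholds are illustrative stand-ins for the production classifier trained on 20,910 expert-classified abstracts:

```python
# Sketch: Naive Bayes abstract triage with confidence thresholds that route
# uncertain abstracts to manual review.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_texts = ["epitope mapping of influenza hemagglutinin",
               "T cell epitope identified in HLA-A2 donors",
               "crop yields under drought stress",
               "economic survey of retail prices"]
train_labels = [1, 1, 0, 0]              # 1 = curation-relevant, 0 = irrelevant

vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(train_texts), train_labels)

def triage(abstract, hi=0.9, lo=0.1):
    p = clf.predict_proba(vec.transform([abstract]))[0, 1]
    if p >= hi:
        return "relevant"
    if p <= lo:
        return "irrelevant"
    return "manual review"               # uncertain cases go to human experts

print(triage("novel B cell epitope of dengue virus envelope protein"))
```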
Benítez, José Alberto; Labra, José Emilio; Quiroga, Enedina; Martín, Vicente; García, Isaías; Marqués-Sánchez, Pilar; Benavides, Carmen
2017-01-01
There is great concern nowadays regarding alcohol consumption and drug abuse, especially among young people. By analyzing the social environment in which these adolescents are immersed, together with a series of measures determining alcohol abuse risk or personal situation and perception obtained through questionnaires such as AUDIT, FAS and KIDSCREEN, it is possible to gain insight into the current situation of a given individual regarding his or her consumption behavior. This analysis, however, requires tools that can ease the processes of questionnaire creation; data gathering, curation and representation; and later analysis and visualization. This research presents the design and construction of a web-based platform that facilitates each of these processes by integrating the different phases into an intuitive system with a graphical user interface. The interface hides the complexity underlying each of the questionnaires and techniques used and presents the results in a flexible, visual way, avoiding any manual handling of data during the process. Advantages of this approach are shown and compared to the previous situation, in which some of the tasks were accomplished by time-consuming and error-prone manipulations of data.
MIPS: curated databases and comprehensive secondary data resources in 2010.
Mewes, H Werner; Ruepp, Andreas; Theis, Fabian; Rattei, Thomas; Walter, Mathias; Frishman, Dmitrij; Suhre, Karsten; Spannagl, Manuel; Mayer, Klaus F X; Stümpflen, Volker; Antonov, Alexey
2011-01-01
The Munich Information Center for Protein Sequences (MIPS at the Helmholtz Center for Environmental Health, Neuherberg, Germany) has many years of experience in providing annotated collections of biological data. Selected data sets of high relevance, such as model genomes, are subjected to careful manual curation, while the bulk of high-throughput data is annotated by automatic means. High-quality reference resources developed in the past and still actively maintained include Saccharomyces cerevisiae, Neurospora crassa and Arabidopsis thaliana genome databases as well as several protein interaction data sets (MPACT, MPPI and CORUM). More recent projects are PhenomiR, the database on microRNA-related phenotypes, and MIPS PlantsDB for integrative and comparative plant genome research. The interlinked resources SIMAP and PEDANT provide homology relationships as well as up-to-date and consistent annotation for 38,000,000 protein sequences. PPLIPS and CCancer are versatile tools for proteomics and functional genomics interfacing to a database of compilations from gene lists extracted from literature. A novel literature-mining tool, EXCERBT, gives access to structured information on classified relations between genes, proteins, phenotypes and diseases extracted from Medline abstracts by semantic analysis. All databases described here, as well as the detailed descriptions of our projects can be accessed through the MIPS WWW server (http://mips.helmholtz-muenchen.de).
MIPS: curated databases and comprehensive secondary data resources in 2010
Mewes, H. Werner; Ruepp, Andreas; Theis, Fabian; Rattei, Thomas; Walter, Mathias; Frishman, Dmitrij; Suhre, Karsten; Spannagl, Manuel; Mayer, Klaus F.X.; Stümpflen, Volker; Antonov, Alexey
2011-01-01
The Munich Information Center for Protein Sequences (MIPS at the Helmholtz Center for Environmental Health, Neuherberg, Germany) has many years of experience in providing annotated collections of biological data. Selected data sets of high relevance, such as model genomes, are subjected to careful manual curation, while the bulk of high-throughput data is annotated by automatic means. High-quality reference resources developed in the past and still actively maintained include Saccharomyces cerevisiae, Neurospora crassa and Arabidopsis thaliana genome databases as well as several protein interaction data sets (MPACT, MPPI and CORUM). More recent projects are PhenomiR, the database on microRNA-related phenotypes, and MIPS PlantsDB for integrative and comparative plant genome research. The interlinked resources SIMAP and PEDANT provide homology relationships as well as up-to-date and consistent annotation for 38 000 000 protein sequences. PPLIPS and CCancer are versatile tools for proteomics and functional genomics interfacing to a database of compilations from gene lists extracted from literature. A novel literature-mining tool, EXCERBT, gives access to structured information on classified relations between genes, proteins, phenotypes and diseases extracted from Medline abstracts by semantic analysis. All databases described here, as well as the detailed descriptions of our projects can be accessed through the MIPS WWW server (http://mips.helmholtz-muenchen.de). PMID:21109531
1987-01-29
Economic Upturn (SAPA, 15 Dec 86). Heavy Rains Raise Hope of Record 1987 Maize Crop (Mick Collins; BUSINESS DAY, 17 Dec 86). New...been successful to the extent that the residents had more than six months' supply of vegetables, in addition to growing green maize and wheat. Cde...displaced persons suggested that one immediate need was that of vegetables. The displaced persons were then trained in irrigation canal
2013-02-01
distribution management operations, to include managing cargo distribution functions such as receiving, inspecting, tracing, tracking, packaging, and... [Table 19 fragment (21R...): tier and code assignments for Production Management, Scheduling, Flightline Operations, and Systems Engineering] ...logistics units/elements and as members of general or executive staffs in the operating forces, supporting establishment, and joint staffs. They
Mount St. Helens Long-Term Sediment Management Plan for Flood Risk Reduction
2010-06-01
one dredge would direct pump to the Wasser Winters disposal site, located along the southern bank of the Cowlitz River mouth. The average annual...dredge would pipeline pump either upstream to disposal site 20cde or downstream to the Wasser Winters site. Pumping distances would not exceed 6.0...estimates referenced the Wasser Winters upland preparation estimates and were based on the relationship between acreage and effort. Total site
Substructural Logical Specifications
2012-11-14
and independently in the context of CLF by Schack-Nielsen [SN07] and by Cruz and Hou [CH12]; Schack-Nielsen proves the equivalence of the two specifi...cations, whereas Cruz and Hou used the connection informally. The contribution of this section is to describe a general transformation (of which...Functional Programming (LFP’86), pages 13–27. ACM, 1986. 5.1 [CDE+11] Manuel Clavel, Francisco Durán, Steven Eker, Patrick Lincoln, Narciso Martı́- 290 Oliet
An, Yi; Wang, Jiawei; Li, Chen; Leier, André; Marquez-Lago, Tatiana; Wilksch, Jonathan; Zhang, Yang; Webb, Geoffrey I; Song, Jiangning; Lithgow, Trevor
2018-01-01
Bacterial effector proteins secreted by various protein secretion systems play crucial roles in host-pathogen interactions. In this context, computational tools capable of accurately predicting effector proteins of the various types of bacterial secretion systems are highly desirable. Existing computational approaches use different machine learning (ML) techniques and heterogeneous features derived from protein sequences and/or structural information. These predictors differ not only in the ML methods they use but also in their curated data sets, their feature selection and their prediction performance. Here, we provide a comprehensive survey and benchmarking of currently available tools for the prediction of effector proteins of bacterial types III, IV and VI secretion systems (T3SS, T4SS and T6SS, respectively). We review core algorithms, feature selection techniques, tool availability and applicability, and evaluate the prediction performance based on carefully curated independent test data sets. In an effort to improve predictive performance, we constructed three ensemble models based on ML algorithms by integrating the output of all individual predictors reviewed. Our benchmarks demonstrate that these ensemble models outperform all the reviewed tools for the prediction of effector proteins of T3SS and T4SS. The webserver of the proposed ensemble methods for T3SS and T4SS effector protein prediction is freely available at http://tbooster.erc.monash.edu/index.jsp. We anticipate that this survey will serve as a useful guide for interested users and that the new ensemble predictors will stimulate research into host-pathogen relationships and inspire the development of new bioinformatics tools for predicting effector proteins of T3SS, T4SS and T6SS. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
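The abstract does not specify how the three ensemble models combine the individual predictors, so the following is only a minimal sketch of one common approach, soft voting over per-tool probabilities; the tool names, weights and threshold are hypothetical:

```python
# Illustrative sketch only: the paper's exact ensemble algorithms are not
# given in the abstract. Here, a simple soft-voting ensemble over
# hypothetical per-tool effector probabilities.
from typing import Dict

def soft_vote(tool_probs: Dict[str, float], weights: Dict[str, float] = None) -> float:
    """Combine per-tool effector probabilities into one ensemble score."""
    if weights is None:
        weights = {tool: 1.0 for tool in tool_probs}  # unweighted average
    total = sum(weights[t] for t in tool_probs)
    return sum(tool_probs[t] * weights[t] for t in tool_probs) / total

# Example: three hypothetical T3SS effector predictors scoring one protein.
probs = {"toolA": 0.91, "toolB": 0.62, "toolC": 0.75}
score = soft_vote(probs)
print(f"ensemble score = {score:.2f}, predicted effector = {score >= 0.5}")
```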
ERIC Educational Resources Information Center
Shorish, Yasmeen
2012-01-01
This article describes the fundamental challenges to data curation, how these challenges may be compounded for smaller institutions, and how data management is an essential and manageable component of data curation. Data curation is often discussed within the confines of large research universities. As a result, master's and baccalaureate…
Argo: enabling the development of bespoke workflows and services for disease annotation.
Batista-Navarro, Riza; Carter, Jacob; Ananiadou, Sophia
2016-01-01
Argo (http://argo.nactem.ac.uk) is a generic text mining workbench that can cater to a variety of use cases, including the semi-automatic annotation of literature. It enables its technical users to build their own customised text mining solutions by providing a wide array of interoperable and configurable elementary components that can be seamlessly integrated into processing workflows. With Argo's graphical annotation interface, domain experts can then make use of the workflows' automatically generated output to curate information of interest. With the continuously rising need to understand the aetiology of diseases as well as the demand for their informed diagnosis and personalised treatment, the curation of disease-relevant information from medical and clinical documents has become an indispensable scientific activity. In the Fifth BioCreative Challenge Evaluation Workshop (BioCreative V), there was substantial interest in the mining of literature for disease-relevant information. Apart from a panel discussion focussed on disease annotations, the chemical-disease relations (CDR) track was also organised to foster the sharing and advancement of disease annotation tools and resources. This article presents the application of Argo's capabilities to the literature-based annotation of diseases. As part of our participation in BioCreative V's User Interactive Track (IAT), we demonstrated and evaluated Argo's suitability to the semi-automatic curation of chronic obstructive pulmonary disease (COPD) phenotypes. Furthermore, the workbench facilitated the development of some of the CDR track's top-performing web services for normalising disease mentions against the Medical Subject Headings (MeSH) database. In this work, we highlight Argo's support for developing various types of bespoke workflows ranging from ones which enabled us to easily incorporate information from various databases, to those which train and apply machine learning-based concept recognition models, through to user-interactive ones which allow human curators to manually provide their corrections to automatically generated annotations. Our participation in the BioCreative V challenges shows Argo's potential as an enabling technology for curating disease and phenotypic information from literature. Database URL: http://argo.nactem.ac.uk. © The Author(s) 2016. Published by Oxford University Press.
Argo: enabling the development of bespoke workflows and services for disease annotation
Batista-Navarro, Riza; Carter, Jacob; Ananiadou, Sophia
2016-01-01
Argo (http://argo.nactem.ac.uk) is a generic text mining workbench that can cater to a variety of use cases, including the semi-automatic annotation of literature. It enables its technical users to build their own customised text mining solutions by providing a wide array of interoperable and configurable elementary components that can be seamlessly integrated into processing workflows. With Argo's graphical annotation interface, domain experts can then make use of the workflows' automatically generated output to curate information of interest. With the continuously rising need to understand the aetiology of diseases as well as the demand for their informed diagnosis and personalised treatment, the curation of disease-relevant information from medical and clinical documents has become an indispensable scientific activity. In the Fifth BioCreative Challenge Evaluation Workshop (BioCreative V), there was substantial interest in the mining of literature for disease-relevant information. Apart from a panel discussion focussed on disease annotations, the chemical-disease relations (CDR) track was also organised to foster the sharing and advancement of disease annotation tools and resources. This article presents the application of Argo’s capabilities to the literature-based annotation of diseases. As part of our participation in BioCreative V’s User Interactive Track (IAT), we demonstrated and evaluated Argo’s suitability to the semi-automatic curation of chronic obstructive pulmonary disease (COPD) phenotypes. Furthermore, the workbench facilitated the development of some of the CDR track’s top-performing web services for normalising disease mentions against the Medical Subject Headings (MeSH) database. In this work, we highlight Argo’s support for developing various types of bespoke workflows ranging from ones which enabled us to easily incorporate information from various databases, to those which train and apply machine learning-based concept recognition models, through to user-interactive ones which allow human curators to manually provide their corrections to automatically generated annotations. Our participation in the BioCreative V challenges shows Argo’s potential as an enabling technology for curating disease and phenotypic information from literature. Database URL: http://argo.nactem.ac.uk PMID:27189607
Enhancing Next-Generation Sequencing-Guided Cancer Care Through Cognitive Computing.
Patel, Nirali M; Michelini, Vanessa V; Snell, Jeff M; Balu, Saianand; Hoyle, Alan P; Parker, Joel S; Hayward, Michele C; Eberhard, David A; Salazar, Ashley H; McNeillie, Patrick; Xu, Jia; Huettner, Claudia S; Koyama, Takahiko; Utro, Filippo; Rhrissorrakrai, Kahn; Norel, Raquel; Bilal, Erhan; Royyuru, Ajay; Parida, Laxmi; Earp, H Shelton; Grilley-Olson, Juneko E; Hayes, D Neil; Harvey, Stephen J; Sharpless, Norman E; Kim, William Y
2018-02-01
Using next-generation sequencing (NGS) to guide cancer therapy has created challenges in analyzing and reporting large volumes of genomic data to patients and caregivers. Specifically, providing current, accurate information on newly approved therapies and open clinical trials requires considerable manual curation performed mainly by human "molecular tumor boards" (MTBs). The purpose of this study was to determine the utility of cognitive computing as performed by Watson for Genomics (WfG) compared with a human MTB. One thousand eighteen patient cases that previously underwent targeted exon sequencing at the University of North Carolina (UNC) and subsequent analysis by the UNCseq informatics pipeline and the UNC MTB between November 7, 2011, and May 12, 2015, were analyzed with WfG, a cognitive computing technology for genomic analysis. Using a WfG-curated actionable gene list, we identified additional genomic events of potential significance (not discovered by traditional MTB curation) in 323 (32%) patients. The majority of these additional genomic events were considered actionable based upon their ability to qualify patients for biomarker-selected clinical trials. Indeed, the opening of a relevant clinical trial within 1 month prior to WfG analysis provided the rationale for identification of a new actionable event in nearly a quarter of the 323 patients. This automated analysis took <3 minutes per case. These results demonstrate that the interpretation and actionability of somatic NGS results are evolving too rapidly to rely solely on human curation. Molecular tumor boards empowered by cognitive computing could potentially improve patient care by providing a rapid, comprehensive approach for data analysis and consideration of up-to-date availability of clinical trials. The results of this study demonstrate that the interpretation and actionability of somatic next-generation sequencing results are evolving too rapidly to rely solely on human curation. Molecular tumor boards empowered by cognitive computing can significantly improve patient care by providing a fast, cost-effective, and comprehensive approach for data analysis in the delivery of precision medicine. Patients and physicians who are considering enrollment in clinical trials may benefit from the support of such tools applied to genomic data. © AlphaMed Press 2017.
The French initiative for scientific cores virtual curating : a user-oriented integrated approach
NASA Astrophysics Data System (ADS)
Pignol, Cécile; Godinho, Elodie; Galabertier, Bruno; Caillo, Arnaud; Bernardet, Karim; Augustin, Laurent; Crouzet, Christian; Billy, Isabelle; Teste, Gregory; Moreno, Eva; Tosello, Vanessa; Crosta, Xavier; Chappellaz, Jérome; Calzas, Michel; Rousseau, Denis-Didier; Arnaud, Fabien
2016-04-01
Managing scientific data is probably one of the most challenging issues in modern science. The question is made even more sensitive by the need to preserve and manage high-value fragile geological samples: cores. Large international scientific programs, such as IODP or ICDP, are leading an intense effort to solve this problem and propose detailed, high-standard work- and dataflows throughout core handling and curating. However, most results derive from rather small-scale research programs in which data and sample management is generally handled only locally, when it is handled at all. The national excellence equipment program (Equipex) CLIMCOR aims at developing French facilities for coring and drilling investigations. It concerns ice, marine and continental samples alike. As part of this initiative, we initiated a reflection on core curating and associated coring-data management. The aim of the project is to conserve all metadata from fieldwork in an integrated cyber-environment, which will evolve toward laboratory-acquired data storage in the near future. To that end, our approach was developed in close collaboration with field operators as well as laboratory core curators, in order to propose user-oriented solutions. The national core curating initiative currently offers a single web portal in which all scientific teams can store their field data. For legacy samples, this will require the establishment of dedicated core lists with associated metadata. For forthcoming samples, we propose a mobile application, under the Android environment, to capture technical and scientific metadata in the field. This application is linked to a unique coring-tool library and is adapted to most coring devices (gravity, drilling, percussion, etc.), including multi-section and multi-hole coring operations. These field data can be uploaded automatically to the national portal, but also referenced through international standards or persistent identifiers (IGSN, ORCID and INSPIRE) and displayed in international portals (currently, NOAA's IMLGS). In this paper, we present the architecture of the integrated system, future perspectives and the approach we adopted to reach our goals. Alongside our poster, we will also present one of the three mobile applications, dedicated more particularly to continental drilling operations.
The Internet of Samples in the Earth Sciences: Providing Access to Uncurated Collections
NASA Astrophysics Data System (ADS)
Carter, M. R.; Lehnert, K. A.
2014-12-01
Vast amounts of physical samples have been collected in the Earth Sciences for studies that address a wide range of scientific questions. Only a fraction of these samples are well curated and preserved long-term in sample repositories and museums. Many samples and collections are stored in the offices and labs of investigators, or in basements and sheds of institutions and investigators' homes. These 'uncurated' collections often contain samples that have been well studied, or are unique and irreplaceable. They may also include samples that could reveal new insights if re-analyzed using new techniques, or specimens that could have unanticipated relevance to research being conducted in fields other than the one for which they were collected. Currently, these samples cannot be accessed or discovered online by the broader science community. Investigators and departments often lack the resources to properly catalog and curate the samples and respond to requests for splits. Long-term preservation of and access to these samples is usually not provided for. iSamplES, a recently funded EarthCube Research Coordination Network (RCN), seeks to integrate scientific samples, including 'uncurated' samples, into the digital data and information infrastructure of the Earth Sciences and to facilitate their curation, discovery, access, sharing, and analysis. The RCN seeks to develop and implement best practices that increase digital access to samples, with the goal of establishing a comprehensive infrastructure not only for the digital but also the physical curation of samples. The RCN will engage a broad group of individuals, from domain scientists to curators to publishers to computer scientists, to define, articulate, and address the needs and challenges of digital sample management and recommend community-endorsed best practices and standards for registering, describing, identifying, and citing physical specimens, drawing upon other initiatives and existing or emerging software tools for digital sample and collection management. Community engagement will include surveys, in-person workshops and outreach events, the creation of the iSamplES knowledge hub (semantic wiki) and a registry of collections. iSamplES will specifically engage early career scientists to help ensure that no samples go uncurated.
Malik, Praveen K; Dewan, Taru; Patidar, Arun Kr; Sain, Ekta
2017-01-01
To evaluate the effect of three different combinations of tip designs and infusion systems in torsional phacoemulsification (INFINITI and CENTURION) in patients with cataract. According to the manufacturer, two unique improvements in the Centurion are: an active fluid dynamic management system and use of an Intrepid balanced tip. The study specifically aimed to evaluate the beneficial effects, if any, of change in tip design and infusion system, individually and in combination, on both peroperative parameters and endothelial health over 6 months. One hundred and twenty-six consenting patients with grade 4.0-6.9 senile cataract were randomized into three groups for phacoemulsification: Group A (n = 42): gravity-fed infusion system and 45° Kelman mini-flared ABS phaco tip; Group B (n = 42): intraocular pressure (IOP) based infusion system and 45° Kelman mini-flared ABS phaco tip; Group C (n = 42): IOP-based infusion system and 45° Intrepid balanced phaco tip. The cumulative dissipated energy (CDE), estimated fluid usage (EFU) and total aspiration time (TAT) were compared peroperatively. The endothelial parameters were followed up postoperatively for six months. The three arms were matched for age (p = 0.525), gender (p = 0.96) and grade of cataract (p = 0.177). Group C was associated with significant reductions in CDE (p = 0.001), EFU (p < 0.0005) and TAT (p = 0.001) in comparison to the other groups. All three groups had comparable baseline endothelial cell density (p = 0.876) and central corneal thickness (p = 0.561). On post-operative evaluation, although all groups were comparable until 3 months, by 6 months the percentage losses in endothelial cell density were significantly lower in group C compared to the other groups. Use of an IOP-based phacoemulsification system together with the Intrepid balanced tip reduces CDE, EFU and TAT in comparison to a gravity-fed system with a mini-flared tip or an IOP-based system with a mini-flared tip, while also providing better endothelial preservation, thus favouring the use of an IOP-fed system with a balanced tip. Trial registration No.: CTRI/2016/06/007022.
The Common Gut Microbe Eubacterium hallii also Contributes to Intestinal Propionate Formation
Engels, Christina; Ruscheweyh, Hans-Joachim; Beerenwinkel, Niko; Lacroix, Christophe; Schwab, Clarissa
2016-01-01
Eubacterium hallii is considered an important microbe in regard to intestinal metabolic balance due to its ability to utilize glucose and the fermentation intermediates acetate and lactate, to form butyrate and hydrogen. Recently, we observed that E. hallii is capable of metabolizing glycerol to 3-hydroxypropionaldehyde (3-HPA, reuterin) with reported antimicrobial properties. The key enzyme for glycerol to 3-HPA conversion is the cobalamin-dependent glycerol/diol dehydratase PduCDE, which also utilizes 1,2-propanediol (1,2-PD) to form propionate. Therefore, our primary goal was to investigate glycerol to 3-HPA metabolism and 1,2-PD utilization by E. hallii along with its ability to produce cobalamin. We also investigated the relative abundance of E. hallii in stool of adults using 16S rRNA and pduCDE based gene screening to determine the contribution of E. hallii to intestinal propionate formation. We found that E. hallii utilizes glycerol to produce up to 9 mM 3-HPA but did not further metabolize 3-HPA to 1,3-propanediol. Utilization of 1,2-PD in the presence and absence of glucose led to the formation of propanal, propanol and propionate. E. hallii formed cobalamin and was detected in stool of 74% of adults using the 16S rRNA gene as a marker gene (n = 325). Relative abundance of the E. hallii 16S rRNA gene ranged from 0 to 0.59% with a mean relative abundance of 0.044%. E. hallii PduCDE was detected in 63 to 81% of the metagenomes, depending on which subunit was investigated, besides other taxa such as Ruminococcus obeum, R. gnavus, Flavonifractor plautii, Intestinimonas butyriciproducens, and Veillonella spp. In conclusion, we identified E. hallii as a common gut microbe with the ability to convert glycerol to 3-HPA, a step that requires the production of cobalamin, and to utilize 1,2-PD to form propionate. Our results, along with its ability to use a broad range of substrates, point to E. hallii as a key species within the intestinal trophic chain with the potential to strongly impact the metabolic balance as well as gut microbiota/host homeostasis through the formation of different short-chain fatty acids. PMID:27242734
The Common Gut Microbe Eubacterium hallii also Contributes to Intestinal Propionate Formation.
Engels, Christina; Ruscheweyh, Hans-Joachim; Beerenwinkel, Niko; Lacroix, Christophe; Schwab, Clarissa
2016-01-01
Eubacterium hallii is considered an important microbe in regard to intestinal metabolic balance due to its ability to utilize glucose and the fermentation intermediates acetate and lactate, to form butyrate and hydrogen. Recently, we observed that E. hallii is capable of metabolizing glycerol to 3-hydroxypropionaldehyde (3-HPA, reuterin) with reported antimicrobial properties. The key enzyme for glycerol to 3-HPA conversion is the cobalamin-dependent glycerol/diol dehydratase PduCDE, which also utilizes 1,2-propanediol (1,2-PD) to form propionate. Therefore, our primary goal was to investigate glycerol to 3-HPA metabolism and 1,2-PD utilization by E. hallii along with its ability to produce cobalamin. We also investigated the relative abundance of E. hallii in stool of adults using 16S rRNA and pduCDE based gene screening to determine the contribution of E. hallii to intestinal propionate formation. We found that E. hallii utilizes glycerol to produce up to 9 mM 3-HPA but did not further metabolize 3-HPA to 1,3-propanediol. Utilization of 1,2-PD in the presence and absence of glucose led to the formation of propanal, propanol and propionate. E. hallii formed cobalamin and was detected in stool of 74% of adults using the 16S rRNA gene as a marker gene (n = 325). Relative abundance of the E. hallii 16S rRNA gene ranged from 0 to 0.59% with a mean relative abundance of 0.044%. E. hallii PduCDE was detected in 63 to 81% of the metagenomes, depending on which subunit was investigated, besides other taxa such as Ruminococcus obeum, R. gnavus, Flavonifractor plautii, Intestinimonas butyriciproducens, and Veillonella spp. In conclusion, we identified E. hallii as a common gut microbe with the ability to convert glycerol to 3-HPA, a step that requires the production of cobalamin, and to utilize 1,2-PD to form propionate. Our results, along with its ability to use a broad range of substrates, point to E. hallii as a key species within the intestinal trophic chain with the potential to strongly impact the metabolic balance as well as gut microbiota/host homeostasis through the formation of different short-chain fatty acids.
Recommendations for Locus-Specific Databases and Their Curation
Cotton, R.G.H.; Auerbach, A.D.; Beckmann, J.S.; Blumenfeld, O.O.; Brookes, A.J.; Brown, A.F.; Carrera, P.; Cox, D.W.; Gottlieb, B.; Greenblatt, M.S.; Hilbert, P.; Lehvaslaiho, H.; Liang, P.; Marsh, S.; Nebert, D.W.; Povey, S.; Rossetti, S.; Scriver, C.R.; Summar, M.; Tolan, D.R.; Verma, I.C.; Vihinen, M.; den Dunnen, J.T.
2009-01-01
Expert curation and complete collection of mutations in genes that affect human health is essential for proper genetic healthcare and research. Expert curation is given by the curators of gene-specific mutation databases or locus-specific databases (LSDBs). While there are over 700 such databases, they vary in their content, completeness, time available for curation, and the expertise of the curator. Curation and LSDBs have been discussed, written about, and protocols have been provided for over 10 years, but there have been no formal recommendations for the ideal form of these entities. This work initiates a discussion on this topic to assist future efforts in human genetics. Further discussion is welcome. PMID:18157828
Recommendations for locus-specific databases and their curation.
Cotton, R G H; Auerbach, A D; Beckmann, J S; Blumenfeld, O O; Brookes, A J; Brown, A F; Carrera, P; Cox, D W; Gottlieb, B; Greenblatt, M S; Hilbert, P; Lehvaslaiho, H; Liang, P; Marsh, S; Nebert, D W; Povey, S; Rossetti, S; Scriver, C R; Summar, M; Tolan, D R; Verma, I C; Vihinen, M; den Dunnen, J T
2008-01-01
Expert curation and complete collection of mutations in genes that affect human health is essential for proper genetic healthcare and research. Expert curation is given by the curators of gene-specific mutation databases or locus-specific databases (LSDBs). While there are over 700 such databases, they vary in their content, completeness, time available for curation, and the expertise of the curator. Curation and LSDBs have been discussed, written about, and protocols have been provided for over 10 years, but there have been no formal recommendations for the ideal form of these entities. This work initiates a discussion on this topic to assist future efforts in human genetics. Further discussion is welcome. (c) 2007 Wiley-Liss, Inc.
Data Curation: Improving Environmental Health Data Quality.
Yang, Lin; Li, Jiao; Hou, Li; Qian, Qing
2015-01-01
With the growing recognition of the influence of climate change on human health, scientists have turned their attention to analyzing the relationship between meteorological factors and adverse health effects. However, the paucity of high-quality integrated data is one of the great challenges, especially when scientific studies rely on data-intensive computing. This paper aims to design an appropriate curation process to address this problem. We present a data curation workflow that: (i) follows the guidance of the DCC Curation Lifecycle Model; (ii) combines manual curation with automatic curation; and (iii) solves the environmental health data curation problem. The workflow was applied to a medical knowledge service system and showed that it was capable of improving work efficiency and data quality.
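As a minimal sketch of the manual-plus-automatic split described above (the checks, field names and thresholds below are invented for illustration, not taken from the paper):

```python
# Hypothetical sketch: automatic checks flag suspect environmental-health
# records, which are then routed to manual review, mirroring the paper's
# combination of automatic and manual curation.
def auto_curate(record: dict) -> list:
    """Return a list of quality problems found automatically."""
    problems = []
    t = record.get("temperature_c")
    if t is not None and not -90 <= t <= 60:
        problems.append("temperature out of plausible range")
    if not record.get("station_id"):
        problems.append("missing station identifier")
    return problems

records = [
    {"station_id": "S1", "temperature_c": 21.4},
    {"station_id": "", "temperature_c": 210.0},  # needs human attention
]
for rec in records:
    issues = auto_curate(rec)
    rec["status"] = "needs manual review" if issues else "accepted"
    print(rec["status"], issues)
```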
Pathway Tools version 19.0 update: software for pathway/genome informatics and systems biology
Latendresse, Mario; Paley, Suzanne M.; Krummenacker, Markus; Ong, Quang D.; Billington, Richard; Kothari, Anamika; Weaver, Daniel; Lee, Thomas; Subhraveti, Pallavi; Spaulding, Aaron; Fulcher, Carol; Keseler, Ingrid M.; Caspi, Ron
2016-01-01
Pathway Tools is a bioinformatics software environment with a broad set of capabilities. The software provides genome-informatics tools such as a genome browser, sequence alignments, a genome-variant analyzer and comparative-genomics operations. It offers metabolic-informatics tools, such as metabolic reconstruction, quantitative metabolic modeling, prediction of reaction atom mappings and metabolic route search. Pathway Tools also provides regulatory-informatics tools, such as the ability to represent and visualize a wide range of regulatory interactions. This article outlines the advances in Pathway Tools in the past 5 years. Major additions include components for metabolic modeling, metabolic route search, computation of atom mappings and estimation of compound Gibbs free energies of formation; addition of editors for signaling pathways, for genome sequences and for cellular architecture; storage of gene essentiality data and phenotype data; display of multiple alignments, and of signaling and electron-transport pathways; and development of Python and web-services application programming interfaces. Scientists around the world have created more than 9800 Pathway/Genome Databases by using Pathway Tools, many of which are curated databases for important model organisms. PMID:26454094
DEXTER: Disease-Expression Relation Extraction from Text.
Gupta, Samir; Dingerdissen, Hayley; Ross, Karen E; Hu, Yu; Wu, Cathy H; Mazumder, Raja; Vijay-Shanker, K
2018-01-01
Gene expression levels affect biological processes and play a key role in many diseases. Characterizing expression profiles is useful for clinical research, and for diagnostics and prognostics of diseases. There are currently several high-quality databases that capture gene expression information, obtained mostly from large-scale studies, such as microarray and next-generation sequencing technologies, in the context of disease. The scientific literature is another rich source of information on gene expression-disease relationships that not only have been captured from large-scale studies but have also been observed in thousands of small-scale studies. Expression information obtained from literature through manual curation can extend expression databases. While many of the existing databases include information from literature, they are limited by the time-consuming nature of manual curation and have difficulty keeping up with the explosion of publications in the biomedical field. In this work, we describe an automated text-mining tool, Disease-Expression Relation Extraction from Text (DEXTER), to extract information from literature on gene and microRNA expression in the context of disease. One of the motivations in developing DEXTER was to extend the BioXpress database, a cancer-focused gene expression database that includes data derived from large-scale experiments and manual curation of publications. The literature-based portion of BioXpress lags behind significantly compared to expression information obtained from large-scale studies and can benefit from our text-mined results. We have conducted two different evaluations to measure the accuracy of our text-mining tool and achieved average F-scores of 88.51% and 81.81% for the two evaluations, respectively. Also, to demonstrate the ability to extract rich expression information in different disease-related scenarios, we used DEXTER to extract differential expression information for 2024 genes in lung cancer, 115 glycosyltransferases in 62 cancers and 826 microRNAs in 171 cancers. All extractions using DEXTER are integrated in the literature-based portion of BioXpress. Database URL: http://biotm.cis.udel.edu/DEXTER.
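The F-scores quoted above are presumably the standard balanced F1 (the abstract does not say otherwise); for reference, with TP, FP and FN denoting true positives, false positives and false negatives:

```latex
% Standard precision/recall/F1 definitions, assuming the usual balanced F-score.
P = \frac{TP}{TP + FP}, \qquad
R = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2PR}{P + R}
```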
Global Metabolic Reconstruction and Metabolic Gene Evolution in the Cattle Genome
Kim, Woonsu; Park, Hyesun; Seo, Seongwon
2016-01-01
The sequence of the cattle genome provided a valuable opportunity to systematically link genetic and metabolic traits of cattle. The objectives of this study were 1) to reconstruct genome-scale cattle-specific metabolic pathways based on the most recent and updated cattle genome build and 2) to identify duplicated metabolic genes in the cattle genome for better understanding of metabolic adaptations in cattle. A bioinformatic pipeline for amalgamating an organism's genomic annotations from multiple sources was updated. Using this pipeline, an amalgamated cattle genome database based on UMD_3.1 was created. The amalgamated cattle genome database is composed of a total of 33,292 genes: 19,123 consensus genes between the NCBI and Ensembl databases, 8,410 and 5,493 genes found only in NCBI or Ensembl, respectively, and 266 genes from NCBI scaffolds. A metabolic reconstruction of the cattle genome and a cattle pathway genome database (PGDB) were also developed using Pathway Tools, followed by intensive manual curation. The manual curation filled or revised 68 pathway holes, deleted 36 metabolic pathways, and added 23 metabolic pathways. Consequently, the curated cattle PGDB contains 304 metabolic pathways, 2,460 reactions including 2,371 enzymatic reactions, and 4,012 enzymes. Furthermore, this study identified eight duplicated genes in 12 metabolic pathways in the cattle genome compared to human and mouse. Some of these duplicated genes are related to specific hormone biosynthesis and detoxification. The updated genome-scale metabolic reconstruction is a useful tool for understanding biology and metabolic characteristics in cattle. There have been significant improvements in the quality of cattle genome annotations and the MetaCyc database. The duplicated metabolic genes in the cattle genome compared to human and mouse imply evolutionary changes in the cattle genome and provide useful information for further research on understanding metabolic adaptations of cattle. PMID:26992093
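The amalgamation described above is, at its core, set arithmetic over gene identifiers. The toy IDs below are invented stand-ins for the actual cattle annotations, but the comment checks the paper's own counts:

```python
# Sketch of the set logic behind the amalgamated gene catalog described above;
# the gene IDs are invented, not the actual cattle annotations.
ncbi = {"g1", "g2", "g3", "g4"}
ensembl = {"g3", "g4", "g5"}
scaffold_only = {"g6"}

consensus = ncbi & ensembl      # genes agreed on by both sources
ncbi_only = ncbi - ensembl
ensembl_only = ensembl - ncbi
total = consensus | ncbi_only | ensembl_only | scaffold_only

# With the paper's counts: 19,123 + 8,410 + 5,493 + 266 = 33,292 genes.
print(len(consensus), len(ncbi_only), len(ensembl_only), len(total))
```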
Network models of biology, whether curated or derived from large-scale data analysis, are critical tools in the understanding of cancer mechanisms and in the design and personalization of therapies. The NDEx Project (Network Data Exchange) will create, deploy, and maintain an open-source, web-based software platform and public website to enable scientists, organizations, and software applications to share, store, manipulate, and publish biological networks.
The Causal Mediation Formula - A Guide to the Assessment of Pathways and Mechanisms
2011-10-01
for estimating CDE(z) in observational studies (in the presence of unobserved confounders) can be derived using do-calculus (Pearl, 2009, pp. 85-88...2011; Albert and Nelson, 2011). In the presence of measured and unmeasured confounders, the general conditions under which NDE is estimable from...achieve this independence (see footnote 7), that Z may represent a vector of variables, and that integrals should replace summations when dealing with
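For context, CDE(z) in this excerpt is the controlled direct effect. In Pearl's do-notation (a standard definition supplied here for readability, not quoted from the snippet), the controlled direct effect of a binary treatment X on outcome Y with the mediator Z held fixed at z is:

```latex
% Controlled direct effect with the mediator Z held fixed at z (Pearl's notation).
\mathrm{CDE}(z) \;=\; E\bigl[\,Y \mid do(X = 1,\, Z = z)\,\bigr]
              \;-\; E\bigl[\,Y \mid do(X = 0,\, Z = z)\,\bigr]
```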
Comparison of torsional and longitudinal modes using phacoemulsification parameters.
Rekas, Marek; Montés-Micó, Robert; Krix-Jachym, Karolina; Kluś, Adam; Stankiewicz, Andrzej; Ferrer-Blasco, Teresa
2009-10-01
To compare phacoemulsification parameters of torsional and longitudinal ultrasound modes. Ophthalmology Department, Military Health Service Institute, Warsaw, Poland. This prospective study evaluated eyes 1, 7, and 30 days after phacoemulsification with an Infiniti Vision System using the torsional or longitudinal ultrasound (US) mode. Cataract classification was according to the Lens Opacities Classification System II. Nucleus fragmentation was by the phaco-chop and quick-chop methods. Primary outcome measures were phaco time, mean phaco power, mean torsional amplitude, and aspiration time. Total energy, defined as cumulative dissipated energy (CDE) × aspiration time, and the effective coefficient, defined as aspiration time/phaco time, were also calculated. Four hundred eyes were evaluated. The CDE was statistically significantly lower in the torsional mode for nucleus grades I, II, and III (P<.001) but not for grade IV (P>.05). Aspiration time was statistically significantly shorter in the torsional mode than in the longitudinal mode for nucleus grades III and IV (P<.05). Total energy was significantly lower in the torsional mode for all nucleus densities (P<.05). The effective coefficient was significantly lower in the longitudinal mode except for nucleus grade I (P<.05). Torsional phacoemulsification was more effective than longitudinal phacoemulsification in the amount of applied fluid and the quantity of US energy expended. With the torsional method, it was possible to maintain a constant ratio of amount of fluid flow to quantity of US energy used, regardless of nucleus density.
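Written out as formulas (the symbol names are ours, chosen for readability; the definitions are exactly those stated in the abstract):

```latex
% Derived quantities as defined in the abstract above, where t_asp is
% aspiration time and t_phaco is phaco time.
E_{\mathrm{total}} = \mathrm{CDE} \times t_{\mathrm{asp}}, \qquad
k_{\mathrm{eff}} = \frac{t_{\mathrm{asp}}}{t_{\mathrm{phaco}}}
```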
Funkhouser, Ellen; Agee, Bonita S.; Gordan, Valeria V.; Rindal, D. Brad; Fellows, Jeffrey L.; Qvist, Vibeke; McClelland, Jocelyn; Gilbert, Gregg H.
2013-01-01
Objectives Estimate the proportion of dental practitioners who use online sources of information for practice guidance. Methods From a survey of 657 dental practitioners in The Dental Practice Based Research Network, four indicators of online use for practice guidance were calculated: read journals online, obtained continuing dental education (CDE) through online sources, rated an online source as most influential, and reported frequently using an online source for guidance. Demographics, journals read, and use of various sources of information for practice guidance in terms of frequency and influence were ascertained for each. Results Overall, 21% (n=138) were classified into one of the four indicators of online use: 14% (n=89) rated an online source as most influential and 13% (n=87) reported frequently using an online source for guidance; few practitioners (5%, n=34) read journals online, and fewer (3%, n=17) obtained CDE through online sources. Use of online information sources varied considerably by region and practice characteristics. In general, the 4 indicators represented practitioners with as many differences as similarities to each other and to offline users. Conclusion A relatively small proportion of dental practitioners use information from online sources for practice guidance. Variation exists regarding practitioners' use of online resources and how they rate the value of offline information sources for practice guidance. PMID:22994848
Tang, Guirong; Wang, Ying
2014-01-01
Rhizobia induce nitrogen-fixing nodules on host legumes, which is important in agriculture and ecology. Lipopolysaccharide (LPS) produced by rhizobia is required for infection or bacteroid survival in host cells. Genes required for LPS biosynthesis have been identified in several Rhizobium species. However, the regulation of their expression is not well understood. Here, Sinorhizobium meliloti LsrB, a member of the LysR family of transcriptional regulators, was found to be involved in LPS biosynthesis by positively regulating the expression of the lrp3-lpsCDE operon. An lsrB in-frame deletion mutant displayed growth deficiency and sensitivity to the detergent sodium dodecyl sulfate and to acidic pH compared to the parent strain. This mutant produced slightly less LPS due to lower expression of the lrp3 operon. Analysis of the transcriptional start sites of the lrp3 and lpsCDE genes suggested that they constitute one operon. The expression of lsrB was positively autoregulated. The promoter region of lrp3 was specifically precipitated by anti-LsrB antibodies in vivo. The promoter DNA fragment containing TN11A motifs was bound by the purified LsrB protein in vitro. These new findings suggest that S. meliloti LsrB is associated with LPS biosynthesis, which is required for symbiotic nitrogen fixation on some ecotypes of alfalfa plants. PMID:24951786
Exploring the single-cell RNA-seq analysis landscape with the scRNA-tools database.
Zappia, Luke; Phipson, Belinda; Oshlack, Alicia
2018-06-25
As single-cell RNA-sequencing (scRNA-seq) datasets have become more widespread, the number of tools designed to analyse these data has dramatically increased. Navigating the vast sea of tools now available is becoming increasingly challenging for researchers. In order to better facilitate selection of appropriate analysis tools, we have created the scRNA-tools database (www.scRNA-tools.org) to catalogue and curate analysis tools as they become available. Our database collects a range of information on each scRNA-seq analysis tool and categorises them according to the analysis tasks they perform. Exploration of this database gives insights into the areas of rapid development of analysis methods for scRNA-seq data. We see that many tools perform tasks specific to scRNA-seq analysis, particularly clustering and ordering of cells. We also find that the scRNA-seq community embraces an open-source and open-science approach, with most tools available under open-source licenses and preprints being extensively used as a means to describe methods. The scRNA-tools database provides a valuable resource for researchers embarking on scRNA-seq analysis and records the growth of the field over time.
NASA Technical Reports Server (NTRS)
McCubbin, Francis M.; Zeigler, Ryan A.
2017-01-01
The Astromaterials Acquisition and Curation Office (henceforth referred to herein as NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10F JSC is charged with curation of all extraterrestrial material under NASA control, including future NASA missions. The Directive goes on to define Curation as including documentation, preservation, preparation, and distribution of samples for research, education, and public outreach. Here we briefly describe NASA's astromaterials collections and our ongoing efforts related to enhancing the utility of our current collections as well as our efforts to prepare for future sample return missions. We collectively refer to these efforts as advanced curation.
NASA Technical Reports Server (NTRS)
McCubbin, F. M.; Evans, C. A.; Fries, M. D.; Harrington, A. D.; Regberg, A. B.; Snead, C. J.; Zeigler, R. A.
2017-01-01
The Astromaterials Acquisition and Curation Office (henceforth referred to herein as NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10F, JSC is charged with curation of all extraterrestrial material under NASA control, including future NASA missions. The Directive goes on to define Curation as including documentation, preservation, preparation, and distribution of samples for research, education, and public outreach. Here we briefly describe NASA's astromaterials collections and our ongoing efforts related to enhancing the utility of our current collections as well as our efforts to prepare for future sample return missions. We collectively refer to these efforts as advanced curation.
NASA Technical Reports Server (NTRS)
McCubbin, F. M.; Allton, J. H.; Barnes, J. J.; Boyce, J. W.; Burton, A. S.; Draper, D. S.; Evans, C. A.; Fries, M. D.; Jones, J. H.; Keller, L. P.;
2017-01-01
The Astromaterials Acquisition and Curation Office (henceforth referred to herein as NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. JSC presently curates 9 different astromaterials collections: (1) Apollo samples, (2) LUNA samples, (3) Antarctic meteorites, (4) Cosmic dust particles, (5) Microparticle Impact Collection [formerly called Space Exposed Hardware], (6) Genesis solar wind, (7) Stardust comet Wild 2 particles, (8) Stardust interstellar particles, and (9) Hayabusa asteroid Itokawa particles. In addition, the next missions bringing carbonaceous asteroid samples to JSC are Hayabusa2/asteroid Ryugu and OSIRIS-REx/asteroid Bennu, in 2021 and 2023, respectively. The Hayabusa2 samples are provided as part of an international agreement with JAXA. The NASA Curation Office plans for the requirements of future collections in an "Advanced Curation" program. Advanced Curation is tasked with developing procedures, technology, and data sets necessary for curating new types of collections as envisioned by NASA exploration goals. Here we review the science value and sample curation needs of some potential targets for sample return missions over the next 35 years.
The Importance of Contamination Knowledge in Curation - Insights into Mars Sample Return
NASA Technical Reports Server (NTRS)
Harrington, A. D.; Calaway, M. J.; Regberg, A. B.; Mitchell, J. L.; Fries, M. D.; Zeigler, R. A.; McCubbin, F. M.
2018-01-01
The Astromaterials Acquisition and Curation Office at NASA Johnson Space Center (JSC), in Houston, TX (henceforth Curation Office) manages the curation of extraterrestrial samples returned by NASA missions and shared collections from international partners, preserving their integrity for future scientific study while providing the samples to the international community in a fair and unbiased way. The Curation Office also curates flight and non-flight reference materials and other materials from spacecraft assembly (e.g., lubricants, paints and gases) of sample return missions that would have the potential to cross-contaminate a present or future NASA astromaterials collection.
Exploring Short Linear Motifs Using the ELM Database and Tools.
Gouw, Marc; Sámano-Sánchez, Hugo; Van Roey, Kim; Diella, Francesca; Gibson, Toby J; Dinkel, Holger
2017-06-27
The Eukaryotic Linear Motif (ELM) resource is dedicated to the characterization and prediction of short linear motifs (SLiMs). SLiMs are compact, degenerate peptide segments found in many proteins and essential to almost all cellular processes. However, despite their abundance, SLiMs remain largely uncharacterized. The ELM database is a collection of manually annotated SLiM instances curated from experimental literature. In this article we illustrate how to browse and search the database for curated SLiM data, and cover the different types of data integrated in the resource. We also cover how to use this resource in order to predict SLiMs in known as well as novel proteins, and how to interpret the results generated by the ELM prediction pipeline. The ELM database is a very rich resource, and in the following protocols we give helpful examples to demonstrate how this knowledge can be used to improve your own research. © 2017 by John Wiley & Sons, Inc.
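ELM motif classes are expressed as regular expressions over protein sequence, so a minimal scan looks like the following; the pattern here is an invented illustration, not an actual ELM class definition:

```python
# Minimal sketch of regex-based SLiM scanning in the spirit of ELM; the
# pattern below is illustrative only, not a real ELM entry.
import re

motif = re.compile(r"P.{2}P..[RK]")  # hypothetical proline-rich-like motif
sequence = "MSDNEPTAPPVKPRSQGLLPNSPAPEKPAAPKT"

for m in motif.finditer(sequence):
    # Report 1-based positions, as is conventional for protein coordinates.
    print(f"match '{m.group()}' at positions {m.start() + 1}-{m.end()}")
```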
Venkatesan, Aravind; Kim, Jee-Hyub; Talo, Francesco; Ide-Smith, Michele; Gobeill, Julien; Carter, Jacob; Batista-Navarro, Riza; Ananiadou, Sophia; Ruch, Patrick; McEntyre, Johanna
2016-01-01
The tremendous growth in biological data has resulted in an increase in the number of research papers being published. This presents a great challenge for scientists in searching and assimilating facts described in those papers. Particularly, biological databases depend on curators to add highly precise and useful information that is usually extracted by reading research articles. Therefore, there is an urgent need to find ways to improve linking literature to the underlying data, thereby minimising the effort in browsing content and identifying key biological concepts. As part of the development of Europe PMC, we have developed a new platform, SciLite, which integrates text-mined annotations from different sources and overlays those outputs on research articles. The aim is to aid researchers and curators using Europe PMC in finding key concepts more easily and provide links to related resources or tools, bridging the gap between literature and biological data.
Venkatesan, Aravind; Kim, Jee-Hyub; Talo, Francesco; Ide-Smith, Michele; Gobeill, Julien; Carter, Jacob; Batista-Navarro, Riza; Ananiadou, Sophia; Ruch, Patrick; McEntyre, Johanna
2017-01-01
The tremendous growth in biological data has resulted in an increase in the number of research papers being published. This presents a great challenge for scientists in searching and assimilating facts described in those papers. Particularly, biological databases depend on curators to add highly precise and useful information that is usually extracted by reading research articles. Therefore, there is an urgent need to find ways to improve linking literature to the underlying data, thereby minimising the effort in browsing content and identifying key biological concepts. As part of the development of Europe PMC, we have developed a new platform, SciLite, which integrates text-mined annotations from different sources and overlays those outputs on research articles. The aim is to aid researchers and curators using Europe PMC in finding key concepts more easily and provide links to related resources or tools, bridging the gap between literature and biological data. PMID:28948232
A Comprehensive Curation Shows the Dynamic Evolutionary Patterns of Prokaryotic CRISPRs.
Mai, Guoqin; Ge, Ruiquan; Sun, Guoquan; Meng, Qinghan; Zhou, Fengfeng
2016-01-01
Motivation. Clustered regularly interspaced short palindromic repeat (CRISPR) is a genetic element with active regulation roles for foreign invasive genes in the prokaryotic genomes and has been engineered to work with the CRISPR-associated sequence (Cas) gene Cas9 as one of the modern genome editing technologies. Due to inconsistent definitions, the existing CRISPR detection programs seem to have missed some weak CRISPR signals. Results. This study manually curates all the currently annotated CRISPR elements in the prokaryotic genomes and proposes 95 updates to the annotations. A new definition is proposed to cover all the CRISPRs. The comprehensive comparison of CRISPR numbers on the taxonomic levels of both domains and genus shows high variations for closely related species even in the same genus. The detailed investigation of how CRISPRs are evolutionarily manipulated in the 8 completely sequenced species in the genus Thermoanaerobacter demonstrates that transposons act as a frequent tool for splitting long CRISPRs into shorter ones along a long evolutionary history.
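A CRISPR array is a run of a near-identical repeat separated by spacer-sized gaps, which is the signature detection programs look for. The toy sketch below checks that signature for a known repeat string; real detectors discover repeats de novo and tolerate mismatches, so this is illustrative only:

```python
# Toy sketch of the repeat-spacer signature behind CRISPR detection: find all
# occurrences of a candidate repeat and check that consecutive occurrences are
# separated by spacer-sized gaps. The repeat and genome are made-up examples.
def spacer_lengths(genome: str, repeat: str) -> list:
    starts, i = [], genome.find(repeat)
    while i != -1:
        starts.append(i)
        i = genome.find(repeat, i + 1)
    return [b - (a + len(repeat)) for a, b in zip(starts, starts[1:])]

repeat = "GTTTTAGAGC"
genome = repeat + "A" * 30 + repeat + "C" * 32 + repeat
gaps = spacer_lengths(genome, repeat)
print(gaps, all(20 <= g <= 50 for g in gaps))  # [30, 32] True
```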
Reddy, T.B.K.; Thomas, Alex D.; Stamatis, Dimitri; Bertsch, Jon; Isbandi, Michelle; Jansson, Jakob; Mallajosyula, Jyothi; Pagani, Ioanna; Lobos, Elizabeth A.; Kyrpides, Nikos C.
2015-01-01
The Genomes OnLine Database (GOLD; http://www.genomesonline.org) is a comprehensive online resource to catalog and monitor genetic studies worldwide. GOLD provides up-to-date status on complete and ongoing sequencing projects along with a broad array of curated metadata. Here we report version 5 (v.5) of the database. The newly designed database schema and web user interface supports several new features including the implementation of a four level (meta)genome project classification system and a simplified intuitive web interface to access reports and launch search tools. The database currently hosts information for about 19 200 studies, 56 000 Biosamples, 56 000 sequencing projects and 39 400 analysis projects. More than just a catalog of worldwide genome projects, GOLD is a manually curated, quality-controlled metadata warehouse. The problems encountered in integrating disparate and varying quality data into GOLD are briefly highlighted. GOLD fully supports and follows the Genomic Standards Consortium (GSC) Minimum Information standards. PMID:25348402
Text mining for neuroanatomy using WhiteText with an updated corpus and a new web application
French, Leon; Liu, Po; Marais, Olivia; Koreman, Tianna; Tseng, Lucia; Lai, Artemis; Pavlidis, Paul
2015-01-01
We describe the WhiteText project, and its progress towards automatically extracting statements of neuroanatomical connectivity from text. We review progress to date on the three main steps of the project: recognition of brain region mentions, standardization of brain region mentions to neuroanatomical nomenclature, and connectivity statement extraction. We further describe a new version of our manually curated corpus that adds 2,111 connectivity statements from 1,828 additional abstracts. Cross-validation classification within the new corpus replicates results on our original corpus, recalling 67% of connectivity statements at 51% precision. The resulting merged corpus provides 5,208 connectivity statements that can be used to seed species-specific connectivity matrices and to better train automated techniques. Finally, we present a new web application that allows fast interactive browsing of the over 70,000 sentences indexed by the system, as a tool for accessing the data and assisting in further curation. Software and data are freely available at http://www.chibi.ubc.ca/WhiteText/. PMID:26052282
EcoCyc: a comprehensive database resource for Escherichia coli
Keseler, Ingrid M.; Collado-Vides, Julio; Gama-Castro, Socorro; Ingraham, John; Paley, Suzanne; Paulsen, Ian T.; Peralta-Gil, Martín; Karp, Peter D.
2005-01-01
The EcoCyc database (http://EcoCyc.org/) is a comprehensive source of information on the biology of the prototypical model organism Escherichia coli K12. The mission for EcoCyc is to contain both computable descriptions of, and detailed comments describing, all genes, proteins, pathways and molecular interactions in E.coli. Through ongoing manual curation, extensive information such as summary comments, regulatory information, literature citations and evidence types has been extracted from 8862 publications and added to Version 8.5 of the EcoCyc database. The EcoCyc database can be accessed through a World Wide Web interface, while the downloadable Pathway Tools software and data files enable computational exploration of the data and provide enhanced querying capabilities that web interfaces cannot support. For example, EcoCyc contains carefully curated information that can be used as training sets for bioinformatics prediction of entities such as promoters, operons, genetic networks, transcription factor binding sites, metabolic pathways, functionally related genes, protein complexes and protein–ligand interactions. PMID:15608210
Mathelier, Anthony; Zhao, Xiaobei; Zhang, Allen W.; Parcy, François; Worsley-Hunt, Rebecca; Arenillas, David J.; Buchman, Sorana; Chen, Chih-yu; Chou, Alice; Ienasescu, Hans; Lim, Jonathan; Shyr, Casper; Tan, Ge; Zhou, Michelle; Lenhard, Boris; Sandelin, Albin; Wasserman, Wyeth W.
2014-01-01
JASPAR (http://jaspar.genereg.net) is the largest open-access database of matrix-based nucleotide profiles describing the binding preference of transcription factors from multiple species. The fifth major release greatly expands the heart of JASPAR—the JASPAR CORE subcollection, which contains curated, non-redundant profiles—with 135 new curated profiles (74 in vertebrates, 8 in Drosophila melanogaster, 10 in Caenorhabditis elegans and 43 in Arabidopsis thaliana; a 30% increase in total) and 43 older updated profiles (36 in vertebrates, 3 in D. melanogaster and 4 in A. thaliana; a 9% update in total). The new and updated profiles are mainly derived from published chromatin immunoprecipitation-seq experimental datasets. In addition, the web interface has been enhanced with advanced capabilities in browsing, searching and subsetting. Finally, the new JASPAR release is accompanied by a new BioPython package, a new R tool package and a new R/Bioconductor data package to facilitate access for both manual and automated methods. PMID:24194598
Mathelier, Anthony; Zhao, Xiaobei; Zhang, Allen W; Parcy, François; Worsley-Hunt, Rebecca; Arenillas, David J; Buchman, Sorana; Chen, Chih-yu; Chou, Alice; Ienasescu, Hans; Lim, Jonathan; Shyr, Casper; Tan, Ge; Zhou, Michelle; Lenhard, Boris; Sandelin, Albin; Wasserman, Wyeth W
2014-01-01
JASPAR (http://jaspar.genereg.net) is the largest open-access database of matrix-based nucleotide profiles describing the binding preference of transcription factors from multiple species. The fifth major release greatly expands the heart of JASPAR (the JASPAR CORE subcollection, which contains curated, non-redundant profiles) with 135 new curated profiles (74 in vertebrates, 8 in Drosophila melanogaster, 10 in Caenorhabditis elegans and 43 in Arabidopsis thaliana; a 30% increase in total) and 43 older updated profiles (36 in vertebrates, 3 in D. melanogaster and 4 in A. thaliana; a 9% update in total). The new and updated profiles are mainly derived from published chromatin immunoprecipitation-seq experimental datasets. In addition, the web interface has been enhanced with advanced capabilities in browsing, searching and subsetting. Finally, the new JASPAR release is accompanied by a new BioPython package, a new R tool package and a new R/Bioconductor data package to facilitate access for both manual and automated methods.
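The BioPython package mentioned above suggests a minimal usage sketch. The following assumes Biopython's Bio.motifs module with its JASPAR flat-format parser; the matrix is a made-up toy profile, not a real JASPAR entry:

```python
# Minimal sketch using Biopython's Bio.motifs with the JASPAR flat format.
# The matrix below is a toy profile, not an actual JASPAR matrix.
from io import StringIO
from Bio import motifs

jaspar_text = """>MA0000.0 TOY
A [ 10  2  0 12 ]
C [  1  9  0  0 ]
G [  0  1 12  0 ]
T [  1  0  0  0 ]
"""
m = motifs.read(StringIO(jaspar_text), "jaspar")
pwm = m.counts.normalize(pseudocounts=0.5)  # frequencies with pseudocounts
pssm = pwm.log_odds()                       # log-odds scoring matrix
print(m.name, m.consensus)                  # TOY ACGA
```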
NASA Technical Reports Server (NTRS)
Fletcher, L. A.; Allen, C. C.; Bastien, R.
2008-01-01
NASA's Johnson Space Center (JSC) and the Astromaterials Curator are charged by NPD 7100.10D with the curation of all of NASA's extraterrestrial samples, including those from future missions. This responsibility includes the development of new sample handling and preparation techniques; therefore, the Astromaterials Curator must begin developing procedures to preserve, prepare and ship samples at sub-freezing temperatures in order to enable future sample return missions. Such missions might include the return of future frozen samples from permanently-shadowed lunar craters, the nuclei of comets, the surface of Mars, etc. We are demonstrating the ability to curate samples under cold conditions by designing, installing and testing a cold curation glovebox. This glovebox will allow us to store, document, manipulate and subdivide frozen samples while quantifying and minimizing contamination throughout the curation process.
ERIC Educational Resources Information Center
McCoy, Floyd W.
1977-01-01
Reports on a recent meeting of marine curators in which data dissemination, standardization of marine curating techniques and methods, responsibilities of curators, funding problems, and sampling equipment were the main areas of discussion. A listing of the major deep sea sample collections in the United States is also provided. (CP)
The importance of data curation on QSAR Modeling ...
During the last few decades many QSAR models and tools have been developed at the US EPA, including the widely used EPISuite. During this period the arsenal of computational capabilities supporting cheminformatics has broadened dramatically with multiple software packages. These modern tools allow for more advanced techniques in terms of chemical structure representation and storage, as well as enabling automated data-mining and standardization approaches to examine and fix data quality issues. This presentation will investigate the impact of data curation on the reliability of QSAR models being developed within the EPA's National Center for Computational Toxicology. As part of this work we have attempted to disentangle the influence of the quality versus quantity of data based on the Syracuse PHYSPROP database partly used by EPISuite software. We will review our automated approaches to examining key datasets related to the EPISuite data to validate across chemical structure representations (e.g., mol file and SMILES) and identifiers (chemical names and registry numbers), and approaches to standardize data into QSAR-ready formats prior to modeling procedures. Our efforts to quantify and segregate data into quality categories have allowed us to evaluate the resulting models that can be developed from these data slices and to quantify to what extent efforts developing high-quality datasets have the expected pay-off in terms of predictive performance. The most accur
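As an illustration of the kind of standardization step described above, the following sketch canonicalizes SMILES with RDKit. RDKit stands in here for whatever toolkit the EPA pipeline actually uses, and the function name is ours:

```python
# Illustrative only: the EPA workflow is not specified in detail above; RDKit
# is used here simply as a common open-source cheminformatics toolkit.
from rdkit import Chem

def to_qsar_ready(smiles: str):
    """Return a canonical SMILES, or None if the structure fails to parse."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None                   # flag the record for manual curation
    return Chem.MolToSmiles(mol)      # canonical form enables deduplication

# Two encodings of ethanol collapse to one canonical record; garbage is flagged.
for s in ["OCC", "C(O)C", "not_a_smiles"]:
    print(s, "->", to_qsar_ready(s))
```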
Omics Metadata Management Software (OMMS).
Perez-Arriaga, Martha O; Wilson, Susan; Williams, Kelly P; Schoeniger, Joseph; Waymire, Russel L; Powell, Amy Jo
2015-01-01
Next-generation sequencing projects have underappreciated information management tasks requiring detailed attention to specimen curation, nucleic acid sample preparation and sequence production methods required for downstream data processing, comparison, interpretation, sharing and reuse. The few existing metadata management tools for genome-based studies provide weak curatorial frameworks for experimentalists to store and manage idiosyncratic, project-specific information, typically offering no automation supporting unified naming and numbering conventions for sequencing production environments that routinely deal with hundreds, if not thousands, of samples at a time. Moreover, existing tools are not readily interfaced with bioinformatics executables (e.g., BLAST, Bowtie2, custom pipelines). Our application, the Omics Metadata Management Software (OMMS), answers both needs, empowering experimentalists to generate intuitive, consistent metadata, and perform analyses and information management tasks via an intuitive web-based interface. Several use cases with short-read sequence datasets are provided to validate installation and integrated function, and suggest possible methodological road maps for prospective users. Provided examples highlight possible OMMS workflows for metadata curation, multistep analyses, and results management and downloading. The OMMS can be implemented as a stand-alone package for individual laboratories, or can be configured for web-based deployment supporting geographically-dispersed projects. The OMMS was developed using an open-source software base, is flexible, extensible and easily installed and executed. The OMMS can be obtained at http://omms.sandia.gov.
Gama-Castro, Socorro; Salgado, Heladia; Santos-Zavaleta, Alberto; Ledezma-Tejeida, Daniela; Muñiz-Rascado, Luis; García-Sotelo, Jair Santiago; Alquicira-Hernández, Kevin; Martínez-Flores, Irma; Pannier, Lucia; Castro-Mondragón, Jaime Abraham; Medina-Rivera, Alejandra; Solano-Lira, Hilda; Bonavides-Martínez, César; Pérez-Rueda, Ernesto; Alquicira-Hernández, Shirley; Porrón-Sotelo, Liliana; López-Fuentes, Alejandra; Hernández-Koutoucheva, Anastasia; Del Moral-Chávez, Víctor; Rinaldi, Fabio; Collado-Vides, Julio
2016-01-04
RegulonDB (http://regulondb.ccg.unam.mx) is one of the most useful and important resources on bacterial gene regulation, as it integrates the scattered scientific knowledge of the best-characterized organism, Escherichia coli K-12, in a database that organizes large amounts of data. Its electronic format enables researchers to compare their results with the legacy of previous knowledge and supports bioinformatics tools and model building. Here, we summarize our progress with RegulonDB since our last Nucleic Acids Research publication describing RegulonDB, in 2013. In addition to keeping curation up to date, we report a collection of 232 interactions with small RNAs affecting 192 genes, and the complete repertoire of 189 Elementary Genetic Sensory-Response units (GENSOR units), integrating the signal, regulatory interactions, and metabolic pathways they govern. These additions represent major progress to a higher level of understanding of regulated processes. We have updated the computationally predicted transcription factors, which total 304 (184 with experimental evidence and 120 from computational predictions); we have updated our position-weight matrices and have included tools for clustering them in evolutionary families. We describe our semiautomatic strategy to accelerate curation, including datasets from high-throughput experiments, a novel coexpression distance to search for 'neighborhood' genes to known operons and regulons, and computational developments. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
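As an illustrative aside (not RegulonDB's actual algorithm), one simple way to compare position-weight matrices when clustering them into families is an average per-column Pearson correlation, sketched below with numpy on two invented matrices.

```python
# Sketch: a naive PWM similarity that could feed hierarchical clustering.
import numpy as np

def pwm_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """a, b: 4 x L matrices (rows = A, C, G, T) of equal width."""
    assert a.shape == b.shape and a.shape[0] == 4
    cors = [np.corrcoef(a[:, j], b[:, j])[0, 1] for j in range(a.shape[1])]
    return float(np.mean(cors))   # 1.0 = identical column preferences

pwm1 = np.array([[.7, .1, .1], [.1, .7, .1], [.1, .1, .7], [.1, .1, .1]])
pwm2 = np.array([[.6, .2, .1], [.2, .6, .1], [.1, .1, .7], [.1, .1, .1]])
print(pwm_similarity(pwm1, pwm2))
```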
Omics Metadata Management Software (OMMS)
Perez-Arriaga, Martha O; Wilson, Susan; Williams, Kelly P; Schoeniger, Joseph; Waymire, Russel L; Powell, Amy Jo
2015-01-01
Next-generation sequencing projects have underappreciated information management tasks requiring detailed attention to specimen curation, nucleic acid sample preparation and sequence production methods required for downstream data processing, comparison, interpretation, sharing and reuse. The few existing metadata management tools for genome-based studies provide weak curatorial frameworks for experimentalists to store and manage idiosyncratic, project-specific information, typically offering no automation supporting unified naming and numbering conventions for sequencing production environments that routinely deal with hundreds, if not thousands, of samples at a time. Moreover, existing tools are not readily interfaced with bioinformatics executables (e.g., BLAST, Bowtie2, custom pipelines). Our application, the Omics Metadata Management Software (OMMS), answers both needs, empowering experimentalists to generate intuitive, consistent metadata, and perform analyses and information management tasks via an intuitive web-based interface. Several use cases with short-read sequence datasets are provided to validate installation and integrated function, and suggest possible methodological road maps for prospective users. Provided examples highlight possible OMMS workflows for metadata curation, multistep analyses, and results management and downloading. The OMMS can be implemented as a stand-alone package for individual laboratories, or can be configured for web-based deployment supporting geographically-dispersed projects. The OMMS was developed using an open-source software base, is flexible, extensible and easily installed and executed. Availability: The OMMS can be obtained at http://omms.sandia.gov. PMID:26124554
GapBlaster-A Graphical Gap Filler for Prokaryote Genomes.
de Sá, Pablo H C G; Miranda, Fábio; Veras, Adonney; de Melo, Diego Magalhães; Soares, Siomar; Pinheiro, Kenny; Guimarães, Luis; Azevedo, Vasco; Silva, Artur; Ramos, Rommel T J
2016-01-01
The advent of NGS (Next Generation Sequencing) technologies has resulted in an exponential increase in the number of complete genomes available in biological databases. This advance has allowed the development of several computational tools enabling analyses of large amounts of data in each of the various steps, from processing and quality filtering to gap filling and manual curation. The tools developed for gap closure are very useful as they result in more complete genomes, which will influence downstream analyses of genomic plasticity and comparative genomics. However, the gap filling step remains a challenge for genome assembly, often requiring manual intervention. Here, we present GapBlaster, a graphical application to evaluate and close gaps. GapBlaster was developed in the Java programming language. The software uses contigs obtained in the assembly of the genome to perform an alignment against a draft of the genome/scaffold, using BLAST or MUMmer, to close gaps. Then, all identified alignments of contigs that extend through the gaps in the draft sequence are presented to the user for further evaluation via the GapBlaster graphical interface. GapBlaster compares favorably with other similar software and has the advantage of offering a graphical interface for manual curation of the gaps. The GapBlaster program, the user guide and the test datasets are freely available at https://sourceforge.net/projects/gapblaster2015/. It requires Sun JDK 8 and BLAST or MUMmer.
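A minimal sketch of the core idea, under assumptions: given BLAST tabular output (-outfmt 6) of contigs aligned against the draft, flag alignments that extend across an annotated gap interval. The file name and gap coordinates are hypothetical, and this is not GapBlaster's actual code.

```python
# Sketch: find contig alignments that span gaps in a draft genome.
import csv

gaps = [(120400, 121150), (398220, 398900)]  # (start, end) of N-runs in draft

def spans(aln_start: int, aln_end: int, gap: tuple) -> bool:
    lo, hi = min(aln_start, aln_end), max(aln_start, aln_end)
    return lo < gap[0] and hi > gap[1]       # alignment covers the whole gap

with open("contigs_vs_draft.tsv") as fh:     # BLAST -outfmt 6 output
    for row in csv.reader(fh, delimiter="\t"):
        qseqid, sstart, send = row[0], int(row[8]), int(row[9])
        for gap in gaps:
            if spans(sstart, send, gap):
                print(f"{qseqid} spans gap {gap}; candidate for closure")
```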
Koleti, Amar; Terryn, Raymond; Stathias, Vasileios; Chung, Caty; Cooper, Daniel J; Turner, John P; Vidović, Dušica; Forlin, Michele; Kelley, Tanya T; D’Urso, Alessandro; Allen, Bryce K; Torre, Denis; Jagodnik, Kathleen M; Wang, Lily; Jenkins, Sherry L; Mader, Christopher; Niu, Wen; Fazel, Mehdi; Mahi, Naim; Pilarczyk, Marcin; Clark, Nicholas; Shamsaei, Behrouz; Meller, Jarek; Vasiliauskas, Juozas; Reichard, John; Medvedovic, Mario; Ma’ayan, Avi; Pillai, Ajay
2018-01-01
The Library of Integrated Network-based Cellular Signatures (LINCS) program is a national consortium funded by the NIH to generate a diverse and extensive reference library of cell-based perturbation-response signatures, along with novel data analytics tools to improve our understanding of human diseases at the systems level. In contrast to other large-scale data generation efforts, LINCS Data and Signature Generation Centers (DSGCs) employ a wide range of assay technologies cataloging diverse cellular responses. Integration of, and unified access to, LINCS data has therefore been particularly challenging. The Big Data to Knowledge (BD2K) LINCS Data Coordination and Integration Center (DCIC) has developed data standards specifications, data processing pipelines, and a suite of end-user software tools to integrate and annotate LINCS-generated data, to make LINCS signatures searchable and usable for different types of users. Here, we describe the LINCS Data Portal (LDP) (http://lincsportal.ccs.miami.edu/), a unified web interface to access datasets generated by the LINCS DSGCs, and its underlying database, the LINCS Data Registry (LDR). LINCS data served on the LDP contain extensive metadata and curated annotations. We highlight the features of the LDP user interface, which is designed to enable search, browsing, exploration, download and analysis of LINCS data and related curated content. PMID:29140462
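For illustration, a hedged sketch of programmatic access to the LDP follows; the API root, endpoint path, parameters and response fields below are assumptions for illustration only, not the documented interface.

```python
# Sketch: query the LINCS Data Portal over HTTP (endpoint details assumed).
import requests

BASE = "http://lincsportal.ccs.miami.edu/dcic/api"   # assumed API root
resp = requests.get(f"{BASE}/fetchdata",
                    params={"searchTerm": "kinase", "limit": 10},
                    timeout=30)
resp.raise_for_status()
for ds in resp.json().get("results", {}).get("documents", []):  # assumed shape
    print(ds.get("datasetid"), ds.get("assayname"))
```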
The Role of the Curator in Modern Hospitals: A Transcontinental Perspective.
Moss, Hilary; O'Neill, Desmond
2016-12-13
This paper explores the role of the curator in hospitals. The arts play a significant role in every society; however, recent studies indicate a neglect of the aesthetic environment of healthcare. This international study explores the complex role of the curator in modern hospitals. Semi-structured interviews were conducted with ten arts specialists in hospitals across five countries and three continents for a qualitative, phenomenological study. Five themes arose from the data: (1) patient involvement and influence on the arts programme in hospital; (2) understanding the role of the curator in hospital; (3) influences on arts programming in hospital; (4) types of arts programmes; and (5) limitations to effective curation in hospital. Recommendations arising from the research included recognition of the specialised role of the curator in hospitals, building positive links with clinical staff to effect positive hospital arts programmes, and increasing formal involvement of patients in arts planning in hospital. Hospital curation can be a vibrant arena for arts development, and the role of the hospital curator is a ground-breaking specialist role that can bring benefits to hospital life. The role of curator in hospital deserves to be supported and developed by both the arts and health sectors.
Objective, Quantitative, Data-Driven Assessment of Chemical Probes.
Antolin, Albert A; Tym, Joseph E; Komianou, Angeliki; Collins, Ian; Workman, Paul; Al-Lazikani, Bissan
2018-02-15
Chemical probes are essential tools for understanding biological systems and for target validation, yet selecting probes for biomedical research is rarely based on objective assessment of all potential compounds. Here, we describe the Probe Miner: Chemical Probes Objective Assessment resource, capitalizing on the plethora of public medicinal chemistry data to empower quantitative, objective, data-driven evaluation of chemical probes. We assess >1.8 million compounds for their suitability as chemical tools against 2,220 human targets and dissect the biases and limitations encountered. Probe Miner represents a valuable resource to aid the identification of potential chemical probes, particularly when used alongside expert curation. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
PathScore: a web tool for identifying altered pathways in cancer data.
Gaffney, Stephen G; Townsend, Jeffrey P
2016-12-01
PathScore quantifies the level of enrichment of somatic mutations within curated pathways, applying a novel approach that identifies pathways enriched across patients. The application provides several user-friendly, interactive graphic interfaces for data exploration, including tools for comparing pathway effect sizes, significance, gene-set overlap and enrichment differences between projects. Web application available at pathscore.publichealth.yale.edu. Site implemented in Python and MySQL, with all major browsers supported. Source code available at: github.com/sggaffney/pathscore with a GPLv3 license. stephen.gaffney@yale.edu. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
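PathScore's patient-oriented statistic is more involved than a simple overlap test, but the classic hypergeometric enrichment test sketched below (with invented counts) conveys the basic question: are somatic mutations over-represented in a curated pathway?

```python
# Sketch: hypergeometric test for mutation enrichment in one pathway.
from scipy.stats import hypergeom

N = 20000   # genes in the background
K = 150     # genes in the pathway
n = 500     # mutated genes observed in the cohort
k = 18      # mutated genes that fall in the pathway

# P(X >= k): survival function evaluated at k - 1
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p = {p_value:.3g}")
```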
An Excellent Pilot Model for the Korean Air Force.
1988-12-01
The abstract of this record is OCR-garbled; the recoverable fragments indicate a study of undergraduate pilots in the Korean Air Force built around a regression model estimated by the Ordinary Least Squares (OLS) method and reported as "Results of the Regression Model" (Table 22).
2011-09-20
significantly alter the regulation of the mtrCDE operon and result in increased resistance to antimicrobials. IMPORTANCE Gonorrhea is the second most... causative agent of the sexually transmitted infection gonorrhea, is a Gram-negative diplococcus and is strictly a human pathogen. Clinical isolates of N... produced in response to experimental human gonorrhea. J. Infect. Dis. 172:186-191. 34. Schmidt KA, et al. 2001. Experimental gonococcal urethritis and
1980-12-01
Bubble Memory Module. Rockwell International, Anaheim, CA; prepared under NASA Contract NAS1-14174 and published as report NASA-CR-3380 (December 1980). The remainder of this record is unrecoverable OCR text.
Preparing to Receive and Handle Martian Samples When They Arrive on Earth
NASA Technical Reports Server (NTRS)
McCubbin, Francis M.
2017-01-01
The Astromaterials Acquisition and Curation Office at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10F and its derivative NASA Procedural Requirements (NPR) 'Curation of Extraterrestrial Materials', JSC is charged with 'the curation of all extraterrestrial material under NASA control, including future NASA missions.' The Directive goes on to define curation as including '...documentation, preservation, preparation, and distribution of samples for research, education, and public outreach.'
The curation of genetic variants: difficulties and possible solutions.
Pandey, Kapil Raj; Maden, Narendra; Poudel, Barsha; Pradhananga, Sailendra; Sharma, Amit Kumar
2012-12-01
The curation of genetic variants from biomedical articles is required for various clinical and research purposes. Nowadays, establishment of variant databases that include overall information about variants is becoming quite popular. These databases have immense utility, serving as a user-friendly information storehouse of variants for information seekers. While manual curation is the gold standard method for curation of variants, it can turn out to be time-consuming on a large scale, thus necessitating automation. Curation of variants described in biomedical literature may not be straightforward, mainly due to various nomenclature and expression issues. Although current papers on variants increasingly follow standard nomenclature, so that variants can easily be retrieved, the literature contains a massive store of variants recorded under non-standard names, and the predominantly used online search engines may not be capable of finding them. For effective curation of variants, knowledge about the overall process of curation, the nature and types of difficulties in curation, and ways to tackle those difficulties during the task is crucial. Only by effective curation can variants be correctly interpreted. This paper presents the process and difficulties of curation of genetic variants with possible solutions and suggestions from our work experience in the field, including literature support. The paper also highlights aspects of interpretation of genetic variants and the importance of writing papers on variants following standard and retrievable methods. Copyright © 2012. Published by Elsevier Ltd.
The Curation of Genetic Variants: Difficulties and Possible Solutions
Pandey, Kapil Raj; Maden, Narendra; Poudel, Barsha; Pradhananga, Sailendra; Sharma, Amit Kumar
2012-01-01
The curation of genetic variants from biomedical articles is required for various clinical and research purposes. Nowadays, establishment of variant databases that include overall information about variants is becoming quite popular. These databases have immense utility, serving as a user-friendly information storehouse of variants for information seekers. While manual curation is the gold standard method for curation of variants, it can turn out to be time-consuming on a large scale, thus necessitating automation. Curation of variants described in biomedical literature may not be straightforward, mainly due to various nomenclature and expression issues. Although current papers on variants increasingly follow standard nomenclature, so that variants can easily be retrieved, the literature contains a massive store of variants recorded under non-standard names, and the predominantly used online search engines may not be capable of finding them. For effective curation of variants, knowledge about the overall process of curation, the nature and types of difficulties in curation, and ways to tackle those difficulties during the task is crucial. Only by effective curation can variants be correctly interpreted. This paper presents the process and difficulties of curation of genetic variants with possible solutions and suggestions from our work experience in the field, including literature support. The paper also highlights aspects of interpretation of genetic variants and the importance of writing papers on variants following standard and retrievable methods. PMID:23317699
Braun, Bremen L.; Schott, David A.; Portwood, II, John L.; Schaeffer, Mary L.; Harper, Lisa C.; Gardiner, Jack M.; Cannon, Ethalinda K.; Andorf, Carson M.
2017-01-01
The Maize Genetics and Genomics Database (MaizeGDB) team prepared a survey to identify breeders' needs for visualizing pedigrees, diversity data and haplotypes in order to prioritize tool development and curation efforts at MaizeGDB. The survey was distributed to the maize research community on behalf of the Maize Genetics Executive Committee in Summer 2015. The survey garnered 48 responses from maize researchers, more than half of whom self-identified as breeders. The survey showed that the maize researchers considered their top priorities for visualization as: (i) displaying single nucleotide polymorphisms in a given region for a given list of lines, (ii) showing haplotypes for a given list of lines and (iii) presenting pedigree relationships visually. The survey also asked which populations would be most useful to display. The following two populations topped the list: (i) 3000 publicly available maize inbred lines used in Romay et al. (Comprehensive genotyping of the USA national maize inbred seed bank. Genome Biol, 2013;14:R55) and (ii) maize lines with expired Plant Variety Protection Act (ex-PVP) certificates. Driven by this strong stakeholder input, MaizeGDB staff are currently working in four areas to improve the MaizeGDB interface and web-based tools: (i) presenting immediate progenies of currently available stocks at the MaizeGDB Stock pages, (ii) displaying the most recent ex-PVP lines described in the Germplasm Resources Information Network (GRIN) on the MaizeGDB Stock pages, (iii) developing network views of pedigree relationships and (iv) visualizing genotypes from SNP-based diversity datasets. These survey results can help other biological databases to direct their efforts according to user preferences as they serve similar types of data sets for their communities. Database URL: https://www.maizegdb.org PMID:28605768
Reconstruction of metabolic pathways for the cattle genome
Seo, Seongwon; Lewin, Harris A
2009-01-01
Background Metabolic reconstruction of microbial, plant and animal genomes is a necessary step toward understanding the evolutionary origins of metabolism and species-specific adaptive traits. The aims of this study were to reconstruct conserved metabolic pathways in the cattle genome and to identify metabolic pathways with missing genes and proteins. The MetaCyc database and Pathway Tools software suite were chosen for this work because they are widely used and easy to implement. Results An amalgamated cattle genome database was created using the NCBI and Ensembl cattle genome databases (based on build 3.1) as data sources. Pathway Tools was used to create a cattle-specific pathway genome database, which was followed by comprehensive manual curation for the reconstruction of metabolic pathways. The curated database, CattleCyc 1.0, consists of 217 metabolic pathways. A total of 64 mammalian-specific metabolic pathways were modified from the reference pathways in MetaCyc, and two pathways previously identified but missing from MetaCyc were added. Comparative analysis of metabolic pathways revealed the absence of mammalian genes for 22 metabolic enzymes whose activity was reported in the literature. We also identified six human metabolic protein-coding genes for which the cattle ortholog is missing from the sequence assembly. Conclusion CattleCyc is a powerful tool for understanding the biology of ruminants and other cetartiodactyl species. In addition, the approach used to develop CattleCyc provides a framework for the metabolic reconstruction of other newly sequenced mammalian genomes. It is clear that metabolic pathway analysis strongly reflects the quality of the underlying genome annotations. Thus, having well-annotated genomes from many mammalian species hosted in BioCyc will facilitate the comparative analysis of metabolic pathways among different species and a systems approach to comparative physiology. PMID:19284618
"Small" data in a big data world: archiving terrestrial ecology data at ORNL DAAC
NASA Astrophysics Data System (ADS)
Santhana Vannan, S. K.; Beaty, T.; Boyer, A.; Deb, D.; Hook, L.; Shrestha, R.; Thornton, M.; Virdi, M.; Wei, Y.; Wright, D.
2016-12-01
The Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC, http://daac.ornl.gov), a NASA-funded data center, archives a diverse collection of terrestrial biogeochemistry and ecological dynamics observations and models in support of NASA's Earth Science program. The ORNL DAAC has been addressing the increasing challenge of publishing diverse small data products into an online archive while dealing with the enhanced need for integration and availability of these data to address big science questions. This paper will show examples of "small" diverse data holdings, ranging from Daymet model output to site-based soil moisture observations. We define "small" by the data volume of these products compared to petabyte-scale observations. We will highlight the use of tools and services for visualizing diverse data holdings and subsetting services such as the MODIS land products subsets tool (at ORNL DAAC) that provides big MODIS data in small chunks. Digital Object Identifiers (DOI) and data citations have enhanced the availability of data. The challenge data publishers now face is the increased number of publishable data products and, most importantly, the difficulty of publishing small diverse data products into an online archive. This paper will also present our experiences designing a data curation system for these types of data. The characteristics of these data will be examined and their scientific value will be demonstrated via data citation metrics. We will present case studies of leveraging specialized tools and services that have enabled small data sets to realize their "big" scientific potential. Overall, we will provide a holistic view of the challenges and potential of small diverse terrestrial ecology data sets, from data curation to distribution.
The Astromaterials X-Ray Computed Tomography Laboratory at Johnson Space Center
NASA Technical Reports Server (NTRS)
Zeigler, R. A.; Coleff, D. M.; McCubbin, F. M.
2017-01-01
The Astromaterials Acquisition and Curation Office at NASA's Johnson Space Center (hereafter JSC curation) is the past, present, and future home of all of NASA's astromaterials sample collections. JSC curation currently houses all or part of nine different sample collections: (1) Apollo samples (1969), (2) Luna samples (1972), (3) Antarctic meteorites (1976), (4) Cosmic Dust particles (1981), (5) Microparticle Impact Collection (1985), (6) Genesis solar wind atoms (2004), (7) Stardust comet Wild-2 particles (2006), (8) Stardust interstellar particles (2006), and (9) Hayabusa asteroid Itokawa particles (2010). Each sample collection is housed in a dedicated clean room, or suite of clean rooms, that is tailored to the requirements of that sample collection. Our primary goals are to maintain the long-term integrity of the samples and ensure that the samples are distributed for scientific study in a fair, timely, and responsible manner, thus maximizing the return on each sample. Part of the curation process is planning for the future, and we also perform fundamental research in advanced curation initiatives. Advanced Curation is tasked with developing procedures, technology, and data sets necessary for curating new types of sample collections, or getting new results from existing sample collections [2]. We are (and have been) planning for future curation, including cold curation, extended curation of ices and volatiles, curation of samples with special chemical considerations such as perchlorate-rich samples, and curation of organically- and biologically-sensitive samples. As part of these advanced curation efforts we are augmenting our analytical facilities as well. A micro X-ray computed tomography (micro-XCT) laboratory dedicated to the study of astromaterials will be coming online this spring within the JSC Curation Office, and we plan to add additional facilities that will enable nondestructive (or minimally-destructive) analyses of astromaterials in the near future (micro-XRF, confocal imaging Raman spectroscopy). These facilities will be available to: (1) develop sample handling and storage techniques for future sample return missions; (2) be utilized by preliminary examination teams (PET) for future sample return missions; (3) be used for retroactive PET-style analyses of our existing collections; and (4) be used for periodic assessments of the existing sample collections. Here we describe the new micro-XCT system, as well as some of the ongoing or anticipated applications of the instrument.
HPIDB 2.0: a curated database for host–pathogen interactions
Ammari, Mais G.; Gresham, Cathy R.; McCarthy, Fiona M.; Nanduri, Bindu
2016-01-01
Identification and analysis of host–pathogen interactions (HPI) is essential to study infectious diseases. However, HPI data are sparse in existing molecular interaction databases, especially for agricultural host–pathogen systems. Therefore, resources that annotate, predict and display the HPI that underpin infectious diseases are critical for developing novel intervention strategies. HPIDB 2.0 (http://www.agbase.msstate.edu/hpi/main.html) is a resource for HPI data, and contains 45,238 manually curated entries in the current release. Since the first description of the database in 2010, multiple enhancements to HPIDB data and interface services were made that are described here. Notably, HPIDB 2.0 now provides targeted biocuration of molecular interaction data. As a member of the International Molecular Exchange consortium, annotations provided by HPIDB 2.0 curators meet community standards to provide detailed contextual experimental information and facilitate data sharing. Moreover, HPIDB 2.0 provides access to rapidly available community annotations that capture minimum molecular interaction information to address immediate researcher needs for HPI network analysis. In addition to curation, HPIDB 2.0 integrates HPI from existing external sources and contains tools to infer additional HPI where annotated data are scarce. Compared to other interaction databases, our data collection approach ensures HPIDB 2.0 users access the most comprehensive HPI data from a wide range of pathogens and their hosts (594 pathogen and 70 host species, as of February 2016). Improvements also include enhanced search capacity, addition of Gene Ontology functional information, and implementation of network visualization. The changes made to HPIDB 2.0 content and interface ensure that users, especially agricultural researchers, are able to easily access and analyse high quality, comprehensive HPI data. All HPIDB 2.0 data are updated regularly, are publicly available for direct download, and are disseminated to other molecular interaction resources. Database URL: http://www.agbase.msstate.edu/hpi/main.html PMID:27374121
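Since HPIDB distributes interaction data in PSI-MI TAB (MITAB) flavors, a short sketch of tallying interactions per pathogen taxon from such a file follows; the local file name is an assumption, and treating interactor B as the pathogen is an assumption about the file's column convention.

```python
# Sketch: count host-pathogen interactions per taxon from a MITAB file.
import csv
from collections import Counter

pathogen_counts = Counter()
with open("hpidb2.mitab.txt") as fh:          # hypothetical local download
    for row in csv.reader(fh, delimiter="\t"):
        if not row or row[0].startswith("#"): # skip header/comment lines
            continue
        taxid_a, taxid_b = row[9], row[10]    # MITAB taxid columns (1-based 10, 11)
        pathogen_counts[taxid_b] += 1         # assumes interactor B is the pathogen

for taxid, n in pathogen_counts.most_common(5):
    print(taxid, n)
```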
Pathway Tools version 19.0 update: software for pathway/genome informatics and systems biology.
Karp, Peter D; Latendresse, Mario; Paley, Suzanne M; Krummenacker, Markus; Ong, Quang D; Billington, Richard; Kothari, Anamika; Weaver, Daniel; Lee, Thomas; Subhraveti, Pallavi; Spaulding, Aaron; Fulcher, Carol; Keseler, Ingrid M; Caspi, Ron
2016-09-01
Pathway Tools is a bioinformatics software environment with a broad set of capabilities. The software provides genome-informatics tools such as a genome browser, sequence alignments, a genome-variant analyzer and comparative-genomics operations. It offers metabolic-informatics tools, such as metabolic reconstruction, quantitative metabolic modeling, prediction of reaction atom mappings and metabolic route search. Pathway Tools also provides regulatory-informatics tools, such as the ability to represent and visualize a wide range of regulatory interactions. This article outlines the advances in Pathway Tools in the past 5 years. Major additions include components for metabolic modeling, metabolic route search, computation of atom mappings and estimation of compound Gibbs free energies of formation; addition of editors for signaling pathways, for genome sequences and for cellular architecture; storage of gene essentiality data and phenotype data; display of multiple alignments, and of signaling and electron-transport pathways; and development of Python and web-services application programming interfaces. Scientists around the world have created more than 9800 Pathway/Genome Databases by using Pathway Tools, many of which are curated databases for important model organisms. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
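The Python API mentioned above is exposed through the PythonCyc package, which talks to a locally running Pathway Tools instance; a hedged sketch follows, assuming the EcoCyc database identifier 'ECOLI' is installed and the Pathway Tools Python server mode is enabled.

```python
# Sketch: list pathways from a local Pathway/Genome Database via PythonCyc.
import pythoncyc

ecoli = pythoncyc.select_organism("ECOLI")   # attach to one PGDB
pathways = ecoli.all_pathways()              # pathway frame IDs in that PGDB
print(len(pathways), "pathways in EcoCyc")
```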
The SwissLipids knowledgebase for lipid biology
Liechti, Robin; Hyka-Nouspikel, Nevila; Niknejad, Anne; Gleizes, Anne; Götz, Lou; Kuznetsov, Dmitry; David, Fabrice P.A.; van der Goot, F. Gisou; Riezman, Howard; Bougueleret, Lydie; Xenarios, Ioannis; Bridge, Alan
2015-01-01
Motivation: Lipids are a large and diverse group of biological molecules with roles in membrane formation, energy storage and signaling. Cellular lipidomes may contain tens of thousands of structures, a staggering degree of complexity whose significance is not yet fully understood. High-throughput mass spectrometry-based platforms provide a means to study this complexity, but the interpretation of lipidomic data and its integration with prior knowledge of lipid biology suffers from a lack of appropriate tools to manage the data and extract knowledge from it. Results: To facilitate the description and exploration of lipidomic data and its integration with prior biological knowledge, we have developed a knowledge resource for lipids and their biology—SwissLipids. SwissLipids provides curated knowledge of lipid structures and metabolism which is used to generate an in silico library of feasible lipid structures. These are arranged in a hierarchical classification that links mass spectrometry analytical outputs to all possible lipid structures, metabolic reactions and enzymes. SwissLipids provides a reference namespace for lipidomic data publication, data exploration and hypothesis generation. The current version of SwissLipids includes over 244 000 known and theoretically possible lipid structures, over 800 proteins, and curated links to published knowledge from over 620 peer-reviewed publications. We are continually updating the SwissLipids hierarchy with new lipid categories and new expert curated knowledge. Availability: SwissLipids is freely available at http://www.swisslipids.org/. Contact: alan.bridge@isb-sib.ch Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25943471
Supporting Data Stewardship Throughout the Data Life Cycle in the Solid Earth Sciences
NASA Astrophysics Data System (ADS)
Ferrini, V.; Lehnert, K. A.; Carbotte, S. M.; Hsu, L.
2013-12-01
Stewardship of scientific data is fundamental to enabling new data-driven research, and ensures preservation, accessibility, and quality of the data, yet researchers, especially in disciplines that typically generate and use small, but complex, heterogeneous, and unstructured datasets are challenged to fulfill increasing demands of properly managing their data. The IEDA Data Facility (www.iedadata.org) provides tools and services that support data stewardship throughout the full life cycle of observational data in the solid earth sciences, with a focus on the data management needs of individual researchers. IEDA builds upon and brings together over a decade of development and experiences of its component data systems, the Marine Geoscience Data System (MGDS, www.marine-geo.org) and EarthChem (www.earthchem.org). IEDA services include domain-focused data curation and synthesis, tools for data discovery, access, visualization and analysis, as well as investigator support services that include tools for data contribution, data publication services, and data compliance support. IEDA data synthesis efforts (e.g. PetDB and Global Multi-Resolution Topography (GMRT) Synthesis) focus on data integration and analysis while emphasizing provenance and attribution. IEDA's domain-focused data catalogs (e.g. MGDS and EarthChem Library) provide access to metadata-rich long-tail data complemented by extensive metadata including attribution information and links to related publications. IEDA's visualization and analysis tools (e.g. GeoMapApp) broaden access to earth science data for domain specialist and non-specialists alike, facilitating both interdisciplinary research and education and outreach efforts. As a disciplinary data repository, a key role IEDA plays is to coordinate with its user community and to bridge the requirements and standards for data curation with both the evolving needs of its science community and emerging technologies. Development of IEDA tools and services is based first and foremost on the scientific needs of its user community. As data stewardship becomes a more integral component of the scientific workflow, IEDA investigator support services (e.g. Data Management Plan Tool and Data Compliance Reporting Tool) continue to evolve with the goal of lessening the 'burden' of data management for individual investigators by increasing awareness and facilitating the adoption of data management practices. We will highlight a variety of IEDA system components that support investigators throughout the data life cycle, and will discuss lessons learned and future directions.
PAZAR: a framework for collection and dissemination of cis-regulatory sequence annotation
Portales-Casamar, Elodie; Kirov, Stefan; Lim, Jonathan; Lithwick, Stuart; Swanson, Magdalena I; Ticoll, Amy; Snoddy, Jay; Wasserman, Wyeth W
2007-01-01
PAZAR is an open-access and open-source database of transcription factor and regulatory sequence annotation with associated web interface and programming tools for data submission and extraction. Curated boutique data collections can be maintained and disseminated through the unified schema of the mall-like PAZAR repository. The Pleiades Promoter Project collection of brain-linked regulatory sequences is introduced to demonstrate the depth of annotation possible within PAZAR. PAZAR, located at http://www.pazar.info, is open for business. PMID:17916232
PAZAR: a framework for collection and dissemination of cis-regulatory sequence annotation.
Portales-Casamar, Elodie; Kirov, Stefan; Lim, Jonathan; Lithwick, Stuart; Swanson, Magdalena I; Ticoll, Amy; Snoddy, Jay; Wasserman, Wyeth W
2007-01-01
PAZAR is an open-access and open-source database of transcription factor and regulatory sequence annotation with associated web interface and programming tools for data submission and extraction. Curated boutique data collections can be maintained and disseminated through the unified schema of the mall-like PAZAR repository. The Pleiades Promoter Project collection of brain-linked regulatory sequences is introduced to demonstrate the depth of annotation possible within PAZAR. PAZAR, located at http://www.pazar.info, is open for business.
A Website for Astronomy Education and Outreach
NASA Astrophysics Data System (ADS)
Impey, C.; Danehy, A.
2017-09-01
Teach Astronomy is a free, open access website designed for formal and informal learners of astronomy. The site features: an online textbook complete with quiz questions and a glossary; over ten thousand images; a curated collection of the astronomy articles in Wikipedia; a complete video lecture course; a video Frequently Asked Questions tool; and other materials provided by content partners. Clustering algorithms and an interactive visual interface allow users to browse related content. This article reviews the features of the website and how it can be used.
Haas, Brian J; Salzberg, Steven L; Zhu, Wei; Pertea, Mihaela; Allen, Jonathan E; Orvis, Joshua; White, Owen; Buell, C Robin; Wortman, Jennifer R
2008-01-01
EVidenceModeler (EVM) is presented as an automated eukaryotic gene structure annotation tool that reports eukaryotic gene structures as a weighted consensus of all available evidence. EVM, when combined with the Program to Assemble Spliced Alignments (PASA), yields a comprehensive, configurable annotation system that predicts protein-coding genes and alternatively spliced isoforms. Our experiments on both rice and human genome sequences demonstrate that EVM produces automated gene structure annotation approaching the quality of manual curation. PMID:18190707
The Reactome pathway knowledgebase
Croft, David; Mundo, Antonio Fabregat; Haw, Robin; Milacic, Marija; Weiser, Joel; Wu, Guanming; Caudy, Michael; Garapati, Phani; Gillespie, Marc; Kamdar, Maulik R.; Jassal, Bijay; Jupe, Steven; Matthews, Lisa; May, Bruce; Palatnik, Stanislav; Rothfels, Karen; Shamovsky, Veronica; Song, Heeyeon; Williams, Mark; Birney, Ewan; Hermjakob, Henning; Stein, Lincoln; D'Eustachio, Peter
2014-01-01
Reactome (http://www.reactome.org) is a manually curated open-source open-data resource of human pathways and reactions. The current version 46 describes 7088 human proteins (34% of the predicted human proteome), participating in 6744 reactions based on data extracted from 15 107 research publications with PubMed links. The Reactome Web site and analysis tool set have been completely redesigned to increase speed, flexibility and user friendliness. The data model has been extended to support annotation of disease processes due to infectious agents and to mutation. PMID:24243840
The Reactome pathway knowledgebase.
Croft, David; Mundo, Antonio Fabregat; Haw, Robin; Milacic, Marija; Weiser, Joel; Wu, Guanming; Caudy, Michael; Garapati, Phani; Gillespie, Marc; Kamdar, Maulik R; Jassal, Bijay; Jupe, Steven; Matthews, Lisa; May, Bruce; Palatnik, Stanislav; Rothfels, Karen; Shamovsky, Veronica; Song, Heeyeon; Williams, Mark; Birney, Ewan; Hermjakob, Henning; Stein, Lincoln; D'Eustachio, Peter
2014-01-01
Reactome (http://www.reactome.org) is a manually curated open-source open-data resource of human pathways and reactions. The current version 46 describes 7088 human proteins (34% of the predicted human proteome), participating in 6744 reactions based on data extracted from 15 107 research publications with PubMed links. The Reactome Web site and analysis tool set have been completely redesigned to increase speed, flexibility and user friendliness. The data model has been extended to support annotation of disease processes due to infectious agents and to mutation.
Sinaci, A Anil; Laleci Erturkmen, Gokce B
2013-10-01
In order to enable secondary use of Electronic Health Records (EHRs) by bridging the interoperability gap between clinical care and research domains, in this paper a unified methodology and the supporting framework are introduced which bring together the power of metadata registries (MDR) and semantic web technologies. We introduce a federated semantic metadata registry framework by extending the ISO/IEC 11179 standard, and enable integration of data element registries through Linked Open Data (LOD) principles, where each Common Data Element (CDE) can be uniquely referenced, queried and processed to enable syntactic and semantic interoperability. Each CDE and its components are maintained as LOD resources enabling semantic links with other CDEs, terminology systems and implementation-dependent content models, hence facilitating semantic search, more effective reuse and semantic interoperability across different application domains. There are several important efforts addressing semantic interoperability in the healthcare domain, such as the IHE DEX profile proposal, CDISC SHARE and CDISC2RDF. Our architecture complements these by providing a framework to interlink existing data element registries and repositories, multiplying their potential for semantic interoperability. The open-source implementation of the federated semantic MDR framework presented in this paper is the core of the semantic interoperability layer of the SALUS project, which enables the execution of post-marketing safety analysis studies on top of existing EHR systems. Copyright © 2013 Elsevier Inc. All rights reserved.
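A minimal sketch of the Linked Open Data idea described above, using rdflib with an invented registry namespace and an illustrative thesaurus concept ID: each CDE becomes a dereferenceable resource that can be labeled and linked to terminology systems.

```python
# Sketch: publish an ISO/IEC 11179-style CDE as a Linked Open Data resource.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS, SKOS

MDR = Namespace("http://example.org/mdr/")            # hypothetical registry base
NCIT = Namespace("http://purl.obolibrary.org/obo/NCIT_")

g = Graph()
cde = MDR["CDE_2958442"]                              # hypothetical CDE identifier
g.add((cde, RDF.type, MDR.DataElement))
g.add((cde, RDFS.label, Literal("Patient Smoking Status")))
g.add((cde, SKOS.exactMatch, NCIT["C0000000"]))       # illustrative concept link
print(g.serialize(format="turtle"))
```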
Results of endocapsular phacofracture debulking of hard cataracts.
Davison, James A
2015-01-01
To present a phacoemulsification technique for hard cataracts and compare postoperative results using two different ultrasonic tip motions during quadrant removal. A phacoemulsification technique which employs in situ fracture and endocapsular debulking for hard cataracts is presented. The prospective study included 56 consecutive cases of hard cataract (LOCS III NC [Lens Opacification Classification System III, nuclear color], average 4.26), which were operated on using the Infiniti machine and the Partial Kelman tip. Longitudinal tip movement was used for sculpting in all cases, which were randomized to receive longitudinal or torsional/interjected longitudinal (Intelligent Phaco [IP]) strategies for quadrant removal. Measurements included cumulative dissipated energy (CDE), 3-month postoperative surgically induced astigmatism (SIA), and corneal endothelial cell density (ECD) losses. No complications were recorded in any of the cases. Respective overall and longitudinal vs IP means were as follows: CDE, 51.6±15.6 and 55.7±15.5 vs 48.6±15.1; SIA, 0.36±0.2 D and 0.4±0.2 D vs 0.3±0.2 D; and mean ECD loss, 4.1%±10.8% and 5.9%±13.4% vs 2.7%±7.8%. The differences between longitudinal and IP were not significant for any of the three categories. The endocapsular phacofracture debulking technique is safe and effective for phacoemulsification of hard cataracts using longitudinal or torsional IP strategies for quadrant removal with the Infiniti machine and Partial Kelman tip.
Effects of Pisha sandstone content on solute transport in a sandy soil.
Zhen, Qing; Zheng, Jiyong; He, Honghua; Han, Fengpeng; Zhang, Xingchang
2016-02-01
In sandy soil, water, nutrients and even pollutants easily leach into deeper layers. The objective of this study was to assess the effects of Pisha sandstone on soil solute transport in a sandy soil. The miscible displacement technique was used to obtain breakthrough curves (BTCs) of Br(-) as an inert non-adsorbed tracer and Na(+) as an adsorbed tracer. The incorporation of Pisha sandstone into sandy soil was able to prevent the early breakthrough of both tracers by decreasing the saturated hydraulic conductivity compared to the control sandy soil column, and the impeding effects increased with Pisha sandstone content. The BTCs of Br(-) were accurately described by both the convection-dispersion equation (CDE) and the two-region model (T-R), and the T-R model fitted the experimental data slightly better than the CDE. The two-site nonequilibrium model (T-S) accurately fit the Na(+) transport data. Pisha sandstone impeded the breakthrough of Na(+) not only by decreasing the saturated hydraulic conductivity but also by increasing the adsorption capacity of the soil. The measured CEC values of Pisha sandstone were up to 11 times larger than those of the sandy soil. The retardation factors (R) determined by the T-S model increased with increasing Pisha sandstone content, and the partition coefficient (K(d)) showed a similar trend to R. According to the results of this study, Pisha sandstone can successfully impede solute transport in a sandy soil column. Copyright © 2015 Elsevier Ltd. All rights reserved.
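For reference, the standard textbook forms of the two models named in this abstract follow (the equilibrium CDE and the mobile-immobile two-region model); these are the conventional formulations, not equations reproduced from the paper.

```latex
% One-dimensional equilibrium CDE for a sorbing solute:
\[
  R\,\frac{\partial C}{\partial t}
  = D\,\frac{\partial^2 C}{\partial x^2}
  - v\,\frac{\partial C}{\partial x},
  \qquad R = 1 + \frac{\rho_b K_d}{\theta}
\]
% Two-region (mobile-immobile) model: advection-dispersion in the mobile
% phase with first-order exchange into immobile water:
\[
  \theta_m R_m \frac{\partial C_m}{\partial t}
  + \theta_{im} R_{im} \frac{\partial C_{im}}{\partial t}
  = \theta_m D_m \frac{\partial^2 C_m}{\partial x^2}
  - \theta_m v_m \frac{\partial C_m}{\partial x},
  \qquad
  \theta_{im} R_{im} \frac{\partial C_{im}}{\partial t}
  = \alpha\,(C_m - C_{im})
\]
```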
Annotation of phenotypic diversity: decoupling data curation and ontology curation using Phenex.
Balhoff, James P; Dahdul, Wasila M; Dececchi, T Alexander; Lapp, Hilmar; Mabee, Paula M; Vision, Todd J
2014-01-01
Phenex (http://phenex.phenoscape.org/) is a desktop application for semantically annotating the phenotypic character matrix datasets common in evolutionary biology. Since its initial publication, we have added new features that address several major bottlenecks in the efficiency of the phenotype curation process: allowing curators during the data curation phase to provisionally request terms that are not yet available from a relevant ontology; supporting quality control against annotation guidelines to reduce later manual review and revision; and enabling the sharing of files for collaboration among curators. We decoupled data annotation from ontology development by creating an Ontology Request Broker (ORB) within Phenex. Curators can use the ORB to request a provisional term for use in data annotation; the provisional term can be automatically replaced with a permanent identifier once the term is added to an ontology. We added a set of annotation consistency checks to prevent common curation errors, reducing the need for later correction. We facilitated collaborative editing by improving the reliability of Phenex when used with online folder sharing services, via file change monitoring and continual autosave. With the addition of these new features, and in particular the Ontology Request Broker, Phenex users have been able to focus more effectively on data annotation. Phenoscape curators using Phenex have reported a smoother annotation workflow, with much reduced interruptions from ontology maintenance and file management issues.
A Window to the World: Lessons Learned from NASA's Collaborative Metadata Curation Effort
NASA Astrophysics Data System (ADS)
Bugbee, K.; Dixon, V.; Baynes, K.; Shum, D.; le Roux, J.; Ramachandran, R.
2017-12-01
Well written descriptive metadata adds value to data by making data easier to discover as well as increases the use of data by providing the context or appropriateness of use. While many data centers acknowledge the importance of correct, consistent and complete metadata, allocating resources to curate existing metadata is often difficult. To lower resource costs, many data centers seek guidance on best practices for curating metadata but struggle to identify those recommendations. In order to assist data centers in curating metadata and to also develop best practices for creating and maintaining metadata, NASA has formed a collaborative effort to improve the Earth Observing System Data and Information System (EOSDIS) metadata in the Common Metadata Repository (CMR). This effort has taken significant steps in building consensus around metadata curation best practices. However, this effort has also revealed gaps in EOSDIS enterprise policies and procedures within the core metadata curation task. This presentation will explore the mechanisms used for building consensus on metadata curation, the gaps identified in policies and procedures, the lessons learned from collaborating with both the data centers and metadata curation teams, and the proposed next steps for the future.
Aggregation Tool to Create Curated Data albums to Support Disaster Recovery and Response
NASA Technical Reports Server (NTRS)
Ramachandran, Rahul; Kulkarni, Ajinkya; Maskey, Manil; Bakare, Rohan; Basyal, Sabin; Li, Xiang; Flynn, Shannon
2014-01-01
Despite advances in the science and technology of prediction and simulation of natural hazards, losses incurred due to natural disasters keep growing every year. Natural disasters cause more economic losses than anthropogenic disasters. Economic losses due to natural hazards are estimated at around $6-$10 billion annually for the U.S., and this number keeps increasing every year. This increase has been attributed to population growth and migration to more hazard-prone locations such as coasts. As this trend continues, in concert with shifts in weather patterns caused by climate change, it is anticipated that losses associated with natural disasters will keep growing substantially. One of the challenges disaster response and recovery analysts face is to quickly find, access and utilize a vast variety of relevant geospatial data collected by different federal agencies such as DoD, NASA, NOAA, EPA, and USGS. Some examples of these data sets include high spatio-temporal resolution multi/hyperspectral satellite imagery, model prediction outputs from weather models, latest radar scans, and measurements from sensor networks such as the Integrated Ocean Observing System. More often, analysts are familiar with limited but specific datasets and are unaware of or unfamiliar with a large quantity of other useful resources. Finding airborne or satellite data useful to a natural disaster event often requires a time-consuming search through web pages and data archives. Additional information related to damages, deaths, and injuries requires extensive online searches for news reports and official report summaries. An analyst must also sift through vast amounts of potentially useful digital information captured by the general public, such as geo-tagged photos, videos and real-time damage updates within Twitter feeds. Collecting and aggregating these information fragments can provide useful information in assessing damage in real time and help direct recovery efforts. The search process for the analyst could be made much more efficient and productive if a tool could go beyond a typical search engine and provide not just links to web sites but actual links to specific data relevant to the natural disaster, parse unstructured reports for useful information nuggets, and gather other related reports, summaries, news stories, and images. This presentation will describe a semantic aggregation tool developed to address a similar problem for Earth Science researchers. This tool provides automated curation and creates "Data Albums" to support case studies. The generated "Data Albums" are compiled collections of information related to a specific science topic or event, containing links to relevant data files (granules) from different instruments; tools and services for visualization and analysis; information about the event contained in news reports; and images or videos to supplement research analysis. An ontology-based relevancy-ranking algorithm drives the curation of relevant data sets for a given event. This tool is now being used to generate a catalog of hurricane case studies at the Global Hydrology Resource Center (GHRC), one of NASA's Distributed Active Archive Centers. Another instance of the Data Albums tool is currently being created in collaboration with NASA/MSFC's SPoRT Center, which conducts research on unique NASA products and capabilities that can be transitioned to the operational community to solve forecast problems. This new instance focuses on severe weather to support SPoRT researchers in their model evaluation studies.
Howe, E.A.; de Souza, A.; Lahr, D.L.; Chatwin, S.; Montgomery, P.; Alexander, B.R.; Nguyen, D.-T.; Cruz, Y.; Stonich, D.A.; Walzer, G.; Rose, J.T.; Picard, S.C.; Liu, Z.; Rose, J.N.; Xiang, X.; Asiedu, J.; Durkin, D.; Levine, J.; Yang, J.J.; Schürer, S.C.; Braisted, J.C.; Southall, N.; Southern, M.R.; Chung, T.D.Y.; Brudz, S.; Tanega, C.; Schreiber, S.L.; Bittker, J.A.; Guha, R.; Clemons, P.A.
2015-01-01
BARD, the BioAssay Research Database (https://bard.nih.gov/), is a public database and suite of tools developed to provide access to bioassay data produced by the NIH Molecular Libraries Program (MLP). Data from 631 MLP projects were migrated to a new structured vocabulary designed to capture bioassay data in a formalized manner, with particular emphasis placed on the description of assay protocols. New data can be submitted to BARD with a user-friendly set of tools that assist in the creation of appropriately formatted datasets and assay definitions. Data published through the BARD application program interface (API) can be accessed by researchers using web-based query tools or a desktop client. Third-party developers wishing to create new tools can use the API to produce stand-alone tools or new plug-ins that can be integrated into BARD. The entire BARD suite of tools therefore supports three classes of researcher: those who wish to publish data, those who wish to mine data for testable hypotheses, and those in the developer community who wish to build tools that leverage this carefully curated chemical biology resource. PMID:25477388
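As illustration only, here is a hedged sketch of the style of REST access the BARD API supported; the resource path and payload shape below are assumptions rather than the documented interface.

```python
# Sketch: fetch a few project records from a BARD-style REST API.
import requests

BASE = "https://bard.nih.gov/api/latest"            # assumed API root
resp = requests.get(f"{BASE}/projects", params={"top": 5}, timeout=30)
resp.raise_for_status()
for project in resp.json().get("collection", []):   # assumed payload shape
    print(project.get("name"))
```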
Lynx web services for annotations and systems analysis of multi-gene disorders.
Sulakhe, Dinanath; Taylor, Andrew; Balasubramanian, Sandhya; Feng, Bo; Xie, Bingqing; Börnigen, Daniela; Dave, Utpal J; Foster, Ian T; Gilliam, T Conrad; Maltsev, Natalia
2014-07-01
Lynx is a web-based integrated systems biology platform that supports annotation and analysis of experimental data and generation of weighted hypotheses on molecular mechanisms contributing to human phenotypes and disorders of interest. Lynx has integrated multiple classes of biomedical data (genomic, proteomic, pathways, phenotypic, toxicogenomic, contextual and others) from various public databases as well as manually curated data from our group and collaborators (LynxKB). Lynx provides tools for gene list enrichment analysis using multiple functional annotations and network-based gene prioritization. Lynx provides access to the integrated database and the analytical tools via REST-based Web Services (http://lynx.ci.uchicago.edu/webservices.html). This comprises data retrieval services for specific functional annotations, services to search across the complete LynxKB (powered by Lucene), and services to access the analytical tools built within the Lynx platform. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
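A hedged sketch of calling a Lynx data-retrieval service with a gene list follows; the endpoint path and parameter names are assumptions (the actual services are documented at http://lynx.ci.uchicago.edu/webservices.html).

```python
# Sketch: request functional annotations for a gene list from a Lynx-style
# REST service (endpoint and parameters assumed for illustration).
import requests

genes = ["BRCA1", "TP53", "EGFR"]
resp = requests.get(
    "http://lynx.ci.uchicago.edu/webservices/annotations",  # assumed path
    params={"genes": ",".join(genes), "source": "GO"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```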
Curating NASA's future extraterrestrial sample collections: How do we achieve maximum proficiency?
NASA Astrophysics Data System (ADS)
McCubbin, Francis; Evans, Cynthia; Allton, Judith; Fries, Marc; Righter, Kevin; Zolensky, Michael; Zeigler, Ryan
2016-07-01
Introduction: The Astromaterials Acquisition and Curation Office (henceforth referred to herein as NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10E "Curation of Extraterrestrial Materials", JSC is charged with "The curation of all extraterrestrial material under NASA control, including future NASA missions." The Directive goes on to define Curation as including "…documentation, preservation, preparation, and distribution of samples for research, education, and public outreach." Here we describe some of the ongoing efforts to ensure that the future activities of the NASA Curation Office are working to-wards a state of maximum proficiency. Founding Principle: Curatorial activities began at JSC (Manned Spacecraft Center before 1973) as soon as design and construction planning for the Lunar Receiving Laboratory (LRL) began in 1964 [1], not with the return of the Apollo samples in 1969, nor with the completion of the LRL in 1967. This practice has since proven that curation begins as soon as a sample return mission is conceived, and this founding principle continues to return dividends today [e.g., 2]. The Next Decade: Part of the curation process is planning for the future, and we refer to these planning efforts as "advanced curation" [3]. Advanced Curation is tasked with developing procedures, technology, and data sets necessary for curating new types of collections as envisioned by NASA exploration goals. We are (and have been) planning for future curation, including cold curation, extended curation of ices and volatiles, curation of samples with special chemical considerations such as perchlorate-rich samples, curation of organically- and biologically-sensitive samples, and the use of minimally invasive analytical techniques (e.g., micro-CT, [4]) to characterize samples. These efforts will be useful for Mars Sample Return, Lunar South Pole-Aitken Basin Sample Return, and Comet Surface Sample Return, all of which were named in the NRC Planetary Science Decadal Survey 2013-2022. We are fully committed to pushing the boundaries of curation protocol as humans continue to push the boundaries of space exploration and sample return. However, to improve our ability to curate astromaterials collections of the future and to provide maximum protection to any returned samples, it is imperative that curation involvement commences at the time of mission conception. When curation involvement is at the ground floor of mission planning, it provides a mechanism by which the samples can be protected against project-level decisions that could undermine the scientific value of the re-turned samples. A notable example of one of the bene-fits of early curation involvement in mission planning is in the acquisition of contamination knowledge (CK). CK capture strategies are designed during the initial planning stages of a sample return mission, and they are to be implemented during all phases of the mission from assembly, test, and launch operations (ATLO), through cruise and mission operations, to the point of preliminary examination after Earth return. CK is captured by witness materials and coupons exposed to the contamination environment in the assembly labs and on the space craft during launch, cruise, and operations. 
These materials, along with any procedural blanks and returned flight hardware, represent our CK capture for the returned samples and serve as a baseline against which analytical results can be vetted. Collection of CK is a critical part of being able to conduct and interpret data from organic geochemistry and biochemistry investigations of returned samples. The CK samples from a given mission are treated as part of the sample collection of that mission; hence they are part of the permanent archive that is maintained by the NASA Curation Office. We are in the midst of collecting witness plates and coupons for the OSIRIS-REx mission, and we are in the planning stages for similar activities for the Mars 2020 rover mission, which is going to be the first step in a multi-stage campaign to return martian samples to Earth. Concluding Remarks: The return of every extraterrestrial sample is a scientific investment, and the CK samples and any procedural blanks represent an insurance policy against imperfections in the sample-collection and sample-return process. The curation facilities and personnel are the primary managers of that investment, and the scientific community, at large, is the beneficiary. The NASA Curation Office at JSC has the assigned task of maintaining the long-term integrity of all of NASA's astromaterials and ensuring that the samples are distributed for scientific study in a fair, timely, and responsible manner. It is only through this openness and global collaboration in the study of astromaterials that the return on our scientific investments can be maximized. For information on requesting samples and becoming part of the global study of astromaterials, please visit curator.jsc.nasa.gov. References: [1] Mangus, S. & Larsen, W. (2004) NASA/CR-2004-208938, NASA, Washington, DC. [2] Allen, C. et al., (2011) Chemie Der Erde-Geochemistry, 71, 1-20. [3] McCubbin, F.M. et al., (2016) 47th LPSC #2668. [4] Zeigler, R.A. et al., (2014) 45th LPSC #2665.
Bioluminescence-based system for rapid detection of natural transformation.
Santala, Ville; Karp, Matti; Santala, Suvi
2016-07-01
Horizontal gene transfer plays a significant role in bacterial evolution and has major clinical importance. Thus, it is vital to understand the mechanisms and kinetics of genetic transformations. Natural transformation is the driving mechanism for horizontal gene transfer in diverse genera of bacteria. Our study introduces a simple and rapid method for the investigation of natural transformation. This highly sensitive system allows the detection of a transformation event directly from a bacterial population without any separation step or selection of cells. The system is based on the bacterial luciferase operon from Photorhabdus luminescens. The studied molecular tools consist of the functional modules luxCDE and luxAB, which involve a replicative plasmid and an integrative gene cassette. A well-established host for bacterial genetic investigations, Acinetobacter baylyi ADP1, is used as the model bacterium. We show that natural transformation followed by homologous recombination or plasmid recircularization can be readily detected in both actively growing and static biofilm-like cultures, including very rare transformation events. The system allows the detection of natural transformation within 1 h of introducing sample DNA into the culture. The introduced method provides a convenient means to study the kinetics of natural transformation under variable conditions and perturbations. © FEMS 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
The immune epitope database: a historical retrospective of the first decade.
Salimi, Nima; Fleri, Ward; Peters, Bjoern; Sette, Alessandro
2012-10-01
As the amount of biomedical information available in the literature continues to increase, databases that aggregate this information continue to grow in importance and scope. The population of databases can occur either through fully automated text mining approaches or through manual curation by human subject experts. We here report our experiences in populating the National Institute of Allergy and Infectious Diseases sponsored Immune Epitope Database and Analysis Resource (IEDB, http://iedb.org), which was created in 2003, and as of 2012 captures the epitope information from approximately 99% of all papers published to date that describe immune epitopes (with the exception of cancer and HIV data). This was achieved using a hybrid model based on automated document categorization and extensive human expert involvement. This task required automated scanning of over 22 million PubMed abstracts followed by classification and curation of over 13 000 references, including over 7000 infectious disease-related manuscripts, over 1000 allergy-related manuscripts, roughly 4000 related to autoimmunity, and 1000 transplant/alloantigen-related manuscripts. The IEDB curation involves an unprecedented level of detail, capturing for each paper the actual experiments performed for each different epitope structure. Key to enabling this process was the extensive use of ontologies to ensure rigorous and consistent data representation as well as interoperability with other bioinformatics resources, including the Protein Data Bank, Chemical Entities of Biological Interest, and the NIAID Bioinformatics Resource Centers. A growing fraction of the IEDB data derives from direct submissions by research groups engaged in epitope discovery, and is being facilitated by the implementation of novel data submission tools. The present explosion of information contained in biological databases demands effective query and display capabilities to optimize the user experience. Accordingly, the development of original ways to query the database, on the basis of ontologically driven hierarchical trees, and display of epitope data in aggregate in a biologically intuitive yet rigorous fashion is now at the forefront of the IEDB efforts. We also highlight advances made in the realm of epitope analysis and predictive tools available in the IEDB. © 2012 The Authors. Immunology © 2012 Blackwell Publishing Ltd.
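The document-triage step described in the preceding abstract (automated categorization followed by manual curation) is easy to prototype. The sketch below trains a toy abstract classifier with scikit-learn; the example abstracts, labels, and model choice are illustrative assumptions, not the IEDB's actual pipeline.

    # Toy abstract-triage classifier: route likely epitope papers to curators.
    # Illustrative data and model only; not the IEDB's production system.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    abstracts = [
        "We mapped CD8+ T cell epitopes of the influenza nucleoprotein.",
        "A linear B cell epitope was identified in the spike glycoprotein.",
        "We review hospital management strategies for diabetic patients.",
        "Economic analysis of vaccine distribution logistics.",
    ]
    labels = [1, 1, 0, 0]  # 1 = likely contains epitope data

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(abstracts, labels)

    # Only positively classified abstracts go on to human curation.
    print(clf.predict(["HLA-A2 restricted epitopes were characterized by ELISPOT."]))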
Digital Curation of Earth Science Samples Starts in the Field
NASA Astrophysics Data System (ADS)
Lehnert, K. A.; Hsu, L.; Song, L.; Carter, M. R.
2014-12-01
Collection of physical samples in the field is an essential part of research in the Earth Sciences. Samples provide a basis for progress across many disciplines, from the study of global climate change now and over the Earth's history, to present and past biogeochemical cycles, to magmatic processes and mantle dynamics. The types of samples, methods of collection, and scope and scale of sampling campaigns are highly diverse, ranging from large-scale programs to drill rock and sediment cores on land, in lakes, and in the ocean, to environmental observation networks with continuous sampling, to single investigator or small team expeditions to remote areas around the globe or trips to local outcrops. Cyberinfrastructure for sample-related fieldwork needs to cater to the different needs of these diverse sampling activities, aligning with specific workflows, regional constraints such as connectivity or climate, and processing of samples. In general, digital tools should assist with capture and management of metadata about the sampling process (location, time, method) and the sample itself (type, dimension, context, images, etc.), management of the physical objects (e.g., sample labels with QR codes), and the seamless transfer of sample metadata to data systems and software relevant to the post-sampling data acquisition, data processing, and sample curation. In order to optimize CI capabilities for samples, tools and workflows need to adopt community-based standards and best practices for sample metadata, classification, identification and registration. This presentation will provide an overview and updates of several ongoing efforts that are relevant to the development of standards for digital sample management: the ODM2 project that has generated an information model for spatially-discrete, feature-based earth observations resulting from in-situ sensors and environmental samples, aligned with OGC's Observation & Measurements model (Horsburgh et al, AGU FM 2014); implementation of the IGSN (International Geo Sample Number) as a globally unique sample identifier via a distributed system of allocating agents and a central registry; and the EarthCube Research Coordination Network iSamplES (Internet of Samples in the Earth Sciences) that aims to improve sharing and curation of samples through the use of CI.
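To make the metadata-capture requirement above concrete, here is a minimal sketch of a field-sample record that could be serialized into a QR-code label and handed to downstream systems. The field names and the identifier value are hypothetical placeholders, not the IGSN or ODM2 schemas.

    # Minimal field-sample record; field names and the identifier are
    # hypothetical placeholders, not the IGSN or ODM2 standards.
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone
    import json

    @dataclass
    class FieldSample:
        sample_id: str      # globally unique ID from an allocating agent
        sample_type: str
        latitude: float
        longitude: float
        collected_at: str   # ISO 8601 timestamp
        method: str

    sample = FieldSample(
        sample_id="XYZ0001",  # placeholder, not a registered identifier
        sample_type="rock core",
        latitude=40.0150,
        longitude=-105.2705,
        collected_at=datetime.now(timezone.utc).isoformat(),
        method="hand sampling",
    )

    # Serialize for transfer to data systems or encoding in a QR label.
    print(json.dumps(asdict(sample), indent=2))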
Pressing needs of biomedical text mining in biocuration and beyond: opportunities and challenges
Singhal, Ayush; Leaman, Robert; Catlett, Natalie; Lemberger, Thomas; McEntyre, Johanna; Polson, Shawn; Xenarios, Ioannis; Arighi, Cecilia; Lu, Zhiyong
2016-01-01
Text mining in the biomedical sciences is rapidly transitioning from small-scale evaluation to large-scale application. In this article, we argue that text-mining technologies have become essential tools in real-world biomedical research. We describe four large scale applications of text mining, as showcased during a recent panel discussion at the BioCreative V Challenge Workshop. We draw on these applications as case studies to characterize common requirements for successfully applying text-mining techniques to practical biocuration needs. We note that system ‘accuracy’ remains a challenge and identify several additional common difficulties and potential research directions including (i) the ‘scalability’ issue due to the increasing need of mining information from millions of full-text articles, (ii) the ‘interoperability’ issue of integrating various text-mining systems into existing curation workflows and (iii) the ‘reusability’ issue on the difficulty of applying trained systems to text genres that are not seen previously during development. We then describe related efforts within the text-mining community, with a special focus on the BioCreative series of challenge workshops. We believe that focusing on the near-term challenges identified in this work will amplify the opportunities afforded by the continued adoption of text-mining tools. Finally, in order to sustain the curation ecosystem and have text-mining systems adopted for practical benefits, we call for increased collaboration between text-mining researchers and various stakeholders, including researchers, publishers and biocurators. PMID:28025348
Zhi, Hui; Li, Xin; Wang, Peng; Gao, Yue; Gao, Baoqing; Zhou, Dianshuang; Zhang, Yan; Guo, Maoni; Yue, Ming; Shen, Weitao
2018-01-01
Lnc2Meth (http://www.bio-bigdata.com/Lnc2Meth/), an interactive resource to identify regulatory relationships between human long non-coding RNAs (lncRNAs) and DNA methylation, is not only a manually curated collection and annotation of experimentally supported lncRNA-DNA methylation associations but also a platform that effectively integrates tools for calculating and identifying the differentially methylated lncRNAs and protein-coding genes (PCGs) in diverse human diseases. The resource provides: (i) advanced search possibilities, e.g. retrieval of the database by searching the lncRNA symbol of interest, DNA methylation patterns, regulatory mechanisms and disease types; (ii) abundant computationally calculated DNA methylation array profiles for the lncRNAs and PCGs; (iii) the prognostic values for each hit transcript calculated from the patients' clinical data; (iv) a genome browser to display the DNA methylation landscape of the lncRNA transcripts for a specific type of disease; (v) tools to re-annotate probes to lncRNA loci and identify the differential methylation patterns for lncRNAs and PCGs with user-supplied external datasets; (vi) an R package (LncDM) to perform differentially methylated lncRNA identification and visualization on local computers. Lnc2Meth provides a timely and valuable resource that can be applied to significantly expand our understanding of the regulatory relationships between lncRNAs and DNA methylation in various human diseases. PMID:29069510
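As a rough illustration of the differential-methylation comparison such a resource supports, the sketch below contrasts beta values for one probe between two groups. The data are simulated and the thresholds are conventional rules of thumb; this is not the LncDM package.

    # Simulated differential-methylation test for one lncRNA-associated
    # probe; illustrative only, not the LncDM R package.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    tumor = rng.beta(8, 2, size=20)    # beta values in the disease group
    normal = rng.beta(2, 8, size=20)   # beta values in the control group

    delta_beta = tumor.mean() - normal.mean()
    t_stat, p_value = stats.ttest_ind(tumor, normal)
    print(f"delta-beta={delta_beta:.2f}, p={p_value:.2e}")
    # A common convention calls a probe differentially methylated when
    # |delta-beta| exceeds ~0.2 with a corrected p-value below 0.05.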
Névéol, Aurélie; Pereira, Suzanne; Kerdelhué, Gaetan; Dahamna, Badisse; Joubert, Michel; Darmoni, Stéfan J
2007-01-01
The growing number of resources to be indexed in the catalogue of online health resources in French (CISMeF) calls for curating strategies involving automatic indexing tools while maintaining the catalogue's high indexing quality standards. The objective was to develop a simple automatic tool that retrieves MeSH descriptors from document titles. In parallel to research on advanced indexing methods, a bag-of-words tool was developed for timely inclusion in CISMeF's maintenance system. An evaluation was carried out on a corpus of 99 documents. The indexing sets retrieved by the automatic tool were compared to manual indexing based on the title and on the full text of resources. 58% of the major main headings were retrieved by the bag-of-words algorithm, and the precision on main heading retrieval was 69%. Bag-of-words indexing has effectively been used on selected resources to be included in CISMeF since August 2006. Meanwhile, ongoing work aims at improving the current version of the tool.
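A bag-of-words title indexer of the kind evaluated above can be stated in a few lines. The vocabulary below is a toy English stand-in (CISMeF indexes French resources, and the real MeSH thesaurus is far larger), and the precision/recall arithmetic mirrors the evaluation design rather than reproducing its figures.

    # Toy bag-of-words indexer: retrieve descriptors whose tokens appear in
    # a title. The vocabulary is illustrative, not the MeSH thesaurus.
    mesh_terms = {
        "breast neoplasms": {"breast", "neoplasms", "cancer"},
        "diabetes mellitus": {"diabetes", "mellitus"},
        "hypertension": {"hypertension"},
    }

    def index_title(title):
        words = set(title.lower().replace(",", " ").split())
        return {term for term, bag in mesh_terms.items() if bag & words}

    retrieved = index_title("Managing diabetes and hypertension in primary care")
    reference = {"diabetes mellitus", "hypertension"}  # manual indexing
    tp = len(retrieved & reference)
    print(retrieved, f"precision={tp / len(retrieved):.2f}",
          f"recall={tp / len(reference):.2f}")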
Few-body model approach to the bound states of helium-like exotic three-body systems
NASA Astrophysics Data System (ADS)
Khan, Md A.
2016-10-01
In this paper, calculated energies of the lowest bound S-state of Coulomb three-body systems containing an electron (e⁻), a negatively charged muon (μ⁻), and a nucleus (N^(Z+)) of charge number Z are reported. The 3-body relative wave function in the resulting Schrödinger equation is expanded in the complete set of hyperspherical harmonics (HH). Use of the orthonormality of the HH leads to an infinite set of coupled differential equations (CDE), which are solved numerically to get the energy E.
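The abstract compresses the method; schematically (notation ours, not quoted from the paper), the hyperspherical expansion and the resulting coupled differential equations take the standard form

    \Psi(\rho,\Omega) = \rho^{-5/2} \sum_{K\alpha} U_{K\alpha}(\rho)\, \mathcal{Y}_{K\alpha}(\Omega),

    \left[ \frac{d^{2}}{d\rho^{2}} - \frac{(K+\tfrac{3}{2})(K+\tfrac{5}{2})}{\rho^{2}} + \frac{2\mu E}{\hbar^{2}} \right] U_{K\alpha}(\rho)
      = \frac{2\mu}{\hbar^{2}} \sum_{K'\alpha'} \langle K\alpha \,|\, V \,|\, K'\alpha' \rangle\, U_{K'\alpha'}(\rho),

where ρ is the hyperradius, Ω the hyperangles, and the potential matrix elements couple the channels; truncating the sum at a maximum K yields a finite set of equations amenable to standard numerical solvers.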
Semiconductor/Insulator Films for Corrosion Protection.
1985-10-01
…The starting solution is aspirated through the injector nozzle using a high-pressure nitrogen gas stream. The resultant mist is sprayed into the reactor where… approximately 390-430 °C. The starting solution is aspirated with nitrogen and sprayed directly on the aluminum substrate. The height and angle of the…
1987-08-01
Chemical Stockpile Disposal Program Risk Analysis. GA Technologies Inc., San Diego, CA.
1996-12-09
…some serum copper parameters in trained professional soccer players and control subjects. J. Sports Med. Phys. Fitness. 31:4123-416, 1991. … Texas Woman's University, Department of Nutrition and Food Sciences, College of Health Sciences; thesis by Kimberly K. Edgren, RD, BS, CDE, Denton, Texas, December 1996, submitted in partial fulfillment of the requirements for the degree of Master of Science, with a major in Nutrition.
A Rich-Club Organization in Brain Ischemia Protein Interaction Network
Alawieh, Ali; Sabra, Zahraa; Sabra, Mohammed; Tomlinson, Stephen; Zaraket, Fadi A.
2015-01-01
Ischemic stroke involves multiple pathophysiological mechanisms with complex interactions. Efforts to decipher those mechanisms and understand the evolution of cerebral injury is key for developing successful interventions. In an innovative approach, we use literature mining, natural language processing and systems biology tools to construct, annotate and curate a brain ischemia interactome. The curated interactome includes proteins that are deregulated after cerebral ischemia in human and experimental stroke. Network analysis of the interactome revealed a rich-club organization indicating the presence of a densely interconnected hub structure of prominent contributors to disease pathogenesis. Functional annotation of the interactome uncovered prominent pathways and highlighted the critical role of the complement and coagulation cascade in the initiation and amplification of injury starting by activation of the rich-club. We performed an in-silico screen for putative interventions that have pleiotropic effects on rich-club components and we identified estrogen as a prominent candidate. Our findings show that complex network analysis of disease related interactomes may lead to a better understanding of pathogenic mechanisms and provide cost-effective and mechanism-based discovery of candidate therapeutics. PMID:26310627
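The rich-club statistic itself is straightforward to compute; the sketch below evaluates it on a toy scale-free graph with networkx. The graph is simulated and is not the authors' brain ischemia interactome.

    # Rich-club coefficient on a toy scale-free graph; illustrative only,
    # not the interactome constructed in the paper.
    import networkx as nx

    G = nx.barabasi_albert_graph(200, 3, seed=1)
    # phi(k): density of edges among nodes of degree > k.
    phi = nx.rich_club_coefficient(G, normalized=False)
    for k in sorted(phi)[-5:]:
        print(k, round(phi[k], 3))
    # In practice phi(k) is normalized against degree-preserving random
    # rewirings; a normalized ratio > 1 at high k indicates a rich club.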
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reddy, Tatiparthi B. K.; Thomas, Alex D.; Stamatis, Dimitri
The Genomes OnLine Database (GOLD; http://www.genomesonline.org) is a comprehensive online resource to catalog and monitor genetic studies worldwide. GOLD provides up-to-date status on complete and ongoing sequencing projects along with a broad array of curated metadata. Within this paper, we report version 5 (v.5) of the database. The newly designed database schema and web user interface support several new features including the implementation of a four level (meta)genome project classification system and a simplified intuitive web interface to access reports and launch search tools. The database currently hosts information for about 19 200 studies, 56 000 Biosamples, 56 000 sequencing projects and 39 400 analysis projects. More than just a catalog of worldwide genome projects, GOLD is a manually curated, quality-controlled metadata warehouse. The problems encountered in integrating disparate and varying quality data into GOLD are briefly highlighted. Lastly, GOLD fully supports and follows the Genomic Standards Consortium (GSC) Minimum Information standards.
The impact of comorbidity on cancer and its treatment.
Sarfati, Diana; Koczwara, Bogda; Jackson, Christopher
2016-07-01
Comorbidity is common among cancer patients and, with an aging population, is becoming more so. Comorbidity potentially affects the development, stage at diagnosis, treatment, and outcomes of people with cancer. Despite the intimate relationship between comorbidity and cancer, there is limited consensus on how to record, interpret, or manage comorbidity in the context of cancer, with the result that patients who have comorbidity are less likely to receive treatment with curative intent. Evidence in this area is lacking because of the frequent exclusion of patients with comorbidity from randomized controlled trials. There is evidence that some patients with comorbidity have potentially curative treatment unnecessarily modified, compromising optimal care. Patients with comorbidity have poorer survival, poorer quality of life, and higher health care costs. Strategies to address these issues include improving the evidence base for patients with comorbidity, further development of clinical tools to assist decision making, improved integration and coordination of care, and skill development for clinicians. CA Cancer J Clin 2016;66:337-350. © 2016 American Cancer Society.
The NOAA OneStop System: From Well-Curated Metadata to Data Discovery
NASA Astrophysics Data System (ADS)
McQuinn, E.; Jakositz, A.; Caldwell, A.; Delk, Z.; Neufeld, D.; Shapiro, J.; Partee, R.; Milan, A.
2017-12-01
The NOAA OneStop project is a pathfinder in the realm of enabling users to search for, discover, and access NOAA data. As the project continues along its path to maturity, it has become evident that three areas are of utmost importance to its success in the Earth science community: ensuring quality metadata, building a robust and scalable backend architecture, and keeping the user interface simple to use. Why is this the case? Because, simply put, we are dealing with all aspects of a Big Data problem: large volumes of disparate data needing to be quickly and easily processed and retrieved. In this presentation we discuss the three key aspects of OneStop architecture and how development in each area must be done through cross-team collaboration in order to succeed. We cover aspects of the web-based user interface and OneStop API and how metadata curators and software engineers have worked together to continually iterate on an ever-improving data discovery tool meant to be used by a variety of users searching across a broad assortment of data types.
Diagnostic and therapeutic management of hepatocellular carcinoma
Bellissimo, Francesco; Pinzone, Marilia Rita; Cacopardo, Bruno; Nunnari, Giuseppe
2015-01-01
Hepatocellular carcinoma (HCC) is an increasing health problem, representing the second cause of cancer-related mortality worldwide. The major risk factor for HCC is cirrhosis. In developing countries, viral hepatitis represents the major risk factor, whereas in developed countries, the epidemic of obesity, diabetes and nonalcoholic steatohepatitis contributes to the observed increase in HCC incidence. Cirrhotic patients are recommended to undergo HCC surveillance by abdominal ultrasound at 6-mo intervals. The current diagnostic algorithms for HCC rely on typical radiological hallmarks in dynamic contrast-enhanced imaging, while the use of α-fetoprotein as an independent tool for HCC surveillance is not recommended by current guidelines due to its low sensitivity and specificity. Early diagnosis is crucial for curative treatments. Surgical resection, radiofrequency ablation and liver transplantation are considered the cornerstones of curative therapy, while for patients with more advanced HCC recommended options include sorafenib and trans-arterial chemo-embolization. A multidisciplinary team, consisting of hepatologists, surgeons, radiologists, oncologists and pathologists, is fundamental for correct management. In this paper, we review the diagnostic and therapeutic management of HCC, with a focus on the most recent evidence and recommendations from guidelines. PMID:26576088
Real-time estimation of wildfire perimeters from curated crowdsourcing.
Zhong, Xu; Duckham, Matt; Chong, Derek; Tolhurst, Kevin
2016-04-11
Real-time information about the spatial extents of evolving natural disasters, such as wildfire or flood perimeters, can assist both emergency responders and the general public during an emergency. However, authoritative information sources can suffer from bottlenecks and delays, while user-generated social media data usually lacks the necessary structure and trustworthiness for reliable automated processing. This paper describes and evaluates an automated technique for real-time tracking of wildfire perimeters based on publicly available "curated" crowdsourced data about telephone calls to the emergency services. Our technique is based on established data mining tools, and can be adjusted using a small number of intuitive parameters. Experiments using data from the devastating Black Saturday wildfires (2009) in Victoria, Australia, demonstrate the potential for the technique to detect and track wildfire perimeters automatically, in real time, and with moderate accuracy. Accuracy can be further increased through combination with other authoritative demographic and environmental information, such as population density and dynamic wind fields. These results are also independently validated against data from the more recent 2014 Mickleham-Dalrymple wildfires.
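As a toy illustration of deriving a perimeter from point reports, the sketch below wraps simulated call locations in a convex hull with SciPy. The coordinates are made up, and the paper's established data-mining approach is more sophisticated than a plain hull.

    # Toy perimeter estimate from geocoded emergency-call locations;
    # coordinates are simulated, and a convex hull is a deliberate
    # simplification of the paper's method.
    import numpy as np
    from scipy.spatial import ConvexHull

    rng = np.random.default_rng(42)
    # Call locations clustered around a hypothetical burning area.
    calls = rng.normal(loc=[145.0, -37.5], scale=0.05, size=(40, 2))

    hull = ConvexHull(calls)
    perimeter = calls[hull.vertices]  # ordered boundary points
    # For 2D input, ConvexHull.volume is the enclosed area.
    print(f"{len(perimeter)} perimeter vertices, area={hull.volume:.4f} deg^2")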
Rinchai, Darawan; Boughorbel, Sabri; Presnell, Scott; Quinn, Charlie; Chaussabel, Damien
2016-01-01
Systems-scale profiling approaches have become widely used in translational research settings. The resulting accumulation of large-scale datasets in public repositories represents a critical opportunity to promote insight and foster knowledge discovery. However, resources that can serve as an interface between biomedical researchers and such vast and heterogeneous dataset collections are needed in order to fulfill this potential. Recently, we have developed an interactive data browsing and visualization web application, the Gene Expression Browser (GXB). This tool can be used to overlay deep molecular phenotyping data with rich contextual information about analytes, samples and studies along with ancillary clinical or immunological profiling data. In this note, we describe a curated compendium of 93 public datasets generated in the context of human monocyte immunological studies, representing a total of 4,516 transcriptome profiles. Datasets were uploaded to an instance of GXB along with study description and sample annotations. Study samples were arranged in different groups. Ranked gene lists were generated based on relevant group comparisons. This resource is publicly available online at http://monocyte.gxbsidra.org/dm3/landing.gsp. PMID:27158452
How should the completeness and quality of curated nanomaterial data be evaluated?
NASA Astrophysics Data System (ADS)
Marchese Robinson, Richard L.; Lynch, Iseult; Peijnenburg, Willie; Rumble, John; Klaessig, Fred; Marquardt, Clarissa; Rauscher, Hubert; Puzyn, Tomasz; Purian, Ronit; Åberg, Christoffer; Karcher, Sandra; Vriens, Hanne; Hoet, Peter; Hoover, Mark D.; Hendren, Christine Ogilvie; Harper, Stacey L.
2016-05-01
Nanotechnology is of increasing significance. Curation of nanomaterial data into electronic databases offers opportunities to better understand and predict nanomaterials' behaviour. This supports innovation in, and regulation of, nanotechnology. It is commonly understood that curated data need to be sufficiently complete and of sufficient quality to serve their intended purpose. However, assessing data completeness and quality is non-trivial in general and is arguably especially difficult in the nanoscience area, given its highly multidisciplinary nature. The current article, part of the Nanomaterial Data Curation Initiative series, addresses how to assess the completeness and quality of (curated) nanomaterial data. In order to address this key challenge, a variety of related issues are discussed: the meaning and importance of data completeness and quality, existing approaches to their assessment and the key challenges associated with evaluating the completeness and quality of curated nanomaterial data. Considerations which are specific to the nanoscience area and lessons which can be learned from other relevant scientific disciplines are considered. Hence, the scope of this discussion ranges from physicochemical characterisation requirements for nanomaterials and interference of nanomaterials with nanotoxicology assays to broader issues such as minimum information checklists, toxicology data quality schemes and computational approaches that facilitate evaluation of the completeness and quality of (curated) data. This discussion is informed by a literature review and a survey of key nanomaterial data curation stakeholders. Finally, drawing upon this discussion, recommendations are presented concerning the central question: how should the completeness and quality of curated nanomaterial data be evaluated?
Electronic supplementary information (ESI) available: (1) Detailed information regarding issues raised in the main text; (2) original survey responses. See DOI: 10.1039/c5nr08944a
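One concrete, if simplistic, reading of "completeness" is coverage of a minimum-information checklist. The sketch below scores a record against such a checklist; the field names are invented for illustration and are not an endorsed reporting standard.

    # Illustrative completeness score against a minimum-information
    # checklist; the field names are hypothetical, not a standard.
    REQUIRED_FIELDS = [
        "composition", "size_nm", "size_method", "surface_chemistry",
        "zeta_potential_mV", "assay", "endotoxin_tested",
    ]

    def completeness(record):
        """Return the fraction of checklist fields filled, plus the gaps."""
        filled = [f for f in REQUIRED_FIELDS if record.get(f) not in (None, "")]
        missing = [f for f in REQUIRED_FIELDS if f not in filled]
        return len(filled) / len(REQUIRED_FIELDS), missing

    record = {"composition": "TiO2", "size_nm": 21.0, "assay": "MTT"}
    score, missing = completeness(record)
    print(f"completeness={score:.0%}, missing={missing}")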
Astromaterials Curation Online Resources for Principal Investigators
NASA Technical Reports Server (NTRS)
Todd, Nancy S.; Zeigler, Ryan A.; Mueller, Lina
2017-01-01
The Astromaterials Acquisition and Curation office at NASA Johnson Space Center curates all of NASA's extraterrestrial samples, the most extensive set of astromaterials samples available to the research community worldwide. The office allocates 1500 individual samples to researchers and students each year and has served the planetary research community for 45+ years. The Astromaterials Curation office provides access to its sample data repository and digital resources to support the research needs of sample investigators and to aid in the selection and request of samples for scientific study. These resources can be found on the Astromaterials Acquisition and Curation website at https://curator.jsc.nasa.gov. To better serve our users, we have engaged in several activities to enhance the data available for astromaterials samples, to improve the accessibility and performance of the website, and to address user feedback. We have also put plans in place for continuing improvements to our existing data products.
Gene regulation knowledge commons: community action takes care of DNA binding transcription factors
Tripathi, Sushil; Vercruysse, Steven; Chawla, Konika; Christie, Karen R.; Blake, Judith A.; Huntley, Rachael P.; Orchard, Sandra; Hermjakob, Henning; Thommesen, Liv; Lægreid, Astrid; Kuiper, Martin
2016-01-01
A large gap remains between the amount of knowledge in the scientific literature and the fraction that gets curated into standardized databases, despite many curation initiatives. Yet the availability of comprehensive knowledge in databases is crucial for exploiting existing background knowledge, both for designing follow-up experiments and for interpreting new experimental data. Structured resources also underpin the computational integration and modeling of regulatory pathways, which further aids our understanding of regulatory dynamics. We argue that cooperation between the scientific community and professional curators can increase the capacity for capturing precise knowledge from literature. We demonstrate this with a project in which we mobilize biological domain experts to curate a large number of DNA binding transcription factors, and show that they, although new to the field of curation, can make valuable contributions by harvesting reported knowledge from scientific papers. Such community curation can enhance the scientific epistemic process. Database URL: http://www.tfcheckpoint.org PMID:27270715
Overview of the Cancer Genetics and Pathway Curation tasks of BioNLP Shared Task 2013
2015-01-01
Background: Since their introduction in 2009, the BioNLP Shared Task events have been instrumental in advancing the development of methods and resources for the automatic extraction of information from the biomedical literature. In this paper, we present the Cancer Genetics (CG) and Pathway Curation (PC) tasks, two event extraction tasks introduced in the BioNLP Shared Task 2013. The CG task focuses on cancer, emphasizing the extraction of physiological and pathological processes at various levels of biological organization, and the PC task targets reactions relevant to the development of biomolecular pathway models, defining its extraction targets on the basis of established pathway representations and ontologies. Results: Six groups participated in the CG task and two groups in the PC task, together applying a wide range of extraction approaches including both established state-of-the-art systems and newly introduced extraction methods. The best-performing systems achieved F-scores of 55% on the CG task and 53% on the PC task, demonstrating a level of performance comparable to the best results achieved in similar previously proposed tasks. Conclusions: The results indicate that existing event extraction technology can generalize to meet the novel challenges represented by the CG and PC task settings, suggesting that extraction methods are capable of supporting the construction of knowledge bases on the molecular mechanisms of cancer and the curation of biomolecular pathway models. The CG and PC tasks continue as open challenges for all interested parties, with data, tools and resources available from the shared task homepage. PMID:26202570
2014-01-01
Background: Uncovering the complex transcriptional regulatory networks (TRNs) that underlie plant and animal development remains a challenge. However, a vast amount of data from public microarray experiments is available, which can be subject to inference algorithms in order to recover reliable TRN architectures. Results: In this study we present a simple bioinformatics methodology that uses public, carefully curated microarray data and the mutual information algorithm ARACNe in order to obtain a database of transcriptional interactions. We used data from Arabidopsis thaliana root samples to show that the transcriptional regulatory networks derived from this database successfully recover previously identified root transcriptional modules and to propose new transcription factors for the SHORT ROOT/SCARECROW and PLETHORA pathways. We further show that these networks are a powerful tool to integrate and analyze high-throughput expression data, as exemplified by our analysis of a SHORT ROOT induction time-course microarray dataset, and are a reliable source for the prediction of novel root gene functions. In particular, we used our database to predict novel genes involved in root secondary cell-wall synthesis and identified the MADS-box TF XAL1/AGL12 as an unexpected participant in this process. Conclusions: This study demonstrates that network inference using carefully curated microarray data yields reliable TRN architectures. In contrast to previous efforts to obtain root TRNs, that have focused on particular functional modules or tissues, our root transcriptional interactions provide an overview of the transcriptional pathways present in Arabidopsis thaliana roots and will likely yield a plethora of novel hypotheses to be tested experimentally. PMID:24739361
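At the core of ARACNe-style inference is a pairwise mutual-information (MI) estimate between expression profiles. The sketch below shows that step on simulated data; the real algorithm adds kernel-based MI estimation and data-processing-inequality pruning, which are omitted here.

    # Histogram-based mutual information between simulated expression
    # profiles; a simplified stand-in for the MI step in ARACNe.
    import numpy as np
    from sklearn.metrics import mutual_info_score

    rng = np.random.default_rng(0)
    tf = rng.normal(size=500)                            # regulator profile
    target = 0.8 * tf + rng.normal(scale=0.5, size=500)  # co-regulated gene
    unrelated = rng.normal(size=500)

    def mi(x, y, bins=10):
        # Discretize the profiles, then estimate MI from the 2D histogram.
        c_xy = np.histogram2d(x, y, bins)[0]
        return mutual_info_score(None, None, contingency=c_xy)

    print(f"MI(tf, target)    = {mi(tf, target):.3f}")    # high -> candidate edge
    print(f"MI(tf, unrelated) = {mi(tf, unrelated):.3f}")  # near 0 -> no edge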
Telomere biology: Rationale for diagnostics and therapeutics in cancer
Rousseau, Philippe; Autexier, Chantal
2015-01-01
The key step of carcinogenesis is the malignant transformation which is fundamentally a telomere biology dysfunction permitting cells to bypass the Hayflick limit and to divide indefinitely and uncontrollably. Thus all partners and structures involved in normal and abnormal telomere maintenance, protection and lengthening can be considered as potential anti-cancer therapeutic targets. In this Point of View we discuss, highlight and provide new perspectives from the current knowledge and understanding to position the different aspects of telomere biology and dysfunction as diagnostic, preventive and curative tools in the field of cancer. PMID:26291128
ZINC: A Free Tool to Discover Chemistry for Biology
2012-01-01
ZINC is a free public resource for ligand discovery. The database contains over twenty million commercially available molecules in biologically relevant representations that may be downloaded in popular ready-to-dock formats and subsets. The Web site also enables searches by structure, biological activity, physical property, vendor, catalog number, name, and CAS number. Small custom subsets may be created, edited, shared, docked, downloaded, and conveyed to a vendor for purchase. The database is maintained and curated for a high purchasing success rate and is freely available at zinc.docking.org. PMID:22587354
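Structure-based subsetting of the kind ZINC offers can be mimicked locally with RDKit, as in this sketch over a three-molecule toy library. This is not the ZINC web API; the library and query are illustrative.

    # Substructure search over a toy SMILES library with RDKit;
    # illustrative only, not a query against ZINC itself.
    from rdkit import Chem

    library = {
        "aspirin": "CC(=O)Oc1ccccc1C(=O)O",
        "caffeine": "Cn1cnc2c1c(=O)n(C)c(=O)n2C",
        "ibuprofen": "CC(C)Cc1ccc(cc1)C(C)C(=O)O",
    }
    # Query: carboxylic acid attached directly to an aromatic ring.
    query = Chem.MolFromSmarts("c1ccccc1C(=O)O")

    for name, smiles in library.items():
        mol = Chem.MolFromSmiles(smiles)
        if mol.HasSubstructMatch(query):
            print(name)  # -> aspirin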
Gateways to the FANTOM5 promoter level mammalian expression atlas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lizio, Marina; Harshbarger, Jayson; Shimoji, Hisashi
The FANTOM5 project investigates transcription initiation activities in more than 1,000 human and mouse primary cells, cell lines and tissues using CAGE. Based on manual curation of sample information and development of an ontology for sample classification, we assemble the resulting data into a centralized data resource (http://fantom.gsc.riken.jp/5/). This resource contains web-based tools and data-access points for the research community to search and extract data related to samples, genes, promoter activities, transcription factors and enhancers across the FANTOM5 atlas.
The Astromaterials X-Ray Computed Tomography Laboratory at Johnson Space Center
NASA Astrophysics Data System (ADS)
Zeigler, R. A.; Blumenfeld, E. H.; Srinivasan, P.; McCubbin, F. M.; Evans, C. A.
2018-04-01
The Astromaterials Curation Office has recently begun incorporating X-ray CT data into the curation processes for lunar and meteorite samples, and long-term curation of that data and serving it to the public represent significant technical challenges.
NASA Astrophysics Data System (ADS)
Benedict, K. K.; Lenhardt, W. C.; Young, J. W.; Gordon, L. C.; Hughes, S.; Santhana Vannan, S. K.
2017-12-01
The planning for and development of efficient workflows for the creation, reuse, sharing, documentation, publication, and preservation of research data is a general challenge that research teams of all sizes face. In response to (1) requirements from funding agencies for full-lifecycle data management plans that will result in well-documented, preserved, and shared research data products; (2) increasing requirements from publishers for shared data in conjunction with submitted papers; (3) interdisciplinary research teams' needs for efficient data sharing within projects; and (4) increasing reuse of research data for replication and for new, unanticipated research, policy development, and public use, alternative strategies to traditional data life cycle approaches must be developed and shared that enable research teams to meet these requirements while meeting the core science objectives of their projects within the available resources. In support of achieving these goals, the concept of Agile Data Curation has been developed, in which there have been parallel activities in support of 1) identifying a set of shared values and principles that underlie the objectives of agile data curation, 2) soliciting case studies from the Earth science and other research communities that illustrate aspects of what the contributors consider agile data curation methods and practices, and 3) identifying or developing design patterns that are high-level abstractions from successful data curation practice that are related to common data curation problems for which common solution strategies may be employed. This paper provides a collection of case studies that have been contributed by the Earth science community, and an initial analysis of those case studies to map them to emerging shared data curation problems and their potential solutions. Following the initial analysis of these problems and potential solutions, existing design patterns from software engineering and related disciplines are identified as a starting point for the development of a catalog of data curation design patterns that may be reused in the design and execution of new data curation processes.
Saver, Jeffrey L.; Warach, Steven; Janis, Scott; Odenkirchen, Joanne; Becker, Kyra; Benavente, Oscar; Broderick, Joseph; Dromerick, Alexander W.; Duncan, Pamela; Elkind, Mitchell S. V.; Johnston, Karen; Kidwell, Chelsea S.; Meschia, James F.; Schwamm, Lee
2012-01-01
Background and Purpose: The National Institute of Neurological Disorders and Stroke initiated development of stroke-specific Common Data Elements (CDEs) as part of a project to develop data standards for funded clinical research in all fields of neuroscience. Standardizing data elements in translational, clinical and population research in cerebrovascular disease could decrease study start-up time, facilitate data sharing, and promote well-informed clinical practice guidelines. Methods: A Working Group of diverse experts in cerebrovascular clinical trials, epidemiology, and biostatistics met regularly to develop a set of Stroke CDEs, selecting among, refining, and adding to existing, field-tested data elements from national registries and funded trials and studies. Candidate elements were revised based on comments from leading national and international neurovascular research organizations and the public. Results: The first iteration of the NINDS stroke-specific CDEs comprises 980 data elements spanning nine content areas: 1) Biospecimens and Biomarkers; 2) Hospital Course and Acute Therapies; 3) Imaging; 4) Laboratory Tests and Vital Signs; 5) Long Term Therapies; 6) Medical History and Prior Health Status; 7) Outcomes and Endpoints; 8) Stroke Presentation; 9) Stroke Types and Subtypes. A CDE website provides uniform names and structures for each element, a data dictionary, and template case report forms (CRFs) using the CDEs. Conclusion: Stroke-specific CDEs are now available as standardized, scientifically-vetted variable structures to facilitate data collection and data sharing in cerebrovascular patient-oriented research. The CDEs are an evolving resource that will be iteratively improved based on investigator use, new technologies, and emerging concepts and research findings. PMID:22308239
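A CDE in this sense is essentially a named, typed, range-constrained variable definition. The sketch below shows one machine-readable element and a validation pass; the dictionary layout is a hypothetical simplification, not the NINDS format (the 0-42 bound is the standard NIHSS total-score range).

    # Hypothetical machine-readable data-element definition plus a simple
    # validation pass; not the NINDS data-dictionary schema.
    CDE = {
        "name": "NIHStrokeScaleTotalScore",
        "type": int,
        "range": (0, 42),   # NIHSS total score bounds
        "required": True,
    }

    def validate(value, cde):
        """Check a submitted value against one element definition."""
        if value is None:
            return not cde["required"]
        if not isinstance(value, cde["type"]):
            return False
        lo, hi = cde["range"]
        return lo <= value <= hi

    print(validate(17, CDE))  # True
    print(validate(55, CDE))  # False: out of range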
Bowen, Michael E; Cavanaugh, Kerri L; Wolff, Kathleen; Davis, Dianne; Gregory, Rebecca P; Shintani, Ayumi; Eden, Svetlana; Wallston, Ken; Elasy, Tom; Rothman, Russell L
2016-08-01
To compare the effectiveness of different approaches to nutrition education in diabetes self-management education and support (DSME/S). We randomized 150 adults with type 2 diabetes to either certified diabetes educator (CDE)-delivered DSME/S with carbohydrate gram counting or the modified plate method versus general health education. The primary outcome was change in HbA1C over 6 months. At 6 months, HbA1C improved within the plate method [-0.83% (-1.29, -0.33), P<0.001] and carbohydrate counting [-0.63% (-1.03, -0.18), P=0.04] groups but not the control group [P=0.34]. Change in HbA1C from baseline between the control and intervention groups was not significant at 6 months (carbohydrate counting, P=0.36; modified plate method, P=0.08). In a pre-specified subgroup analysis of patients with a baseline HbA1C 7-10%, change in HbA1C from baseline improved in the carbohydrate counting [-0.86% (-1.47, -0.26), P=0.006] and plate method groups [-0.76% (-1.33, -0.19), P=0.01] compared to controls. CDE-delivered DSME/S focused on carbohydrate counting or the modified plate method improved glycemic control in patients with an initial HbA1C between 7 and 10%. Both carbohydrate counting and the modified plate method improve glycemic control as part of DSME/S. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Comellas, Mariceli; Walker, Elizabeth A; Movsas, Sharon; Merkin, Sheryl; Zonszein, Joel; Strelnick, Hal
2010-01-01
To develop, implement, and evaluate a peer-led diabetes self-management support program in English and Spanish for a diverse, urban, low-income population. The program goals and objectives were to improve diabetes self-management behaviors, especially becoming more physically active, healthier eating, medication adherence, problem solving, and goal setting. After a new training program for peers led by a certified diabetes educator (CDE) was implemented with 5 individuals, this pilot evaluation study was conducted in 2 community settings in the East and South Bronx. Seventeen adults with diabetes participated in the new peer-led 5-session program. Survey data were collected pre- and postintervention on diabetes self-care activities, quality of well-being, and number of steps using a pedometer. This pilot study established the acceptance and feasibility of both the peer training program and the community-based, peer-led program for underserved, minority adults with diabetes. Significant improvements were found in several physical activity and nutrition activities, with a modest improvement in well-being. Feedback from both peer facilitators and participants indicated that a longer program, but with the same educational materials, was desirable. To reduce health disparities in urban communities, it is essential to continue program evaluation of the critical elements of peer-led programs for multiethnic adults with diabetes to promote self-management support in a cost-effective and culturally appropriate manner. Practice implications: a diabetes self-management support program can be successfully implemented in the community by peers, within a model including remote supervision by a CDE.
Kim, Dong-Hyun; Wee, Won-Ryang; Lee, Jin-Hak
2010-01-01
Purpose: To compare the intraoperative performances and postoperative outcomes of cataract surgery performed with longitudinal phacoemulsification and torsional phacoemulsification in moderate and hard cataracts. Methods: Of 85 patients who had senile cataracts, 102 eyes were operated on using the Infiniti Vision System. Preoperative examinations (slit lamp examination, mean central corneal thickness, and central endothelial cell counts) were performed for each patient. Cataracts were subdivided into moderate and hard, according to the Lens Opacities Classification System III grading of nucleus opalescence (NO). Eyes in each cataract group were randomly assigned to conventional and torsional phaco-mode. Intraoperative parameters, including ultrasound time (UST), cumulative dissipated energy (CDE), and the balanced salt solution plus (BSSP) volume utilized were evaluated. Best corrected visual acuity (BCVA) was checked on postoperative day 30; mean central corneal thickness and central endothelial cell counts were investigated on postoperative days 7 and 30. Results: Preoperative BCVA and mean grading of NO showed no difference between the groups. Preoperative endothelial cell count and central corneal thickness also showed no significant difference between the groups. In the moderate cataract group, the CDE, UST, and BSSP volume were significantly lower in the torsional mode than the longitudinal mode, but they did not show any difference in the hard cataract group. The torsional group showed less endothelial cell loss and central corneal thickening at postoperative day seven in moderate cataracts but showed no significant differences, as compared with the longitudinal group, by postoperative day 30. Conclusions: Torsional phacoemulsification showed superior efficiency for moderate cataracts, as compared with longitudinal phacoemulsification, in the early postoperative stage. PMID:21165231
Reuschel, Anna; Bogatsch, Holger; Barth, Thomas; Wiedemann, Renate
2010-11-01
To compare the intraoperative and postoperative outcomes of conventional longitudinal phacoemulsification and torsional phacoemulsification. Department of Ophthalmology, University of Leipzig, Germany. Randomized single-center clinical trial. Eyes with senile cataract were randomized to have phacoemulsification using the Infiniti Vision System in either the torsional mode (OZil) or the conventional longitudinal mode. Primary outcomes were corrected distance visual acuity (CDVA) and central endothelial cell density (ECD) loss, calculated according to the International Conference on Harmonisation E9 guidelines: missing values were substituted by the group median (primary analysis), and the loss was then recalculated using actual data only (secondary analysis). Secondary outcomes were ultrasound (US) time, cumulative dissipated energy (CDE), and percentage total equivalent power in position 3. Postoperative follow-up was at 3 months. The mean preoperative CDVA was 0.41 logMAR in the torsional group and 0.38 logMAR in the longitudinal group, improving to 0.07 logMAR postoperatively in both groups. The mean ECD loss was 7.2% ± 4.6% in the torsional group (72 patients) and 7.1% ± 4.4% in the longitudinal group (76 patients), with no statistically significant difference in the primary analysis (P = .342) or secondary analysis (P = .906). The mean US time, CDE, and percentage total equivalent power in position 3 were statistically significantly lower in the torsional group (98 patients) than in the longitudinal group (94 patients) (P < .001). The torsional mode was as safe as the longitudinal mode in phacoemulsification for age-related cataract. Copyright © 2010 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
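As a quick illustration of the two analysis strategies described above, here is a minimal sketch, using hypothetical cell-density values rather than the trial's data, of median imputation of missing follow-up measurements versus an actual-data-only calculation of percent ECD loss:

```python
import numpy as np

def ecd_loss(pre, post, impute_median=True):
    """Percent endothelial cell loss per eye (cells/mm^2).
    impute_median=True: missing postoperative values are replaced by
    the group median (primary analysis); False: eyes with missing
    follow-up are dropped and only actual data are used (secondary)."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    if impute_median:
        post = np.where(np.isnan(post), np.nanmedian(post), post)
    else:
        keep = ~np.isnan(post)
        pre, post = pre[keep], post[keep]
    loss = 100.0 * (pre - post) / pre
    return loss.mean(), loss.std(ddof=1)

# Hypothetical group with one missing 3-month measurement
pre = [2600, 2450, 2700, 2550]
post = [2400, 2300, np.nan, 2380]
print("primary:   %.1f%% +/- %.1f%%" % ecd_loss(pre, post, True))
print("secondary: %.1f%% +/- %.1f%%" % ecd_loss(pre, post, False))
```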
Microcoaxial cataract surgery outcomes: comparison of 1.8 mm system and 2.2 mm system.
Lee, Kyung-Min; Kwon, Hyung-Goo; Joo, Choun-Ki
2009-05-01
To compare the clinical outcomes of a 1.8 mm and a 2.2 mm microcoaxial cataract surgery system. Department of Ophthalmology and Visual Science, Kangnam St. Mary's Hospital, College of Medicine, Catholic University of Korea, Seoul, Korea. In a prospective study, eyes were randomly selected to have phacoemulsification using a Stellaris system or an Intrepid Infiniti system; the initial incision size was 1.8 mm and 2.2 mm, respectively. Measured intraoperative parameters included phacoemulsification time, mean cumulative dissipated ultrasound energy (CDE), change in incision size at each step of surgery, and total volume of balanced salt solution (BSS) used. Best corrected visual acuity (BCVA), corneal astigmatism, corneal thickness, and endothelial cell count were evaluated preoperatively and postoperatively. The study evaluated 86 eyes of 78 patients (43 eyes in each group). There were no significant differences in postoperative BCVA, surgically induced astigmatism, or amount of BSS used between the 2 systems (P > .05). However, for high-density cataracts, the 1.8 mm group had a greater change between the initial incision size and the incision size after phacoemulsification (P = .019, nuclear opalescence [NO] grade NO3; P = .001, NO4), a longer phacoemulsification time (P = .013, NO3), a greater mean CDE (P = .005, NO3; P = .001, NO4), and greater corneal endothelial cell loss (P = .003, NO4). Both systems were safe and effective in microcoaxial phacoemulsification. The 1.8 mm system performed better with cortical-type cataracts, and the 2.2 mm system performed better with high-density nuclear-type cataracts.
Results of endocapsular phacofracture debulking of hard cataracts
Davison, James A
2015-01-01
Purpose/aim of the study To present a phacoemulsification technique for hard cataracts and to compare postoperative results using two different ultrasonic tip motions during quadrant removal. Materials and methods A phacoemulsification technique employing in situ fracture and endocapsular debulking of hard cataracts is presented. The prospective study included 56 consecutive cases of hard cataract (LOCS III NC [Lens Opacities Classification System III, nuclear color], average 4.26), operated on using the Infiniti machine and the Partial Kelman tip. Longitudinal tip movement was used for sculpting in all cases, which were randomized to receive longitudinal or torsional/interjected-longitudinal (Intelligent Phaco [IP]) strategies for quadrant removal. Measurements included cumulative dissipated energy (CDE), surgically induced astigmatism (SIA) at 3 months postoperatively, and corneal endothelial cell density (ECD) loss. Results No complications were recorded in any of the cases. Respective overall and longitudinal vs IP means were as follows: CDE, 51.6±15.6 and 55.7±15.5 vs 48.6±15.1; SIA, 0.36±0.2 D and 0.4±0.2 D vs 0.3±0.2 D; and mean ECD loss, 4.1%±10.8% and 5.9%±13.4% vs 2.7%±7.8%. The differences between longitudinal and IP were not significant in any of the three categories. Conclusion The endocapsular phacofracture debulking technique is safe and effective for phacoemulsification of hard cataracts using either longitudinal or torsional IP strategies for quadrant removal with the Infiniti machine and Partial Kelman tip. PMID:26203213
Ozil IP torsional mode versus combined torsional/longitudinal microcoaxial phacoemulsification.
Helvacioglu, Firat; Tunc, Zeki; Yeter, Celal; Oguzhan, Hasan; Sencan, Sadik
2012-01-01
To compare the safety and efficacy of microcoaxial phacoemulsification surgeries performed with the Ozil Intelligent Phaco (IP) torsional mode and the combined torsional/longitudinal ultrasound (US) mode using the Infiniti Vision System (Alcon Laboratories). In this prospective randomized comparative study, 60 eyes were assigned to 2.2-mm microcoaxial phacoemulsification using the Ozil IP torsional mode (group 1) or the combined torsional/longitudinal US mode (group 2). The primary outcome measures were US time (UST), cumulative dissipated energy (CDE), longitudinal and torsional ultrasound amplitudes, mean operation time, mean volume of balanced salt solution (BSS) used, and surgical complications. Both groups included 30 eyes. Mean UST, CDE, and longitudinal and torsional ultrasound amplitudes in group 1 were 1 minute 15 seconds ± 34.33 seconds, 8.74±5.64, 0.43±0.74, and 25.56±8.56, respectively; in group 2 they were 1 minute 40 seconds ± 51.44 seconds, 9.28±5.99, 3.64±1.55, and 3.71±1.34, respectively. UST and longitudinal amplitude were significantly lower in group 1 (p<0.001, p<0.001), whereas torsional amplitude was significantly higher in that group (p=0.001). Mean volumes of BSS used in groups 1 and 2 were 63.30±18.00 cc and 84.50±28.65 cc, respectively (p=0.001). The Ozil IP torsional mode may provide more effective lens removal than the combined torsional/longitudinal US mode, with a lower UST and volume of BSS used.
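CDE figures like those above combine longitudinal power and torsional amplitude over time. A minimal sketch of one commonly cited formulation for the Infiniti platform follows; the 0.4 torsional weighting and the percent-to-fraction scaling are assumptions drawn from general descriptions of the metric, not from this study:

```python
def cumulative_dissipated_energy(long_time_s, avg_long_power_pct,
                                 tors_time_s, avg_tors_amp_pct):
    """CDE as often described for the Infiniti platform:
    CDE = longitudinal time * average longitudinal power
        + torsional time * 0.4 * average torsional amplitude,
    with power/amplitude percentages expressed as fractions."""
    return (long_time_s * avg_long_power_pct / 100.0
            + tors_time_s * 0.4 * avg_tors_amp_pct / 100.0)

# Rough group 1 magnitudes from the abstract (UST ~75 s, longitudinal
# amplitude 0.43%, torsional amplitude 25.56%) give a value near the
# reported CDE of 8.74 -- illustrative only.
print(cumulative_dissipated_energy(75, 0.43, 75, 25.56))  # ~8.0
```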
High-energy laser tactical decision aid (HELTDA) for mission planning and predictive avoidance
NASA Astrophysics Data System (ADS)
Burley, Jarred L.; Fiorino, Steven T.; Randall, Robb M.; Bartell, Richard J.; Cusumano, Salvatore J.
2012-06-01
This study demonstrates the development of a high energy laser tactical decision aid (HELTDA) by the AFIT/CDE for mission planning of High Energy Laser (HEL) weapon system engagements, as well as for centralized, decentralized, or hybrid predictive avoidance (CPA/DPA/HPA) assessments. Analyses of example HEL mission engagements are described, as well as how mission planners are expected to employ the software. Example HEL engagement simulations are based on geographic location and recent/current atmospheric weather conditions. The atmospheric effects are defined through the AFIT/CDE Laser Environmental Effects Definition and Reference (LEEDR) model or the High Energy Laser End-to-End Operational Simulation (HELEEOS) model, upon which the HELTDA is based. These models enable the creation of vertical profiles of temperature, pressure, water vapor content, optical turbulence, and atmospheric particulates and hydrometeors as they relate to line-by-line layer extinction coefficient magnitude at wavelengths from the UV to the RF. Seasonal (summer/winter), boundary layer, and time-of-day variations for a range of relative humidity percentile conditions are considered to determine optimum efficiency in a specific environment. Each atmospheric particulate/hydrometeor is evaluated based on its wavelength-dependent forward and off-axis scattering characteristics and its absorption effects on the propagation environment to and beyond the target. In addition to realistic vertical profiles of molecular and aerosol absorption and scattering, correlated optical turbulence profiles in probabilistic (percentile) format are included. Numerical weather model forecasts are incorporated to develop a comprehensive understanding of HEL weapon system performance.
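To make the role of those layer extinction coefficients concrete, here is a minimal Beer-Lambert sketch of how per-layer extinction integrates into path transmittance at a given laser wavelength; the layer values are invented for illustration and are not LEEDR or HELEEOS outputs:

```python
import numpy as np

def path_transmittance(extinction_per_km, layer_thickness_km):
    """Beer-Lambert transmittance through stacked atmospheric layers:
    T = exp(-sum_i k_i * dz_i), where k_i is the layer extinction
    coefficient (absorption + scattering) at the laser wavelength."""
    tau = np.sum(np.asarray(extinction_per_km)
                 * np.asarray(layer_thickness_km))
    return np.exp(-tau)

# Hypothetical 3-layer slant path at a single HEL wavelength
k = [0.12, 0.05, 0.02]   # extinction in 1/km, decreasing with altitude
dz = [1.0, 2.0, 5.0]     # layer thickness along the path, km
print(f"transmittance = {path_transmittance(k, dz):.3f}")
```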
Nelson, Lindsay D; Ranson, Jana; Ferguson, Adam R; Giacino, Joseph; Okonkwo, David O; Valadka, Alex; Manley, Geoffrey; McCrea, Michael
2017-06-08
The Glasgow Outcome Scale-Extended (GOSE) is often the primary outcome measure in clinical trials for traumatic brain injury (TBI). Although the GOSE's capture of global functional outcome has several strengths, concerns have been raised about its limited ability to identify mild disability and its failure to capture the full scope of problems patients exhibit after TBI. This analysis examined the convergence of disability ratings across a multidimensional set of outcome domains in the Transforming Research and Clinical Knowledge in Traumatic Brain Injury (TRACK-TBI) Pilot study. The study collected measures recommended by the TBI Common Data Elements (CDE) Workgroup. Patients presenting to 3 emergency departments with a TBI of any severity were enrolled prospectively in TRACK-TBI after injury; outcome measures were collected at 3 and 6 months postinjury. Analyses examined the frequency of impairment and the overlap of impairment status across the CDE outcome domains of Global Level of Functioning (GOSE), Neuropsychological (cognitive) Impairment, Psychological Status, TBI Symptoms, and Quality of Life. GOSE score correlated in the expected direction with other outcomes (mean Spearman's rho = .21 and .49 with neurocognitive and self-report outcomes, respectively). The subsample in the Upper Good Recovery (GOSE 8) category appeared quite healthy across most other outcomes, although 19.0% had impaired executive functioning (Trail Making Test Part B). A significant minority of participants in the Lower Good Recovery subgroup (GOSE 7) met criteria for impairment across numerous other outcome measures. The findings highlight the multidimensional nature of TBI recovery and the limitations of applying only a single outcome measure.
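A minimal sketch of the kind of rank-correlation summary reported above, using invented scores rather than TRACK-TBI data; note that the sign of rho depends on whether the paired scale codes higher values as better or worse:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical GOSE scores (higher = better recovery) and a
# self-report symptom score (higher = worse) for 8 patients
gose = np.array([8, 7, 7, 6, 5, 8, 4, 7])
symptoms = np.array([2, 10, 8, 15, 22, 3, 30, 12])

rho, p = spearmanr(gose, symptoms)
print(f"Spearman's rho = {rho:.2f} (p = {p:.3f})")
```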
Rushakoff, Robert J; Sullivan, Mary M; Seley, Jane Jeffrie; Sadhu, Archana; O'Malley, Cheryl W; Manchester, Carol; Peterson, Eric; Rogers, Kendall M
2014-09-01
Establishing an inpatient glycemic control program is challenging; it requires years of work, significant education, coordination of medical, nursing, dietary, and pharmacy staff, and support from administration and Performance Improvement departments. We undertook a 2-year quality improvement project assisting 10 medical centers (academic and community) across the US to implement inpatient glycemic control programs. The project comprised 3 interventions: (1) a one-day site visit with a faculty team (MD and CDE) to meet with key personnel, identify deficiencies and barriers to change, set site-specific goals, and develop strategies and timelines for performance improvement; (2) three webinar follow-up sessions; and (3) a Web site for educational resources. Updates, challenges, and accomplishments for each site were reviewed at the time of each webinar, and progress was measured at the completion of the project with an evaluation questionnaire. As a result of our intervention, institutions revised and simplified formularies and insulin order sets (with CHO counting options); implemented glucometrics and CDE monitoring of inpatient glucoses (assisting providers with orders); added new protocols for DKA and perinatal treatment; and implemented nursing, physician, and patient education initiatives. Changes were institution specific, fitting local needs and cultures. As to the extent to which institutions' goals were satisfied, 2 reported "completely", 4 "mostly", 3 "partially", and 1 "marginally". Institutions continue to move toward fulfilling their goals. An individualized, structured, performance improvement approach with expert faculty mentors can help facilitate change in an institution dedicated to implementing an inpatient glycemic control program. Copyright © 2014 Elsevier Inc. All rights reserved.
Luo, Heng-Cong; Li, Na; Yan, Li; Mai, Kai-Jin; Sun, Kan; Wang, Wei; Lao, Guo-Juan; Yang, Chuan; Zhang, Li-Ming; Ren, Meng
2017-01-01
Several biological barriers must be overcome to achieve efficient nonviral gene delivery. These barriers include target cell uptake, lysosomal degradation, and dissociation from the carrier. In this study, we compared the differences in the uptake mechanism of cationic, star-shaped polymer/MMP-9siRNA complexes (β-CD-(D3)7/MMP-9siRNA complexes: polyplexes) and commercial liposome/MMP-9siRNA complexes (Lipofectamine® 2000/MMP-9siRNA complexes: liposomes). The uptake pathway and transfection efficiency of the polyplexes and liposomes were determined by fluorescence microscopy, flow cytometry, and reverse transcriptase-polymerase chain reaction. The occurrence of intracellular processing was assessed by confocal laser scanning microscopy. Endosomal acidification inhibitors were used to explore the endosomal escape mechanisms of the polyplexes and liposomes. We concluded that the polyplexes were internalized by non-caveolae- and non-clathrin-mediated pathways, with no lysosomal trafficking, thereby inducing successful transfection, while the majority of liposomes were internalized by clathrin-dependent endocytosis (CDE), caveolae-mediated endocytosis, and macropinocytosis, and only CDE induced successful transfection. Liposomes might escape more quickly than polyplexes, and the digestion effect of acidic organelles on liposomes was faint compared with that on the polyplexes, although both complexes escaped from endolysosomes via the proton sponge mechanism. This may be the key factor leading to the lower transfection efficiency of the β-CD-(D3)7/MMP-9siRNA complexes. The present study may offer some insights for the rational design of novel delivery systems with increased transfection efficiency but decreased toxicity.
The PPARγ2 A/B-Domain Plays a Gene-Specific Role in Transactivation and Cofactor Recruitment
Bugge, Anne; Grøntved, Lars; Aagaard, Mads M.; Borup, Rehannah; Mandrup, Susanne
2009-01-01
We have previously shown that adenoviral expression of peroxisome proliferator-activated receptors (PPARs) leads to rapid establishment of transcriptionally active complexes and activation of target gene expression within 5–8 h after transduction. Here we have used the adenoviral delivery system combined with expression array analysis to identify novel putative PPARγ target genes in murine fibroblasts and to determine the role of the A/B-domain in PPARγ-mediated transactivation of genomic target genes. Of the 257 genes found to be induced by PPARγ2 expression, only 25 displayed A/B-domain dependency, i.e. significantly reduced induction in the cells expressing the truncated PPARγ lacking the A/B-domain (PPARγCDE). Nine of the 25 A/B-domain-dependent genes were involved in lipid storage, and in line with this, triglyceride accumulation was considerably decreased in the cells expressing PPARγCDE compared with cells expressing full-length PPARγ2. Using chromatin immunoprecipitation, we demonstrate that PPARγ binding to genomic target sites and recruitment of the mediator component TRAP220/MED1/PBP/DRIP205 is not affected by the deletion of the A/B-domain. By contrast, the PPARγ-mediated cAMP response element-binding protein (CREB)-binding protein (CBP) and p300 recruitment to A/B-domain-dependent target genes is compromised by deletion of the A/B-domain. These results indicate that the A/B-domain of PPARγ2 is specifically involved in the recruitment or stabilization of CBP- and p300-containing cofactor complexes to a subset of target genes. PMID:19282365
Natural Language Processing in aid of FlyBase curators
Karamanis, Nikiforos; Seal, Ruth; Lewin, Ian; McQuilton, Peter; Vlachos, Andreas; Gasperin, Caroline; Drysdale, Rachel; Briscoe, Ted
2008-01-01
Background Despite increasing interest in applying Natural Language Processing (NLP) to biomedical text, whether this technology can facilitate tasks such as database curation remains unclear. Results PaperBrowser is the first NLP-powered interface developed under a user-centered approach to improve the way in which FlyBase curators navigate an article. In this paper, we first discuss how observing curators at work informed the design and evaluation of PaperBrowser. Then, we present how we appraised PaperBrowser's navigational functionalities in a user-based study using a text highlighting task and evaluation criteria from Human-Computer Interaction. Our results show that PaperBrowser reduces the number of interactions between two highlighting events and thereby improves navigational efficiency by about 58% compared with the navigational mechanism previously available to the curators. Moreover, PaperBrowser is shown to provide curators with enhanced navigational utility by over 74%, irrespective of the different ways in which they highlight text in the article. Conclusion We show that state-of-the-art performance in certain NLP tasks such as Named Entity Recognition and Anaphora Resolution can be combined with the navigational functionalities of PaperBrowser to support curation quite successfully. PMID:18410678
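The efficiency figure quoted above is a relative reduction in interactions per highlighting event; a minimal sketch with invented counts (12 vs 5 interactions, not taken from the paper) shows the arithmetic:

```python
def relative_improvement(baseline, new):
    """Percent reduction relative to the baseline mechanism."""
    return 100.0 * (baseline - new) / baseline

# Hypothetical: 12 interactions per highlighting event with the old
# navigation vs 5 with PaperBrowser -> about 58% more efficient
print(f"{relative_improvement(12, 5):.0f}% improvement")
```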
Text Mining to Support Gene Ontology Curation and Vice Versa.
Ruch, Patrick
2017-01-01
In this chapter, we explain how text mining can support the curation of molecular biology databases dealing with protein functions. We also show how curated data can play a disruptive role in the development of text mining methods. We review a decade of efforts to improve the automatic assignment of Gene Ontology (GO) descriptors, the reference ontology for the characterization of genes and gene products. To illustrate the high potential of this approach, we compare the performances of an automatic text categorizer and show a large improvement of +225% in both precision and recall on benchmarked data. We argue that automatic text categorization functions can ultimately be embedded into a Question-Answering (QA) system to answer questions related to protein functions. Because GO descriptors can be relatively long and specific, traditional QA systems cannot answer such questions. A new type of QA system, so-called Deep QA, which uses machine learning methods trained with curated contents, is thus emerging. Finally, future advances of text mining instruments are directly dependent on the availability of high-quality annotated contents at every curation step. Database workflows must start recording explicitly all the data they curate and ideally also some of the data they do not curate.
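For readers unfamiliar with how a +225% figure is derived, it is a relative gain; a minimal sketch with invented true/false positive counts (all values hypothetical, not the chapter's benchmark data) shows how precision, recall, and the relative improvement are computed:

```python
def precision_recall(tp, fp, fn):
    """Standard precision and recall for GO descriptor assignment."""
    return tp / (tp + fp), tp / (tp + fn)

def pct_gain(old, new):
    """Relative improvement of `new` over `old`, in percent."""
    return 100.0 * (new - old) / old

# Hypothetical counts: baseline at 0.20/0.20, improved at 0.65/0.65
p0, r0 = precision_recall(tp=20, fp=80, fn=80)
p1, r1 = precision_recall(tp=65, fp=35, fn=35)
print(f"precision: +{pct_gain(p0, p1):.0f}%, "
      f"recall: +{pct_gain(r0, r1):.0f}%")
# -> precision: +225%, recall: +225%
```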