Sample records for information database edition

  1. REDIdb: the RNA editing database.

    PubMed

    Picardi, Ernesto; Regina, Teresa Maria Rosaria; Brennicke, Axel; Quagliariello, Carla

    2007-01-01

    The RNA Editing Database (REDIdb) is an interactive, web-based database created and designed to catalogue RNA editing events such as substitutions, insertions and deletions occurring in a wide range of organisms. The database contains both fully and partially sequenced DNA molecules for which editing information is available either by experimental inspection (in vitro) or by computational detection (in silico). Each record of REDIdb is organized in a specific flat file containing a description of the main characteristics of the entry, a feature table with the editing events and related details, and a sequence zone with both the genomic sequence and the corresponding edited transcript. REDIdb is a relational database in which the browsing and identification of editing sites has been simplified by means of two facilities that either graphically display genomic or cDNA sequences or show the corresponding alignment. In both cases, all editing sites are highlighted in colour and their relative positions are detailed by mousing over. New editing positions can be directly submitted to REDIdb after a user-specific registration to obtain authorized secure access. This first version of the REDIdb database stores 9964 editing events and can be freely queried at http://biologia.unical.it/py_script/search.html.

  2. An automated system for terrain database construction

    NASA Technical Reports Server (NTRS)

    Johnson, L. F.; Fretz, R. K.; Logan, T. L.; Bryant, N. A.

    1987-01-01

    An automated Terrain Database Preparation System (TDPS) for the construction and editing of terrain databases used in computerized wargaming simulation exercises has been developed. The TDPS operates under the TAE executive and integrates VICAR/IBIS image processing and Geographic Information System software with CAD/CAM data capture and editing capabilities. The terrain database includes such features as roads, rivers, vegetation, and terrain roughness.

  3. REDIdb: an upgraded bioinformatics resource for organellar RNA editing sites.

    PubMed

    Picardi, Ernesto; Regina, Teresa M R; Verbitskiy, Daniil; Brennicke, Axel; Quagliariello, Carla

    2011-03-01

    RNA editing is a post-transcriptional molecular process whereby the information in a genetic message is modified from that in the corresponding DNA template by means of nucleotide substitutions, insertions and/or deletions. It occurs mostly in organelles, by diverse, clade-specific and unrelated biochemical mechanisms. RNA editing events have been annotated in primary databases such as GenBank and, at a more sophisticated level, in the specialized databases REDIdb, dbRES and EdRNA. At present, REDIdb is the only freely available database that focuses on the organellar RNA editing process and annotates each editing modification in its biological context. Here we present an updated and upgraded release of REDIdb with a web interface refurbished with graphical and computational facilities that improve RNA editing investigations. Details of the REDIdb features and novelties are illustrated and compared to other RNA editing databases. REDIdb is freely queried at http://biologia.unical.it/py_script/REDIdb/. Copyright © 2010 Elsevier B.V. and Mitochondria Research Society. All rights reserved.

  4. A binary linear programming formulation of the graph edit distance.

    PubMed

    Justice, Derek; Hero, Alfred

    2006-08-01

    A binary linear programming formulation of the graph edit distance for unweighted, undirected graphs with vertex attributes is derived and applied to a graph recognition problem. A general formulation for editing graphs is used to derive a graph edit distance that is proven to be a metric, provided the cost function for individual edit operations is a metric. Then, a binary linear program is developed for computing this graph edit distance, and polynomial time methods for determining upper and lower bounds on the solution of the binary program are derived by applying solution methods for standard linear programming and the assignment problem. A recognition problem of comparing a sample input graph to a database of known prototype graphs in the context of a chemical information system is presented as an application of the new method. The costs associated with various edit operations are chosen by using a minimum normalized variance criterion applied to pairwise distances between nearest neighbors in the database of prototypes. The new metric is shown to perform quite well in comparison to existing metrics when applied to a database of chemical graphs.
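    The abstract does not reproduce the authors' binary linear program; purely as a rough illustration of the underlying notion of graph edit distance for small unweighted, undirected graphs with vertex attributes, here is a brute-force Python sketch (the unit costs, toy graphs and exhaustive search are illustrative assumptions, not the paper's formulation or its polynomial-time bounds).

```python
from itertools import permutations

# Toy graphs: vertices carry a single attribute (e.g. an atom label),
# edges are undirected and unweighted.
G1 = {"vertices": {0: "C", 1: "O", 2: "N"}, "edges": {(0, 1), (1, 2)}}
G2 = {"vertices": {0: "C", 1: "O"}, "edges": {(0, 1)}}

# Unit costs for each elementary edit operation (illustrative; the paper
# chooses costs by a minimum normalized variance criterion).
C_VSUB = C_VDEL = C_VINS = C_EDEL = C_EINS = 1

def edit_cost(G1, G2, mapping):
    """Cost of transforming G1 into G2 under a vertex mapping.

    `mapping` sends each vertex of G1 either to a vertex of G2 or to None
    (deletion); vertices of G2 left unmapped count as insertions.
    """
    cost = 0
    # vertex operations
    for v, label in G1["vertices"].items():
        w = mapping[v]
        if w is None:
            cost += C_VDEL
        elif G2["vertices"][w] != label:
            cost += C_VSUB
    mapped = {w for w in mapping.values() if w is not None}
    cost += C_VINS * len(set(G2["vertices"]) - mapped)
    # edge operations: an edge survives only if both endpoints are mapped
    # and the image edge exists in G2
    g2_edges = {frozenset(e) for e in G2["edges"]}
    image_edges = set()
    for (u, v) in G1["edges"]:
        mu, mv = mapping[u], mapping[v]
        if mu is None or mv is None or frozenset((mu, mv)) not in g2_edges:
            cost += C_EDEL
        else:
            image_edges.add(frozenset((mu, mv)))
    cost += C_EINS * len(g2_edges - image_edges)
    return cost

def graph_edit_distance(G1, G2):
    """Exhaustive minimum over all vertex mappings (exponential; toy sizes only)."""
    v1, v2 = list(G1["vertices"]), list(G2["vertices"])
    slots = v2 + [None] * len(v1)   # None slots allow any G1 vertex to be deleted
    return min(edit_cost(G1, G2, dict(zip(v1, perm)))
               for perm in permutations(slots, len(v1)))

print(graph_edit_distance(G1, G2))  # 2: delete vertex 2 ("N") and edge (1, 2)
```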

  5. Database Search Strategies & Tips. Reprints from the Best of "ONLINE" [and]"DATABASE."

    ERIC Educational Resources Information Center

    Online, Inc., Weston, CT.

    Reprints of 17 articles presenting strategies and tips for searching databases online appear in this collection, which is one in a series of volumes of reprints from "ONLINE" and "DATABASE" magazines. Edited for information professionals who use electronically distributed databases, these articles address such topics as: (1)…

  6. Data-Base Software For Tracking Technological Developments

    NASA Technical Reports Server (NTRS)

    Aliberti, James A.; Wright, Simon; Monteith, Steve K.

    1996-01-01

    Technology Tracking System (TechTracS) computer program developed for use in storing and retrieving information on technology and related patent information developed under auspices of NASA Headquarters and NASA's field centers. Contents of data base include multiple scanned still images and quick-time movies as well as text. TechTracS includes word-processing, report-editing, chart-and-graph-editing, and search-editing subprograms. Extensive keyword searching capabilities enable rapid location of technologies, innovators, and companies. System performs routine functions automatically and serves multiple users.

  7. A Bibliography of Publications about the Educational Resources Information Center (Covering the Period 1985-1988).

    ERIC Educational Resources Information Center

    Brandhorst, Ted, Ed.

    The result of a comprehensive search for writings about the Educational Resources Information Center (ERIC) published between 1985 and 1988, this annotated bibliography lists 107 documents and journal articles about ERIC that were entered in the ERIC database during that period. The 1964-1978 edition cited 269 items. The 1979-1984 edition cited…

  8. Presence and Accuracy of Drug Dosage Recommendations for Continuous Renal Replacement Therapy in Tertiary Drug Information References

    PubMed Central

    Gorman, Sean K; Slavik, Richard S; Lam, Stefanie

    2012-01-01

    Background: Clinicians commonly rely on tertiary drug information references to guide drug dosages for patients who are receiving continuous renal replacement therapy (CRRT). It is unknown whether the dosage recommendations in these frequently used references reflect the most current evidence. Objective: To determine the presence and accuracy of drug dosage recommendations for patients undergoing CRRT in 4 drug information references. Methods: Medications commonly prescribed during CRRT were identified from an institutional medication inventory database, and evidence-based dosage recommendations for this setting were developed from the primary and secondary literature. The American Hospital Formulary System—Drug Information (AHFS–DI), Micromedex 2.0 (specifically the DRUGDEX and Martindale databases), and the 5th edition of Drug Prescribing in Renal Failure (DPRF5) were assessed for the presence of drug dosage recommendations in the CRRT setting. The dosage recommendations in these tertiary references were compared with the recommendations derived from the primary and secondary literature to determine concordance. Results: Evidence-based drug dosage recommendations were developed for 33 medications administered in patients undergoing CRRT. The AHFS–DI provided no dosage recommendations specific to CRRT, whereas the DPRF5 provided recommendations for 27 (82%) of the medications and the Micromedex 2.0 application for 20 (61%) (13 [39%] in the DRUGDEX database and 16 [48%] in the Martindale database, with 9 medications covered by both). The dosage recommendations were in concordance with evidence-based recommendations for 12 (92%) of the 13 medications in the DRUGDEX database, 26 (96%) of the 27 in the DPRF5, and all 16 (100%) of those in the Martindale database. Conclusions: One prominent tertiary drug information resource provided no drug dosage recommendations for patients undergoing CRRT. However, 2 of the databases in an Internet-based medical information application and the latest edition of a renal specialty drug information resource provided recommendations for a majority of the medications investigated. Most dosage recommendations were similar to those derived from the primary and secondary literature. The most recent edition of the DPRF is the preferred source of information when prescribing dosage regimens for patients receiving CRRT. PMID:22783029

  9. The International Handbook of Universities. Twenty-Second Edition

    ERIC Educational Resources Information Center

    Palgrave Macmillan, 2010

    2010-01-01

    The new "International Handbook of Universities" is now 2-volumes and includes single-user access to World Higher Education Database Online. This "Twenty-second Edition" is the most comprehensive guide to university-level education worldwide, providing detailed information on higher education institutions that offer at least a post-graduate degree…

  10. Library Micro-Computing, Vol. 2. Reprints from the Best of "ONLINE" [and]"DATABASE."

    ERIC Educational Resources Information Center

    Online, Inc., Weston, CT.

    Reprints of 19 articles pertaining to library microcomputing appear in this collection, the second of two volumes on this topic in a series of volumes of reprints from "ONLINE" and "DATABASE" magazines. Edited for information professionals who use electronically distributed databases, these articles address such topics as: (1)…

  11. [Presence of the biomedical periodicals of Hungarian editions in international databases].

    PubMed

    Vasas, Lívia; Hercsel, Imréné

    2006-01-15

    The majority of Hungarian scientific results in medicine and related sciences are published in foreign-edited scientific periodicals with high impact factor (IF) values and appear in the international scientific literature in foreign languages. In this study the authors dealt only with the presence in, and registered citations from, international databases of those periodicals published in Hungary and/or in cooperation with foreign publishing companies. The examination went back to 1980 and covered a 25-year period; 110 periodicals were selected for more detailed examination. The authors analyzed the situation of the current periodicals in the three most frequently consulted databases (MEDLINE, EMBASE, Web of Science) and found that the biomedical scientific periodicals of Hungarian interest were not represented with reasonable emphasis in the relevant international bibliographic databases. Because of the great amount of data, the scientific literature of medicine and related sciences could not be represented in its entirety; nevertheless, this publication may give useful information to inquirers and call the attention of those concerned.

  12. Library Micro-Computing, Vol. 1. Reprints from the Best of "ONLINE" [and]"DATABASE."

    ERIC Educational Resources Information Center

    Online, Inc., Weston, CT.

    Reprints of 18 articles pertaining to library microcomputing appear in this collection, the first of two volumes on this topic in a series of volumes of reprints from "ONLINE" and "DATABASE" magazines. Edited for information professionals who use electronically distributed databases, these articles address such topics as: (1) an integrated library…

  13. Competitive Intelligence: Finding the Clues Online.

    ERIC Educational Resources Information Center

    Combs, Richard; Moorhead, John

    1990-01-01

    Defines and discusses competitive intelligence for business decision making and suggests the use of online databases to start looking for relevant information. The best databases to use are described, designing the search strategy is explained, reviewing and editing results are discussed, and the presentation of results is considered. (LRW)

  14. Choosing the Right Database Management Program.

    ERIC Educational Resources Information Center

    Vockell, Edward L.; Kopenec, Donald

    1989-01-01

    Provides a comparison of four database management programs commonly used in schools: AppleWorks, the DOS 3.3 and ProDOS versions of PFS, and MECC's Data Handler. Topics discussed include information storage, spelling checkers, editing functions, search strategies, graphs, printout formats, library applications, and HyperCard. (LRW)

  15. Description of 'REQUEST-KYUSHYU' for KYUKEICHO regional data base

    NASA Astrophysics Data System (ADS)

    Takimoto, Shin'ichi

    The Kyushu Economic Research Association (an incorporated foundation) recently initiated a regional database service, 'REQUEST-Kyushu'. It is a full-scale database compiled from the information and know-how that the Association has accumulated over forty years. It comprises a regional information database of journal and newspaper articles and a statistical information database of economic statistics. The former is searched from a personal computer, and the search result (original text) is then sent by facsimile. The latter is also searched from a personal computer, where the data can be processed, edited or downloaded. This paper describes the characteristics, content and system outline of 'REQUEST-Kyushu'.

  16. A Parent's Guide to the ERIC Database. Where To Turn with Your Questions about Schooling. Revised Edition.

    ERIC Educational Resources Information Center

    Howley, Craig B.; And Others

    This guide explains what the Educational Resources Information Center (ERIC) database is and how it can be used by parents to learn more about schooling and parenting. The guide also presents sample records of 55 documents in the ERIC database. The cited resources are particularly relevant to parents' concerns about meeting children's basic needs,…

  17. Health indicators 1991.

    PubMed

    Dawson, N

    1991-01-01

    This is the second edition of a database developed by the Canadian Centre for Health Information (CCHI). It features 49 health indicators, under one cover containing the most recent data available from a variety of national surveys. This information may be used to establish health goals for the population and to offer objective measures of their success. The database can be accessed through CANSIM, Statistics Canada's socio-economic electronic database and retrieval system, or through a personal computer package which enables the user to retrieve and analyze the 1.2 million data points in the system.

  18. The HITRAN molecular data base - Editions of 1991 and 1992

    NASA Technical Reports Server (NTRS)

    Rothman, Laurence S.; Gamache, R. R.; Tipping, R. H.; Rinsland, C. P.; Smith, M. A. H.; Benner, D. C.; Devi, V. M.; Flaud, J.-M.; Camy-Peyret, C.; Perrin, A.

    1992-01-01

    We describe in this paper the modifications, improvements, and enhancements to the HITRAN molecular absorption database that have occurred in the two editions of 1991 and 1992. The current database includes line parameters for 31 species and their isotopomers that are significant for terrestrial atmospheric studies. This line-by-line portion of HITRAN presently contains about 709,000 transitions between 0 and 23,000/cm and contains three molecules not present in earlier versions: COF2, SF6, and H2S. The HITRAN compilation has substantially more information on chlorofluorocarbons and other molecular species that exhibit dense spectra which are not amenable to line-by-line representation. The user access of the database has been advanced, and new media forms are now available for use on personal computers.

  19. The AJCC 8th Edition Staging System for Soft Tissue Sarcoma of the Extremities or Trunk: A Cohort Study of the SEER Database.

    PubMed

    Cates, Justin M M

    2018-02-01

    Background: The AJCC recently published the 8th edition of its cancer staging system. Significant changes were made to the staging algorithm for soft tissue sarcoma (STS) of the extremities or trunk, including the addition of 2 additional T (size) classifications in lieu of tumor depth and grouping lymph node metastasis (LNM) with distant metastasis as stage IV disease. Whether these changes improve staging system performance is questionable. Patients and Methods: This retrospective cohort analysis of 21,396 adult patients with STS of the extremity or trunk in the SEER database compares the AJCC 8th edition staging system with the 7th edition and a newly proposed staging algorithm using a variety of statistical techniques. The effect of tumor size on disease-specific survival was assessed by flexible, nonlinear Cox proportional hazard regression using restricted cubic splines and fractional polynomials. Results: The slope of covariate-adjusted log hazards for sarcoma-specific survival decreases for tumors >8 cm in greatest dimension, limiting prognostic information contributed by the new T4 classification in the AJCC 8th edition. Anatomic depth independently provides significant prognostic information. LNM is not equivalent to distant, non-nodal metastasis. Based on these findings, an alternative staging system is proposed and demonstrated to outperform both AJCC staging schemes. The analyses presented also disclose no evidence of improved clinical performance of the 8th edition compared with the previous edition. Conclusions: The AJCC 8th edition staging system for STS is no better than the previous 7th edition. Instead, a proposed staging system based on histologic grade, tumor size, and anatomic depth shows significantly higher predictive accuracy, with higher model concordance than either AJCC staging system. Changes to existing staging systems should improve the performance of prognostic models. Until such improvements are documented, AJCC committees should refrain from modifying established staging schemes. Copyright © 2018 by the National Comprehensive Cancer Network.

  20. [Design and establishment of modern literature database about acupuncture Deqi].

    PubMed

    Guo, Zheng-rong; Qian, Gui-feng; Pan, Qiu-yin; Wang, Yang; Xin, Si-yuan; Li, Jing; Hao, Jie; Hu, Ni-juan; Zhu, Jiang; Ma, Liang-xiao

    2015-02-01

    A search on acupuncture Deqi was conducted in four Chinese-language biomedical databases (CNKI, Wan-Fang, VIP and CBM) and in PubMed, using keywords such as "Deqi", "needle sensation", "needling feeling", "needle feel" and "obtaining qi". A "Modern Literature Database for Acupuncture Deqi" was then established with Microsoft SQL Server 2005 Express Edition; the contents, data types, information structure and logical constraints of the system table fields are described. From this database, detailed inquiries about the general information of clinical trials, acupuncturists' experience, ancient medical works, comprehensive literature, etc. can be made. The present databank lays a foundation for subsequent evaluation of the quality of the Deqi literature and for data mining of as-yet-undetected Deqi knowledge.
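    The paper's actual SQL Server schema is not given in the abstract; as a hypothetical illustration of the kind of table such a literature database might use (the table name, columns and the use of SQLite in place of Microsoft SQL Server 2005 Express Edition are all invented for the example), a minimal sketch follows.

```python
import sqlite3

# In-memory stand-in; the paper used Microsoft SQL Server 2005 Express Edition.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE deqi_literature (
        record_id   INTEGER PRIMARY KEY,
        title       TEXT NOT NULL,
        source_db   TEXT CHECK (source_db IN ('CNKI', 'WanFang', 'VIP', 'CBM', 'PubMed')),
        pub_year    INTEGER,
        study_type  TEXT,   -- e.g. clinical trial, expert experience, ancient work, review
        keywords    TEXT    -- e.g. 'Deqi; needle sensation'
    )
""")
conn.execute(
    "INSERT INTO deqi_literature (title, source_db, pub_year, study_type, keywords) "
    "VALUES (?, ?, ?, ?, ?)",
    ("Example record on Deqi", "PubMed", 2014, "clinical trial", "Deqi; needling feeling"),
)

# A 'detailed inquiry': all clinical-trial records mentioning Deqi.
for row in conn.execute(
    "SELECT title, pub_year FROM deqi_literature "
    "WHERE study_type = 'clinical trial' AND keywords LIKE '%Deqi%'"
):
    print(row)
```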

  1. Directory of On-Line Networks, Databases and Bulletin Boards on Assistive Technology. Second Edition. RESNA Technical Assistance Project.

    ERIC Educational Resources Information Center

    RESNA: Association for the Advancement of Rehabilitation Technology, Washington, DC.

    This resource directory provides a selective listing of electronic networks, online databases, and bulletin boards that highlight technology-related services and products. For each resource, the following information is provided: name, address, and telephone number; description; target audience; hardware/software needs to access the system;…

  2. HOWDY: an integrated database system for human genome research

    PubMed Central

    Hirakawa, Mika

    2002-01-01

    HOWDY is an integrated database system for accessing and analyzing human genomic information (http://www-alis.tokyo.jst.go.jp/HOWDY/). HOWDY stores information about relationships between genetic objects and the data extracted from a number of databases. HOWDY consists of an Internet accessible user interface that allows thorough searching of the human genomic databases using the gene symbols and their aliases. It also permits flexible editing of the sequence data. The database can be searched using simple words and the search can be restricted to a specific cytogenetic location. Linear maps displaying markers and genes on contig sequences are available, from which an object can be chosen. Any search starting point identifies all the information matching the query. HOWDY provides a convenient search environment of human genomic data for scientists unsure which database is most appropriate for their search. PMID:11752279

  3. Process evaluation distributed system

    NASA Technical Reports Server (NTRS)

    Moffatt, Christopher L. (Inventor)

    2006-01-01

    The distributed system includes a database server, an administration module, a process evaluation module, and a data display module. The administration module is in communication with the database server for providing observation criteria information to the database server. The process evaluation module is in communication with the database server for obtaining the observation criteria information from the database server and collecting process data based on the observation criteria information. The process evaluation module utilizes a personal digital assistant (PDA). A data display module in communication with the database server, including a website for viewing collected process data in a desired metrics form, the data display module also for providing desired editing and modification of the collected process data. The connectivity established by the database server to the administration module, the process evaluation module, and the data display module, minimizes the requirement for manual input of the collected process data.

  4. Reference Manual for Machine-Readable Bibliographic Descriptions. Second Revised Edition.

    ERIC Educational Resources Information Center

    Dierickx, H., Ed.; Hopkinson, A., Ed.

    A product of the UNISIST International Centre for Bibliographic Descriptions (UNIBIB), this reference manual presents a standardized communication format for the exchange of machine-readable bibliographic information between bibliographic databases or other types of bibliographic information services, including libraries. The manual is produced in…

  5. Identification of genomic sites for CRISPR/Cas9-based genome editing in the Vitis vinifera genome.

    PubMed

    Wang, Yi; Liu, Xianju; Ren, Chong; Zhong, Gan-Yuan; Yang, Long; Li, Shaohua; Liang, Zhenchang

    2016-04-21

    CRISPR/Cas9 has been recently demonstrated as an effective and popular genome editing tool for modifying genomes of humans, animals, microorganisms, and plants. Success of such genome editing is highly dependent on the availability of suitable target sites in the genomes to be edited. Many specific target sites for CRISPR/Cas9 have been computationally identified for several annual model and crop species, but such sites have not been reported for perennial, woody fruit species. In this study, we identified and characterized five types of CRISPR/Cas9 target sites in the widely cultivated grape species Vitis vinifera and developed a user-friendly database for editing grape genomes in the future. A total of 35,767,960 potential CRISPR/Cas9 target sites were identified from grape genomes in this study. Among them, 22,597,817 target sites were mapped to specific genomic locations and 7,269,788 were found to be highly specific. Protospacers and PAMs were found to distribute uniformly and abundantly in the grape genomes. They were present in all the structural elements of genes with the coding region having the highest abundance. Five PAM types, TGG, AGG, GGG, CGG and NGG, were observed. With the exception of the NGG type, they were abundantly present in the grape genomes. Synteny analysis of similar genes revealed that the synteny of protospacers matched the synteny of homologous genes. A user-friendly database containing protospacers and detailed information of the sites was developed and is available for public use at the Grape-CRISPR website ( http://biodb.sdau.edu.cn/gc/index.html ). Grape genomes harbour millions of potential CRISPR/Cas9 target sites. These sites are widely distributed among and within chromosomes with predominant abundance in the coding regions of genes. We developed a publicly-accessible Grape-CRISPR database for facilitating the use of the CRISPR/Cas9 system as a genome editing tool for functional studies and molecular breeding of grapes. Among other functions, the database allows users to identify and select multi-protospacers for editing similar sequences in grape genomes simultaneously.
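    A minimal sketch of the core computation described above: scanning a sequence for NGG protospacer-adjacent motifs (PAMs) and extracting the 20-nt protospacer immediately upstream. The toy sequence and the forward-strand-only scan are simplifications; the Grape-CRISPR pipeline additionally handles the reverse strand, genome-wide mapping and specificity scoring.

```python
def find_spcas9_sites(seq, protospacer_len=20):
    """Yield (position, protospacer, PAM) for every NGG PAM on the forward strand."""
    seq = seq.upper()
    for i in range(protospacer_len, len(seq) - 2):
        pam = seq[i:i + 3]
        if pam[1:] == "GG":  # NGG PAM: TGG, AGG, GGG or CGG
            yield i - protospacer_len, seq[i - protospacer_len:i], pam

# Toy example sequence (not grape genome data).
toy = "ATGCTTACGGATCCGATTACAGGTTACCGGTAGCTAGCTAAGGCCTA"
for pos, protospacer, pam in find_spcas9_sites(toy):
    print(pos, protospacer, pam)
```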

  6. The development of a dynamic software for the user interaction from the geographic information system environment with the database of the calibration site of the satellite remote electro-optic sensors

    NASA Astrophysics Data System (ADS)

    Zyelyk, Ya. I.; Semeniv, O. V.

    2015-12-01

    The state of the problem of post-launch calibration of satellite electro-optic remote sensors, and its solution in Ukraine, is analyzed. The database is improved, and dynamic services for user interaction with it from the environment of the open geographic information system Quantum GIS are created for the information support of calibration activities. A dynamic application under QGIS is developed that implements these services, enabling data entry, editing and extraction from the database, using object-oriented programming technology and modern complex program design patterns. The functional and algorithmic support of this dynamic software and its interface are described.

  7. JNDMS Task Authorization 2 Report

    DTIC Science & Technology

    2013-10-01

    … uses Barnyard to store alarms from all DREnet Snort sensors in a MySQL database. Barnyard is an open source tool designed to work with Snort to take … ITI: Information Technology Infrastructure; J2EE: Java 2 Enterprise Edition; JAR: Java Archive, an archive file format defined by Java … standards; JDBC: Java Database Connectivity; JDW: JNDMS Data Warehouse; JNDMS: Joint Network and Defence Management System; JNDMS: Joint Network Defence and …

  8. Improvements to the Magnetics Information Consortium (MagIC) Paleo and Rock Magnetic Database

    NASA Astrophysics Data System (ADS)

    Jarboe, N.; Minnett, R.; Tauxe, L.; Koppers, A. A. P.; Constable, C.; Jonestrask, L.

    2015-12-01

    The Magnetic Information Consortium (MagIC) database (http://earthref.org/MagIC/) continues to improve the ease of data uploading and editing, the creation of complex searches, data visualization, and data downloads for the paleomagnetic, geomagnetic, and rock magnetic communities. Online data editing is now available and the need for proprietary spreadsheet software is therefore entirely negated. The data owner can change values in the database or delete entries through an HTML 5 web interface that resembles typical spreadsheets in behavior and use. Additive uploading now allows additions to data sets to be uploaded through a simple drag-and-drop interface. Searching the database has improved with the addition of more sophisticated search parameters and with the facility to use them in complex combinations. A comprehensive summary view of a search result has been added for quick data comprehension, while a raw data view is available if one desires to see all data columns as stored in the database. Data visualization plots (ARAI, equal area, demagnetization, Zijderveld, etc.) are presented with the data when appropriate to aid the user in understanding the dataset. MagIC data associated with individual contributions or from online searches may be downloaded in the tab-delimited MagIC text file format for subsequent offline use and analysis. With input from the paleomagnetic, geomagnetic, and rock magnetic communities, the MagIC database will continue to improve as a data warehouse and resource.

  9. Whither the White Knight: CDROM in Technical Services.

    ERIC Educational Resources Information Center

    Campbell, Brian

    1987-01-01

    Outlines evaluative criteria and compares optical data disk products used in library technical processes, including bibliographic records for cataloging, acquisition databases, and local public access catalogs. An extensive table provides information on specific products, including updates, interfaces, edit screens, installation help, manuals,…

  10. 49 CFR 630.3 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... TRANSPORTATION NATIONAL TRANSIT DATABASE § 630.3 Definitions. (a) Except as otherwise provided, terms defined in... current editions of the National Transit Database Reporting Manuals and the NTD Uniform System of Accounts... benefits from assistance under 49 U.S.C. 5307 or 5311. Current edition of the National Transit Database...

  11. 49 CFR 630.3 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... TRANSPORTATION NATIONAL TRANSIT DATABASE § 630.3 Definitions. (a) Except as otherwise provided, terms defined in... current editions of the National Transit Database Reporting Manuals and the NTD Uniform System of Accounts... benefits from assistance under 49 U.S.C. 5307 or 5311. Current edition of the National Transit Database...

  12. 49 CFR 630.3 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... TRANSPORTATION NATIONAL TRANSIT DATABASE § 630.3 Definitions. (a) Except as otherwise provided, terms defined in... current editions of the National Transit Database Reporting Manuals and the NTD Uniform System of Accounts... benefits from assistance under 49 U.S.C. 5307 or 5311. Current edition of the National Transit Database...

  13. 49 CFR 630.3 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... TRANSPORTATION NATIONAL TRANSIT DATABASE § 630.3 Definitions. (a) Except as otherwise provided, terms defined in... current editions of the National Transit Database Reporting Manuals and the NTD Uniform System of Accounts... benefits from assistance under 49 U.S.C. 5307 or 5311. Current edition of the National Transit Database...

  14. 49 CFR 630.3 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... TRANSPORTATION NATIONAL TRANSIT DATABASE § 630.3 Definitions. (a) Except as otherwise provided, terms defined in... current editions of the National Transit Database Reporting Manuals and the NTD Uniform System of Accounts... benefits from assistance under 49 U.S.C. 5307 or 5311. Current edition of the National Transit Database...

  15. Starbase Data Tables: An ASCII Relational Database for Unix

    NASA Astrophysics Data System (ADS)

    Roll, John

    2011-11-01

    Database management is an increasingly important part of astronomical data analysis. Astronomers need easy and convenient ways of storing, editing, filtering, and retrieving data about data. Commercial databases do not provide good solutions for many of the everyday and informal types of database access astronomers need. The Starbase database system with simple data file formatting rules and command line data operators has been created to answer this need. The system includes a complete set of relational and set operators, fast search/index and sorting operators, and many formatting and I/O operators. Special features are included to enhance the usefulness of the database when manipulating astronomical data. The software runs under UNIX, MSDOS and IRAF.
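    Starbase's own command-line operators are not listed in the abstract; as a rough, generic illustration of the style of operation it supports (relational select and project on a tab-delimited ASCII table), the Python sketch below uses the standard csv module, with invented column names and values.

```python
import csv
import io
import sys

# A tiny tab-delimited table of the kind an ASCII relational database manages
# (column names and values are invented for illustration).
table = ("name\tra\tdec\tmag\n"
         "M31\t10.68\t41.27\t3.4\n"
         "M42\t83.82\t-5.39\t4.0\n"
         "M13\t250.42\t36.46\t5.8\n")

reader = csv.DictReader(io.StringIO(table), delimiter="\t")
bright = [row for row in reader if float(row["mag"]) < 5.0]   # a relational 'select'

writer = csv.DictWriter(sys.stdout, fieldnames=["name", "mag"],
                        delimiter="\t", extrasaction="ignore")
writer.writeheader()
writer.writerows(bright)                                       # 'project' onto two columns
```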

  16. Structures data collection for The National Map using volunteered geographic information

    USGS Publications Warehouse

    Poore, Barbara S.; Wolf, Eric B.; Korris, Erin M.; Walter, Jennifer L.; Matthews, Greg D.

    2012-01-01

    The U.S. Geological Survey (USGS) has historically sponsored volunteered data collection projects to enhance its topographic paper and digital map products. This report describes one phase of an ongoing project to encourage volunteers to contribute data to The National Map using online editing tools. The USGS recruited students studying geographic information systems (GIS) at the University of Colorado Denver and the University of Denver in the spring of 2011 to add data on structures - manmade features such as schools, hospitals, and libraries - to four quadrangles covering metropolitan Denver. The USGS customized a version of the online Potlatch editor created by the OpenStreetMap project and populated it with 30 structure types drawn from the Geographic Names Information System (GNIS), a USGS database of geographic features. The students corrected the location and attributes of these points and added information on structures that were missing. There were two rounds of quality control. Student volunteers reviewed each point, and an in-house review of each point by the USGS followed. Nine-hundred and thirty-eight structure points were initially downloaded from the USGS database. Editing and quality control resulted in 1,214 structure points that were subsequently added to The National Map. A post-project analysis of the data shows that after student edit and peer review, 92 percent of the points contributed by volunteers met National Map Accuracy Standards for horizontal accuracy. Lessons from this project will be applied to later phases. These include: simplifying editing tasks and the user interfaces, stressing to volunteers the importance of adding structures that are missing, and emphasizing the importance of conforming to editorial guidelines for formatting names and addresses of structures. The next phase of the project will encompass the entire State of Colorado and will allow any citizen to contribute structures data. Volunteers will benefit from this project by engaging with their local geography and contributing to a national resource of topographic information that remains in the public domain for anyone to download.

  17. Wolf Testing: Open Source Testing Software

    NASA Astrophysics Data System (ADS)

    Braasch, P.; Gay, P. L.

    2004-12-01

    Wolf Testing is software for easily creating and editing exams. Wolf Testing allows the user to create an exam from a database of questions, view it on screen, and easily print it along with the corresponding answer guide. The questions can be multiple choice, short answer, long answer, or true and false varieties. This software can be accessed securely from any location, allowing the user to easily create exams from home. New questions, which can include associated pictures, can be added through a web-interface. After adding in questions, they can be edited, deleted, or duplicated into multiple versions. Long-term test creation is simplified, as you are able to quickly see what questions you have asked in the past and insert them, with or without editing, into future tests. All tests are archived in the database. Written in PHP and MySQL, this software can be installed on any UNIX / Linux platform, including Macintosh OS X. The secure interface keeps students out, and allows you to decide who can create tests and who can edit information already in the database. Tests can be output as either html with pictures or rich text without pictures, and there are plans to add PDF and MS Word formats as well. We would like to thank Dr. Wolfgang Rueckner and the Harvard University Science Center for providing incentive to start this project, computers and resources to complete this project, and inspiration for the project's name. We would also like to thank Dr. Ronald Newburgh for his assistance in beta testing.

  18. PylotDB - A Database Management, Graphing, and Analysis Tool Written in Python

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnette, Daniel W.

    2012-01-04

    PylotDB, written completely in Python, provides a user interface (UI) with which to interact with, analyze, graph data from, and manage open source databases such as MySQL. The UI reduces the need for the user to have in-depth knowledge of the database application programming interface (API). PylotDB allows the user to generate various kinds of plots from user-selected data; generate statistical information on text as well as numerical fields; back up and restore databases; compare database tables across different databases as well as across different servers; extract information from any field to create new fields; generate, edit, and delete databases, tables, and fields; generate or read CSV data into a table; and perform similar operations. Since much of the database information is brought under the control of the Python computer language, PylotDB is not intended for huge databases, for which MySQL and Oracle, for example, are better suited. PylotDB is better suited for the smaller databases that might typically be needed in a small research group. PylotDB can also be used as a learning tool for database applications in general.
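    PylotDB's interface is not shown here; the Python snippet below is only a hedged sketch of the kind of workflow it wraps (pull user-selected data from a database table, compute a simple statistic and graph it), using SQLite and matplotlib as stand-ins for a MySQL connection, with an invented table and data.

```python
import sqlite3
import matplotlib.pyplot as plt

# Stand-in database (PylotDB itself targets MySQL and similar open source databases).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE runs (run_id INTEGER, wallclock_s REAL)")
conn.executemany("INSERT INTO runs VALUES (?, ?)",
                 [(1, 12.3), (2, 11.7), (3, 13.1), (4, 10.9)])

# User-selected fields to graph, plus a simple statistic on a numeric field.
rows = conn.execute("SELECT run_id, wallclock_s FROM runs ORDER BY run_id").fetchall()
xs, ys = zip(*rows)
print("mean wallclock:", sum(ys) / len(ys))

plt.plot(xs, ys, marker="o")
plt.xlabel("run_id")
plt.ylabel("wallclock (s)")
plt.title("Invented example data")
plt.show()
```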

  19. The New Zealand Tsunami Database: historical and modern records

    NASA Astrophysics Data System (ADS)

    Barberopoulou, A.; Downes, G. L.; Cochran, U. A.; Clark, K.; Scheele, F.

    2016-12-01

    A database of historical (pre-instrumental) and modern (instrumentally recorded) tsunamis that have impacted or been observed in New Zealand has been compiled and published online. New Zealand's tectonic setting, astride an obliquely convergent tectonic boundary on the Pacific Rim, means that it is vulnerable to local, regional and circum-Pacific tsunamis. Despite New Zealand's comparatively short written historical record of c. 200 years, there is a wealth of information about the impact of past tsunamis. The New Zealand Tsunami Database currently has 800+ entries that describe >50 high-validity tsunamis. Sources of historical information include witness reports recorded in diaries, notes, newspapers, books, and photographs. Information on recent events comes from tide gauges and other instrumental recordings such as DART® buoys, and media of greater variety, for example, video and online surveys. The New Zealand Tsunami Database is an ongoing project, with information added as further historical records come to light. Modern tsunamis are also added to the database once the relevant data for an event has been collated and edited. This paper briefly overviews the procedures and tools used in the recording and analysis of New Zealand's historical tsunamis, with emphasis on database content.

  20. Development of a forestry government agency enterprise GIS system: a disconnected editing approach

    NASA Astrophysics Data System (ADS)

    Zhu, Jin; Barber, Brad L.

    2008-10-01

    The Texas Forest Service (TFS) has developed a geographic information system (GIS) for use by agency personnel in central Texas for managing oak wilt suppression and other landowner assistance programs. This Enterprise GIS system was designed to support multiple concurrent users accessing shared information resources. The disconnected editing approach was adopted in this system to avoid the overhead of maintaining an active connection between TFS central Texas field offices and headquarters since most field offices are operating with commercially provided Internet service. The GIS system entails maintaining a personal geodatabase on each local field office computer. Spatial data from the field is periodically up-loaded into a central master geodatabase stored in a Microsoft SQL Server at the TFS headquarters in College Station through the ESRI Spatial Database Engine (SDE). This GIS allows users to work off-line when editing data and requires connecting to the central geodatabase only when needed.
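    A minimal, vendor-neutral sketch of the disconnected-editing pattern described above: edits accumulate in a local store while offline and are pushed to the central database in a periodic batch. SQLite stands in for both the personal geodatabase and the SQL Server/SDE master, and all table and column names are illustrative rather than the TFS implementation.

```python
import sqlite3

def push_pending_edits(local: sqlite3.Connection, central: sqlite3.Connection) -> int:
    """Copy locally flagged edits to the central database, then clear the flag."""
    pending = local.execute(
        "SELECT feature_id, geometry_wkt, attribute FROM features WHERE dirty = 1"
    ).fetchall()
    central.executemany(
        "INSERT OR REPLACE INTO features (feature_id, geometry_wkt, attribute) "
        "VALUES (?, ?, ?)",
        pending,
    )
    local.execute("UPDATE features SET dirty = 0 WHERE dirty = 1")
    local.commit()
    central.commit()
    return len(pending)

schema = ("CREATE TABLE features (feature_id INTEGER PRIMARY KEY, "
          "geometry_wkt TEXT, attribute TEXT, dirty INTEGER DEFAULT 0)")
local, central = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
local.execute(schema)
central.execute(schema)

# A field office edits a feature while disconnected ...
local.execute("INSERT INTO features VALUES (1, 'POINT(-97.7 30.3)', 'oak wilt site', 1)")
# ... and later connects only long enough to synchronize.
print(push_pending_edits(local, central), "edit(s) pushed")
```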

  1. FIREDOC users manual, 3rd edition

    NASA Astrophysics Data System (ADS)

    Jason, Nora H.

    1993-12-01

    FIREDOC is the on-line bibliographic database which reflects the holdings (published reports, journal articles, conference proceedings, books, and audiovisual items) of the Fire Research Information Services (FRIS) at the Building and Fire Research Laboratory (BFRL), National Institute of Standards and Technology (NIST). This manual provides step-by-step procedures for entering and exiting the database via telecommunication lines, as well as a number of techniques for searching the database and processing the results of the searches. This Third Edition is necessitated by the change to a UNIX platform. The new computer allows for faster response time when searching via a modem and, in addition, offers Internet accessibility. FIREDOC may be used with personal computers running DOS or Windows, with Macintosh computers, or with workstations. A new section on how to access the Internet is included, as well as one on how to obtain the references of interest. Appendix F: Quick Guide to Getting Started will be useful to both modem and Internet users.

  2. MitBASE : a comprehensive and integrated mitochondrial DNA database. The present status

    PubMed Central

    Attimonelli, M.; Altamura, N.; Benne, R.; Brennicke, A.; Cooper, J. M.; D’Elia, D.; Montalvo, A. de; Pinto, B. de; De Robertis, M.; Golik, P.; Knoop, V.; Lanave, C.; Lazowska, J.; Licciulli, F.; Malladi, B. S.; Memeo, F.; Monnerot, M.; Pasimeni, R.; Pilbout, S.; Schapira, A. H. V.; Sloof, P.; Saccone, C.

    2000-01-01

    MitBASE is an integrated and comprehensive database of mitochondrial DNA data which collects, under a single interface, databases for Plant, Vertebrate, Invertebrate, Human, Protist and Fungal mtDNA and a Pilot database on nuclear genes involved in mitochondrial biogenesis in Saccharomyces cerevisiae. MitBASE reports all available information from different organisms and from intraspecies variants and mutants. Data have been drawn from the primary databases and from the literature; value-adding information has been structured, e.g., editing information on protist mtDNA genomes, pathological information for human mtDNA variants, etc. The different databases, some of which are structured using commercial packages (Microsoft Access, File Maker Pro) while others use a flat-file format, have been integrated under ORACLE. Ad hoc retrieval systems have been devised for some of the above-listed databases, taking into account their peculiarities. The database is resident at the EBI and is available at the following site: http://www3.ebi.ac.uk/Research/Mitbase/mitbase.pl . The impact of this project is intended for both basic and applied research. The study of mitochondrial genetic diseases and mitochondrial DNA intraspecies diversity are key topics in several biotechnological fields. The database has been funded within the EU Biotechnology programme. PMID:10592207

  3. Computer Courseware Evaluations. January 1988 to December 1988. Volume VIII.

    ERIC Educational Resources Information Center

    Riome, Carol-Anne, Comp.

    The eighth in a series, this report reviews microcomputer software authorized by the Alberta (Canada) Department of Education from January 1988 through December 1988. This edition provides detailed evaluations of 40 authorized programs for teaching business education, computer literacy, databases, file management, French, information retrieval,…

  4. NICEM Thesaurus. First Edition.

    ERIC Educational Resources Information Center

    National Information Center for Educational Media, Albuquerque, NM.

    This thesaurus, developed by the National Information Center for Educational Media (NICEM), represents an expansion of the NICEM subject headings list, which is designed to provide access to a database of bibliographical records of nonprint, educational media. A preface discusses the issues that led to a revamping of the subject headings,…

  5. My Favorite Things Electronically Speaking, 1997 Edition.

    ERIC Educational Resources Information Center

    Glantz, Shelley

    1997-01-01

    Responding to an informal survey, 96 media specialists named favorite software, CD-ROMs, and online sites. This article lists automation packages, electronic encyclopedias, CD-ROMs, electronic magazine indexes, CD-ROM and online database services, electronic sources of current events, laser disks for grades 6-12, word processing programs for…

  6. Apples in the Apple Library--How One Library Took a Byte.

    ERIC Educational Resources Information Center

    Ertel, Monica

    1983-01-01

    Summarizes automation of a specialized library at Apple Computer, Inc., describing software packages chosen for the following functions: word processing/text editing; cataloging and circulation; reference; and in-house databases. Examples of each function and additional sources of information on software and equipment mentioned in the article are…

  7. Bending the Rules: When Deaf Writers Leave College

    ERIC Educational Resources Information Center

    Biser, Eileen; Rubel, Linda; Toscano, Rose Marie

    2007-01-01

    On-the-job writing of deaf college graduates at all degree levels was investigated. Institutional databases and questionnaires to alumni and employers were the sources for information. Respondents were asked about editing assistance, sources and types of assistance, and perceptions of such assistance by employers and employees. Results of the…

  8. Automated RTOP Management System

    NASA Technical Reports Server (NTRS)

    Hayes, P.

    1984-01-01

    The structure of NASA's Office of Aeronautics and Space Technology electronic information system network from 1983 to 1985 is illustrated. The RTOP automated system takes advantage of existing hardware, software, and expertise, and provides: (1) computerized cover sheet and resources forms; (2) electronic signature and transmission; (3) a data-based information system; (4) graphics; (5) intercenter communications; (6) management information; and (7) text editing. The system is coordinated with Headquarters efforts in codes R,E, and T.

  9. The GermOnline cross-species systems browser provides comprehensive information on genes and gene products relevant for sexual reproduction.

    PubMed

    Gattiker, Alexandre; Niederhauser-Wiederkehr, Christa; Moore, James; Hermida, Leandro; Primig, Michael

    2007-01-01

    We report a novel release of the GermOnline knowledgebase covering genes relevant for the cell cycle, gametogenesis and fertility. GermOnline was extended into a cross-species systems browser including information on DNA sequence annotation, gene expression and the function of gene products. The database covers eight model organisms and Homo sapiens, for which complete genome annotation data are available. The database is now built around a sophisticated genome browser (Ensembl), our own microarray information management and annotation system (MIMAS) used to extensively describe experimental data obtained with high-density oligonucleotide microarrays (GeneChips) and a comprehensive system for online editing of database entries (MediaWiki). The RNA data include results from classical microarrays as well as tiling arrays that yield information on RNA expression levels, transcript start sites and lengths as well as exon composition. Members of the research community are solicited to help GermOnline curators keep database entries on genes and gene products complete and accurate. The database is accessible at http://www.germonline.org/.

  10. A Graphics Facility for Integration, Editing, and Display of Slope, Curvature, and Contours from a Digital Terrain Elevation Database

    DTIC Science & Technology

    1988-06-01

    … assorted information about the world land masses. When this is done, the problem of storage, manipulation, and display of realistic, dense, and accurate … elevation data becomes a problem of paramount importance. If the data which is stored can be utilized to recreate specific information about certain …

  11. The Giardia genome project database.

    PubMed

    McArthur, A G; Morrison, H G; Nixon, J E; Passamaneck, N Q; Kim, U; Hinkle, G; Crocker, M K; Holder, M E; Farr, R; Reich, C I; Olsen, G E; Aley, S B; Adam, R D; Gillin, F D; Sogin, M L

    2000-08-15

    The Giardia genome project database provides an online resource for Giardia lamblia (WB strain, clone C6) genome sequence information. The database includes edited single-pass reads, the results of BLASTX searches, and details of progress towards sequencing the entire 12 million-bp Giardia genome. Pre-sorted BLASTX results can be retrieved based on keyword searches and BLAST searches of the high throughput Giardia data can be initiated from the web site or through NCBI. Descriptions of the genomic DNA libraries, project protocols and summary statistics are also available. Although the Giardia genome project is ongoing, new sequences are made available on a bi-monthly basis to ensure that researchers have access to information that may assist them in the search for genes and their biological function. The current URL of the Giardia genome project database is www.mbl.edu/Giardia.

  12. [Establishment of the database of the 3D facial models for the plastic surgery based on network].

    PubMed

    Liu, Zhe; Zhang, Hai-Lin; Zhang, Zheng-Guo; Qiao, Qun

    2008-07-01

    The aim was to collect three-dimensional (3D) facial data from 30 patients with facial deformities using a 3D scanner and to establish a professional Internet-based database that can support clinical intervention. The primitive point data of the facial topography were collected with the 3D scanner, and the 3D point cloud was then edited with reverse-engineering software to reconstruct a 3D model of the face. The database system is divided into three parts: basic information, disease information and surgery information. The programming language of the web system is Java. The tables of the database are reliably linked, and query and data-mining operations are convenient. Users can visit the database via the Internet and use the image analysis system to observe the 3D facial models interactively. In this paper we present a database and a web system adapted to plastic surgery of the human face, which can be used both in the clinic and in basic research.

  13. Internet Databases of the Properties, Enzymatic Reactions, and Metabolism of Small Molecules—Search Options and Applications in Food Science

    PubMed Central

    Minkiewicz, Piotr; Darewicz, Małgorzata; Iwaniak, Anna; Bucholska, Justyna; Starowicz, Piotr; Czyrko, Emilia

    2016-01-01

    Internet databases of small molecules, their enzymatic reactions, and metabolism have emerged as useful tools in food science. Database searching is also introduced as part of chemistry or enzymology courses for food technology students. Such resources support the search for information about single compounds and facilitate the introduction of secondary analyses of large datasets. Information can be retrieved from databases by searching for the compound name or structure, annotating with the help of chemical codes or drawn using molecule editing software. Data mining options may be enhanced by navigating through a network of links and cross-links between databases. Exemplary databases reviewed in this article belong to two classes: tools concerning small molecules (including general and specialized databases annotating food components) and tools annotating enzymes and metabolism. Some problems associated with database application are also discussed. Data summarized in computer databases may be used for calculation of daily intake of bioactive compounds, prediction of metabolism of food components, and their biological activity as well as for prediction of interactions between food component and drugs. PMID:27929431

  14. Internet Databases of the Properties, Enzymatic Reactions, and Metabolism of Small Molecules-Search Options and Applications in Food Science.

    PubMed

    Minkiewicz, Piotr; Darewicz, Małgorzata; Iwaniak, Anna; Bucholska, Justyna; Starowicz, Piotr; Czyrko, Emilia

    2016-12-06

    Internet databases of small molecules, their enzymatic reactions, and metabolism have emerged as useful tools in food science. Database searching is also introduced as part of chemistry or enzymology courses for food technology students. Such resources support the search for information about single compounds and facilitate the introduction of secondary analyses of large datasets. Information can be retrieved from databases by searching for the compound name or structure, annotating with the help of chemical codes or drawn using molecule editing software. Data mining options may be enhanced by navigating through a network of links and cross-links between databases. Exemplary databases reviewed in this article belong to two classes: tools concerning small molecules (including general and specialized databases annotating food components) and tools annotating enzymes and metabolism. Some problems associated with database application are also discussed. Data summarized in computer databases may be used for calculation of daily intake of bioactive compounds, prediction of metabolism of food components, and their biological activity as well as for prediction of interactions between food component and drugs.

  15. The HITRAN 2008 Molecular Spectroscopic Database

    NASA Technical Reports Server (NTRS)

    Rothman, Laurence S.; Gordon, Iouli E.; Barbe, Alain; Benner, D. Chris; Bernath, Peter F.; Birk, Manfred; Boudon, V.; Brown, Linda R.; Campargue, Alain; Champion, J.-P.; et al.

    2009-01-01

    This paper describes the status of the 2008 edition of the HITRAN molecular spectroscopic database. The new edition is the first official public release since the 2004 edition, although a number of crucial updates had been made available online since 2004. The HITRAN compilation consists of several components that serve as input for radiative-transfer calculation codes: individual line parameters for the microwave through visible spectra of molecules in the gas phase; absorption cross-sections for molecules having dense spectral features, i.e., spectra in which the individual lines are not resolved; individual line parameters and absorption cross sections for bands in the ultra-violet; refractive indices of aerosols, tables and files of general properties associated with the database; and database management software. The line-by-line portion of the database contains spectroscopic parameters for forty-two molecules including many of their isotopologues.

  16. KEGGParser: parsing and editing KEGG pathway maps in Matlab.

    PubMed

    Arakelyan, Arsen; Nersisyan, Lilit

    2013-02-15

    The KEGG pathway database is a collection of manually drawn pathway maps accompanied by KGML-format files intended for use in automatic analysis. KGML files, however, do not contain all of the information required to reproduce completely the events indicated in the static image of a pathway map. Several parsers and editors of KEGG pathways exist for processing KGML files. We introduce KEGGParser, a MATLAB-based tool for KEGG pathway parsing, semi-automatic fixing, editing, visualization and analysis in the MATLAB environment. It also works with Scilab. The source code is available at http://www.mathworks.com/matlabcentral/fileexchange/37561.
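    KEGGParser itself is MATLAB code; to keep the examples in this collection in one language, the following is a hedged Python sketch of the first step it performs, parsing entries and relations out of a KGML file with the standard XML library. The file name is a placeholder and only a small subset of KGML attributes is read.

```python
import xml.etree.ElementTree as ET

def parse_kgml(path):
    """Return (entries, relations) from a KGML pathway file.

    Reads only a few common KGML attributes (id/name/type on <entry>,
    entry1/entry2/type on <relation>); real files carry more detail,
    which is one reason KEGGParser also does semi-automatic fixing.
    """
    root = ET.parse(path).getroot()  # the <pathway> element
    entries = {e.get("id"): {"name": e.get("name"), "type": e.get("type")}
               for e in root.findall("entry")}
    relations = [(r.get("entry1"), r.get("entry2"), r.get("type"))
                 for r in root.findall("relation")]
    return entries, relations

if __name__ == "__main__":
    # 'hsa04010.xml' is a placeholder path for a downloaded KGML file.
    entries, relations = parse_kgml("hsa04010.xml")
    print(len(entries), "entries,", len(relations), "relations")
```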

  17. WGE: a CRISPR database for genome engineering.

    PubMed

    Hodgkins, Alex; Farne, Anna; Perera, Sajith; Grego, Tiago; Parry-Smith, David J; Skarnes, William C; Iyer, Vivek

    2015-09-15

    The rapid development of CRISPR-Cas9-mediated genome editing techniques has given rise to a number of online and stand-alone tools to find and score CRISPR sites for whole genomes. Here we describe the Wellcome Trust Sanger Institute Genome Editing database (WGE), which uses novel methods to compute, visualize and select optimal CRISPR sites in a genome browser environment. The WGE database currently stores single and paired CRISPR sites and pre-calculated off-target information for CRISPRs located in the mouse and human exomes. Scoring and display of off-target sites is simple and intuitive, and filters can be applied to identify high-quality CRISPR sites rapidly. WGE also provides a tool for the design and display of gene targeting vectors in the same genome browser, along with gene models, protein translation and variation tracks. WGE is open, extensible and can be set up to compute and present CRISPR sites for any genome. The WGE database is freely available at www.sanger.ac.uk/htgt/wge. Contact: vvi@sanger.ac.uk or skarnes@sanger.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.

  18. Multi-National Information Sharing -- Cross Domain Collaborative Information Environment (CDCIE) Solution. Revision 4

    DTIC Science & Technology

    2005-04-12

    • Hardware, database, and operating system independence using Java
    • Enterprise-class architecture using Java 2 Enterprise Edition 1.4
    • Standards-based ... portal applications; compliance with the Java Specification Request for Portlet APIs (JSR-168) (Portlet API) and Web Services for Remote Portals ... authentication and authorization
    • Portal standards using the Java Specification Request for Portlet APIs (JSR-168) (Portlet API) and Web Services for Remote ...

  19. Selective access and editing in a database

    NASA Technical Reports Server (NTRS)

    Maluf, David A. (Inventor); Gawdiak, Yuri O. (Inventor)

    2010-01-01

    Method and system for providing selective access to different portions of a database by different subgroups of database users. Where N users are involved, up to 2^N - 1 distinguishable access subgroups in a group space can be formed, where no two access subgroups have the same members. Two or more members of a given access subgroup can edit, substantially simultaneously, a document accessible to each member.
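
    The 2^N - 1 figure is simply the number of non-empty subsets of N users; a minimal Python sketch of that enumeration (with hypothetical user names) is shown below.

        # Enumerate every distinguishable access subgroup for a small user set.
        from itertools import combinations

        users = ["alice", "bob", "carol"]                 # N = 3, hypothetical users
        subgroups = [set(c)
                     for r in range(1, len(users) + 1)
                     for c in combinations(users, r)]

        assert len(subgroups) == 2 ** len(users) - 1      # 7 distinct subgroups
        for g in sorted(subgroups, key=lambda s: (len(s), sorted(s))):
            print(sorted(g))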

  20. Agricultural Safety and Health: A Resource Guide. Rural Information Center Publication Series, No. 40. Revised Edition.

    ERIC Educational Resources Information Center

    Zimmerman, Joy, Comp.

    This guide lists resource materials that address agricultural occupational injuries and diseases and their prevention. Many of the entries were derived from the AGRICOLA database produced by the National Agricultural Library and include journal articles, books, government reports, training materials, and audiovisual materials. The first section…

  1. Linking NCBI to Wikipedia: a wiki-based approach.

    PubMed

    Page, Roderic D M

    2011-03-31

    The NCBI Taxonomy underpins many bioinformatics and phyloinformatics databases, but by itself provides limited information on the taxa it contains. One readily available source of information on many taxa is Wikipedia. This paper describes iPhylo Linkout, a Semantic wiki that maps taxa in NCBI's taxonomy database onto corresponding pages in Wikipedia. Storing the mapping in a wiki makes it easy to edit, correct, or otherwise annotate the links between NCBI and Wikipedia. The mapping currently comprises some 53,000 taxa, and is available at http://iphylo.org/linkout. The links between NCBI and Wikipedia are also made available to NCBI users through the NCBI LinkOut service.
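
    The sketch below is a hedged Python illustration of the kind of link the mapping encodes: it fetches the scientific name for an NCBI Taxonomy ID through the public E-utilities service and builds a naive candidate Wikipedia URL from it. The actual iPhylo Linkout mapping is curated in the wiki, not generated this way.

        # Fetch a taxon's scientific name from NCBI E-utilities and guess a
        # Wikipedia page URL from it (naive name-based guess, for illustration).
        import urllib.request
        import xml.etree.ElementTree as ET

        EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

        def wikipedia_candidate(taxid):
            url = f"{EUTILS}?db=taxonomy&id={taxid}&retmode=xml"
            with urllib.request.urlopen(url) as resp:
                root = ET.fromstring(resp.read())
            name = root.findtext(".//ScientificName")
            return name, "https://en.wikipedia.org/wiki/" + name.replace(" ", "_")

        print(wikipedia_candidate(9606))   # ('Homo sapiens', '.../wiki/Homo_sapiens')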

  2. Intelligent Access to Sequence and Structure Databases (IASSD) - an interface for accessing information from major web databases.

    PubMed

    Ganguli, Sayak; Gupta, Manoj Kumar; Basu, Protip; Banik, Rahul; Singh, Pankaj Kumar; Vishal, Vineet; Bera, Abhisek Ranjan; Chakraborty, Hirak Jyoti; Das, Sasti Gopal

    2014-01-01

    With the advent of the age of big data and advances in high-throughput technology, accessing data has become one of the most important steps in the entire knowledge discovery process. Most users are not able to decipher the query result obtained when non-specific keywords, or a combination of keywords, are used. Intelligent Access to Sequence and Structure Databases (IASSD) is a desktop application for the Windows operating system. It is written in Java and utilizes the Web Services Description Language (WSDL) files and JAR files of the E-utilities of various databases, such as the National Centre for Biotechnology Information (NCBI) and the Protein Data Bank (PDB). In addition, IASSD allows the user to view protein structures using a JMOL application which supports conditional editing. The JAR file is freely available by e-mail from the corresponding author.

  3. Global Data Toolset (GDT)

    USGS Publications Warehouse

    Cress, Jill J.; Riegle, Jodi L.

    2007-01-01

    According to the United Nations Environment Programme World Conservation Monitoring Centre (UNEP-WCMC) approximately 60 percent of the data contained in the World Database on Protected Areas (WDPA) has missing or incomplete boundary information. As a result, global analyses based on the WDPA can be inaccurate, and professionals responsible for natural resource planning and priority setting must rely on incomplete geospatial data sets. To begin to address this problem the World Data Center for Biodiversity and Ecology, in cooperation with the U. S. Geological Survey (USGS) Rocky Mountain Geographic Science Center (RMGSC), the National Biological Information Infrastructure (NBII), the Global Earth Observation System, and the Inter-American Biodiversity Information Network (IABIN) sponsored a Protected Area (PA) workshop in Asuncion, Paraguay, in November 2007. The primary goal of this workshop was to train representatives from eight South American countries on the use of the Global Data Toolset (GDT) for reviewing and editing PA data. Use of the GDT will allow PA experts to compare their national data to other data sets, including non-governmental organization (NGO) and WCMC data, in order to highlight inaccuracies or gaps in the data, and then to apply any needed edits, especially in the delineation of the PA boundaries. In addition, familiarizing the participants with the web-enabled GDT will allow them to maintain and improve their data after the workshop. Once data edits have been completed the GDT will also allow the country authorities to perform any required review and validation processing. Once validated, the data can be used to update the global WDPA and IABIN databases, which will enhance analysis on global and regional levels.

  4. A user friendly database for use in ALARA job dose assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zodiates, A.M.; Willcock, A.

    1995-03-01

    The pressurized water reactor (PWR) design chosen for adoption by Nuclear Electric plc was based on the Westinghouse Standard Nuclear Unit Power Plant (SNUPPS). This design was developed to meet United Kingdom requirements, and these improvements are embodied in the Sizewell B plant, which will start commercial operation in 1994. A user-friendly database was developed to assist the station in the dose and ALARP assessments of the work expected to be carried out during station operation and outage. The database stores the information in an easily accessible form and enables updating, editing, retrieval, and searches of the information. The database contains job-related information such as job locations, number of workers required, job times, and the expected plant dose rates. It also contains the means to flag job requirements such as requirements for temporary shielding, flushing, scaffolding, etc. Typical uses of the database are envisaged to be in the prediction of occupational doses, the identification of high collective and individual dose jobs, use in ALARP assessments, setting of dose targets, monitoring of dose control performance, and others.

  5. Extended Edited Synoptic Cloud Reports from Ships and Land Stations Over the Globe, 1952-2009 (NDP-026C)

    DOE Data Explorer

    Hahn, C. J. [University of Arizona]; Warren, S. G. [University of Washington]; Eastman, R.

    1999-08-01

    This database contains surface synoptic weather reports for the entire globe, gathered from various available data sets. The reports were processed, edited, and rewritten to provide a single dataset of individual observations of clouds, spanning the 57 years 1952-2008 for ship data and the 39 years 1971-2009 for land station data. In addition to the cloud portion of the synoptic report, each edited report also includes the associated pressure, present weather, wind, air temperature, and dew point (and sea surface temperature over oceans). This data set is called the "Extended Edited Cloud Report Archive" (EECRA). The EECRA is based solely on visual cloud observations from weather stations, reported in the WMO synoptic code (WMO, 1974). Reports must contain cloud-type information to be included in the archive. Past data sources include those from the Fleet Numerical Oceanographic Center (FNOC, 1971-1976) and the National Centers for Environmental Prediction (NCEP, 1977-1996). This update uses data from a new source, the 'Integrated Surface Database' (ISD, 1997-2009; Smith et al., 2011). Our past analyses of the EECRA identified a subset of 5388 weather stations that were determined to produce reliable day and night observations of cloud amount and type. The update contains observations only from this subset of stations. Details concerning processing, previous problems, contents, and comments are available in the archive's original documentation. The EECRA contains about 81 million cloud observations from ships and 380 million from land stations. The data files have been compressed using Unix compression utilities; Unix/Linux users can "uncompress" or "gunzip" the files after downloading. If you're interested in the NDP-026C database, then you'll also want to explore its related data products, NDP-026D and NDP-026E.
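
    For the decompression step mentioned above, a small Python alternative to the command-line "gunzip" is sketched below; it assumes a gzip-compressed text file (the file name is a placeholder) and treats each line as one edited report, which is an assumption rather than a statement of the archive's actual record layout.

        # Open a gzip-compressed EECRA-style text file and peek at a few records.
        import gzip

        count = 0
        with gzip.open("eecra_land_sample.gz", "rt", errors="replace") as fh:
            for count, report in enumerate(fh, start=1):
                if count <= 3:
                    print(report.rstrip())    # assumed: one report per line
        print("total lines read:", count)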

  6. e23D: database and visualization of A-to-I RNA editing sites mapped to 3D protein structures.

    PubMed

    Solomon, Oz; Eyal, Eran; Amariglio, Ninette; Unger, Ron; Rechavi, Gidi

    2016-07-15

    e23D, a database of A-to-I RNA editing sites from human, mouse and fly mapped to evolutionarily related protein 3D structures, is presented. Genomic coordinates of A-to-I RNA editing sites are converted to protein coordinates and mapped onto 3D structures from PDB or theoretical models from ModBase. e23D allows visualization of the protein structure, modeling of recoding events and orientation of the editing with respect to nearby genomic functional sites from databases of disease-causing mutations and genomic polymorphisms. Availability: http://www.sheba-cancer.org.il/e23D. Contact: oz.solomon@live.biu.ac.il or Eran.Eyal@sheba.health.gov.il. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  7. ELSI Bibliography: Ethical legal and social implications of the Human Genome Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yesley, M.S.

    This second edition of the ELSI Bibliography provides a current and comprehensive resource for identifying publications on the major topics related to the ethical, legal and social issues (ELSI) of the Human Genome Project. Since the first edition of the ELSI Bibliography was printed last year, new publications and earlier ones identified by additional searching have doubled our computer database of ELSI publications to over 5600 entries. The second edition of the ELSI Bibliography reflects this growth of the underlying computer database. Researchers should note that an extensive collection of publications in the database is available for public use at the General Law Library of Los Alamos National Laboratory (LANL).

  8. Evaluation of consumer drug information databases.

    PubMed

    Choi, J A; Sullivan, J; Pankaskie, M; Brufsky, J

    1999-01-01

    The objective was to evaluate prescription drug information contained in six consumer drug information databases available on CD-ROM, and to make health care professionals aware of the information provided, so that they may appropriately recommend these databases for use by their patients. The design was an observational study of six consumer drug information databases: The Corner Drug Store, Home Medical Advisor, Mayo Clinic Family Pharmacist, Medical Drug Reference, Mosby's Medical Encyclopedia, and PharmAssist. Information on 20 frequently prescribed drugs was evaluated in each database. The databases were ranked using a point-scale system based on primary and secondary assessment criteria. For the primary assessment, 20 categories of information based on those included in the 1998 edition of the USP DI Volume II, Advice for the Patient: Drug Information in Lay Language were evaluated for each of the 20 drugs, and each database could earn up to 400 points (for example, 1 point was awarded if the database mentioned a drug's mechanism of action). For the secondary assessment, the inclusion of 8 additional features that could enhance the utility of the databases was evaluated (for example, 1 point was awarded if the database contained a picture of the drug), and each database could earn up to 8 points. The results of the primary and secondary assessments, listed from highest to lowest number of points earned, are as follows: primary assessment--Mayo Clinic Family Pharmacist (379), Medical Drug Reference (251), PharmAssist (176), Home Medical Advisor (113.5), The Corner Drug Store (98), and Mosby's Medical Encyclopedia (18.5); secondary assessment--Mayo Clinic Family Pharmacist (8), The Corner Drug Store (5), Mosby's Medical Encyclopedia (5), Home Medical Advisor (4), Medical Drug Reference (4), and PharmAssist (3). The Mayo Clinic Family Pharmacist was the most accurate and complete source of prescription drug information based on the USP DI Volume II and would be an appropriate database for health care professionals to recommend to patients.

  9. The Design of Lexical Database for Indonesian Language

    NASA Astrophysics Data System (ADS)

    Gunawan, D.; Amalia, A.

    2017-03-01

    Kamus Besar Bahasa Indonesia (KBBI), the official dictionary of the Indonesian language, provides lists of words with their meanings, and its online version can be accessed over the Internet. Another online dictionary is Kateglo. KBBI online and Kateglo only provide an interface for humans; a machine cannot easily retrieve data from these dictionaries without advanced techniques. However, lexical information about words is required in research and application development related to natural language processing, text mining, information retrieval or sentiment analysis. To address this requirement, we need to build a lexical database that provides well-defined, structured information about words. A well-known lexical database is WordNet, which provides the relations among English words. This paper proposes the design of a lexical database for the Indonesian language based on a combination of the KBBI 4th edition, Kateglo and the WordNet structure. Knowledge representation using semantic networks depicts the relations among words and provides a new structure for an Indonesian lexical database. The result of this design can be used as the foundation for building a lexical database for the Indonesian language.
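
    A toy Python sketch of the kind of structure such a design implies, namely words linked by typed semantic relations, is given below; the Indonesian entries, glosses and relations are illustrative examples, not data taken from KBBI, Kateglo or WordNet.

        # Minimal word graph: each word has a gloss, and words are linked by
        # typed relations such as "hypernym" or "synonym".
        from collections import defaultdict

        class Lexicon:
            def __init__(self):
                self.gloss = {}                       # word -> definition
                self.relations = defaultdict(set)     # (word, relation) -> related words

            def add_word(self, word, gloss):
                self.gloss[word] = gloss

            def relate(self, word, relation, other):
                self.relations[(word, relation)].add(other)

        lex = Lexicon()
        lex.add_word("rumah", "bangunan untuk tempat tinggal")   # "house"
        lex.add_word("gedung", "bangunan besar bertembok")       # "large building"
        lex.add_word("bangunan", "sesuatu yang didirikan")       # "structure"
        lex.relate("rumah", "hypernym", "bangunan")
        lex.relate("gedung", "hypernym", "bangunan")

        print(lex.relations[("rumah", "hypernym")])              # {'bangunan'}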

  10. Network Application Server Using Extensible Mark-Up Language (XML) to Support Distributed Databases and 3D Environments

    DTIC Science & Technology

    2001-12-01

    diides.ncr.disa.mil/xmlreg/user/index.cfm] [ Deitel ] Deitel , H., Deitel , P., Java How to Program 3rd Edition, Prentice Hall, 1999. [DL99...presentation, and data) of information and the programming functionality. The Web framework addressed ability to provide a framework for the distribution...BLANK v ABSTRACT Advances in computer communication technology and an increased awareness of how enhanced information access can lead to improved

  11. GarlicESTdb: an online database and mining tool for garlic EST sequences.

    PubMed

    Kim, Dae-Won; Jung, Tae-Sung; Nam, Seong-Hyeuk; Kwon, Hyuk-Ryul; Kim, Aeri; Chae, Sung-Hwa; Choi, Sang-Haeng; Kim, Dong-Wook; Kim, Ryong Nam; Park, Hong-Seog

    2009-05-18

    Allium sativum, commonly known as garlic, is a species in the onion genus (Allium), a large and diverse genus containing over 1,250 species. Its close relatives include chives, onion, leek and shallot. Garlic has been used throughout recorded history for culinary and medicinal purposes and for its health benefits. Interest in garlic is currently increasing because of its nutritional and pharmaceutical value, including effects related to high blood pressure, cholesterol, atherosclerosis and cancer. Nevertheless, no comprehensive database of garlic Expressed Sequence Tags (ESTs) has been available for gene discovery and future genome annotation efforts. We therefore developed a new garlic database and applications to enable comprehensive analysis of garlic gene expression. GarlicESTdb is an integrated database and mining tool for large-scale garlic (Allium sativum) EST sequencing. A total of 21,595 ESTs collected from an in-house cDNA library were used to construct the database. The analysis pipeline is an automated system written in Java and consists of the following components: automatic preprocessing of EST reads, assembly of raw sequences, annotation of the assembled sequences, storage of the analyzed information in MySQL databases, and graphic display of all processed data. A web application was implemented with J2EE (Java 2 Platform Enterprise Edition) technology (JSP/EJB/Java Servlet) for browsing and querying the database and for creating dynamic web pages on the client side; for mapping annotated enzymes to KEGG pathways, the AJAX framework was also partially used. The online resources, such as putative annotations, single nucleotide polymorphism (SNP) and tandem repeat data sets, can be searched by text, explored on the website, searched using BLAST, and downloaded. To curate the most significant BLAST results, a system was introduced with which biologists can easily edit best-hit annotation information for others to view. The GarlicESTdb web application is freely available at http://garlicdb.kribb.re.kr. GarlicESTdb is the first integrated online information database of EST sequences isolated from garlic that can be freely accessed and downloaded. It has many useful features for interactive mining of EST contigs and datasets from each library, including curation of annotated information, expression profiling, information retrieval, and summary statistics of functional annotation. Consequently, the development of GarlicESTdb will provide a crucial contribution to biologists for data mining and more efficient experimental studies.

  12. Photogrammetric mapping for cadastral land information systems

    NASA Astrophysics Data System (ADS)

    Muzakidis, Panagiotis D.

    The creation of a "clean" digital database is a most important and complex task, upon which the usefulness of a Parcel-Based Land Information System depends. Capturing data by photogrammetric methods for cadastral purposes necessitates the transformation of data into a computer compatible form. Such input requires the encoding, editing and structuring of data. The research is carried out in two phases, the first is concerned with defining the data modelling schemes and the classification of basic data for a parcel-based land information system together with the photogrammetric methods to be adopted to collect these data. The second deals with data editing and data structuring processes in order to produce "clean" information relevant to such a system. Implementation of the proposed system at both the data collection stage and within the data processing stage itself demands a number of flexible criteria to be defined within the methodology. Development of these criteria will include consideration of the cadastral characteristics peculiar to Greece.

  13. The GEISA Spectroscopic Database System in its latest Edition

    NASA Astrophysics Data System (ADS)

    Jacquinet-Husson, N.; Crépeau, L.; Capelle, V.; Scott, N. A.; Armante, R.; Chédin, A.

    2009-04-01

    GEISA (Gestion et Etude des Informations Spectroscopiques Atmosphériques: Management and Study of Spectroscopic Information)[1] is a computer-accessible spectroscopic database system, designed to facilitate accurate forward planetary radiative-transfer calculations using a line-by-line and layer-by-layer approach. It was initiated in 1976. Currently, GEISA is involved in activities related to the assessment of the capabilities of IASI (Infrared Atmospheric Sounding Interferometer, on board the METOP European satellite; http://earth-sciences.cnes.fr/IASI/) through the GEISA/IASI database[2] derived from GEISA. Since the Metop (http://www.eumetsat.int) launch (October 19th, 2006), GEISA/IASI has been the reference spectroscopic database for the validation of level-1 IASI data, using the 4A radiative transfer model[3] (4A/LMD, http://ara.lmd.polytechnique.fr; 4A/OP co-developed by LMD and Noveltis with the support of CNES). GEISA is also involved in planetary research, e.g. modelling of Titan's atmosphere and comparison with observations performed by Voyager (http://voyager.jpl.nasa.gov/), by ground-based telescopes, and by the instruments on board the Cassini-Huygens mission (http://www.esa.int/SPECIALS/Cassini-Huygens/index.html). The updated 2008 edition of GEISA (GEISA-08), a system comprising three independent sub-databases devoted, respectively, to line transition parameters, infrared and ultraviolet/visible absorption cross-sections, and microphysical and optical properties of atmospheric aerosols, will be described. Spectroscopic parameter quality requirements will be discussed in the context of comparisons between observed or simulated spectra of the Earth's and other planetary atmospheres. GEISA is implemented on the CNES/CNRS Ether Products and Services Centre web site (http://ether.ipsl.jussieu.fr), where all archived spectroscopic data can be handled through general and user-friendly management software facilities. More than 350 researchers are registered for online use of GEISA. Refs: 1. Jacquinet-Husson N., N.A. Scott, A. Chédin, L. Crépeau, R. Armante, V. Capelle, J. Orphal, A. Coustenis, C. Boonne, N. Poulet-Crovisier, et al. The GEISA spectroscopic database: current and future archive for Earth and planetary atmosphere studies. JQSRT, 109, 1043-1059, 2008. 2. Jacquinet-Husson N., N.A. Scott, A. Chédin, K. Garceran, R. Armante, et al. The 2003 edition of the GEISA/IASI spectroscopic database. JQSRT, 95, 429-467, 2005. 3. Scott, N.A. and A. Chédin, 1981: A fast line-by-line method for atmospheric absorption computations: the Automatized Atmospheric Absorption Atlas. J. Appl. Meteor., 20, 556-564.

  14. Patient-oriented cancer information on the internet: a comparison of wikipedia and a professionally maintained database.

    PubMed

    Rajagopalan, Malolan S; Khanna, Vineet K; Leiter, Yaacov; Stott, Meghan; Showalter, Timothy N; Dicker, Adam P; Lawrence, Yaacov R

    2011-09-01

    A wiki is a collaborative Web site, such as Wikipedia, that can be freely edited. Because of a wiki's lack of formal editorial control, we hypothesized that the content would be less complete and accurate than that of a professional peer-reviewed Web site. In this study, the coverage, accuracy, and readability of cancer information on Wikipedia were compared with those of the patient-orientated National Cancer Institute's Physician Data Query (PDQ) comprehensive cancer database. For each of 10 cancer types, medically trained personnel scored PDQ and Wikipedia articles for accuracy and presentation of controversies by using an appraisal form. Reliability was assessed by using interobserver variability and test-retest reproducibility. Readability was calculated from word and sentence length. Evaluators were able to rapidly assess articles (18 minutes/article), with a test-retest reliability of 0.71 and interobserver variability of 0.53. For both Web sites, inaccuracies were rare, less than 2% of information examined. PDQ was significantly more readable than Wikipedia: Flesch-Kincaid grade level 9.6 versus 14.1. There was no difference in depth of coverage between PDQ and Wikipedia (29.9, 34.2, respectively; maximum possible score 72). Controversial aspects of cancer care were relatively poorly discussed in both resources (2.9 and 6.1 for PDQ and Wikipedia, respectively, NS; maximum possible score 18). A planned subanalysis comparing common and uncommon cancers demonstrated no difference. Although the wiki resource had similar accuracy and depth as the professionally edited database, it was significantly less readable. Further research is required to assess how this influences patients' understanding and retention.
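
    Since readability here is reported as a Flesch-Kincaid grade level, the hedged Python sketch below shows how such a score is typically computed from sentence, word and (approximate) syllable counts; the naive syllable counter is an approximation, not the study's exact procedure.

        # Approximate Flesch-Kincaid grade level for a block of text.
        import re

        def count_syllables(word):
            return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

        def flesch_kincaid_grade(text):
            sentences = max(1, len(re.findall(r"[.!?]+", text)))
            words = re.findall(r"[A-Za-z']+", text)
            syllables = sum(count_syllables(w) for w in words)
            return (0.39 * len(words) / sentences
                    + 11.8 * syllables / len(words) - 15.59)

        sample = "Radiation therapy uses high-energy rays to destroy cancer cells."
        print(round(flesch_kincaid_grade(sample), 1))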

  15. REDIdb 3.0: A Comprehensive Collection of RNA Editing Events in Plant Organellar Genomes.

    PubMed

    Lo Giudice, Claudio; Pesole, Graziano; Picardi, Ernesto

    2018-01-01

    RNA editing is an important epigenetic mechanism by which genome-encoded transcripts are modified by substitutions, insertions and/or deletions. It was first discovered in kinetoplastid protozoa and has since been reported in a wide range of organisms. In plants, RNA editing occurs mostly by cytidine (C) to uridine (U) conversion in translated regions of organelle mRNAs and tends to modify affected codons, restoring evolutionarily conserved amino acid residues. RNA editing has also been described in non-protein-coding regions such as group II introns and structural RNAs. Despite its impact on organellar transcriptome and proteome complexity, current primary databases still do not provide a specific field for RNA editing events. To overcome these limitations, we developed REDIdb, a specialized database for RNA editing modifications in plant organelles. Here we describe its third release, containing more than 26,000 events in a completely novel web interface that places RNA editing in its genomic, biological and evolutionary context through whole-genome maps and multiple sequence alignments. REDIdb is freely available at http://srv00.recas.ba.infn.it/redidb/index.html.

  16. Bending the rules: when deaf writers leave college.

    PubMed

    Biser, Eileen; Rubel, Linda; Toscano, Rose Marie

    2007-01-01

    On-the-job writing of deaf college graduates at all degree levels was investigated. Institutional databases and questionnaires to alumni and employers were the sources for information. Respondents were asked about editing assistance, sources and types of assistance, and perceptions of such assistance by employers and employees. Results of the study confirmed that deaf employees did considerable writing regardless of degree or type of job. Their self-reports indicated grammar as the major weakness. Additionally, employers stated that clarity, organization, and spelling were serious writing problems. The study also showed that deaf employees asked for and received editing assistance and that employers were willing to support the improvement of writing skills. Because error-free texts are expected in the workplace and editing assistance is sought and received, postsecondary institutions should mimic these practices by providing copyediting services and instruction in the ethics and practices of working with editors.

  17. Transportation-markings database : international marine aids to navigation. Volume 1, parts C and D

    DOT National Transportation Integrated Search

    1988-01-01

    This monograph is the second edition of Volume I, Parts C and D of what was formerly termed Transportation Markings: A Study in Communication. The first edition of Volume I also included Parts A and B. The original edition was published by University...

  18. 3D visualization of molecular structures in the MOGADOC database

    NASA Astrophysics Data System (ADS)

    Vogt, Natalja; Popov, Evgeny; Rudert, Rainer; Kramer, Rüdiger; Vogt, Jürgen

    2010-08-01

    The MOGADOC database (Molecular Gas-Phase Documentation) is a powerful tool to retrieve information about compounds which have been studied in the gas-phase by electron diffraction, microwave spectroscopy and molecular radio astronomy. Presently the database contains over 34,500 bibliographic references (from the beginning of each method) for about 10,000 inorganic, organic and organometallic compounds and structural data (bond lengths, bond angles, dihedral angles, etc.) for about 7800 compounds. Most of the implemented molecular structures are given in a three-dimensional (3D) presentation. To create or edit and visualize the 3D images of molecules, new tools (special editor and Java-based 3D applet) were developed. Molecular structures in internal coordinates were converted to those in Cartesian coordinates.

  19. ExpEdit: a webserver to explore human RNA editing in RNA-Seq experiments.

    PubMed

    Picardi, Ernesto; D'Antonio, Mattia; Carrabino, Danilo; Castrignanò, Tiziana; Pesole, Graziano

    2011-05-01

    ExpEdit is a web application for assessing RNA editing in human at known or user-specified sites supported by transcript data obtained by RNA-Seq experiments. Mapping data (in SAM/BAM format) or directly sequence reads [in FASTQ/short read archive (SRA) format] can be provided as input to carry out a comparative analysis against a large collection of known editing sites collected in DARNED database as well as other user-provided potentially edited positions. Results are shown as dynamic tables containing University of California, Santa Cruz (UCSC) links for a quick examination of the genomic context. ExpEdit is freely available on the web at http://www.caspur.it/ExpEdit/.

  20. Patient-Oriented Cancer Information on the Internet: A Comparison of Wikipedia and a Professionally Maintained Database

    PubMed Central

    Rajagopalan, Malolan S.; Khanna, Vineet K.; Leiter, Yaacov; Stott, Meghan; Showalter, Timothy N.; Dicker, Adam P.; Lawrence, Yaacov R.

    2011-01-01

    Purpose: A wiki is a collaborative Web site, such as Wikipedia, that can be freely edited. Because of a wiki's lack of formal editorial control, we hypothesized that the content would be less complete and accurate than that of a professional peer-reviewed Web site. In this study, the coverage, accuracy, and readability of cancer information on Wikipedia were compared with those of the patient-orientated National Cancer Institute's Physician Data Query (PDQ) comprehensive cancer database. Methods: For each of 10 cancer types, medically trained personnel scored PDQ and Wikipedia articles for accuracy and presentation of controversies by using an appraisal form. Reliability was assessed by using interobserver variability and test-retest reproducibility. Readability was calculated from word and sentence length. Results: Evaluators were able to rapidly assess articles (18 minutes/article), with a test-retest reliability of 0.71 and interobserver variability of 0.53. For both Web sites, inaccuracies were rare, less than 2% of information examined. PDQ was significantly more readable than Wikipedia: Flesch-Kincaid grade level 9.6 versus 14.1. There was no difference in depth of coverage between PDQ and Wikipedia (29.9, 34.2, respectively; maximum possible score 72). Controversial aspects of cancer care were relatively poorly discussed in both resources (2.9 and 6.1 for PDQ and Wikipedia, respectively, NS; maximum possible score 18). A planned subanalysis comparing common and uncommon cancers demonstrated no difference. Conclusion: Although the wiki resource had similar accuracy and depth as the professionally edited database, it was significantly less readable. Further research is required to assess how this influences patients' understanding and retention. PMID:22211130

  1. Enhancement of Spatial Ability in Girls in a Single-Sex Environment through Spatial Experience and the Impact on Information Seeking

    ERIC Educational Resources Information Center

    Swarlis, Linda L.

    2008-01-01

    The test scores of spatial ability for women lag behind those of men in many spatial tests. On the Mental Rotations Test (MRT), a significant gender gap has existed for over 20 years and continues to exist. High spatial ability has been linked to efficiencies in typical computing tasks including Web and database searching, text editing, and…

  2. Examining database persistence of ISO/EN 13606 standardized electronic health record extracts: relational vs. NoSQL approaches.

    PubMed

    Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Lozano-Rubí, Raimundo; Serrano-Balazote, Pablo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario

    2017-08-18

    The objective of this research is to compare the relational and non-relational (NoSQL) database systems approaches in order to store, recover, query and persist standardized medical information in the form of ISO/EN 13606 normalized Electronic Health Record XML extracts, both in isolation and concurrently. NoSQL database systems have recently attracted much attention, but few studies in the literature address their direct comparison with relational databases when applied to build the persistence layer of a standardized medical information system. One relational and two NoSQL databases (one document-based and one native XML database) of three different sizes have been created in order to evaluate and compare the response times (algorithmic complexity) of six different complexity growing queries, which have been performed on them. Similar appropriate results available in the literature have also been considered. Relational and non-relational NoSQL database systems show almost linear algorithmic complexity query execution. However, they show very different linear slopes, the former being much steeper than the two latter. Document-based NoSQL databases perform better in concurrency than in isolation, and also better than relational databases in concurrency. Non-relational NoSQL databases seem to be more appropriate than standard relational SQL databases when database size is extremely high (secondary use, research applications). Document-based NoSQL databases perform in general better than native XML NoSQL databases. EHR extracts visualization and edition are also document-based tasks more appropriate to NoSQL database systems. However, the appropriate database solution much depends on each particular situation and specific problem.
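
    A minimal Python timing harness in the spirit of this comparison is sketched below; the three backend callables are placeholders for real driver code (for example a SQL query, a MongoDB aggregation and an XQuery request) and are not the authors' benchmark.

        # Time the same logical query against several backends and report means.
        import time
        import statistics

        def mean_response_time(run_query, repeats=10):
            samples = []
            for _ in range(repeats):
                start = time.perf_counter()
                run_query()                              # placeholder workload
                samples.append(time.perf_counter() - start)
            return statistics.mean(samples)

        backends = {
            "relational (SQL)": lambda: None,            # e.g. execute SQL here
            "document NoSQL":   lambda: None,            # e.g. run aggregation here
            "native XML NoSQL": lambda: None,            # e.g. run XQuery here
        }
        for name, query in backends.items():
            print(f"{name}: {mean_response_time(query) * 1000:.3f} ms (placeholder)")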

  3. Third millenium ideal gas and condensed phase thermochemical database for combustion (with update from active thermochemical tables).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burcat, A.; Ruscic, B.; Chemistry

    2005-07-29

    The thermochemical database of species involved in combustion processes is and has been available for free use for over 25 years. It was first published in print in 1984, approximately 8 years after it was first assembled, and contained 215 species at the time. This is the 7th printed edition and most likely will be the last one in print in the present format, which involves substantial manual labor. The database currently contains more than 1300 species, specifically organic molecules and radicals, but also inorganic species connected to combustion and air pollution. Since 1991 this database has been freely available on the Internet, at the Technion-IIT FTP server, and it is continuously expanded and corrected. The database is mirrored daily at an official mirror site, and at random at about a dozen unofficial mirror and 'finger' sites. The present edition contains numerous corrections and many recalculations of data of provisory type by the G3//B3LYP method, a high-accuracy composite ab initio calculation. About 300 species are newly calculated and are not yet published elsewhere. In anticipation of the full coupling, which is under development, the database started incorporating the available (as yet unpublished) values from Active Thermochemical Tables. The electronic version now also contains an XML file of the main database to allow transfer to other formats and ease finding specific information of interest. The database is used by scientists, educators, engineers and students at all levels, dealing primarily with combustion and air pollution, jet engines, rocket propulsion, fireworks, but also by researchers involved in upper atmosphere kinetics, astrophysics, abrasion metallurgy, etc. This introductory article contains explanations of the database and the means to use it, its sources, ways of calculation, and assessments of the accuracy of data.

  4. California Fault Parameters for the National Seismic Hazard Maps and Working Group on California Earthquake Probabilities 2007

    USGS Publications Warehouse

    Wills, Chris J.; Weldon, Ray J.; Bryant, W.A.

    2008-01-01

    This report describes development of fault parameters for the 2007 update of the National Seismic Hazard Maps and the Working Group on California Earthquake Probabilities (WGCEP, 2007). These reference parameters are contained within a database intended to be a source of values for use by scientists interested in producing either seismic hazard or deformation models to better understand the current seismic hazards in California. These parameters include descriptions of the geometry and rates of movements of faults throughout the state. These values are intended to provide a starting point for development of more sophisticated deformation models which include known rates of movement on faults as well as geodetic measurements of crustal movement and the rates of movements of the tectonic plates. The values will be used in developing the next generation of the time-independent National Seismic Hazard Maps, and the time-dependant seismic hazard calculations being developed for the WGCEP. Due to the multiple uses of this information, development of these parameters has been coordinated between USGS, CGS and SCEC. SCEC provided the database development and editing tools, in consultation with USGS, Golden. This database has been implemented in Oracle and supports electronic access (e.g., for on-the-fly access). A GUI-based application has also been developed to aid in populating the database. Both the continually updated 'living' version of this database, as well as any locked-down official releases (e.g., used in a published model for calculating earthquake probabilities or seismic shaking hazards) are part of the USGS Quaternary Fault and Fold Database http://earthquake.usgs.gov/regional/qfaults/ . CGS has been primarily responsible for updating and editing of the fault parameters, with extensive input from USGS and SCEC scientists.

  5. Online Mendelian Inheritance in Man (OMIM), a knowledgebase of human genes and genetic disorders.

    PubMed

    Hamosh, Ada; Scott, Alan F; Amberger, Joanna S; Bocchini, Carol A; McKusick, Victor A

    2005-01-01

    Online Mendelian Inheritance in Man (OMIM) is a comprehensive, authoritative and timely knowledgebase of human genes and genetic disorders compiled to support human genetics research and education and the practice of clinical genetics. Started by Dr Victor A. McKusick as the definitive reference Mendelian Inheritance in Man, OMIM (http://www.ncbi.nlm.nih.gov/omim/) is now distributed electronically by the National Center for Biotechnology Information, where it is integrated with the Entrez suite of databases. Derived from the biomedical literature, OMIM is written and edited at Johns Hopkins University with input from scientists and physicians around the world. Each OMIM entry has a full-text summary of a genetically determined phenotype and/or gene and has numerous links to other genetic databases such as DNA and protein sequence, PubMed references, general and locus-specific mutation databases, HUGO nomenclature, MapViewer, GeneTests, patient support groups and many others. OMIM is an easy and straightforward portal to the burgeoning information in human genetics.

  6. CrisprGE: a central hub of CRISPR/Cas-based genome editing.

    PubMed

    Kaur, Karambir; Tandon, Himani; Gupta, Amit Kumar; Kumar, Manoj

    2015-01-01

    CRISPR system is a powerful defense mechanism in bacteria and archaea to provide immunity against viruses. Recently, this process found a new application in intended targeting of the genomes. CRISPR-mediated genome editing is performed by two main components namely single guide RNA and Cas9 protein. Despite the enormous data generated in this area, there is a dearth of high throughput resource. Therefore, we have developed CrisprGE, a central hub of CRISPR/Cas-based genome editing. Presently, this database holds a total of 4680 entries of 223 unique genes from 32 model and other organisms. It encompasses information about the organism, gene, target gene sequences, genetic modification, modifications length, genome editing efficiency, cell line, assay, etc. This depository is developed using the open source LAMP (Linux Apache MYSQL PHP) server. User-friendly browsing, searching facility is integrated for easy data retrieval. It also includes useful tools like BLAST CrisprGE, BLAST NTdb and CRISPR Mapper. Considering potential utilities of CRISPR in the vast area of biology and therapeutics, we foresee this platform as an assistance to accelerate research in the burgeoning field of genome engineering. © The Author(s) 2015. Published by Oxford University Press.

  7. [International bibliographic databases--Current Contents on disk and in FTP format (Internet): presentation and guide].

    PubMed

    Bloch-Mouillet, E

    1999-01-01

    This paper aims to provide technical and practical advice about finding references using Current Contents on disk (Macintosh or PC) or via the Internet (FTP). Seven editions are published each week. They are all organized in the same way and have the same search engine. The Life Sciences edition, extensively used in medical research, is presented here in detail, as an example. This methodological note explains, in French, how to use this reference database. It is designed to be a practical guide for browsing and searching the database, and particularly for creating search profiles adapted to the needs of researchers.

  8. A Flexible Online Metadata Editing and Management System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilar, Raul; Pan, Jerry Yun; Gries, Corinna

    2010-01-01

    A metadata editing and management system is being developed employing state-of-the-art XML technologies. A modular and distributed design was chosen for scalability, flexibility, options for customization, and the possibility to add more functionality at a later stage. The system consists of a desktop design tool or schema walker used to generate code for the actual online editor, a native XML database, and an online user access management application. The design tool is a Java Swing application that reads an XML schema, provides the designer with options to combine input fields into online forms and gives the fields user-friendly tags. Based on design decisions, the tool generates code for the online metadata editor. The code generated is an implementation of the XForms standard using the Orbeon Framework. The design tool fulfills two requirements: first, data entry forms based on one schema may be customized at design time; second, data entry applications may be generated for any valid XML schema without relying on custom information in the schema. The customized information generated at design time is saved in a configuration file which may be re-used and changed again in the design tool. Future developments will add functionality to the design tool to integrate help text, tool tips, project-specific keyword lists, and thesaurus services. Additional styling of the finished editor is accomplished via cascading style sheets, which may be further customized, and different look-and-feels may be accumulated through the community process. The customized editor produces XML files in compliance with the original schema; however, data from the current page is saved into a native XML database whenever the user moves to the next screen or pushes the save button, independently of validity. Currently the system uses the open-source XML database eXist for storage and management, which comes with third-party online and desktop management tools. However, access to metadata files in the application introduced here is managed in a custom online module, using a MySQL backend accessed by a simple Java Server Faces front end. A flexible system with three grouping options (organization, group, and single editing access) is provided. Three levels were chosen to distribute administrative responsibilities and handle the common situation of an information manager entering the bulk of the metadata but leaving specifics to the actual data provider.
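
    As a hedged illustration of the storage step described above, the Python sketch below PUTs one XML record into a native XML database over eXist's REST interface; it assumes a locally running eXist instance, and the URL, collection, document name and credentials are placeholders rather than details of the system in the abstract.

        # Store a small XML metadata record in eXist via its REST interface.
        import base64
        import urllib.request

        xml_doc = b"<metadata><title>Soil moisture, plot 7</title></metadata>"
        url = "http://localhost:8080/exist/rest/db/metadata/plot7.xml"   # placeholder

        req = urllib.request.Request(url, data=xml_doc, method="PUT")
        req.add_header("Content-Type", "application/xml")
        credentials = base64.b64encode(b"admin:password").decode()       # placeholder login
        req.add_header("Authorization", "Basic " + credentials)

        with urllib.request.urlopen(req) as resp:
            print(resp.status)       # expect 201 Created for a new document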

  9. Manual editing of automatically recorded data in an anesthesia information management system.

    PubMed

    Wax, David B; Beilin, Yaakov; Hossain, Sabera; Lin, Hung-Mo; Reich, David L

    2008-11-01

    Anesthesia information management systems allow automatic recording of physiologic and anesthetic data. The authors investigated the prevalence of such data modification in an academic medical center. The authors queried their anesthesia information management system database of anesthetics performed in 2006 and tabulated the counts of data points for automatically recorded physiologic and anesthetic parameters as well as the subset of those data that were manually invalidated by clinicians (both with and without alternate values manually appended). Patient, practitioner, data source, and timing characteristics of recorded values were also extracted to determine their associations with editing of various parameters in the anesthesia information management system record. A total of 29,491 cases were analyzed, 19% of which had one or more data points manually invalidated. Among 58 attending anesthesiologists, each invalidated data in a median of 7% of their cases when working as a sole practitioner. A minority of invalidated values were manually appended with alternate values. Pulse rate, blood pressure, and pulse oximetry were the most commonly invalidated parameters. Data invalidation usually resulted in a decrease in parameter variance. Factors independently associated with invalidation included extreme physiologic values, American Society of Anesthesiologists physical status classification, emergency status, timing (phase of the procedure/anesthetic), presence of an intraarterial catheter, resident or certified registered nurse anesthetist involvement, and procedure duration. Editing of physiologic data automatically recorded in an anesthesia information management system is a common practice and results in decreased variability of intraoperative data. Further investigation may clarify the reasons for and consequences of this behavior.

  10. [Relevance of the hemovigilance regional database for the shared medical file identity server].

    PubMed

    Doly, A; Fressy, P; Garraud, O

    2008-11-01

    The French Health Products Safety Agency coordinates the national initiative to computerize blood product traceability within regional blood banks and public and private hospitals. The Auvergne-Loire Regional French Blood Service, based in Saint-Etienne, together with a number of public hospitals, set up a transfusion data network named EDITAL. After four years of progressive implementation and experimentation, software enabling standardized data exchange has built up a regional nominative database, endorsed by the Traceability Computerization National Committee in 2004. This database now provides secure web access to a regional transfusion history, enabling biologists and all hospital and family practitioners to take charge of patient follow-up. By running independently of its partners' software, the EDITAL database provides a reference for the regional identity server.

  11. An evaluation of information retrieval accuracy with simulated OCR output

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Croft, W.B.; Harding, S.M.; Taghva, K.

    Optical Character Recognition (OCR) is a critical part of many text-based applications. Although some commercial systems use the output from OCR devices to index documents without editing, there is very little quantitative data on the impact of OCR errors on the accuracy of a text retrieval system. Because of the difficulty of constructing test collections to obtain this data, we have carried out an evaluation using simulated OCR output on a variety of databases. The results show that high-quality OCR devices have little effect on the accuracy of retrieval, but low-quality devices used with databases of short documents can result in significant degradation.
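
    The idea of "simulated OCR output" can be illustrated with the short Python sketch below, which degrades clean text by substituting characters at a chosen error rate using a few common OCR confusions; the confusion table and rate are illustrative, not the study's actual error model.

        # Inject character-level substitution errors to mimic low-quality OCR.
        import random

        CONFUSIONS = {"e": "c", "l": "1", "o": "0", "i": "l", "a": "o"}

        def simulate_ocr(text, error_rate=0.05, seed=0):
            rng = random.Random(seed)
            out = []
            for ch in text:
                if ch.lower() in CONFUSIONS and rng.random() < error_rate:
                    out.append(CONFUSIONS[ch.lower()])
                else:
                    out.append(ch)
            return "".join(out)

        clean = "Optical character recognition errors lower retrieval accuracy."
        print(simulate_ocr(clean, error_rate=0.15))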

  12. The table of isotopes-8th edition and beyond

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Firestone, R.B.

    A new edition of the Table of Isotopes has been published this year by John Wiley and Sons, Inc. This edition is the eighth in a series started by Glenn T. Seaborg in 1940. The two-volume, 3168-page, cloth-bound edition is twice the size of the previous edition published in 1978. It contains nuclear structure and decay data, based mainly on the Evaluated Nuclear Structure Data File (ENSDF), for >3100 isotopes and isomers. Approximately 24,000 references are cited, and the appendices have been updated and extended. The book is packaged with an interactive CD-ROM that contains the Table of Isotopes in Adobe Acrobat Portable Document Format for convenient viewing on personal computers (PCs) and UNIX workstations. The CD-ROM version contains a chart of the nuclides graphical index and separate indices organized for radioisotope users and nuclear structure physicists. More than 100,000 hypertext links are provided to move the user quickly through related information free from the limitations of page size. Complete references with keyword abstracts are provided. The CD-ROM also contains the Table of Super-deformed Nuclear Bands and Fission Isomers; Tables of Atoms, Atomic Nuclei, and Subatomic Particles by Ivan P. Selinov; the ENSDF and nuclear structure reference (NSR) databases; the ENSDF manual by Jagdish K. Tuli; and Adobe Acrobat Reader software.

  13. Executing Complexity-Increasing Queries in Relational (MySQL) and NoSQL (MongoDB and EXist) Size-Growing ISO/EN 13606 Standardized EHR Databases

    PubMed Central

    Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario

    2018-01-01

    This research shows a protocol to assess the computational complexity of querying relational and non-relational (NoSQL (not only Structured Query Language)) standardized electronic health record (EHR) medical information database systems (DBMS). It uses a set of three doubling-sized databases, i.e. databases storing 5000, 10,000 and 20,000 realistic standardized EHR extracts, in three different database management systems (DBMS): relational MySQL object-relational mapping (ORM), document-based NoSQL MongoDB, and native extensible markup language (XML) NoSQL eXist. The average response times to six complexity-increasing queries were computed, and the results showed a linear behavior in the NoSQL cases. In the NoSQL field, MongoDB presents a much flatter linear slope than eXist. NoSQL systems may also be more appropriate to maintain standardized medical information systems due to the special nature of the updating policies of medical information, which should not affect the consistency and efficiency of the data stored in NoSQL databases. One limitation of this protocol is the lack of direct results of improved relational systems such as archetype relational mapping (ARM) with the same data. However, the interpolation of doubling-size database results to those presented in the literature and other published results suggests that NoSQL systems might be more appropriate in many specific scenarios and problems to be solved. For example, NoSQL may be appropriate for document-based tasks such as EHR extracts used in clinical practice, or edition and visualization, or situations where the aim is not only to query medical information, but also to restore the EHR in exactly its original form. PMID:29608174

  14. Executing Complexity-Increasing Queries in Relational (MySQL) and NoSQL (MongoDB and EXist) Size-Growing ISO/EN 13606 Standardized EHR Databases.

    PubMed

    Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario

    2018-03-19

    This research shows a protocol to assess the computational complexity of querying relational and non-relational (NoSQL (not only Structured Query Language)) standardized electronic health record (EHR) medical information database systems (DBMS). It uses a set of three doubling-sized databases, i.e. databases storing 5000, 10,000 and 20,000 realistic standardized EHR extracts, in three different database management systems (DBMS): relational MySQL object-relational mapping (ORM), document-based NoSQL MongoDB, and native extensible markup language (XML) NoSQL eXist. The average response times to six complexity-increasing queries were computed, and the results showed a linear behavior in the NoSQL cases. In the NoSQL field, MongoDB presents a much flatter linear slope than eXist. NoSQL systems may also be more appropriate to maintain standardized medical information systems due to the special nature of the updating policies of medical information, which should not affect the consistency and efficiency of the data stored in NoSQL databases. One limitation of this protocol is the lack of direct results of improved relational systems such as archetype relational mapping (ARM) with the same data. However, the interpolation of doubling-size database results to those presented in the literature and other published results suggests that NoSQL systems might be more appropriate in many specific scenarios and problems to be solved. For example, NoSQL may be appropriate for document-based tasks such as EHR extracts used in clinical practice, or edition and visualization, or situations where the aim is not only to query medical information, but also to restore the EHR in exactly its original form.

  15. Psychology's struggle for existence: Second edition, 1913.

    PubMed

    Wundt, Wilhelm; Lamiell, James T

    2013-08-01

    Presents an English translation of Wilhelm Wundt's Psychology's struggle for existence: Second edition, 1913, by James T. Lamiell in August, 2012. In his essay, Wundt advised against the impending divorce of psychology from philosophy. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  16. [The biomedical periodicals of Hungarian editions--historical overview].

    PubMed

    Berhidi, Anna; Geges, József; Vasas, Lívia

    2006-03-12

    The majority of Hungarian scientific results are published in international periodicals in foreign languages, yet publications in Hungarian scientific periodicals should not be ignored. This study analyses biomedical periodicals of Hungarian edition from several points of view. Based on different databases, a list of 119 titles was compiled, containing both the core and the peripheral journals of the biomedical field. These periodicals were analysed empirically, title by title. Thirteen of the titles have ceased publication; of the remaining 106 Hungarian scientific journals, 10 are published in English. Of the remaining majority, published in Hungarian, only a few appear in international databases. Although a quarter of the Hungarian biomedical journals meet the requirements for representation in international databases, these periodicals are not indexed. Forty-two biomedical periodicals are available online, although a quarter of these journals have restricted access. Two-thirds of the Hungarian biomedical journals have detailed instructions to authors; these instructions inform publishing doctors and researchers of the requirements of a biomedical periodical. The increasing number of Hungarian biomedical journals published is welcome news, but it would be important for quality publications that are widely cited to appear in Hungarian journals. The more publications are cited, the more journals and authors gain in prestige at home and internationally.

  17. Applications of Technology to CAS Data-Base Production.

    ERIC Educational Resources Information Center

    Weisgerber, David W.

    1984-01-01

    Reviews the economic importance of applying computer technology to Chemical Abstracts Service database production from 1973 to 1983. Database building, technological applications for editorial processing (online editing, Author Index Manufacturing System), and benefits (increased staff productivity, reduced rate of increase of cost of services,…

  18. Scope, completeness, and accuracy of drug information in Wikipedia.

    PubMed

    Clauson, Kevin A; Polen, Hyla H; Boulos, Maged N Kamel; Dzenowagis, Joan H

    2008-12-01

    With the advent of Web 2.0 technologies, user-edited online resources such as Wikipedia are increasingly tapped for information. However, there is little research on the quality of health information found in Wikipedia. To compare the scope, completeness, and accuracy of drug information in Wikipedia with that of a free, online, traditionally edited database (Medscape Drug Reference [MDR]). Wikipedia and MDR were assessed on 8 categories of drug information. Questions were constructed and answers were verified with authoritative resources. Wikipedia and MDR were evaluated according to scope (breadth of coverage) and completeness. Accuracy was tracked by factual errors and errors of omission. Descriptive statistics were used to summarize the components. Fisher's exact test was used to compare scope and paired Student's t-test was used to compare current results in Wikipedia with entries 90 days prior to the current access. Wikipedia was able to answer significantly fewer drug information questions (40.0%) compared with MDR (82.5%; p < 0.001). Wikipedia performed poorly regarding information on dosing, with a score of 0% versus the MDR score of 90.0%. Answers found in Wikipedia were 76.0% complete, while MDR provided answers that were 95.5% complete; overall, Wikipedia answers were less complete than those in Medscape (p < 0.001). No factual errors were found in Wikipedia, whereas 4 answers in Medscape conflicted with the answer key; errors of omission were higher in Wikipedia (n = 48) than in MDR (n = 14). There was a marked improvement in Wikipedia over time, as current entries were superior to those 90 days prior (p = 0.024). Wikipedia has a more narrow scope, is less complete, and has more errors of omission than the comparator database. Wikipedia may be a useful point of engagement for consumers, but is not authoritative and should only be a supplemental source of drug information.
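
    The two tests named in this study can be reproduced in outline with SciPy, as in the hedged sketch below; the counts and scores are illustrative stand-ins, not the study's raw data.

        # Fisher's exact test on answered/unanswered counts, and a paired t-test
        # on completeness scores for the same entries at two time points.
        from scipy import stats

        answered = [[32, 48],        # resource A: answered, not answered (illustrative)
                    [66, 14]]        # resource B: answered, not answered (illustrative)
        odds_ratio, p_scope = stats.fisher_exact(answered)

        now     = [0.80, 0.75, 0.90, 0.70, 0.85]   # illustrative completeness scores
        earlier = [0.70, 0.72, 0.85, 0.65, 0.80]
        t_stat, p_change = stats.ttest_rel(now, earlier)

        print(f"scope difference: p = {p_scope:.4f}")
        print(f"change over 90 days: p = {p_change:.4f}")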

  19. Advancements in web-database applications for rabies surveillance.

    PubMed

    Rees, Erin E; Gendron, Bruno; Lelièvre, Frédérick; Coté, Nathalie; Bélanger, Denise

    2011-08-02

    Protection of public health from rabies is informed by the analysis of surveillance data from human and animal populations. In Canada, public health, agricultural and wildlife agencies at the provincial and federal level are responsible for rabies disease control, and this has led to multiple agency-specific data repositories. Aggregation of agency-specific data into one database application would enable more comprehensive data analyses and effective communication among participating agencies. In Québec, RageDB was developed to house surveillance data for the raccoon rabies variant, representing the next generation in web-based database applications that provide a key resource for the protection of public health. RageDB incorporates data from, and grants access to, all agencies responsible for the surveillance of raccoon rabies in Québec. Technological advancements of RageDB to rabies surveillance databases include (1) automatic integration of multi-agency data and diagnostic results on a daily basis; (2) a web-based data editing interface that enables authorized users to add, edit and extract data; and (3) an interactive dashboard to help visualize data simply and efficiently, in table, chart, and cartographic formats. Furthermore, RageDB stores data from citizens who voluntarily report sightings of rabies suspect animals. We also discuss how sightings data can indicate public perception to the risk of racoon rabies and thus aid in directing the allocation of disease control resources for protecting public health. RageDB provides an example in the evolution of spatio-temporal database applications for the storage, analysis and communication of disease surveillance data. The database was fast and inexpensive to develop by using open-source technologies, simple and efficient design strategies, and shared web hosting. The database increases communication among agencies collaborating to protect human health from raccoon rabies. Furthermore, health agencies have real-time access to a wide assortment of data documenting new developments in the raccoon rabies epidemic and this enables a more timely and appropriate response.

  20. Advancements in web-database applications for rabies surveillance

    PubMed Central

    2011-01-01

    Background Protection of public health from rabies is informed by the analysis of surveillance data from human and animal populations. In Canada, public health, agricultural and wildlife agencies at the provincial and federal level are responsible for rabies disease control, and this has led to multiple agency-specific data repositories. Aggregation of agency-specific data into one database application would enable more comprehensive data analyses and effective communication among participating agencies. In Québec, RageDB was developed to house surveillance data for the raccoon rabies variant, representing the next generation in web-based database applications that provide a key resource for the protection of public health. Results RageDB incorporates data from, and grants access to, all agencies responsible for the surveillance of raccoon rabies in Québec. Technological advancements of RageDB to rabies surveillance databases include 1) automatic integration of multi-agency data and diagnostic results on a daily basis; 2) a web-based data editing interface that enables authorized users to add, edit and extract data; and 3) an interactive dashboard to help visualize data simply and efficiently, in table, chart, and cartographic formats. Furthermore, RageDB stores data from citizens who voluntarily report sightings of rabies suspect animals. We also discuss how sightings data can indicate public perception to the risk of racoon rabies and thus aid in directing the allocation of disease control resources for protecting public health. Conclusions RageDB provides an example in the evolution of spatio-temporal database applications for the storage, analysis and communication of disease surveillance data. The database was fast and inexpensive to develop by using open-source technologies, simple and efficient design strategies, and shared web hosting. The database increases communication among agencies collaborating to protect human health from raccoon rabies. Furthermore, health agencies have real-time access to a wide assortment of data documenting new developments in the raccoon rabies epidemic and this enables a more timely and appropriate response. PMID:21810215

  1. Virus taxonomy: the database of the International Committee on Taxonomy of Viruses (ICTV)

    PubMed Central

    Dempsey, Donald M; Hendrickson, Robert Curtis; Orton, Richard J; Siddell, Stuart G; Smith, Donald B

    2018-01-01

    Abstract The International Committee on Taxonomy of Viruses (ICTV) is charged with the task of developing, refining, and maintaining a universal virus taxonomy. This task encompasses the classification of virus species and higher-level taxa according to the genetic and biological properties of their members; naming virus taxa; maintaining a database detailing the currently approved taxonomy; and providing the database, supporting proposals, and other virus-related information from an open-access, public web site. The ICTV web site (http://ictv.global) provides access to the current taxonomy database in online and downloadable formats, and maintains a complete history of virus taxa back to the first release in 1971. The ICTV has also published the ICTV Report on Virus Taxonomy starting in 1971. This Report provides a comprehensive description of all virus taxa covering virus structure, genome structure, biology and phylogenetics. The ninth ICTV report, published in 2012, is available as an open-access online publication from the ICTV web site. The current, 10th report (http://ictv.global/report/), is being published online, and is replacing the previous hard-copy edition with a completely open access, continuously updated publication. No other database or resource exists that provides such a comprehensive, fully annotated compendium of information on virus taxa and taxonomy. PMID:29040670

  2. Kinetic Modeling using BioPAX ontology

    PubMed Central

    Ruebenacker, Oliver; Moraru, Ion. I.; Schaff, James C.; Blinov, Michael L.

    2010-01-01

    Thousands of biochemical interactions are available for download from curated databases such as Reactome, Pathway Interaction Database and other sources in the Biological Pathways Exchange (BioPAX) format. However, the BioPAX ontology does not encode the necessary information for kinetic modeling and simulation. The current standard for kinetic modeling is the Systems Biology Markup Language (SBML), but only a small number of models are available in SBML format in public repositories. Additionally, reusing and merging SBML models presents a significant challenge, because often each element has a value only in the context of the given model, and information encoding biological meaning is absent. We describe a software system that enables a variety of operations facilitating the use of BioPAX data to create kinetic models that can be visualized, edited, and simulated using the Virtual Cell (VCell), including improved conversion to SBML (for use with other simulation tools that support this format). PMID:20862270

  3. [Application characteristics and situation analysis of volatile oils in database of Chinese patent medicine].

    PubMed

    Wang, Sai-Jun; Wu, Zhen-Feng; Yang, Ming; Wang, Ya-Qi; Hu, Peng-Yi; Jie, Xiao-Lu; Han, Fei; Wang, Fang

    2014-09-01

    Aromatic traditional Chinese medicines have a long history in China and come in a wide range of varieties. Volatile oils are active ingredients extracted from aromatic herbal medicines; they usually contain tens or hundreds of constituents with many biological activities. Volatile oils are therefore often used in combined prescriptions and made into various efficient preparations for oral administration or external use. Drawing on the database of Newly Edited National Chinese Traditional Patent Medicines (second edition), the authors selected 266 Chinese patent medicines containing volatile oils and compiled an information sheet covering items such as name, dosage, dosage form, specification and usage, and main functions. Subsequently, using multidisciplinary knowledge from pharmaceutics, traditional Chinese pharmacology and the basic theory of traditional Chinese medicine, statistics were compiled on dosage forms and usage, the variety of volatile oils and their main functions, and the status of volatile oils was analysed in terms of dosage form development, prescription development, drug instructions and quality control, in order to lay a foundation for further exploration of the market situation of volatile oils and their future development.

  4. Project Manager’s Guide to the Scientific and Technical Information (STINFO) Program and Technical Publications Process

    DTIC Science & Technology

    1993-12-01

    Reports may be definitive for the subject presented, exploratory in nature, or an evaluation of critical subsystems or of technical problems, 4...International Security 9 Social and Natural Science Studies Field 41 Edit: (Type 3) -Entry of an invalid code when Performance Type is "C" or "M" will...analysis SF Foreign area social science research SP Foreign area policy planning research BF Identifies databases with data on foreign forces or

  5. Learning from research on the information behaviour of healthcare professionals: a review of the literature 2004-2008 with a focus on emotion.

    PubMed

    Fourie, Ina

    2009-09-01

    A review, focusing on emotion, was conducted of reported studies on the information behaviour of healthcare professionals (2004-2008). Findings were intended to offer guidelines on information services and information literacy training, to note gaps in research and to raise research interest. Databases were searched for literature published from January 2004 to December 2008 and indexed on eric, Library and Information Science Abstracts, medline, PsycINFO, Social Services Abstracts, Sociological Abstracts, Health Source: Nursing/Academic Edition; Library, Information Science & Technology Abstracts; Psychology and Behavioral Sciences Collection; Social Work Abstracts; SocINDEX with Full Text; SPORTDiscus; cinhal; and the ISI Web of Knowledge databases. Key journals were manually scanned and citations followed. Literature was included if reporting on issues concerning emotion. Emotion in information behaviour in healthcare contexts is scantily addressed. This review, however, offers some insight into the difficulty in identifying and expressing information needs; sense making and the need to fill knowledge gaps; uncertainty; personality and coping skills; motivation to seeking information; emotional experiences during information seeking; self-confidence and attitude; emotional factors in the selection of information channels; and seeking information for psychological or emotional reasons. Suggestions following findings, address information literacy programs, information services and research gaps.

  6. The GLAS editing procedures for the FGGE level II-B data collected during SOP-1 and 2

    NASA Technical Reports Server (NTRS)

    Baker, W.; Edelmann, D.; Carus, H.

    1981-01-01

    The modifications made to the FGGE Level II-b data are discussed and the FORTRAN program developed to perform the modifications is described. It is suggested that the edited database is the most accurate one available for FGGE SOP-1 and 2.

  7. Online Mendelian Inheritance in Man (OMIM), a knowledgebase of human genes and genetic disorders.

    PubMed

    Hamosh, Ada; Scott, Alan F; Amberger, Joanna; Bocchini, Carol; Valle, David; McKusick, Victor A

    2002-01-01

    Online Mendelian Inheritance in Man (OMIM) is a comprehensive, authoritative and timely knowledgebase of human genes and genetic disorders compiled to support research and education in human genomics and the practice of clinical genetics. Started by Dr Victor A. McKusick as the definitive reference Mendelian Inheritance in Man, OMIM (www.ncbi.nlm.nih.gov/omim) is now distributed electronically by the National Center for Biotechnology Information (NCBI), where it is integrated with the Entrez suite of databases. Derived from the biomedical literature, OMIM is written and edited at Johns Hopkins University with input from scientists and physicians around the world. Each OMIM entry has a full-text summary of a genetically determined phenotype and/or gene and has numerous links to other genetic databases such as DNA and protein sequence, PubMed references, general and locus-specific mutation databases, approved gene nomenclature, and the highly detailed mapviewer, as well as patient support groups and many others. OMIM is an easy and straightforward portal to the burgeoning information in human genetics.

  8. A mobile trauma database with charge capture.

    PubMed

    Moulton, Steve; Myung, Dan; Chary, Aron; Chen, Joshua; Agarwal, Suresh; Emhoff, Tim; Burke, Peter; Hirsch, Erwin

    2005-11-01

    Charge capture plays an important role in every surgical practice. We have developed and merged a custom mobile database (DB) system with our trauma registry (TRACS), to better understand our billing methods, revenue generators, and areas for improved revenue capture. The mobile database runs on handheld devices using the Windows Compact Edition platform. The front end was written in C# and the back end is SQL. The mobile database operates as a thick client; it includes active and inactive patient lists, billing screens, hot pick lists, and Current Procedural Terminology and International Classification of Diseases, Ninth Revision code sets. Microsoft Internet Information Server provides secure data transaction services between the back ends stored on each device. Traditional, handwritten billing information for three of five adult trauma surgeons was averaged over a 5-month period. Electronic billing information was then collected over a 3-month period using handheld devices and the subject software application. One surgeon used the software for all 3 months, and two surgeons used it for the latter 2 months of the electronic data collection period. This electronic billing information was combined with TRACS data to determine the clinical characteristics of the trauma patients who were and were not captured using the mobile database. Total charges increased by 135%, 148%, and 228% for each of the three trauma surgeons who used the mobile DB application. The majority of additional charges were for evaluation and management services. Patients who were captured and billed at the point of care using the mobile DB had higher Injury Severity Scores, were more likely to undergo an operative procedure, and had longer lengths of stay compared with those who were not captured. Total charges more than doubled using a mobile database to bill at the point of care. A subsequent comparison of TRACS data with billing information revealed a large amount of uncaptured patient revenue. Greater familiarity and broader use of mobile database technology holds the potential for even greater revenue capture.

  9. Contaminated sediments database for the Gulf of Maine

    USGS Publications Warehouse

    Buchholtz ten Brink, Marilyn R.; Manheim, F.T.; Mecray, E.L.; Hastings, M.E.; Currence, J.M.; Farrington, J.W.; Jones, S.H.; Larsen, P.F.; Tripp, B.W.; Wallace, G.T.; Ward, L.G.; Fredette, T.J.; Liebman, M.L.; Smith Leo, W.

    2002-01-01

    Bottom sediments in the Gulf of Maine and its estuaries have accumulated pollutants of many types, including metals and organic compounds of agricultural, industrial, and household derivation. Much analytical and descriptive data has been obtained on these sediments over the past decades, but only a small effort had been made, prior to this project, to compile and edit the published and unpublished data in forms suitable for a variety of users. The Contaminated Sediments Database for the Gulf of Maine provides a compilation and synthesis of existing data to help establish the environmental status of our coastal sediments and the transport paths and fate of contaminants in this region. This information, in turn, forms one of the essential bases for developing successful remediation and resource management policies.

  10. Map showing geologic terranes of the Hailey 1 degree x 2 degrees quadrangle and the western part of the Idaho Falls 1 degree x 2 degrees quadrangle, south-central Idaho

    USGS Publications Warehouse

    Worl, R.G.; Johnson, K.M.

    1995-01-01

    The paper version of Map Showing Geologic Terranes of the Hailey 1x2 Quadrangle and the western part of the Idaho Falls 1x2 Quadrangle, south-central Idaho was compiled by Ron Worl and Kate Johnson in 1995. The plate was compiled on a 1:250,000-scale topographic base map. TechniGraphic System, Inc. of Fort Collins, Colorado, digitized this map under contract for N. Shock. G. Green edited and prepared the digital version for publication as a geographic information system database. The digital geologic map database can be queried in many ways to produce a variety of geologic maps.

  11. Interoperability, Data Control and Battlespace Visualization using XML, XSLT and X3D

    DTIC Science & Technology

    2003-09-01

    26 Rosenthal, Arnon; Seligman, Len; and Costello, Roger, XML, Databases, and Interoperability, Federal Database Colloquium, AFCEA, San Diego...79 Rosenthal, Arnon; Seligman, Len; and Costello, Roger, “XML, Databases, and Interoperability”, Federal Database Colloquium, AFCEA, San Diego, 1999... Linda, Mastering XML, Premium Edition, SYBEX, 2001; Wooldridge, Michael, An Introduction to MultiAgent Systems, Wiley, 2002; PAPERS: Abernathy, M

  12. Artemis and ACT: viewing, annotating and comparing sequences stored in a relational database.

    PubMed

    Carver, Tim; Berriman, Matthew; Tivey, Adrian; Patel, Chinmay; Böhme, Ulrike; Barrell, Barclay G; Parkhill, Julian; Rajandream, Marie-Adèle

    2008-12-01

    Artemis and Artemis Comparison Tool (ACT) have become mainstream tools for viewing and annotating sequence data, particularly for microbial genomes. Since its first release, Artemis has been continuously developed and supported with additional functionality for editing and analysing sequences based on feedback from an active user community of laboratory biologists and professional annotators. Nevertheless, its utility has been somewhat restricted by its limitation to reading and writing from flat files. Therefore, a new version of Artemis has been developed, which reads from and writes to a relational database schema, and allows users to annotate more complex, often large and fragmented, genome sequences. Artemis and ACT have now been extended to read and write directly to the Generic Model Organism Database (GMOD, http://www.gmod.org) Chado relational database schema. In addition, a Gene Builder tool has been developed to provide structured forms and tables to edit coordinates of gene models and edit functional annotation, based on standard ontologies, controlled vocabularies and free text. Artemis and ACT are freely available (under a GPL licence) for download (for MacOSX, UNIX and Windows) at the Wellcome Trust Sanger Institute web sites: http://www.sanger.ac.uk/Software/Artemis/ http://www.sanger.ac.uk/Software/ACT/

  13. Aided generation of search interfaces to astronomical archives

    NASA Astrophysics Data System (ADS)

    Zorba, Sonia; Bignamini, Andrea; Cepparo, Francesco; Knapic, Cristina; Molinaro, Marco; Smareglia, Riccardo

    2016-07-01

    Astrophysical data providers that host web-based interfaces for access to data resources have to cope with changes in data management that imply partial rewrites of their web applications. To avoid doing this manually, a dynamically configurable Java EE web application was developed that can set itself up by reading the needed information from configuration files. The specification of what information the astronomical archive database has to expose is managed using the TAP_SCHEMA schema from the IVOA TAP recommendation, which can be edited through a graphical interface. Once the configuration steps are complete, the tool builds a WAR file for easy deployment of the application.
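
    As an illustration of what a TAP_SCHEMA-driven archive exposes, the sketch below queries the standard TAP_SCHEMA.columns table of a TAP service with the pyvo library; the service URL is hypothetical, and the snippet is not part of the tool described above.

      # Minimal sketch: list the tables and columns a TAP service advertises
      # through TAP_SCHEMA. The URL is a placeholder, not a real archive.
      import pyvo as vo

      service = vo.dal.TAPService("https://archive.example.org/tap")
      result = service.search(
          "SELECT table_name, column_name, datatype FROM TAP_SCHEMA.columns"
      )
      for row in result:
          print(row["table_name"], row["column_name"], row["datatype"])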

  14. Plant rDNA database: update and new features.

    PubMed

    Garcia, Sònia; Gálvez, Francisco; Gras, Airy; Kovařík, Aleš; Garnatje, Teresa

    2014-01-01

    The Plant rDNA database (www.plantrdnadatabase.com) is an open access online resource providing detailed information on numbers, structures and positions of 5S and 18S-5.8S-26S (35S) ribosomal DNA loci. The data have been obtained from >600 publications on plant molecular cytogenetics, mostly based on fluorescent in situ hybridization (FISH). This edition of the database contains information on 1609 species derived from 2839 records, which means an expansion of 55.76 and 94.45%, respectively. It holds the data for angiosperms, gymnosperms, bryophytes and pteridophytes available as of June 2013. Information from publications reporting data for a single rDNA (either 5S or 35S alone) and annotation regarding transcriptional activity of 35S loci now appears in the database. Preliminary analyses suggest greater variability in the number of rDNA loci in gymnosperms than in angiosperms. New applications provide ideograms of the species showing the positions of rDNA loci as well as a visual representation of their genome sizes. We have also introduced other features to boost the usability of the Web interface, such as an application for convenient data export and a new section with rDNA-FISH-related information (mostly detailing protocols and reagents). In addition, we upgraded and/or proofread tabs and links and modified the website for a more dynamic appearance. This manuscript provides a synopsis of these changes and developments. http://www.plantrdnadatabase.com. © The Author(s) 2014. Published by Oxford University Press.

  15. Use of a secure Internet Web site for collaborative medical research.

    PubMed

    Marshall, W W; Haley, R W

    2000-10-11

    Researchers who collaborate on clinical research studies from diffuse locations need a convenient, inexpensive, secure way to record and manage data. The Internet, with its World Wide Web, provides a vast network that enables researchers with diverse types of computers and operating systems anywhere in the world to log data through a common interface. Development of a Web site for scientific data collection can be organized into 10 steps, including planning the scientific database, choosing a database management software system, setting up database tables for each collaborator's variables, developing the Web site's screen layout, choosing a middleware software system to tie the database software to the Web site interface, embedding data editing and calculation routines, setting up the database on the central server computer, obtaining a unique Internet address and name for the Web site, applying security measures to the site, and training staff who enter data. Ensuring the security of an Internet database requires limiting the number of people who have access to the server, setting up the server on a stand-alone computer, requiring user-name and password authentication for server and Web site access, installing a firewall computer to prevent break-ins and block bogus information from reaching the server, verifying the identity of the server and client computers with certification from a certificate authority, encrypting information sent between server and client computers to avoid eavesdropping, establishing audit trails to record all accesses into the Web site, and educating Web site users about security techniques. When these measures are carefully undertaken, in our experience, information for scientific studies can be collected and maintained on Internet databases more efficiently and securely than through conventional systems of paper records protected by filing cabinets and locked doors. JAMA. 2000;284:1843-1849.
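
    The sketch below is not the authors' system; under stated assumptions it only illustrates two of the measures listed above (password authentication with salted hashing, and structured, parameterized storage of collected data) using Python's standard library. All table, column, and account names are hypothetical.

      # Illustrative sketch of password checking and parameterized inserts for a
      # small collaborative study database; names and values are placeholders.
      import hashlib, os, secrets, sqlite3

      def init(conn):
          conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT PRIMARY KEY, salt BLOB, pw_hash BLOB)")
          conn.execute("CREATE TABLE IF NOT EXISTS observations (site TEXT, subject_id TEXT, value REAL)")

      def add_user(conn, name, password):
          salt = os.urandom(16)
          pw_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
          conn.execute("INSERT INTO users VALUES (?, ?, ?)", (name, salt, pw_hash))

      def check_user(conn, name, password):
          row = conn.execute("SELECT salt, pw_hash FROM users WHERE name = ?", (name,)).fetchone()
          if row is None:
              return False
          candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), row[0], 200_000)
          return secrets.compare_digest(candidate, row[1])  # constant-time comparison

      def record(conn, site, subject_id, value):
          # Parameterized query: the collaborator's input never becomes SQL text.
          conn.execute("INSERT INTO observations VALUES (?, ?, ?)", (site, subject_id, value))

      conn = sqlite3.connect("study.db")  # hypothetical central study database
      init(conn)
      add_user(conn, "site_a_coordinator", "correct horse battery staple")
      if check_user(conn, "site_a_coordinator", "correct horse battery staple"):
          record(conn, "site_a", "S-001", 7.2)
      conn.commit()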

  16. Gas Metal Arc Welding and Flux-Cored Arc Welding. Third Edition. Teacher Edition [and] Student Edition [and] Student Workbook.

    ERIC Educational Resources Information Center

    Knapp, John; Harper, Eddie

    This packet, containing a teacher's edition, a student edition, and a student workbook, introduces students to high deposition welding and processes for "shielding" a weld. In addition to general information, the teacher edition consists of introductory pages and teacher pages, as well as unit information that corresponds to the…

  17. Emissions & Generation Resource Integrated Database (eGRID), eGRID2012

    EPA Pesticide Factsheets

    The Emissions & Generation Resource Integrated Database (eGRID) is a comprehensive source of data on the environmental characteristics of almost all electric power generated in the United States. These environmental characteristics include air emissions for nitrogen oxides, sulfur dioxide, carbon dioxide, methane, and nitrous oxide; emissions rates; net generation; resource mix; and many other attributes. eGRID2012 Version 1.0 is the eighth edition of eGRID, which contains the complete release of year 2009 data, as well as year 2007, 2005, and 2004 data. For year 2009 data, all the data are contained in a single Microsoft Excel workbook, which contains boiler, generator, plant, state, power control area, eGRID subregion, NERC region, U.S. total and grid gross loss factor tabs. Full documentation, summary data, eGRID subregion and NERC region representational maps, and GHG emission factors are also released in this edition. The fourth edition of eGRID, eGRID2002 Version 2.01, containing year 1996 through 2000 data is located on the eGRID Archive page (http://www.epa.gov/cleanenergy/energy-resources/egrid/archive.html). The current edition of eGRID and the archived edition of eGRID contain the following years of data: 1996 - 2000, 2004, 2005, and 2007. eGRID has no other years of data.
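
    A minimal sketch of how such a workbook could be read programmatically is shown below; the file name, sheet name, and header offset are assumptions rather than the official eGRID layout and should be checked against the workbook's own tabs before use.

      # Illustrative only: load one tab of an eGRID-style Excel workbook.
      # File name, sheet name, and header row are guesses, not official values.
      import pandas as pd

      plants = pd.read_excel("eGRID2012_data.xlsx", sheet_name="PLNT09", header=1)

      # e.g. total reported net generation, if such a column is present
      gen_cols = [c for c in plants.columns if "net generation" in str(c).lower()]
      if gen_cols:
          print(plants[gen_cols[0]].sum())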

  18. The HITRAN2016 molecular spectroscopic database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gordon, I. E.; Rothman, L. S.; Hill, C.

    This paper describes the contents of the 2016 edition of the HITRAN molecular spectroscopic compilation. The new edition replaces the previous HITRAN edition of 2012 and its updates during the intervening years. The HITRAN molecular absorption compilation comprises five major components: the traditional line-by-line spectroscopic parameters required for high-resolution radiative-transfer codes, infrared absorption cross-sections for molecules not yet amenable to representation in a line-by-line form, collision-induced absorption data, aerosol indices of refraction, and general tables such as partition sums that apply globally to the data. The new HITRAN is greatly extended in terms of accuracy, spectral coverage, additional absorption phenomena, added line-shape formalisms, and validity. Moreover, molecules, isotopologues, and perturbing gases have been added that address the issues of atmospheres beyond the Earth. Of considerable note, experimental IR cross-sections for almost 200 additional significant molecules have been added to the database.

  19. Marketing/Planning Library and Information Services. Second Edition.

    ERIC Educational Resources Information Center

    Weingand, Darlene E.

    In the first edition of this book, the concepts of marketing and planning library and information services were presented as effective managerial strategies. Several paragraphs from the introduction to the first edition are reproduced, with author commentary, in this edition as an affirmation that the message is still true. In this second edition,…

  20. Virus taxonomy: the database of the International Committee on Taxonomy of Viruses (ICTV).

    PubMed

    Lefkowitz, Elliot J; Dempsey, Donald M; Hendrickson, Robert Curtis; Orton, Richard J; Siddell, Stuart G; Smith, Donald B

    2018-01-04

    The International Committee on Taxonomy of Viruses (ICTV) is charged with the task of developing, refining, and maintaining a universal virus taxonomy. This task encompasses the classification of virus species and higher-level taxa according to the genetic and biological properties of their members; naming virus taxa; maintaining a database detailing the currently approved taxonomy; and providing the database, supporting proposals, and other virus-related information from an open-access, public web site. The ICTV web site (http://ictv.global) provides access to the current taxonomy database in online and downloadable formats, and maintains a complete history of virus taxa back to the first release in 1971. The ICTV has also published the ICTV Report on Virus Taxonomy starting in 1971. This Report provides a comprehensive description of all virus taxa covering virus structure, genome structure, biology and phylogenetics. The ninth ICTV report, published in 2012, is available as an open-access online publication from the ICTV web site. The current, 10th report (http://ictv.global/report/), is being published online, and is replacing the previous hard-copy edition with a completely open access, continuously updated publication. No other database or resource exists that provides such a comprehensive, fully annotated compendium of information on virus taxa and taxonomy. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  1. The Research Potential of the Electronic OED Database at the University of Waterloo: A Case Study.

    ERIC Educational Resources Information Center

    Berg, Donna Lee

    1991-01-01

    Discusses the history and structure of the online database of the second edition of the Oxford English Dictionary (OED) and the software tools developed at the University of Waterloo to manipulate the unusually complex database. Four sample searches that indicate some types of problems that might be encountered are appended. (DB)

  2. The 2015 edition of the GEISA spectroscopic database

    NASA Astrophysics Data System (ADS)

    Jacquinet-Husson, N.; Armante, R.; Scott, N. A.; Chédin, A.; Crépeau, L.; Boutammine, C.; Bouhdaoui, A.; Crevoisier, C.; Capelle, V.; Boonne, C.; Poulet-Crovisier, N.; Barbe, A.; Chris Benner, D.; Boudon, V.; Brown, L. R.; Buldyreva, J.; Campargue, A.; Coudert, L. H.; Devi, V. M.; Down, M. J.; Drouin, B. J.; Fayt, A.; Fittschen, C.; Flaud, J.-M.; Gamache, R. R.; Harrison, J. J.; Hill, C.; Hodnebrog, Ø.; Hu, S.-M.; Jacquemart, D.; Jolly, A.; Jiménez, E.; Lavrentieva, N. N.; Liu, A.-W.; Lodi, L.; Lyulin, O. M.; Massie, S. T.; Mikhailenko, S.; Müller, H. S. P.; Naumenko, O. V.; Nikitin, A.; Nielsen, C. J.; Orphal, J.; Perevalov, V. I.; Perrin, A.; Polovtseva, E.; Predoi-Cross, A.; Rotger, M.; Ruth, A. A.; Yu, S. S.; Sung, K.; Tashkun, S. A.; Tennyson, J.; Tyuterev, Vl. G.; Vander Auwera, J.; Voronin, B. A.; Makie, A.

    2016-09-01

    The GEISA database (Gestion et Etude des Informations Spectroscopiques Atmosphériques: Management and Study of Atmospheric Spectroscopic Information) has been developed and is maintained at LMD (http://ara.abct.lmd.polytechnique.fr). The "line parameters database" contains 52 molecular species (118 isotopologues) and transitions in the spectral range from 10-6 to 35,877.031 cm-1, representing 5,067,351 entries, against 3,794,297 in GEISA-2011. Among the previously existing molecules, 20 molecular species have been updated. A new molecule (SO3) has been added. HDO, an isotopologue of H2O, is now identified as an independent molecular species. Seven new isotopologues have been added to the GEISA-2015 database. The "cross section sub-database" has been enriched by the addition of 43 new molecular species in its infrared part; 4 molecules (ethane, propane, acetone, acetonitrile) have also been updated, representing 3% of the update. A new section has been added in the near-infrared spectral region, involving 7 molecular species: CH3CN, CH3I, CH3O2, H2CO, HO2, HONO, NH3. The "microphysical and optical properties of atmospheric aerosols sub-database" has been updated for the first time since 2003. It contains more than 40 species originating from NCAR and 20 from the ARIA archive (http://eodg.atm.ox.ac.uk/ARIA/introduction_nocol.html). As for the previous versions, this new release of GEISA and the associated management software facilities are implemented and freely accessible at http://cds-espri.ipsl.fr/etherTypo/?id=950.

  3. DNAAlignEditor: DNA alignment editor tool

    PubMed Central

    Sanchez-Villeda, Hector; Schroeder, Steven; Flint-Garcia, Sherry; Guill, Katherine E; Yamasaki, Masanori; McMullen, Michael D

    2008-01-01

    Background With advances in DNA re-sequencing methods and Next-Generation parallel sequencing approaches, there has been a large increase in genomic efforts to define and analyze the sequence variability present among individuals within a species. For very polymorphic species such as maize, this has led to a need for intuitive, user-friendly software that aids the biologist, often with limited programming experience, in tracking, editing, displaying, and exporting multiple individual sequence alignments. To fill this need we have developed a novel DNA alignment editor. Results We have generated a nucleotide sequence alignment editor (DNAAlignEditor) that provides an intuitive, user-friendly interface for manual editing of multiple sequence alignments with functions for input, editing, and output of sequence alignments. The color-coding of nucleotide identity and the display of associated quality scores aid in the manual alignment editing process. DNAAlignEditor works as a client/server tool having two main components: a relational database that collects the processed alignments and a user interface connected to the database through universal data access connectivity drivers. DNAAlignEditor can be used either as a stand-alone application or as a network application with multiple users concurrently connected. Conclusion We anticipate that this software will be of general interest to biologists and population geneticists for editing DNA sequence alignments and analyzing natural sequence variation regardless of species, and will be particularly useful for manual alignment editing of sequences in species with high levels of polymorphism. PMID:18366684
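
    As a toy illustration of the kind of per-column identity marking such a display provides (this is not the tool's code), the sketch below flags alignment columns in which all sequences agree; the sequences are placeholders.

      # Mark columns of a toy multiple alignment where all sequences agree ('*').
      alignment = [
          "ATGGCCAT-GTA",
          "ATGGCCATCGTA",
          "ATGACCAT-GTA",
      ]

      def identity_mask(seqs):
          mask = []
          for col in range(len(seqs[0])):
              column = {s[col] for s in seqs}
              mask.append("*" if len(column) == 1 else " ")
          return "".join(mask)

      for s in alignment:
          print(s)
      print(identity_mask(alignment))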

  4. The construction of the spatio-temporal database of the ancient Silk Road within Xinjiang province during the Han and Tang dynasties

    NASA Astrophysics Data System (ADS)

    Bi, Jiantao; Luo, Guilin; Wang, Xingxing; Zhu, Zuojia

    2014-03-01

    As the bridge between Chinese and Western civilizations, the ancient Silk Road made a huge contribution to cultural, economic and political exchanges between China and Western countries. In this paper, we treated the historical period of the Western Han, Eastern Han and Tang dynasties as the research time domain, and the Western Regions states that existed along the Silk Road during that period as the research spatial domain. We imported these data into the SQL Server database we constructed. By entering the name of a Western Regions state, the database can be queried for attribute information such as population, military force, the era of the Central Plains empire, and significant events taking place in the state (with related attributes such as the calendar year in which they occurred), as well as related spatial information such as the present-day location, the coordinates of the capital and the territory. By entering a calendar year, it can likewise be queried for the significant events, the government institutions of the Central Plains, and the Western Regions states in existence at that time. Building on this database, and combining it with GIS, RS, Flex, C# and other information and network technologies, we can not only browse, search and edit information on the ancient Silk Road in Xinjiang Province during the Han and Tang dynasties, but also carry out preliminary analyses. This combination of archaeology and modern information technology means the database can also serve as a reference for further study, research and practice in related fields.
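
    The sketch below illustrates the two query patterns described above, using SQLite as a stand-in for the SQL Server database; all table names, column names, and sample values are hypothetical placeholders, not data from the project.

      # Two query patterns: by state name, and by calendar year.
      # SQLite stands in for SQL Server; schema and values are illustrative only.
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE states (name TEXT, dynasty TEXT, population INTEGER, troops INTEGER,
                           capital_lon REAL, capital_lat REAL, present_location TEXT);
      CREATE TABLE events (state_name TEXT, year INTEGER, description TEXT);
      """)
      # Placeholder rows; negative years denote BCE.
      conn.execute("INSERT INTO states VALUES ('Loulan', 'Western Han', 14100, 2912, 89.8, 40.5, 'near Lop Nur, Xinjiang')")
      conn.execute("INSERT INTO events VALUES ('Loulan', -77, 'Renamed Shanshan')")

      # Query 1: attribute and spatial information for a named state
      for row in conn.execute("SELECT * FROM states WHERE name = ?", ("Loulan",)):
          print(row)

      # Query 2: states and events for a given calendar year
      for row in conn.execute(
              "SELECT s.name, e.description FROM states s JOIN events e ON s.name = e.state_name WHERE e.year = ?",
              (-77,)):
          print(row)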

  5. The GEISA Spectroscopic Database as a Tool for Hyperspectral Earth's Tropospheric Remote Sensing Applications

    NASA Astrophysics Data System (ADS)

    Jacquinet-Husson, Nicole; Crépeau, Laurent; Capelle, Virginie; Scott, Noëlle; Armante, Raymond; Chédin, Alain

    2010-05-01

    Remote sensing of the terrestrial atmosphere has advanced significantly in recent years, and this has placed greater demands on the compilations in terms of accuracy, additional species, and spectral coverage. The successful performances of the new generation of hyperspectral Earth's atmospheric sounders like AIRS (Atmospheric Infrared Sounder -http://www-airs.jpl.nasa.gov/), in the USA, and IASI (Infrared Atmospheric Sounding Interferometer -http://earth-sciences.cnes.fr/IASI/) in Europe, which have a better vertical resolution and accuracy, compared to the previous satellite infrared vertical sounders, depend ultimately on the accuracy to which the spectroscopic parameters of the optically active gases are known, since they constitute an essential input to the forward radiative transfer models that are used to interpret their observations. In this context, the GEISA (1) (Gestion et Etude des Informations Spectroscopiques Atmosphériques: Management and Study of Atmospheric Spectroscopic Information) computer-accessible database, initiated in 1976, is continuously developed and maintained at LMD (Laboratoire de Météorologie Dynamique, France). The updated 2009 edition of GEISA (GEISA-09) is a system comprising three independent sub-databases devoted respectively to: line transition parameters, infrared and ultraviolet/visible absorption cross-sections, microphysical and optical properties of atmospheric aerosols. In this edition, the contents of which will be summarized, 50 molecules are involved in the line transition parameters sub-database, including 111 isotopes, for a total of 3,807,997 entries, in the spectral range from 10-6 to 35,877.031 cm-1. Currently, GEISA is involved in activities related to the assessment of the capabilities of IASI through the GEISA/IASI database derived from GEISA (2). Since the Metop (http://www.eumetsat.int) launch (October 19th 2006), GEISA/IASI is the reference spectroscopic database for the validation of the level-1 IASI data, using the 4A radiative transfer model (3) (4A/LMD http://ara.lmd.polytechnique.fr; 4A/OP co-developed by LMD and NOVELTIS -http://www.noveltis.fr/) with the support of CNES (2006). Special emphasis will be given to the description of GEISA/IASI. Spectroscopic parameter quality requirements will be discussed in the context of comparisons between observed or simulated Earth's atmosphere spectra. GEISA and GEISA/IASI are implemented on the CNES/CNRS Ether Products and Services Centre WEB site (http://ether.ipsl.jussieu.fr), where all archived spectroscopic data can be handled through general and user-friendly associated management software facilities. More than 350 researchers are registered for online use of GEISA. Refs: (1) Jacquinet-Husson N., N.A. Scott, A. Chédin, L. Crépeau, R. Armante, V. Capelle, J. Orphal, A. Coustenis, C. Boonne, N. Poulet-Crovisier, et al. THE GEISA SPECTROSCOPIC DATABASE: Current and future archive for Earth and planetary atmosphere studies. JQSRT 109 (2008) 1043-1059. (2) Jacquinet-Husson N., N.A. Scott, A. Chédin, K. Garceran, R. Armante, et al. The 2003 edition of the GEISA/IASI spectroscopic database. JQSRT 95 (2005) 429-467. (3) Scott, N.A. and A. Chedin. A fast line-by-line method for atmospheric absorption computations: The Automatized Atmospheric Absorption Atlas. J. Appl. Meteor. 20 (1981) 556-564.

  6. Scale-Independent Relational Query Processing

    DTIC Science & Technology

    2013-10-04

    source options are also available, including PostgreSQL, MySQL, and SQLite. These modern relational databases are generally very complex software systems...and Their Application to Data Stream Management. IGI Global, 2010. [68] George Reese. Database Programming with JDBC and Java, Second Edition. Ed. by

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yesley, M.S.; Ossorio, P.N.

    This report updates and expands the second edition of the ELSI Bibliography, published in 1993. The Bibliography and Supplement provide a comprehensive resource for identifying publications on the major topics related to the ethical, legal and social issues (ELSI) of the Human Genome Project. The Bibliography and Supplement are extracted from a database compiled at Los Alamos National Laboratory with the support of the Office of Energy Research, US Department of Energy. The second edition of the ELSI Bibliography was dated May 1993 but included publications added to the database until fall 1993. This Supplement reflects approximately 1,000 entries added to the database during the past year, bringing the total to approximately 7,000 entries. More than half of the new entries were published in the last year, and the remainder are earlier publications not previously included in the database. Most of the new entries were published in the academic and professional literature. The remainder are press reports from newspapers of record and scientific journals. The topical listing of the second edition has been followed in the Supplement, with a few changes. The topics of Cystic Fibrosis, Huntington's Disease, and Sickle Cell Anemia have been combined in a single topic, Disorders. Also, all the entries published in the past year are included in a new topic, Publications: September 1993--September 1994, which provides a comprehensive view of recent reporting and commentary on the science and ELSI of genetics.

  8. A new edition of the Mars 1:5,000,000 map series

    NASA Technical Reports Server (NTRS)

    Batson, R. M.; Mcewen, Alfred S.; Wu, Sherman S. C.

    1991-01-01

    A new edition of the Mars 1:5,000,000 scale map series is in preparation. Two sheets will be made for each quadrangle. Sheet one will show shaded relief, contours, and nomenclature. Sheet 2 will be a full-color photomosaic prepared on the Mars digital image model (MDIM) base co-registered with the Mars low-resolution color database. The latter will have an abbreviated graticule (latitude/longitude ticks only) and no other line overprint. The four major databases used to assemble this series are now virtually complete. These are: (1) Viking-revised shaded relief maps at 1:5,000,000 scale; (2) contour maps at 1:2,000,000 scale; (3) the Mars digital image model; and (4) a color image mosaic of Mars. Together, these databases form the most complete planetwide cartographic definition of Mars that can be compiled with existing data. The new edition will supersede the published Mars 1:5,000,000 scale maps, including the original shaded relief and topographic maps made primarily with Mariner 9 data and the Viking-revised shaded relief and controlled photomosaic series. Publication of the new series will begin in late 1991 or early 1992, and it should be completed in two years.

  9. Development and application of a database of food ingredient fraud and economically motivated adulteration from 1980 to 2010.

    PubMed

    Moore, Jeffrey C; Spink, John; Lipp, Markus

    2012-04-01

    Food ingredient fraud and economically motivated adulteration are emerging risks, but a comprehensive compilation of information about known problematic ingredients and detection methods does not currently exist. The objectives of this research were to collect such information from publicly available articles in scholarly journals and general media, organize it into a database, and review and analyze the data to identify trends. The result is a database that will be published in the US Pharmacopeial Convention's Food Chemicals Codex, 8th edition; it includes 1305 records, among them 1000 records with analytical methods, collected from 677 references. Olive oil, milk, honey, and saffron were the most common targets for adulteration reported in scholarly journals, and potentially harmful issues identified include spices diluted with lead chromate and lead tetraoxide, substitution of Chinese star anise with toxic Japanese star anise, and melamine adulteration of high protein content foods. High-performance liquid chromatography and infrared spectroscopy were the most common analytical detection procedures, and chemometrics data analysis was used in a large number of reports. Future expansion of this database will include additional publicly available articles published before 1980 and in other languages, as well as data outside the public domain. The authors recommend in-depth analyses of individual incidents. This report describes the development and application of a database of food ingredient fraud issues from publicly available references. The database provides baseline information and data useful to governments, agencies, and individual companies assessing the risks of specific products produced in specific regions as well as products distributed and sold in other regions. In addition, the report describes current analytical technologies for detecting food fraud and identifies trends and developments. © 2012 US Pharmacopeial Convention. Journal of Food Science © 2012 Institute of Food Technologists®

  10. Development and implementation of a custom integrated database with dashboards to assist with hematopathology specimen triage and traffic

    PubMed Central

    Azzato, Elizabeth M.; Morrissette, Jennifer J. D.; Halbiger, Regina D.; Bagg, Adam; Daber, Robert D.

    2014-01-01

    Background: At some institutions, including ours, bone marrow aspirate specimen triage is complex, with hematopathology triage decisions that need to be communicated to downstream ancillary testing laboratories and many specimen aliquot transfers that are handled outside of the laboratory information system (LIS). We developed a custom integrated database with dashboards to facilitate and streamline this workflow. Methods: We developed user-specific dashboards that allow entry of specimen information by technologists in the hematology laboratory, have custom scripting to present relevant information for the hematopathology service and ancillary laboratories and allow communication of triage decisions from the hematopathology service to other laboratories. These dashboards are web-accessible on the local intranet and accessible from behind the hospital firewall on a computer or tablet. Secure user access and group rights ensure that relevant users can edit or access appropriate records. Results: After database and dashboard design, two-stage beta-testing and user education was performed, with the first focusing on technologist specimen entry and the second on downstream users. Commonly encountered issues and user functionality requests were resolved with database and dashboard redesign. Final implementation occurred within 6 months of initial design; users report improved triage efficiency and reduced need for interlaboratory communications. Conclusions: We successfully developed and implemented a custom database with dashboards that facilitates and streamlines our hematopathology bone marrow aspirate triage. This provides an example of a possible solution to specimen communications and traffic that are outside the purview of a standard LIS. PMID:25250187

  11. Profiling RNA editing in human tissues: towards the inosinome Atlas

    PubMed Central

    Picardi, Ernesto; Manzari, Caterina; Mastropasqua, Francesca; Aiello, Italia; D’Erchia, Anna Maria; Pesole, Graziano

    2015-01-01

    Adenine to Inosine RNA editing is a widespread co- and post-transcriptional mechanism mediated by ADAR enzymes acting on double stranded RNA. It has a plethora of biological effects, appears to be particularly pervasive in humans with respect to other mammals, and is implicated in a number of diverse human pathologies. Here we present the first human inosinome atlas comprising 3,041,422 A-to-I events identified in six tissues from three healthy individuals. Matched directional total-RNA-Seq and whole genome sequence datasets were generated and analysed within a dedicated computational framework, also capable of detecting hyper-edited reads. Inosinome profiles are tissue specific and edited gene sets consistently show enrichment of genes involved in neurological disorders and cancer. Overall frequency of editing also varies, but is strongly correlated with ADAR expression levels. The inosinome database is available at: http://srv00.ibbe.cnr.it/editing/. PMID:26449202

  12. Benford's Law and articles of scientific journals: comparison of JCR® and Scopus data.

    PubMed

    Alves, Alexandre Donizeti; Yanasse, Horacio Hideki; Soma, Nei Yoshihiro

    2014-01-01

    Benford's Law is a logarithmic probability distribution used to predict the distribution of first significant digits in numerical data. This paper presents the results of a study of the distribution of the first significant digits of the number of articles published in journals indexed in the JCR® Sciences and Social Sciences Editions from 2007 to 2011. The data for these journals were also analyzed by country of origin and journal category. Results based on the number of articles published as reported by Scopus are also presented. Comparing the results, we observe a significant difference between the data reported by the two databases.
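
    Benford's Law predicts that the first significant digit d occurs with probability log10(1 + 1/d). The sketch below computes these expected frequencies and compares them with the observed first digits of a list of article counts; the counts are made-up placeholders, not JCR or Scopus data.

      # Expected Benford frequencies vs. observed first digits of placeholder counts.
      import math
      from collections import Counter

      def benford_expected():
          return {d: math.log10(1 + 1 / d) for d in range(1, 10)}

      def first_digit_freq(values):
          digits = [int(str(abs(v))[0]) for v in values if v != 0]
          counts = Counter(digits)
          n = len(digits)
          return {d: counts.get(d, 0) / n for d in range(1, 10)}

      article_counts = [112, 87, 243, 19, 1340, 56, 78, 91, 102, 33, 27, 450, 12, 61]
      expected = benford_expected()
      observed = first_digit_freq(article_counts)
      for d in range(1, 10):
          print(f"{d}: expected {expected[d]:.3f}  observed {observed[d]:.3f}")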

  13. Cost effective nuclear commercial grade dedication

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maletz, J.J.; Marston, M.J.

    1991-01-01

    This paper describes a new computerized database method to create/edit/view specification technical data sheets (mini-specifications) for procurement of spare parts for nuclear facility maintenance and to develop information that could support possible future facility life extension efforts. This method may reduce cost when compared with current manual methods. The use of standardized technical data sheets (mini-specifications) for items of the same category improves efficiency. This method can be used for a variety of tasks, including: Nuclear safety-related procurement; Non-safety related procurement; Commercial grade item procurement/dedication; Evaluation of replacement items. This program will assist the nuclear facility in upgrading its procurement activities consistent with the recent NUMARC Procurement Initiative. Proper utilization of the program will assist the user in assuring that the procured items are correct for the applications, provide data to assist in detecting fraudulent materials, minimize human error in withdrawing database information, improve data retrievability, improve traceability, and reduce long-term procurement costs.

  14. WikiPEATia - a web based platform for assembling peatland data through ‘crowd sourcing’

    NASA Astrophysics Data System (ADS)

    Wisser, D.; Glidden, S.; Fieseher, C.; Treat, C. C.; Routhier, M.; Frolking, S. E.

    2009-12-01

    The Earth System Science community is realizing that peatlands are an important and unique terrestrial ecosystem that has not yet been well-integrated into large-scale earth system analyses. A major hurdle is the lack of accessible, geospatial data on peatland distribution, coupled with data on peatland properties (e.g., vegetation composition, peat depth, basal dates, soil chemistry, peatland class) at the global scale. These data, however, are available at the local scale. Although a comprehensive global database on peatlands probably lags similar data on more economically important ecosystems such as forests, grasslands, and croplands, a large amount of field data has been collected over the past several decades. A few efforts have been made to map peatlands at large scales, but the existing data have not been assembled into a single geospatial database that is publicly accessible, or they do not depict the data at the level of detail needed by the Earth System Science community. A global peatland database would contribute to advances in a number of research fields such as hydrology, vegetation and ecosystem modeling, permafrost modeling, and earth system modeling. We present a Web 2.0 approach that uses state-of-the-art web server and innovative online mapping technologies and is designed to create such a global database through ‘crowd-sourcing’. Primary functions of the online system include form-driven textual user input of peatland research metadata, spatial data input of peatland areas via a mapping interface, database editing and querying capabilities, as well as advanced visualization and data analysis tools. WikiPEATia provides an integrated information technology platform for assembling, integrating, and posting peatland-related geospatial datasets, and it facilitates and encourages research community involvement. A successful effort will make existing peatland data much more useful to the research community, and will help to identify significant data gaps.

  15. Nursing leadership succession planning in Veterans Health Administration: creating a useful database.

    PubMed

    Weiss, Lizabeth M; Drake, Audrey

    2007-01-01

    An electronic database was developed for succession planning and placement of nursing leaders interested and ready, willing, and able to accept an assignment in a nursing leadership position. The tool is a 1-page form used to identify candidates for nursing leadership assignments. This tool has been deployed nationally, with access to the database restricted to nurse executives at every Veterans Health Administration facility for the purpose of entering the names of developed nurse leaders ready for a leadership assignment. The tool is easily accessed through the Veterans Health Administration Office of Nursing Service, and by limiting access to the nurse executive group, ensures candidates identified are qualified. Demographic information included on the survey tool includes the candidate's demographic information and other certifications/credentials. This completed information form is entered into a database from which a report can be generated, resulting in a listing of potential candidates to contact to supplement a local or Veterans Integrated Service Network wide position announcement. The data forms can be sorted by positions, areas of clinical or functional experience, training programs completed, and geographic preference. The forms can be edited or updated and/or added or deleted in the system as the need is identified. This tool allows facilities with limited internal candidates to have a resource with Department of Veterans Affairs prepared staff in which to seek additional candidates. It also provides a way for interested candidates to be considered for positions outside of their local geographic area.

  16. RTECS database (on the internet). Online data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    The Registry of Toxic Effects of Chemical Substances (RTECS™) is a database of toxicological information compiled, maintained, and updated by the National Institute for Occupational Safety and Health. The program is mandated by the Occupational Safety and Health Act of 1970. The original edition, known as the 'Toxic Substances List,' was published on June 28, 1971, and included toxicologic data for approximately 5,000 chemicals. Since that time, the list has continuously grown and been updated, and its name changed to the current title, 'Registry of Toxic Effects of Chemical Substances.' RTECS™ now contains over 133,000 chemicals as NIOSH strives to fulfill the mandate to list 'all known toxic substances...and the concentrations at which...toxicity is known to occur.' This database is now available for searching through the Gov. Research-Center (GRC) service. GRC is a single online web-based search service for well-known Government databases. Featuring powerful search and retrieval software, GRC is an important research tool. The GRC web site is at http://grc.ntis.gov.

  17. Genome databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Courteau, J.

    1991-10-11

    Since the Genome Project began several years ago, a plethora of databases have been developed or are in the works. They range from the massive Genome Data Base at Johns Hopkins University, the central repository of all gene mapping information, to small databases focusing on single chromosomes or organisms. Some are publicly available, others are essentially private electronic lab notebooks. Still others limit access to a consortium of researchers working on, say, a single human chromosome. An increasing number incorporate sophisticated search and analytical software, while others operate as little more than data lists. In consultation with numerous experts in the field, a list has been compiled of some key genome-related databases. The list was not limited to map and sequence databases but also included the tools investigators use to interpret and elucidate genetic data, such as protein sequence and protein structure databases. Because a major goal of the Genome Project is to map and sequence the genomes of several experimental animals, including E. coli, yeast, fruit fly, nematode, and mouse, the available databases for those organisms are listed as well. The author also includes several databases that are still under development - including some ambitious efforts that go beyond data compilation to create what are being called electronic research communities, enabling many users, rather than just one or a few curators, to add or edit the data and tag it as raw or confirmed.

  18. Corruption of genomic databases with anomalous sequence.

    PubMed

    Lamperti, E D; Kittelberger, J M; Smith, T F; Villa-Komaroff, L

    1992-06-11

    We describe evidence that DNA sequences from vectors used for cloning and sequencing have been incorporated accidentally into eukaryotic entries in the GenBank database. These incorporations were not restricted to one type of vector or to a single mechanism. Many minor instances may have been the result of simple editing errors, but some entries contained large blocks of vector sequence that had been incorporated by contamination or other accidents during cloning. Some cases involved unusual rearrangements and areas of vector distant from the normal insertion sites. Matches to vector were found in 0.23% of 20,000 sequences analyzed in GenBank Release 63. Although the possibility of anomalous sequence incorporation has been recognized since the inception of GenBank and should be easy to avoid, recent evidence suggests that this problem is increasing more quickly than the database itself. The presence of anomalous sequence may have serious consequences for the interpretation and use of database entries, and will have an impact on issues of database management. The incorporated vector fragments described here may also be useful for a crude estimate of the fidelity of sequence information in the database. In alignments with well-defined ends, the matching sequences showed 96.8% identity to vector; when poorer matches with arbitrary limits were included, the aggregate identity to vector sequence was 94.8%.
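
    The screening idea behind such an audit can be illustrated with a deliberately crude sketch: flag an entry if it shares any exact k-mer with a known vector sequence. Real screens rely on alignment and are far more sensitive; the sequences and threshold below are invented for illustration only.

        # Crude sketch of vector-contamination screening: flag database entries
        # that share any exact k-mer with a cloning-vector sequence. Alignment-
        # based tools are far more sensitive; this only illustrates the idea.

        def kmer_set(seq, k=30):
            """Return the set of all k-mers in seq."""
            return {seq[i:i + k] for i in range(len(seq) - k + 1)}

        def screen_for_vector(entry_seq, vector_seq, k=30):
            """True if the entry shares at least one exact k-mer with the vector."""
            vector_kmers = kmer_set(vector_seq.upper(), k)
            return any(kmer in vector_kmers for kmer in kmer_set(entry_seq.upper(), k))

        # Toy example with made-up sequences (not real GenBank or vector data).
        vector = "GAATTCGAGCTCGGTACCCGGGGATCCTCTAGAGTCGACCTGCAGGCATGCAAGCTT" * 2
        entry = "ATGGCCATTGTAATGGGCCGC" + vector[10:45] + "TTGGCAGTACATCAATGGG"
        print(screen_for_vector(entry, vector, k=25))  # True: entry carries vector sequence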

  19. Evaluation of scientific periodicals and the Brazilian production of nursing articles.

    PubMed

    Erdmann, Alacoque Lorenzini; Marziale, Maria Helena Palucci; Pedreira, Mavilde da Luz Gonçalves; Lana, Francisco Carlos Félix; Pagliuca, Lorita Marlena Freitag; Padilha, Maria Itayra; Fernandes, Josicelia Dumêt

    2009-01-01

    This study aimed to identify nursing journals edited in Brazil and indexed in the main bibliographic databases in the areas of health and nursing. It also aimed to classify the production of nursing graduate programs in 2007 according to the QUALIS/CAPES criteria used to classify the scientific periodicals that disseminate the intellectual production of graduate programs in Brazil. This exploratory study used data from reports and documents available from CAPES to map scientific production, and from searches of the main international and national indexing databases. The findings from this research can help students, professors and coordinators of graduate programs in several ways: to understand the criteria for classifying periodicals; to be aware of the current production of graduate programs in the area of nursing; and to provide information that authors can use to select periodicals in which to publish their articles.

  20. World commercial aircraft accidents: 1st edition, 1946--1991

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kimura, C.Y.

    1992-02-01

    This report is a compilation of all accidents world-wide involving aircraft in commercial service which resulted in the loss of the airframe, one or more fatalities, or both. This information has been gathered in order to present a complete inventory of commercial aircraft accidents. Events involving military action, sabotage, terrorist bombings, hijackings, suicides, and industrial ground accidents are included within this list. This report is organized into six chapters. The first chapter is the introduction. The second chapter contains the compilation of accidents involving world commercial jet aircraft from 1952 to 1991. The third chapter presents a compilation of accidents involving world commercial turboprop aircraft from 1952 to 1991. The fourth chapter presents a compilation of accidents involving world commercial pistonprop aircraft with four or more engines from 1946 to 1991. Each accident compilation or database in chapters two, three and four is presented in chronological order. Each accident is presented with information in the following categories: date of accident, airline or operator and its flight number (if known), type of flight, type of aircraft and model, aircraft registration number, construction number/manufacturer's serial number, aircraft damage resulting from the accident, accident flight phase, accident location, number of fatalities, number of occupants, references used to compile the information, and finally the cause, remarks, or a brief description of the accident. The fifth chapter presents a list of all commercial aircraft accidents for all aircraft types with 100 or more fatalities in order of decreasing number of fatalities. Chapter six presents the commercial aircraft accidents for all aircraft types by flight phase. Future editions of this report will have additional follow-on chapters presenting other studies still in preparation at the time this edition was prepared.
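
    The per-accident categories listed above map naturally onto a simple structured record. The sketch below shows one hypothetical representation; the field names and example values are illustrative rather than taken from the report's database.

        # Hypothetical structured record for one accident entry, using the
        # categories listed in the abstract. Values are placeholders only.
        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class AccidentRecord:
            date: str                      # date of accident
            operator: str                  # airline or operator
            flight_number: Optional[str]   # if known
            flight_type: str
            aircraft_type: str             # type of aircraft and model
            registration: str
            serial_number: str             # construction/manufacturer's serial number
            damage: str                    # airframe damage resulting from the accident
            flight_phase: str
            location: str
            fatalities: int
            occupants: int
            references: list[str]
            remarks: str                   # cause, remarks, or brief description

        example = AccidentRecord(
            date="1970-01-01", operator="Example Airline", flight_number=None,
            flight_type="scheduled passenger", aircraft_type="four-engine jet",
            registration="N00000", serial_number="00000", damage="destroyed",
            flight_phase="takeoff", location="example location",
            fatalities=0, occupants=0, references=[], remarks="illustrative entry only",
        )
        print(example.aircraft_type, example.flight_phase)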

  1. Geographic Information Systems and Web Page Development

    NASA Technical Reports Server (NTRS)

    Reynolds, Justin

    2004-01-01

    The Facilities Engineering and Architectural Branch is responsible for the design and maintenance of buildings, laboratories, and civil structures. In order to improve efficiency and quality, the FEAB has dedicated itself to establishing a data infrastructure based on Geographic Information Systems (GIS). The value of GIS was explained in an article dating back to 1980 entitled "Need for a Multipurpose Cadastre," which stated, "There is a critical need for a better land-information system in the United States to improve land-conveyance procedures, furnish a basis for equitable taxation, and provide much-needed information for resource management and environmental planning." Scientists and engineers both point to GIS as the solution. What is GIS? According to most textbooks, a Geographic Information System is a class of software that stores, manages, and analyzes mappable features on, above, or below the surface of the earth. GIS software is essentially database management software applied to the management of spatial data and information. Simply put, Geographic Information Systems manage, analyze, chart, graph, and map spatial information. At the outset, I was given goals and expectations from my branch and from my mentor with regard to the further implementation of GIS. Those goals are as follows: (1) Continue the development of GIS for the underground structures. (2) Extract and export annotated data from AutoCAD drawing files and construct a database (to serve as a prototype for future work). (3) Examine existing underground record drawings to determine existing and non-existing underground tanks. Once this data was collected and analyzed, I set out on the task of creating a user-friendly database that could be accessed by all members of the branch. It was important that the database be built using programs that most employees already possess, ruling out most AutoCAD-based viewers. Therefore, I set out to create an Access database that translated onto the web using Internet Explorer as the foundation. After some programming, it was possible to view AutoCAD files and other GIS-related applications in Internet Explorer, while providing the user with a variety of editing commands and setting options. I was also given the task of launching a divisional website using Macromedia Flash and other web-development programs.

  2. The GEISA 2009 Spectroscopic Database System and its CNES/CNRS Ether Products and Services Center Interactive Distribution

    NASA Astrophysics Data System (ADS)

    Jacquinet-Husson, Nicole; Crépeau, Laurent; Capelle, Virginie; Scott, Noëlle; Armante, Raymond; Chédin, Alain; Boonne, Cathy; Poulet-Crovisier, Nathalie

    2010-05-01

    The GEISA (1) (Gestion et Etude des Informations Spectroscopiques Atmosphériques: Management and Study of Atmospheric Spectroscopic Information) computer-accessible database, initiated in 1976, is developed and maintained at LMD (Laboratoire de Météorologie Dynamique, France). It is a system comprising three independent sub-databases devoted, respectively, to: line transition parameters, infrared and ultraviolet/visible absorption cross-sections, and microphysical and optical properties of atmospheric aerosols. The updated 2009 edition (GEISA-09) archives, in its line transition parameters sub-section, 50 molecules, corresponding to 111 isotopes, for a total of 3,807,997 entries, in the spectral range from 10^-6 to 35,877.031 cm^-1. A detailed description of the whole database contents will be documented. GEISA and GEISA/IASI are implemented on the CNES/CNRS Ether Products and Services Centre web site (http://ether.ipsl.jussieu.fr), where all archived spectroscopic data can be handled through general and user-friendly associated management software facilities. These facilities will be described and widely illustrated as well. Interactive demonstrations will be given if technical possibilities are feasible at the time of the Poster Display Session. More than 350 researchers are registered for online use of GEISA on Ether. Currently, GEISA is involved in activities (2) related to the remote sensing of the terrestrial atmosphere, thanks to the sounding performance of the new generation of hyperspectral Earth atmospheric sounders, like AIRS (Atmospheric Infrared Sounder - http://www-airs.jpl.nasa.gov/), in the USA, and IASI (Infrared Atmospheric Sounding Interferometer - http://earth-sciences.cnes.fr/IASI/), in Europe, using the 4A radiative transfer model (3) (4A/LMD http://ara.lmd.polytechnique.fr; 4A/OP co-developed by LMD and NOVELTIS - http://www.noveltis.fr/) with the support of CNES (2006). Refs: (1) Jacquinet-Husson N., N.A. Scott, A. Chédin, L. Crépeau, R. Armante, V. Capelle, J. Orphal, A. Coustenis, C. Boonne, N. Poulet-Crovisier, et al.: THE GEISA SPECTROSCOPIC DATABASE: Current and future archive for Earth and planetary atmosphere studies. JQSRT 109 (2008) 1043-1059. (2) Jacquinet-Husson N., N.A. Scott, A. Chédin, K. Garceran, R. Armante, et al.: The 2003 edition of the GEISA/IASI spectroscopic database. JQSRT 95 (2005) 429-467. (3) Scott, N.A. and A. Chedin. A fast line-by-line method for atmospheric absorption computations: The Automatized Atmospheric Absorption Atlas. J. Appl. Meteor. 20 (1981) 556-564.

  3. Content-based video retrieval by example video clip

    NASA Astrophysics Data System (ADS)

    Dimitrova, Nevenka; Abdel-Mottaleb, Mohamed

    1997-01-01

    This paper presents a novel approach for video retrieval from a large archive of MPEG or Motion JPEG compressed video clips. We introduce a retrieval algorithm that takes a video clip as a query and searches the database for clips with similar contents. Video clips are characterized by a sequence of representative frame signatures, which are constructed from DC coefficients and motion information ('DC+M' signatures). The similarity between two video clips is determined by using their respective signatures. This method facilitates retrieval of clips for the purpose of video editing, broadcast news retrieval, or copyright violation detection.
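
    A much-simplified sketch of signature-based matching in the spirit of the 'DC+M' idea is given below: each frame is reduced to coarse block averages (standing in for DC coefficients) plus a scalar motion term, and two clips are compared by the mean distance between corresponding frame signatures. The actual signature construction and similarity measure in the paper are more involved; this is illustrative only.

        # Simplified stand-in for "DC+M" clip signatures and their comparison.
        import numpy as np

        def frame_signature(frame, prev_frame, blocks=4):
            """Coarse luminance block averages plus a crude motion magnitude."""
            h, w = frame.shape
            bh, bw = h // blocks, w // blocks
            dc = [frame[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].mean()
                  for i in range(blocks) for j in range(blocks)]
            motion = 0.0 if prev_frame is None else float(np.abs(frame - prev_frame).mean())
            return np.array(dc + [motion])

        def clip_signature(frames):
            sigs, prev = [], None
            for f in frames:
                sigs.append(frame_signature(f, prev))
                prev = f
            return np.stack(sigs)

        def clip_distance(sig_a, sig_b):
            """Mean Euclidean distance over the overlapping frame range."""
            n = min(len(sig_a), len(sig_b))
            return float(np.linalg.norm(sig_a[:n] - sig_b[:n], axis=1).mean())

        # Toy query: a random clip and a near-duplicate of it.
        rng = np.random.default_rng(0)
        clip_a = [rng.random((64, 64)) for _ in range(8)]
        clip_b = [f + 0.01 * rng.random((64, 64)) for f in clip_a]
        print(clip_distance(clip_signature(clip_a), clip_signature(clip_b)))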

  4. Predatory Publishing Is a Threat to Non-Mainstream Science

    PubMed Central

    Nurmashev, Bekaidar

    2017-01-01

    This article highlights the issue of wasteful publishing practices that primarily affect non-mainstream science countries and rapidly growing academic disciplines. Numerous start-up open access publishers with soft or nonexistent quality checks and huge commercial interests have created a global crisis in the publishing market. Their publishing practices have been thoroughly examined, leading to the blacklisting of many journals by Jeffrey Beall. However, it appears that some subscription journals are also falling short of adhering to the international recommendations of global editorial associations. Unethical editing agencies that promote their services in non-mainstream science countries create more problems for inexperienced authors. It is suggested to regularly monitor the quality of already indexed journals and upgrade criteria of covering new sources by the Emerging Sources Citation Index (Web of Science), Scopus, and specialist bibliographic databases. Regional awareness campaigns to inform stakeholders of science communication about the importance of ethical writing, transparency of editing services, and permanent archiving can be also helpful for eradicating unethical publishing practices. PMID:28378542

  5. Predatory Publishing Is a Threat to Non-Mainstream Science.

    PubMed

    Gasparyan, Armen Yuri; Nurmashev, Bekaidar; Udovik, Elena E; Koroleva, Anna M; Kitas, George D

    2017-05-01

    This article highlights the issue of wasteful publishing practices that primarily affect non-mainstream science countries and rapidly growing academic disciplines. Numerous start-up open access publishers with soft or nonexistent quality checks and huge commercial interests have created a global crisis in the publishing market. Their publishing practices have been thoroughly examined, leading to the blacklisting of many journals by Jeffrey Beall. However, it appears that some subscription journals are also falling short of adhering to the international recommendations of global editorial associations. Unethical editing agencies that promote their services in non-mainstream science countries create more problems for inexperienced authors. It is suggested to regularly monitor the quality of already indexed journals and upgrade criteria of covering new sources by the Emerging Sources Citation Index (Web of Science), Scopus, and specialist bibliographic databases. Regional awareness campaigns to inform stakeholders of science communication about the importance of ethical writing, transparency of editing services, and permanent archiving can be also helpful for eradicating unethical publishing practices. © 2017 The Korean Academy of Medical Sciences.

  6. Performance Evaluation of a Database System in a Multiple Backend Configurations,

    DTIC Science & Technology

    1984-10-01

    ...leaving a system process, the internal performance measurements of MMSD have been carried out. Methodologies for constructing test databases... access directory data via the AT, EDIT, and CDT. In designing the test database, one of the key concepts is the choice of the directory attributes in... internal timing. These requests are selected since they retrieve the smallest portion of the test database and the processing time for each request is

  7. International Journal of Occupational Medicine and Environmental Health in world documentation services: the SCOPUS based analysis of citation.

    PubMed

    Przyłuska, Jolanta

    2006-01-01

    A high classification of scientific journals in the ranking of international knowledge transfer is reflected by citations from other researchers. The International Journal of Occupational Medicine and Environmental Health (IJOMEH) is an international professional quarterly, edited in Poland, focused on such areas as occupational medicine, toxicology and environmental health. IJOMEH, published in English, is indexed in numerous world information services (MEDLINE, EMBASE, EBSCO, SCOPUS). This paper presents the contribution of IJOMEH publications to the world circulation of scientific information, based on a citation analysis. The analysis, grounded in the SCOPUS database, assessed the frequency of citations in the years 1996-2005. The journals in which IJOMEH articles have been cited were also retrieved, and their list is included.

  8. References that anyone can edit: review of Wikipedia citations in peer reviewed health science literature.

    PubMed

    Bould, M Dylan; Hladkowicz, Emily S; Pigford, Ashlee-Ann E; Ufholz, Lee-Anne; Postonogova, Tatyana; Shin, Eunkyung; Boet, Sylvain

    2014-03-06

    To examine indexed health science journals to evaluate the prevalence of Wikipedia citations, identify the journals that publish articles with Wikipedia citations, and determine how Wikipedia is being cited. Bibliometric analysis. Publications in the English language that included citations to Wikipedia were retrieved using the online databases Scopus and Web of Science. To identify health science journals, results were refined using Ulrich's database, selecting for citations from journals indexed in Medline, PubMed, or Embase. Using Thomson Reuters Journal Citation Reports, 2011 impact factors were collected for all journals included in the search. Resulting citations were thematically coded, and descriptive statistics were calculated. 1433 full text articles from 1008 journals indexed in Medline, PubMed, or Embase with 2049 Wikipedia citations were accessed. The frequency of Wikipedia citations has increased over time; most citations occurred after December 2010. More than half of the citations were coded as definitions (n = 648; 31.6%) or descriptions (n = 482; 23.5%). Citations were not limited to journals with a low or no impact factor; the search found Wikipedia citations in many journals with high impact factors. Many publications are citing information from a tertiary source that can be edited by anyone, although permanent, evidence based sources are available. We encourage journal editors and reviewers to use caution when publishing articles that cite Wikipedia.

  9. The Status of Cognitive Psychology Journals: An Impact Factor Approach

    ERIC Educational Resources Information Center

    Togia, Aspasia

    2013-01-01

    The purpose of this study was to examine the impact factor of cognitive psychology journals indexed in the Science and Social Sciences edition of "Journal Citation Reports" ("JCR") database over a period of 10 consecutive years. Cognitive psychology journals were indexed in 11 different subject categories of the database. Their mean impact factor…

  10. Extending the Online Public Access Catalog into the Microcomputer Environment.

    ERIC Educational Resources Information Center

    Sutton, Brett

    1990-01-01

    Describes PCBIS, a database program for MS-DOS microcomputers that features a utility for automatically converting online public access catalog search results stored as text files into structured database files that can be searched, sorted, edited, and printed. Topics covered include the general features of the program, record structure, record…

  11. Using the Internet, Online Services, and CD-ROMs for Writing Research and Term Papers, Second Edition. Neal-Schuman NetGuide Series.

    ERIC Educational Resources Information Center

    Harmon, Charles, Ed.

    Like its predecessor, this second edition combines the best of two traditional generic texts: "how to write a term paper" and "how to use the library." Particularly helpful for high school and first year college students, this guide explains how to utilize online library catalogs, the most commonly available indexes and databases (in print,…

  12. Ontology for Vector Surveillance and Management

    PubMed Central

    LOZANO-FUENTES, SAUL; BANDYOPADHYAY, ARITRA; COWELL, LINDSAY G.; GOLDFAIN, ALBERT; EISEN, LARS

    2013-01-01

    Ontologies, which are made up of standardized and defined controlled vocabulary terms and their interrelationships, are comprehensive and readily searchable repositories for knowledge in a given domain. The Open Biomedical Ontologies (OBO) Foundry was initiated in 2001 with the aims of becoming an “umbrella” for life-science ontologies and promoting the use of ontology development best practices. A software application (OBO-Edit; *.obo file format) was developed to facilitate ontology development and editing. The OBO Foundry now comprises over 100 ontologies and candidate ontologies, including the NCBI organismal classification ontology (NCBITaxon), the Mosquito Insecticide Resistance Ontology (MIRO), the Infectious Disease Ontology (IDO), the IDOMAL malaria ontology, and ontologies for mosquito gross anatomy and tick gross anatomy. We previously developed a disease data management system for dengue and malaria control programs, which incorporated a set of information trees built upon ontological principles, including a “term tree” to promote the use of standardized terms. In the course of doing so, we realized that there were substantial gaps in existing ontologies with regard to concepts, processes, and, especially, physical entities (e.g., vector species, pathogen species, and vector surveillance and management equipment) in the domain of surveillance and management of vectors and vector-borne pathogens. We therefore produced an ontology for vector surveillance and management, focusing on arthropod vectors and vector-borne pathogens with relevance to humans or domestic animals, and with special emphasis on content to support operational activities through inclusion in databases, data management systems, or decision support systems. The Vector Surveillance and Management Ontology (VSMO) includes >2,200 unique terms, of which the vast majority (>80%) were newly generated during the development of this ontology. One core feature of the VSMO is the linkage, through the has_vector relation, of arthropod species to the pathogenic microorganisms for which they serve as biological vectors. We also recognized and addressed a potential roadblock for use of the VSMO by the vector-borne disease community: the difficulty in extracting information from OBO-Edit ontology files (*.obo files) and exporting the information to other file formats. A novel ontology explorer tool was developed to facilitate extraction and export of information from the VSMO *.obo file into lists of terms and their associated unique IDs in *.txt or *.csv file formats. These lists can then be imported into a database or data management system for use as select lists with predefined terms. This is an important step to ensure that the knowledge contained in our ontology can be put into practical use. PMID:23427646
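
    The export step described above, pulling term IDs and names out of a *.obo file into *.txt or *.csv lists, can be approximated with a few lines of parsing. The sketch below handles only the id and name keys of [Term] stanzas and is not the ontology explorer tool itself; the file paths are placeholders.

        # Minimal OBO-to-CSV export: collect id and name from [Term] stanzas.
        import csv

        def obo_terms(path):
            """Yield {'id': ..., 'name': ...} dicts for each [Term] stanza."""
            term, in_term = {}, False
            with open(path, encoding="utf-8") as fh:
                for line in fh:
                    line = line.strip()
                    if line == "[Term]":
                        if term:
                            yield term
                        term, in_term = {}, True
                    elif line.startswith("[") and line.endswith("]"):
                        if term:
                            yield term
                        term, in_term = {}, False
                    elif in_term and ":" in line:
                        key, value = line.split(":", 1)
                        if key in ("id", "name"):
                            term[key] = value.strip()
            if term:
                yield term

        def export_terms(obo_path, csv_path):
            with open(csv_path, "w", newline="", encoding="utf-8") as out:
                writer = csv.writer(out)
                writer.writerow(["id", "name"])
                for t in obo_terms(obo_path):
                    writer.writerow([t.get("id", ""), t.get("name", "")])

        # export_terms("vsmo.obo", "vsmo_terms.csv")  # placeholder paths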

  13. Ontology for vector surveillance and management.

    PubMed

    Lozano-Fuentes, Saul; Bandyopadhyay, Aritra; Cowell, Lindsay G; Goldfain, Albert; Eisen, Lars

    2013-01-01

    Ontologies, which are made up of standardized and defined controlled vocabulary terms and their interrelationships, are comprehensive and readily searchable repositories for knowledge in a given domain. The Open Biomedical Ontologies (OBO) Foundry was initiated in 2001 with the aims of becoming an "umbrella" for life-science ontologies and promoting the use of ontology development best practices. A software application (OBO-Edit; *.obo file format) was developed to facilitate ontology development and editing. The OBO Foundry now comprises over 100 ontologies and candidate ontologies, including the NCBI organismal classification ontology (NCBITaxon), the Mosquito Insecticide Resistance Ontology (MIRO), the Infectious Disease Ontology (IDO), the IDOMAL malaria ontology, and ontologies for mosquito gross anatomy and tick gross anatomy. We previously developed a disease data management system for dengue and malaria control programs, which incorporated a set of information trees built upon ontological principles, including a "term tree" to promote the use of standardized terms. In the course of doing so, we realized that there were substantial gaps in existing ontologies with regard to concepts, processes, and, especially, physical entities (e.g., vector species, pathogen species, and vector surveillance and management equipment) in the domain of surveillance and management of vectors and vector-borne pathogens. We therefore produced an ontology for vector surveillance and management, focusing on arthropod vectors and vector-borne pathogens with relevance to humans or domestic animals, and with special emphasis on content to support operational activities through inclusion in databases, data management systems, or decision support systems. The Vector Surveillance and Management Ontology (VSMO) includes >2,200 unique terms, of which the vast majority (>80%) were newly generated during the development of this ontology. One core feature of the VSMO is the linkage, through the has_vector relation, of arthropod species to the pathogenic microorganisms for which they serve as biological vectors. We also recognized and addressed a potential roadblock for use of the VSMO by the vector-borne disease community: the difficulty in extracting information from OBO-Edit ontology files (*.obo files) and exporting the information to other file formats. A novel ontology explorer tool was developed to facilitate extraction and export of information from the VSMO *.obo file into lists of terms and their associated unique IDs in *.txt or *.csv file formats. These lists can then be imported into a database or data management system for use as select lists with predefined terms. This is an important step to ensure that the knowledge contained in our ontology can be put into practical use.

  14. [Information management in multicenter studies: the Brazilian longitudinal study for adult health].

    PubMed

    Duncan, Bruce Bartholow; Vigo, Álvaro; Hernandez, Émerson; Luft, Vivian Cristine; Ahlert, Hubert; Bergmann, Kaiser; Mota, Eduardo

    2013-06-01

    Information management in large multicenter studies requires a specialized approach. The Estudo Longitudinal da Saúde do Adulto (ELSA-Brasil - Brazilian Longitudinal Study for Adult Health) has created a Datacenter to enter and manage its data system. The aim of this paper is to describe the steps involved, including the information entry, transmission and management methods. A web system was developed in order to allow, in a safe and confidential way, online data entry, checking and editing, as well as the incorporation of data collected on paper. Additionally, a Picture Archiving and Communication System was implemented and customized for echocardiography and retinography. It stores the images received from the Investigation Centers and makes them available at the Reading Centers. Finally, data extraction and cleaning processes were developed to create databases in formats that enable analyses in multiple statistical packages.

  15. Using Crowdsourced Trajectories for Automated OSM Data Entry Approach

    PubMed Central

    Basiri, Anahid; Amirian, Pouria; Mooney, Peter

    2016-01-01

    The concept of crowdsourcing is nowadays extensively used to refer to the collection of data and the generation of information by large groups of users/contributors. OpenStreetMap (OSM) is a very successful example of a crowd-sourced geospatial data project. Unfortunately, it is often the case that OSM contributor inputs (including geometry and attribute data inserts, deletions and updates) have been found to be inaccurate, incomplete, inconsistent or vague. This is due to several reasons which include: (1) many contributors with little experience or training in mapping and Geographic Information Systems (GIS); (2) not enough contributors familiar with the areas being mapped; (3) contributors having different interpretations of the attributes (tags) for specific features; (4) different levels of enthusiasm between mappers resulting in different numbers of tags for similar features and (5) the user-friendliness of the online user-interface where the underlying map can be viewed and edited. This paper suggests an automatic mechanism, which uses raw spatial data (trajectories of movement contributed to OSM by contributors) to minimise the uncertainty and impact of the above-mentioned issues. This approach takes the raw trajectory datasets as input and analyses them using data mining techniques. In addition, we extract some patterns and rules about the geometry and attributes of the recognised features for the purpose of insertion or editing of features in the OSM database. The underlying idea is that certain characteristics of user trajectories are directly linked to the geometry and the attributes of geographic features. Using these rules successfully results in the generation of new features with higher spatial quality which are subsequently automatically inserted into the OSM database. PMID:27649192
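
    As a toy illustration of deriving feature attributes from raw trajectories, the sketch below estimates a typical travel speed from GPS fixes and maps it to a candidate highway tag. The thresholds and tag choices are invented for illustration; the paper's data-mining approach is considerably richer.

        # Toy rule: infer a candidate OSM highway tag from trajectory speed.
        import math

        def haversine_m(p, q):
            """Great-circle distance in metres between (lat, lon) points."""
            r = 6371000.0
            phi1, phi2 = math.radians(p[0]), math.radians(q[0])
            dphi = math.radians(q[0] - p[0])
            dlmb = math.radians(q[1] - p[1])
            a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
            return 2 * r * math.asin(math.sqrt(a))

        def median_speed_kmh(trajectory):
            """trajectory: list of (lat, lon, t_seconds) fixes in time order."""
            speeds = []
            for (la1, lo1, t1), (la2, lo2, t2) in zip(trajectory, trajectory[1:]):
                if t2 > t1:
                    speeds.append(haversine_m((la1, lo1), (la2, lo2)) / (t2 - t1) * 3.6)
            speeds.sort()
            return speeds[len(speeds) // 2] if speeds else 0.0

        def suggest_highway_tag(trajectories):
            """Invented speed thresholds mapping typical speed to a candidate tag."""
            med = sorted(median_speed_kmh(t) for t in trajectories)[len(trajectories) // 2]
            if med < 7:
                return "footway"
            if med < 40:
                return "residential"
            return "primary"

        track = [(51.500, -0.120, 0), (51.501, -0.118, 30), (51.502, -0.116, 60)]
        print(suggest_highway_tag([track]))  # e.g. "residential" for ~20 km/h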

  16. Reference and Information Services: An Introduction. Third Edition. Library and Information Science Text Series.

    ERIC Educational Resources Information Center

    Bopp, Richard E., Ed.; Smith, Linda C., Ed.

    Like the first two editions, this third edition is designed primarily to provide the beginning student of library and information science with an overview both of the concepts and processes behind today's reference services and of the most important sources consulted in answering common types of reference questions. The first 12 chapters deal with…

  17. Description of the process used to create 1992 Hanford Mortality Study database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbert, E.S.; Buchanan, J.A.; Holter, N.A.

    1992-12-01

    An updated and expanded database for the Hanford Mortality Study has been developed by PNL's Epidemiology and Biometry Department. The purpose of this report is to document this process. The primary sources of data were the Occupational Health History (OHH) files maintained by the Hanford Environmental Health Foundation (HEHF) and including demographic data and job histories; the Hanford Mortality (HMO) files also maintained by HEHF and including information on deaths of Hanford workers; the Occupational Radiation Exposure (ORE) files maintained by PNL's Health Physics Department and containing data on external dosimetry; and a file of workers with confirmed internal depositions of radionuclides also maintained by PNL's Health Physics Department. This report describes each of these files in detail, and also describes the many edits that were performed to address the consistency and accuracy of data within and between these files.
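
    A generic example of the kind of between-file consistency edit described above is sketched below: confirm that every worker appearing in a mortality file also has a demographic/job-history record, and flag the exceptions for review. The file layouts and field names are invented for illustration; the actual OHH, HMO and ORE structures are documented in the report itself.

        # Sketch of a cross-file consistency check between two CSV extracts.
        import csv

        def load_ids(path, id_field):
            """Return the set of worker IDs found in a CSV file."""
            with open(path, newline="", encoding="utf-8") as fh:
                return {row[id_field].strip() for row in csv.DictReader(fh)}

        def cross_file_check(demographics_csv, mortality_csv, id_field="worker_id"):
            """Mortality records lacking a matching demographic record."""
            demo_ids = load_ids(demographics_csv, id_field)
            death_ids = load_ids(mortality_csv, id_field)
            return sorted(death_ids - demo_ids)

        # orphans = cross_file_check("ohh_demographics.csv", "hmo_mortality.csv")
        # print(f"{len(orphans)} mortality records lack a demographic record")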

  18. Description of the process used to create 1992 Hanford Mortality Study database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbert, E. S.; Buchanan, J. A.; Holter, N. A.

    1992-12-01

    An updated and expanded database for the Hanford Mortality Study has been developed by PNL's Epidemiology and Biometry Department. The purpose of this report is to document this process. The primary sources of data were the Occupational Health History (OHH) files maintained by the Hanford Environmental Health Foundation (HEHF) and including demographic data and job histories; the Hanford Mortality (HMO) files also maintained by HEHF and including information on deaths of Hanford workers; the Occupational Radiation Exposure (ORE) files maintained by PNL's Health Physics Department and containing data on external dosimetry; and a file of workers with confirmed internal depositions of radionuclides also maintained by PNL's Health Physics Department. This report describes each of these files in detail, and also describes the many edits that were performed to address the consistency and accuracy of data within and between these files.

  19. National launch strategy vehicle data management system

    NASA Technical Reports Server (NTRS)

    Cordes, David

    1990-01-01

    The national launch strategy vehicle data management system (NLS/VDMS) was developed as part of the 1990 NASA Summer Faculty Fellowship Program. The system was developed under the guidance of the Engineering Systems Branch of the Information Systems Office, and is intended for use within the Program Development Branch PD34. The NLS/VDMS is an on-line database system that permits the tracking of various launch vehicle configurations within the program development office. The system is designed to permit the definition of new launch vehicles, as well as the ability to display and edit existing launch vehicles. Vehicles can be grouped in logical architectures within the system. Reports generated from this package include vehicle data sheets, architecture data sheets, and vehicle flight rate reports. The topics covered include: (1) system overview; (2) initial system development; (3) supercard hypermedia authoring system; (4) the ORACLE database; and (5) system evaluation.
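
    A minimal relational sketch of the entities described here, launch vehicles grouped into architectures with a simple flight-rate report, is shown below using SQLite for brevity. The table and column names are assumptions; the actual NLS/VDMS schema, which ran on ORACLE, is not reproduced.

        # Toy relational model: vehicles grouped into architectures, plus a
        # flight-rate report analogous to the report types listed above.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE architecture (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE vehicle (
            id INTEGER PRIMARY KEY,
            name TEXT,
            architecture_id INTEGER REFERENCES architecture(id),
            payload_kg REAL,
            flights_per_year INTEGER
        );
        """)
        conn.executemany("INSERT INTO architecture VALUES (?, ?)",
                         [(1, "Architecture A"), (2, "Architecture B")])
        conn.executemany("INSERT INTO vehicle VALUES (?, ?, ?, ?, ?)",
                         [(1, "Vehicle X", 1, 20000, 4),
                          (2, "Vehicle Y", 1, 5000, 10),
                          (3, "Vehicle Z", 2, 45000, 2)])

        # Flight-rate report per architecture.
        for name, flights in conn.execute("""
                SELECT a.name, SUM(v.flights_per_year)
                FROM architecture a JOIN vehicle v ON v.architecture_id = a.id
                GROUP BY a.name ORDER BY a.name"""):
            print(f"{name}: {flights} flights/year")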

  20. VIOLIN: vaccine investigation and online information network.

    PubMed

    Xiang, Zuoshuang; Todd, Thomas; Ku, Kim P; Kovacic, Bethany L; Larson, Charles B; Chen, Fang; Hodges, Andrew P; Tian, Yuying; Olenzek, Elizabeth A; Zhao, Boyang; Colby, Lesley A; Rush, Howard G; Gilsdorf, Janet R; Jourdian, George W; He, Yongqun

    2008-01-01

    Vaccines are among the most efficacious and cost-effective tools for reducing morbidity and mortality caused by infectious diseases. The vaccine investigation and online information network (VIOLIN) is a web-based central resource, allowing easy curation, comparison and analysis of vaccine-related research data across various human pathogens (e.g. Haemophilus influenzae, human immunodeficiency virus (HIV) and Plasmodium falciparum) of medical importance and across humans, other natural hosts and laboratory animals. Vaccine-related peer-reviewed literature data have been downloaded into the database from PubMed and are searchable through various literature search programs. Vaccine data are also annotated, edited and submitted to the database through a web-based interactive system that integrates efficient computational literature mining and accurate manual curation. Curated information includes general microbial pathogenesis and host protective immunity, vaccine preparation and characteristics, stimulated host responses after vaccination and protection efficacy after challenge. Vaccine-related pathogen and host genes are also annotated and available for searching through customized BLAST programs. All VIOLIN data are available for download in an eXtensible Markup Language (XML)-based data exchange format. VIOLIN is expected to become a centralized source of vaccine information and to provide investigators in basic and clinical sciences with curated data and bioinformatics tools for vaccine research and development. VIOLIN is publicly available at http://www.violinet.org.

  1. Narrowing the Gender Gap: Empowering Women through Literacy Programmes: Case Studies from the UNESCO Effective Literacy and Numeracy Practices Database (LitBase) http://www.unesco.org/uil/litbase/. 2nd Edition

    ERIC Educational Resources Information Center

    Hanemann, Ulrike, Ed.

    2015-01-01

    UIL has published a second edition of a collection of case studies of promising literacy programmes that seek to empower women. "Narrowing the Gender Gap: Empowering Women through Literacy Programmes" (originally published in 2013 as "Literacy Programmes with a Focus on Women to Reduce Gender Disparities") responds to the…

  2. CRISPR-Mediated Base Editing Enables Efficient Disruption of Eukaryotic Genes through Induction of STOP Codons.

    PubMed

    Billon, Pierre; Bryant, Eric E; Joseph, Sarah A; Nambiar, Tarun S; Hayward, Samuel B; Rothstein, Rodney; Ciccia, Alberto

    2017-09-21

    Standard CRISPR-mediated gene disruption strategies rely on Cas9-induced DNA double-strand breaks (DSBs). Here, we show that CRISPR-dependent base editing efficiently inactivates genes by precisely converting four codons (CAA, CAG, CGA, and TGG) into STOP codons without DSB formation. To facilitate gene inactivation by induction of STOP codons (iSTOP), we provide access to a database of over 3.4 million single guide RNAs (sgRNAs) for iSTOP (sgSTOPs) targeting 97%-99% of genes in eight eukaryotic species, and we describe a restriction fragment length polymorphism (RFLP) assay that allows the rapid detection of iSTOP-mediated editing in cell populations and clones. To simplify the selection of sgSTOPs, our resource includes annotations for off-target propensity, percentage of isoforms targeted, prediction of nonsense-mediated decay, and restriction enzymes for RFLP analysis. Additionally, our database includes sgSTOPs that could be employed to precisely model over 32,000 cancer-associated nonsense mutations. Altogether, this work provides a comprehensive resource for DSB-free gene disruption by iSTOP. Copyright © 2017 Elsevier Inc. All rights reserved.
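
    At the sequence level, the iSTOP idea can be illustrated by scanning an in-frame coding sequence for the four targetable codons and reporting where a stop codon could be induced, as in the sketch below. sgRNA selection, editing windows, and the off-target and isoform annotations provided by the published database are beyond this illustration; the example sequence is made up.

        # Scan an in-frame CDS for codons that a C-to-T base editor can turn
        # into stop codons (TGG is edited on the antisense strand).
        TARGETABLE = {
            "CAA": "TAA",
            "CAG": "TAG",
            "CGA": "TGA",
            "TGG": "TGA/TAG/TAA (via antisense-strand editing)",
        }

        def istop_candidates(cds):
            """Yield (codon_index, codon, resulting_stop) for editable codons."""
            cds = cds.upper()
            for i in range(0, len(cds) - len(cds) % 3, 3):
                codon = cds[i:i + 3]
                if codon in TARGETABLE:
                    yield i // 3, codon, TARGETABLE[codon]

        # Toy coding sequence (not a real gene).
        cds = "ATGCAAGGCTGGTTTCGAGATCAGTAA"
        for idx, codon, stop in istop_candidates(cds):
            print(f"codon {idx + 1}: {codon} -> {stop}")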

  3. HITRAN2016: Part I. Line lists for H_2O, CO_2, O_3, N_2O, CO, CH_4, and O_2

    NASA Astrophysics Data System (ADS)

    Gordon, Iouli E.; Rothman, Laurence S.; Tan, Yan; Kochanov, Roman V.; Hill, Christian

    2017-06-01

    The HITRAN2016 database is now officially released. A plethora of experimental and theoretical molecular spectroscopic data were collected, evaluated and vetted before compiling the new edition of the database. The database is now distributed through the dynamic user interface HITRANonline (available at www.hitran.org), which offers many flexible options for browsing and downloading the data. In addition, the HITRAN Application Programming Interface (HAPI) offers modern ways to download the HITRAN data and use it to carry out sophisticated calculations. The line-by-line lists for almost all of the 47 HITRAN molecules were updated in comparison with the previous compilation (HITRAN2012). Some of the most important updates for major atmospheric absorbers, such as H_2O, CO_2, O_3, N_2O, CO, CH_4, and O_2, will be presented in this talk, while the trace gases will be presented in the next talk by Y. Tan. The HITRAN2016 database now provides alternative line-shape representations for a number of molecules, as well as broadening by gases dominant in planetary atmospheres. In addition, substantial extension and improvement of the cross-section data is featured, which will be described in a dedicated talk by R. V. Kochanov. The new edition of the database is a substantial step forward to improve retrievals of planetary atmospheric constituents in comparison with previous editions, while offering new ways of working with the data. The HITRAN database is supported by the NASA AURA and PDART program grants NNX14AI55G and NNX16AG51G. I. E. Gordon, L. S. Rothman, C. Hill, R. V. Kochanov, Y. Tan, et al. The HITRAN2016 Molecular Spectroscopic Database. JQSRT 2017; submitted. Many spectroscopists and atmospheric scientists worldwide have contributed data to the database or provided invaluable validations. C. Hill, I. E. Gordon, R. V. Kochanov, L. Barrett, J. S. Wilzewski, L. S. Rothman, JQSRT 177 (2016) 4-14. R. V. Kochanov, I. E. Gordon, L. S. Rothman, P. Wcislo, C. Hill, J. S. Wilzewski, JQSRT 177 (2016) 15-30. L. S. Rothman, I. E. Gordon, et al. The HITRAN2012 Molecular Spectroscopic Database. JQSRT 113 (2013) 4-50.
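
    A short sketch of retrieving HITRAN line data through HAPI is given below. The calls follow the published HAPI interface as commonly documented (db_begin, fetch, getColumn), but exact signatures should be verified against the current HAPI documentation; the spectral window and folder name are arbitrary.

        # Fetch H2O line parameters from HITRAN through HAPI (requires internet).
        from hapi import db_begin, fetch, getColumn

        db_begin("hitran_data")               # local folder caching downloaded tables
        fetch("H2O", 1, 1, 3400.0, 3500.0)    # main H2O isotopologue, 3400-3500 cm-1

        nu = getColumn("H2O", "nu")           # line-centre wavenumbers
        sw = getColumn("H2O", "sw")           # line intensities
        print(f"{len(nu)} H2O lines retrieved; strongest intensity: {max(sw):.3e}")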

  4. Introduction to Surgical Technology. Third Edition. Teacher Edition [and] Student Edition.

    ERIC Educational Resources Information Center

    Bushey, Vicki; Hildebrand, Bob; Hildebrand, Dinah; Johnson, Dave; Sikes, John; Tahah, Ann; Walker, Susan; Zielsdorf, Lani

    These teacher and student editions provide instructional materials for an introduction to surgical technology course. Introductory materials in the teacher edition include information on use, instructional/task analysis, academic and workplace skill classifications and definitions, related academic and workplace skill list, and crosswalk to…

  5. Estimation and detection information trade-off for x-ray system optimization

    NASA Astrophysics Data System (ADS)

    Cushing, Johnathan B.; Clarkson, Eric W.; Mandava, Sagar; Bilgin, Ali

    2016-05-01

    X-ray Computed Tomography (CT) systems perform complex imaging tasks that involve both detection and parameter estimation, such as a baggage imaging system performing threat detection while generating reconstructions. This leads to a desire to optimize both the detection and estimation performance of a system, but most metrics focus on only one of these aspects. When making design choices there is a need for a concise metric which considers both detection and estimation information, and then provides the user with the collection of possible optimal outcomes. In this paper a graphical analysis of the Estimation and Detection Information Trade-off (EDIT) will be explored. EDIT produces curves which allow a decision to be made for system optimization based on design constraints and the costs associated with estimation and detection. EDIT analyzes the system in the estimation information and detection information space, where the user is free to pick their own method of calculating these measures. The user of EDIT can choose any desired figure of merit for detection information and estimation information; the EDIT curves will then provide the collection of optimal outcomes. The paper first looks at two methods of creating EDIT curves. The curves can be calculated by evaluating a wide variety of systems and finding the optimal system by maximizing a figure of merit. EDIT can also be found as an upper bound of the information from a collection of systems. These two methods allow the user to choose the method of calculation which best fits the constraints of their actual system.
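
    One plausible way to realize such a trade-off curve is to evaluate a detection figure of merit and an estimation figure of merit for each candidate configuration and keep only the points that are not dominated in both measures, as sketched below. The figures of merit are left abstract, mirroring the paper's statement that the user may choose them, and the numbers are made up.

        # Keep the Pareto-optimal (detection, estimation) points from a set of
        # candidate system configurations; larger is better in both measures.

        def pareto_front(points):
            """points: list of (detection_info, estimation_info, label)."""
            front = []
            for d, e, label in points:
                dominated = any(d2 >= d and e2 >= e and (d2 > d or e2 > e)
                                for d2, e2, _ in points)
                if not dominated:
                    front.append((d, e, label))
            return sorted(front)

        candidates = [
            (0.9, 1.2, "config A"),   # made-up information values
            (1.4, 0.8, "config B"),
            (1.1, 1.1, "config C"),
            (0.8, 0.7, "config D"),   # dominated by config C
        ]
        for d, e, label in pareto_front(candidates):
            print(f"{label}: detection={d}, estimation={e}")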

  6. COSMIC: Software catalog 1991 edition diskette format

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The PC edition of the annual COSMIC Software Catalog contains descriptions of the over 1,200 computer programs available for use within the United States as of January 1, 1991. By using the PC version of the catalog, it is possible to conduct extensive searches of the software inventory for programs that meet specific criteria. Elements such as program keywords, hardware specifications, source code languages, and title acronyms can be used as the basis of such searches. After isolating those programs that might be of most interest, the user can either view the results on the monitor or generate a hardcopy listing of all information on those packages. In addition to the program elements that the user can search on, information such as total program size, distribution media, and program price, as well as extensive abstracts of the programs, is also available. Another useful feature of the catalog allows for the retention of programs that meet certain search criteria between individual sessions of using the catalog. This allows users to save the information on those programs that are of interest to them in different areas of application. They can then recall a specific collection of programs for information retrieval or further search reduction if desired. In addition, this version of the catalog is adaptable to a network/shared-resource environment, allowing multiple users access to a single copy of the catalog database simultaneously.

  7. Comparison of Insertional RNA Editing in Myxomycetes

    PubMed Central

    Chen, Cai; Frankhouser, David; Bundschuh, Ralf

    2012-01-01

    RNA editing describes the process in which individual or short stretches of nucleotides in a messenger or structural RNA are inserted, deleted, or substituted. A high level of RNA editing has been observed in the mitochondrial genome of Physarum polycephalum. The most frequent editing type in Physarum is the insertion of individual Cs. RNA editing is extremely accurate in Physarum; however, little is known about its mechanism. Here, we demonstrate how analyzing two organisms from the Myxomycetes, namely Physarum polycephalum and Didymium iridis, allows us to test hypotheses about the editing mechanism that cannot be tested from a single organism alone. First, we show that using the recently determined full transcriptome information of Physarum dramatically improves the accuracy of computational editing site prediction in Didymium. We use this approach to predict genes in the mitochondrial genome of Didymium and identify six new edited genes as well as one new gene that appears unedited. Next we investigate sequence conservation in the vicinity of editing sites between the two organisms in order to identify sites that harbor the information for the location of editing sites based on increased conservation. Our results imply that the information contained within only nine or ten nucleotides on either side of the editing site (a distance previously suggested through experiments) is not enough to locate the editing sites. Finally, we show that the codon position bias in C insertional RNA editing of these two organisms is correlated with the selection pressure on the respective genes, thereby directly testing an evolutionary theory on the origin of this codon bias. Beyond revealing interesting properties of insertional RNA editing in Myxomycetes, our work suggests possible approaches to be used when finding sequence motifs for any biological process fails. PMID:22383871
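
    The codon-position bias mentioned above reduces to a simple tally once editing-site coordinates within a coding sequence are known. The sketch below shows one such tally under an assumed 0-based coordinate convention, with made-up site positions; it is not the authors' analysis pipeline.

        # Tally which codon position (1, 2 or 3) each insertion site falls on.
        from collections import Counter

        def codon_position_bias(insertion_positions, cds_start=0):
            """insertion_positions: 0-based offsets of edited sites within the CDS."""
            return Counter(((pos - cds_start) % 3) + 1 for pos in insertion_positions)

        # Toy set of editing-site offsets within a coding sequence.
        sites = [2, 5, 8, 11, 14, 20, 23, 26]
        print(codon_position_bias(sites))  # Counter({3: 8}) for this toy set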

  8. A Novel Computational Strategy to Identify A-to-I RNA Editing Sites by RNA-Seq Data: De Novo Detection in Human Spinal Cord Tissue

    PubMed Central

    Picardi, Ernesto; Gallo, Angela; Galeano, Federica; Tomaselli, Sara; Pesole, Graziano

    2012-01-01

    RNA editing is a post-transcriptional process occurring in a wide range of organisms. In human brain, the A-to-I RNA editing, in which individual adenosine (A) bases in pre-mRNA are modified to yield inosine (I), is the most frequent event. Modulating gene expression, RNA editing is essential for cellular homeostasis. Indeed, its deregulation has been linked to several neurological and neurodegenerative diseases. To date, many RNA editing sites have been identified by next generation sequencing technologies employing massive transcriptome sequencing together with whole genome or exome sequencing. While genome and transcriptome reads are not always available for single individuals, RNA-Seq data are widespread through public databases and represent a relevant source of yet unexplored RNA editing sites. In this context, we propose a simple computational strategy to identify genomic positions enriched in novel hypothetical RNA editing events by means of a new two-steps mapping procedure requiring only RNA-Seq data and no a priori knowledge of RNA editing characteristics and genomic reads. We assessed the suitability of our procedure by confirming A-to-I candidates using conventional Sanger sequencing and performing RNA-Seq as well as whole exome sequencing of human spinal cord tissue from a single individual. PMID:22957051
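
    The core signal behind A-to-I calling from RNA-Seq alone can be caricatured as counting reference-A positions where aligned reads show a G (inosine is read as G during reverse transcription). The sketch below does only that, with made-up pileup data and arbitrary thresholds; the published strategy adds a two-step mapping procedure and extensive filtering that are not reproduced here.

        # Flag reference-A positions with appreciable G support in aligned reads.

        def candidate_ag_sites(pileup, min_depth=10, min_g_fraction=0.1):
            """pileup: dict mapping position -> (ref_base, list of read bases)."""
            candidates = []
            for pos, (ref, bases) in pileup.items():
                if ref != "A" or len(bases) < min_depth:
                    continue
                g_fraction = bases.count("G") / len(bases)
                if g_fraction >= min_g_fraction:
                    candidates.append((pos, g_fraction, len(bases)))
            return sorted(candidates)

        # Toy pileup at three positions (made-up data).
        pileup = {
            101: ("A", list("AAAAAAAAGGGA")),   # 3/12 G -> candidate
            102: ("C", list("CCCCCCCCCCCC")),   # not an A position
            103: ("A", list("AAAAAAAAAAAA")),   # no G support
        }
        for pos, frac, depth in candidate_ag_sites(pileup):
            print(f"pos {pos}: G fraction {frac:.2f} at depth {depth}")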

  9. Application of real-time cooperative editing in urban planning management system

    NASA Astrophysics Data System (ADS)

    Jing, Changfeng; Liu, Renyi; Liu, Nan; Bao, Weizheng

    2007-06-01

    With the growing business requirements of urban planning bureaus, a co-editing function is urgently needed; however, conventional GIS do not support this. To overcome this limitation, a new kind of urban planning management system with a co-editing function is needed. Such a system, called PM2006, has been used in the Suzhou Urban Planning Bureau and is introduced in this paper. Four main issues of co-editing systems are discussed: consistency, response time, data recoverability, and unconstrained operation, and resolutions to each are put forward. To address these problems in a co-editing GIS system, a data model called FGDB (File and ESRI Geodatabase), a mixed architecture of files and the ESRI Geodatabase, is introduced. The main components of the FGDB data model are a versioned ESRI Geodatabase and a replicated architecture. With FGDB, the issues of client responsiveness, spatial data recoverability, and unconstrained operation are addressed. Finally, MapServer, the co-editing map server module, is presented; its main functions are operation serialization and spatial data replication between the file store and the versioned data.

  10. Residential and Light Commercial HVAC. Teacher Edition and Student Edition. Second Edition.

    ERIC Educational Resources Information Center

    Stephenson, David

    This package contains teacher and student editions of a residential and light commercial heating, ventilation, and air conditioning (HVAC) course of study. The teacher edition contains information on the following: using the publication; national competencies; competency profile; related academic and workplace skills list; tools, equipment, and…

  11. Applying Human ADAR1p110 and ADAR1p150 for Site-Directed RNA Editing-G/C Substitution Stabilizes GuideRNAs against Editing.

    PubMed

    Heep, Madeleine; Mach, Pia; Reautschnig, Philipp; Wettengel, Jacqueline; Stafforst, Thorsten

    2017-01-14

    Site-directed RNA editing is an approach to reprogram genetic information at the RNA level. We recently introduced a novel guideRNA that allows for the recruitment of human ADAR2 to manipulate genetic information. Here, we show that the current guideRNA design is already able to recruit another human deaminase, ADAR1, in both isoforms, p110 and p150. However, further optimization seems necessary as the current design is less efficient for ADAR1 isoforms. Furthermore, we describe hotspots at which the guideRNA itself is edited and show a way to circumvent this auto-editing without losing editing efficiency at the target. Both findings are important for the advancement of site-directed RNA editing as a tool in basic biology or as a platform for therapeutic editing.

  12. RNA editing: trypanosomes rewrite the genetic code.

    PubMed

    Stuart, K

    1998-01-01

    The understanding of how genetic information is stored and expressed has advanced considerably since the "central dogma" asserted that genetic information flows from the nucleotide sequence of DNA to that of messenger RNA (mRNA), which in turn specifies the amino acid sequence of a protein. It was found that genetic information can be stored as RNA (e.g. in RNA viruses) and can flow from RNA to DNA by reverse transcriptase enzyme activity. In addition, some genes contain introns, nucleotide sequences that are removed from their RNA (by RNA splicing) and thus are not represented in the resultant protein. Furthermore, alternative splicing was found to produce variant proteins from a single gene. More recently, the study of trypanosome parasites revealed an unexpected and indeed counter-intuitive genetic complexity. Genetic information for a single protein can be dispersed among several (DNA) genes in these organisms. One of these genes specifies an encrypted precursor mRNA that is converted to a functional mRNA by a process called RNA editing that inserts and deletes uridylate nucleotides. The sequence of the edited mRNA is specified by multiple small RNAs, named guide RNAs (gRNAs), each of which is encoded in a separate gene. Thus, edited mRNA sequences are assembled from multiple genes by the transfer of information from one type of RNA to another. The existence of editing was surprising but has stimulated the discovery of other types of RNA editing. The Stuart laboratory has been exploring RNA editing in trypanosomes from the time of its discovery. They found dramatic differences between the mitochondrial gene sequences and those of the corresponding mRNAs, which indicated editing by the insertion and deletion of uridylates. Some editing was modest, simply eliminating shifts in sequence register or minimally extending the protein coding sequence. However, editing of many mRNAs was startlingly extensive. The RNA sequence was essentially entirely remodeled, with its sequence more the result of editing than of the gene sequence. The identities of genes for such extensively edited RNAs were not recognizable from the DNA sequence, but they were readily identifiable from the edited mRNA sequence. Thus, despite the complex and extensive editing, the resultant mRNA sequence is precise. Characterization of partially edited RNAs indicated that editing proceeds in the direction opposite to that used to specify the protein, which reflects the use of the gRNAs. The numerous gRNAs that are used for editing are encoded in the DNA molecules whose role was previously a mystery. Using information gained in our earlier studies, the Stuart group developed an in vitro system that reproduces the fundamental process of editing in order to resolve the mechanism by which it occurs. They determined that editing entails a series of enzymatic steps rather than the mechanism used in RNA splicing. They also showed that chimeric gRNA-mRNA molecules are aberrant by-products of editing rather than intermediates in the process as had been proposed. Additional studies are exploring precisely how the number of added and deleted uridylates is specified by the gRNA. The Stuart laboratory showed that editing is performed by an aggregation of enzymes that catalyze the separate steps of editing. It also developed a method to purify this multimolecule complex that contains several, perhaps tens of, proteins. This will allow the study of its composition and the functions of its component parts.
    Indeed, the gene for one component has been identified and its detailed characterization begun. These studies are developing tools to explore related processes. An early finding in the lab was that the various mRNAs are differentially edited during the life cycle of the parasite. The pattern of this editing indicates that editing serves to regulate the alternation between two modes of energy generation. This regulation is coordinated with other events that occur during the life cycle.

  13. Interdisciplinary Collaboration amongst Colleagues and between Initiatives with the Magnetics Information Consortium (MagIC) Database

    NASA Astrophysics Data System (ADS)

    Minnett, R.; Koppers, A. A. P.; Jarboe, N.; Tauxe, L.; Constable, C.; Jonestrask, L.; Shaar, R.

    2014-12-01

    Earth science grand challenges often require interdisciplinary and geographically distributed scientific collaboration to make significant progress. However, this organic collaboration between researchers, educators, and students only flourishes with the reduction or elimination of technological barriers. The Magnetics Information Consortium (http://earthref.org/MagIC/) is a grass-roots cyberinfrastructure effort envisioned by the geo-, paleo-, and rock magnetic scientific community to archive their wealth of peer-reviewed raw data and interpretations from studies on natural and synthetic samples. MagIC is dedicated to facilitating scientific progress towards several highly multidisciplinary grand challenges, and the MagIC Database team is currently beta testing a new MagIC Search Interface and API designed to be flexible enough for the incorporation of large heterogeneous datasets and for horizontal scalability to tens of millions of records and hundreds of requests per second. In an effort to reduce the barriers to effective collaboration, the search interface includes a simplified data model and upload procedure, support for online editing of datasets amongst team members, commenting by reviewers and colleagues, and automated contribution workflows and data retrieval through the API. This web application has been designed to generalize to other databases in MagIC's umbrella website (EarthRef.org), so the Geochemical Earth Reference Model (http://earthref.org/GERM/) portal, Seamount Biogeosciences Network (http://earthref.org/SBN/), EarthRef Digital Archive (http://earthref.org/ERDA/), and EarthRef Reference Database (http://earthref.org/ERR/) will benefit from its development.

  14. The Condition of K-12 Public Education in Maine: 2009

    ERIC Educational Resources Information Center

    Donis-Keller, Christine; Silvernail, David L.

    2009-01-01

    This twelfth edition of "The Condition of K-12 Public Education in Maine" is designed to provide Maine citizens, legislators, and educators a yearly report on the state of Maine public schools and education. This new edition updates educational information which appeared in earlier editions, and also provides information on several new…

  15. The Condition of K-12 Public Education in Maine: 2006

    ERIC Educational Resources Information Center

    Gravelle, Paula B.; Silvernail, David L.

    2006-01-01

    This tenth edition of "The Condition of K-12 Public Education in Maine" is designed to provide Maine citizens, legislators, and educators a yearly report on the state of Maine public schools and education. This new edition updates educational information which appeared in earlier editions, and also provides information on several new…

  16. Graphic Arts: Orientation, Composition, and Paste-Up. Fourth Edition. Teacher Edition [and] Student Edition.

    ERIC Educational Resources Information Center

    Licklider, Cheryl

    This teacher and student edition, the first in a series of instructional materials on graphic communication, consists of orientation information, teacher pages, and student worksheets. The teacher edition contains these introductory pages: use of this publication; training and competency profile; PrintED crosswalk; instructional/task analysis;…

  17. Solar Market Research and Analysis Publications | Solar Research | NREL

    Science.gov Websites

    Excerpts from the listed publications mention lifespan and cost savings; an expanded edition of an interim report published in 2015; achieving the SETO 2030 residential PV cost target of $0.05/kWh by identifying and quantifying cost reduction opportunities; and the Distribution Grid Integration Unit Cost Database, which contains unit cost data.

  18. Ocean Instruments Web Site for Undergraduate, Secondary and Informal Education

    NASA Astrophysics Data System (ADS)

    Farrington, J. W.; Nevala, A.; Dolby, L. A.

    2004-12-01

    An Ocean Instruments web site has been developed that makes available information about ocean sampling and measurement instruments and platforms. The site features text, pictures, diagrams and background information written or edited by experts in ocean science and engineering, and contains links to glossaries and multimedia technologies including video streaming, audio packages, and searchable databases. The site was developed after advisory meetings with selected professors teaching undergraduate classes, who responded to the question of what Woods Hole Oceanographic Institution (WHOI) could supply to enhance undergraduate education in ocean sciences, life sciences, and geosciences. Prototypes were developed and tested with students, potential users, and potential contributors. The site is hosted by WHOI. The initial five instruments featured were provided by four WHOI scientists and engineers and by one Sea Education Association faculty member. The site is now open to contributions from scientists and engineers worldwide. The site will not advertise or promote the use of individual ocean instruments.

  19. References that anyone can edit: review of Wikipedia citations in peer reviewed health science literature

    PubMed Central

    Hladkowicz, Emily S; Pigford, Ashlee-Ann E; Ufholz, Lee-Anne; Postonogova, Tatyana; Shin, Eunkyung; Boet, Sylvain

    2014-01-01

    Objectives To examine indexed health science journals to evaluate the prevalence of Wikipedia citations, identify the journals that publish articles with Wikipedia citations, and determine how Wikipedia is being cited. Design Bibliometric analysis. Study selection Publications in the English language that included citations to Wikipedia were retrieved using the online databases Scopus and Web of Science. Data sources To identify health science journals, results were refined using Ulrich’s database, selecting for citations from journals indexed in Medline, PubMed, or Embase. Using Thomson Reuters Journal Citation Reports, 2011 impact factors were collected for all journals included in the search. Data extraction Resulting citations were thematically coded, and descriptive statistics were calculated. Results 1433 full-text articles from 1008 journals indexed in Medline, PubMed, or Embase with 2049 Wikipedia citations were accessed. The frequency of Wikipedia citations has increased over time; most citations occurred after December 2010. More than half of the citations were coded as definitions (n=648; 31.6%) or descriptions (n=482; 23.5%). Citations were not limited to journals with a low or no impact factor; the search found Wikipedia citations in many journals with high impact factors. Conclusions Many publications are citing information from a tertiary source that can be edited by anyone, although permanent, evidence-based sources are available. We encourage journal editors and reviewers to use caution when publishing articles that cite Wikipedia. PMID:24603564

  20. [The Chilean Association of Biomedical Journal Editors].

    PubMed

    Reyes, H

    2001-01-01

    On September 29th, 2000, the Chilean Association of Biomedical Journal Editors was founded, sponsored by the "Comisión Nacional de Investigación Científica y Tecnológica (CONICYT)" (the governmental agency promoting and funding scientific research and technological development in Chile) and the "Sociedad Médica de Santiago" (Chilean Society of Internal Medicine). The Association adopted the goals of the World Association of Medical Editors (WAME) and therefore it will foster "cooperation and communication among Editors of Chilean biomedical journals; to improve editorial standards, to promote professionalism in medical editing through education, self-criticism and self-regulation; and to encourage research on the principles and practice of medical editing". Twenty-nine journals, covering a similar number of different biomedical sciences, medical specialties, veterinary medicine, dentistry and nursing, became Founding Members of the Association. A Governing Board was elected: President: Humberto Reyes, M.D. (Editor, Revista Médica de Chile); Vice-President: Mariano del Sol, M.D. (Editor, Revista Chilena de Anatomía); Secretary: Anna María Prat (CONICYT); Councilors: Manuel Krauskopff, Ph.D. (Editor, Biological Research) and Maritza Rahal, M.D. (Editor, Revista de Otorrinolaringología y Cirugía de Cabeza y Cuello). The Association will organize a Symposium on Biomedical Journal Editing and will disseminate information encouraging Chilean biomedical journals to become indexed in international databases and in SciELO-Chile, the main Chilean scientific website (www.scielo.cl).

  1. Automated search of control points in surface-based morphometry.

    PubMed

    Canna, Antonietta; Russo, Andrea G; Ponticorvo, Sara; Manara, Renzo; Pepino, Alessandro; Sansone, Mario; Di Salle, Francesco; Esposito, Fabrizio

    2018-04-16

    Cortical surface-based morphometry is based on a semi-automated analysis of structural MRI images. In FreeSurfer, a widespread tool for surface-based analyses, a visual check of gray-white matter borders is followed by the manual placement of control points to drive the topological correction (editing) of segmented data. A novel algorithm combining radial sampling and machine learning is presented for the automated control point search (ACPS). Four data sets with 3 T MRI structural images were used for ACPS validation, including raw data acquired twice in 36 healthy subjects and both raw and FreeSurfer-preprocessed data of 125 healthy subjects from public databases. The unedited data from a subgroup of subjects were submitted to manual control point search and editing. The ACPS algorithm was trained on manual control points and tested on new (unseen) unedited data. Cortical thickness (CT) and fractal dimensionality (FD) were estimated in three data sets by reconstructing surfaces from both unedited and edited data, and the effects of editing were compared between manual and automated editing and versus no editing. The ACPS-based editing improved the surface reconstructions similarly to manual editing. Compared to no editing, ACPS-based and manual editing significantly reduced CT and FD in consistent regions across different data sets. Despite the extra processing of control-point-driven reconstructions, CT and FD estimates were highly reproducible in almost all cortical regions, although some problematic regions (e.g. entorhinal cortex) may benefit from different editing. The use of control points improves the surface reconstruction and the ACPS algorithm can automate their search, reducing the burden of manual editing. Copyright © 2018 Elsevier Inc. All rights reserved.
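
    The abstract does not spell out the learning step, so the following is only a schematic sketch of the kind of supervised workflow it describes: a classifier is trained on feature vectors computed around vertices where an expert placed control points, then applied to unseen data to propose control-point locations. scikit-learn is assumed, and the feature construction and classifier choice are stand-ins rather than the paper's radial-sampling features.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)

        # Stand-in features: one row per candidate vertex (e.g. intensity samples along
        # radial profiles); labels mark vertices where an expert placed a control point.
        X_train = rng.normal(size=(200, 16))
        y_train = (X_train[:, 0] + 0.5 * X_train[:, 3] > 1.0).astype(int)   # synthetic labels

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X_train, y_train)

        # "Unseen" subject: keep only the vertices the classifier flags as control points.
        X_new = rng.normal(size=(50, 16))
        control_point_idx = np.flatnonzero(clf.predict(X_new) == 1)
        print(control_point_idx)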

  2. The Fleet Application for Scheduling and Tracking (FAST) Management Website

    NASA Technical Reports Server (NTRS)

    Marrero-Perez, Radames J.

    2014-01-01

    The FAST application was designed to replace the paper-and-pen method of checking out and checking in GSA vehicles at KSC. By moving from a paper-and-pen checkout system to a fully digital one, the resources wasted by printing checkout forms have been reduced, the time that users and fleet managers need to interact with the system has been cut significantly, and record accuracy for each vehicle has improved. The vehicle information is pulled from a centralized database server in the SPSDL. To add a new feature to the FAST application, the author of this report (alongside the FAST developers) has been designing and developing the FAST Management Website. Previously, the GSA fleet managers had to rely on the FAST developers to add new vehicles, edit vehicles and previous transactions, or generate vehicle reports. By providing an easy-to-use FAST Management Website portal, the GSA fleet managers are now able to easily move vehicles, edit records, and print reports.

  3. Efficient algorithms for fast integration on large data sets from multiple sources.

    PubMed

    Mi, Tian; Rajasekaran, Sanguthevar; Aseltine, Robert

    2012-06-28

    Recent large scale deployments of health information technology have created opportunities for the integration of patient medical records with disparate public health, human service, and educational databases to provide comprehensive information related to health and development. Data integration techniques, which identify records belonging to the same individual that reside in multiple data sets, are essential to these efforts. Several algorithms have been proposed in the literature that are adept at integrating records from two different datasets. Our algorithms are aimed at integrating multiple (in particular more than two) datasets efficiently. Hierarchical clustering-based solutions are used to integrate multiple (in particular more than two) datasets. Edit distance is used as the basic distance calculation, while distance calculation for common input errors is also studied. Several techniques have been applied to improve the algorithms in terms of both time and space: 1) Partial Construction of the Dendrogram (PCD), which ignores the levels above the threshold; 2) Ignoring the Dendrogram Structure (IDS); 3) Faster Computation of the Edit Distance (FCED), which bounds the distance computation with the threshold using upper bounds on edit distance; and 4) a pre-processing blocking phase that limits dynamic computation to within each block. We have experimentally validated our algorithms on large simulated as well as real data. Accuracy and completeness are defined stringently to show the performance of our algorithms. In addition, we employ a four-category analysis. Comparison with FEBRL shows the robustness of our approach. In the experiments we conducted, the accuracy we observed exceeded 90% for the simulated data in most cases. Accuracies of 97.7% and 98.1% were achieved for the constant and proportional threshold, respectively, in a real dataset of 1,083,878 records.
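
    A minimal sketch (not the authors' implementation) of the two ideas named above: a blocking phase that restricts comparisons to records sharing a crude key, and an edit-distance computation that gives up as soon as a threshold is exceeded. The record format and blocking key are illustrative assumptions.

        from collections import defaultdict

        def bounded_edit_distance(a: str, b: str, limit: int) -> int:
            """Levenshtein distance, returning limit + 1 as soon as it is exceeded."""
            if abs(len(a) - len(b)) > limit:
                return limit + 1
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                curr = [i]
                for j, cb in enumerate(b, 1):
                    curr.append(min(prev[j] + 1,                 # deletion
                                    curr[j - 1] + 1,             # insertion
                                    prev[j - 1] + (ca != cb)))   # substitution
                if min(curr) > limit:     # no later row can drop below the current minimum
                    return limit + 1
                prev = curr
            return prev[-1]

        def block_and_match(records, threshold=2):
            """Group records by a crude blocking key, then compare only within blocks."""
            blocks = defaultdict(list)
            for rec_id, name in records:
                blocks[name[:1].lower()].append((rec_id, name))   # illustrative key: first letter
            matches = []
            for block in blocks.values():
                for i in range(len(block)):
                    for j in range(i + 1, len(block)):
                        d = bounded_edit_distance(block[i][1], block[j][1], threshold)
                        if d <= threshold:
                            matches.append((block[i][0], block[j][0], d))
            return matches

        print(block_and_match([(1, "Jonathan Smith"), (2, "Jonathon Smith"), (3, "Maria Lopez")]))
        # -> [(1, 2, 1)]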

  4. [Presence and characteristics of nursing terminology in Wikipedia].

    PubMed

    Sanz-Lorente, María; Guardiola-Wanden-Berghe, Rocío; Wanden-Berghe, Carmina; Sanz-Valero, Javier

    2013-10-01

    To determine the presence of, and consultation of, nursing terminology in the Spanish edition of Wikipedia, and to analyze the differences with the English edition. We confirmed the existence of the terminology by accessing the Spanish and English editions of Wikipedia via the Internet. We calculated the study sample (n = 386) from the 1840 nursing terms. 337 terms were found in the Spanish edition and 350 in the English edition. We found significant differences between the two editions (p < 0.001). Differences were also found in the number of references per term (p < 0.001). However, there were no differences in the update/obsolescence of the information, nor in the number of queries. The entries (articles) on nursing terminology in the Spanish edition of Wikipedia have not yet reached an optimal level. Differences between the Spanish and English editions of Wikipedia relate more to the existence of terms than to the adequacy of the information.

  5. POLLUX: a program for simulated cloning, mutagenesis and database searching of DNA constructs.

    PubMed

    Dayringer, H E; Sammons, S A

    1991-04-01

    Computer support for research in biotechnology has developed rapidly and has provided several tools to aid the researcher. This report describes the capabilities of new computer software developed in this laboratory to aid in the documentation and planning of experiments in molecular biology. The program, POLLUX, provides a graphical medium for the entry, editing and manipulation of DNA constructs and a textual format for the display and editing of construct descriptive data. Program operation and procedures are designed to mimic the actual laboratory experiments with respect to capability and the order in which they are performed. Flexible control over the content of the computer-generated displays and program facilities is provided by a mouse-driven menu interface. Programmed facilities for mutagenesis, simulated cloning and searching of the database from networked workstations are described.
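
    The abstract gives no implementation detail, so here is a small illustrative sketch, in Python rather than the original environment, of the kind of simulated operations described: locating a restriction site in a construct and introducing a point mutation. The construct sequence is invented; GGATCC is the BamHI recognition site.

        def point_mutate(seq: str, position: int, new_base: str) -> str:
            """Simulate site-directed mutagenesis: replace one base (0-based position)."""
            if new_base not in "ACGT":
                raise ValueError("new_base must be A, C, G or T")
            return seq[:position] + new_base + seq[position + 1:]

        def find_restriction_sites(seq: str, recognition: str) -> list:
            """Return 0-based start positions of a recognition sequence in a construct."""
            hits, start = [], seq.find(recognition)
            while start != -1:
                hits.append(start)
                start = seq.find(recognition, start + 1)
            return hits

        construct = "GGATCCATGAAAGAATTCTTTGGATCC"             # invented construct sequence
        print(find_restriction_sites(construct, "GGATCC"))    # -> [0, 21]
        print(point_mutate(construct, 8, "C"))                 # introduce a single point mutation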

  6. The HITRAN2016 molecular spectroscopic database

    NASA Astrophysics Data System (ADS)

    Gordon, I. E.; Rothman, L. S.; Hill, C.; Kochanov, R. V.; Tan, Y.; Bernath, P. F.; Birk, M.; Boudon, V.; Campargue, A.; Chance, K. V.; Drouin, B. J.; Flaud, J.-M.; Gamache, R. R.; Hodges, J. T.; Jacquemart, D.; Perevalov, V. I.; Perrin, A.; Shine, K. P.; Smith, M.-A. H.; Tennyson, J.; Toon, G. C.; Tran, H.; Tyuterev, V. G.; Barbe, A.; Császár, A. G.; Devi, V. M.; Furtenbacher, T.; Harrison, J. J.; Hartmann, J.-M.; Jolly, A.; Johnson, T. J.; Karman, T.; Kleiner, I.; Kyuberis, A. A.; Loos, J.; Lyulin, O. M.; Massie, S. T.; Mikhailenko, S. N.; Moazzen-Ahmadi, N.; Müller, H. S. P.; Naumenko, O. V.; Nikitin, A. V.; Polyansky, O. L.; Rey, M.; Rotger, M.; Sharpe, S. W.; Sung, K.; Starikova, E.; Tashkun, S. A.; Auwera, J. Vander; Wagner, G.; Wilzewski, J.; Wcisło, P.; Yu, S.; Zak, E. J.

    2017-12-01

    This paper describes the contents of the 2016 edition of the HITRAN molecular spectroscopic compilation. The new edition replaces the previous HITRAN edition of 2012 and its updates during the intervening years. The HITRAN molecular absorption compilation is composed of five major components: the traditional line-by-line spectroscopic parameters required for high-resolution radiative-transfer codes, infrared absorption cross-sections for molecules not yet amenable to representation in a line-by-line form, collision-induced absorption data, aerosol indices of refraction, and general tables such as partition sums that apply globally to the data. The new HITRAN is greatly extended in terms of accuracy, spectral coverage, additional absorption phenomena, added line-shape formalisms, and validity. Moreover, molecules, isotopologues, and perturbing gases have been added that address the issues of atmospheres beyond the Earth. Of considerable note, experimental IR cross-sections for almost 300 additional molecules important in different areas of atmospheric science have been added to the database. The compilation can be accessed through www.hitran.org. Most of the HITRAN data have now been cast into an underlying relational database structure that offers many advantages over the long-standing sequential text-based structure. The new structure empowers the user in many ways. It enables the incorporation of an extended set of fundamental parameters per transition, sophisticated line-shape formalisms, easy user-defined output formats, and very convenient searching, filtering, and plotting of data. A powerful application programming interface making use of structured query language (SQL) features for higher-level applications of HITRAN is also provided.
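
    As an illustration of the kind of relational filtering the new structure makes convenient, here is a minimal sketch using Python's built-in sqlite3 with an assumed, simplified transitions table; the table and column names are illustrative and do not reproduce the actual HITRAN schema or its application programming interface.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE transitions (
                            molecule TEXT, isotopologue INTEGER,
                            nu REAL,           -- line position, cm-1
                            sw REAL,           -- line intensity
                            gamma_air REAL)    -- air-broadened half-width""")
        conn.executemany("INSERT INTO transitions VALUES (?,?,?,?,?)",
                         [("H2O", 1, 1402.3, 1.2e-22, 0.071),
                          ("H2O", 1, 1555.9, 4.8e-21, 0.066),
                          ("CO2", 1, 2349.1, 3.1e-18, 0.074)])

        # Select water lines in a wavenumber window above an intensity cutoff.
        rows = conn.execute("""SELECT nu, sw FROM transitions
                               WHERE molecule = 'H2O' AND nu BETWEEN 1300 AND 1600
                                 AND sw >= 1e-22
                               ORDER BY sw DESC""").fetchall()
        print(rows)   # [(1555.9, 4.8e-21), (1402.3, 1.2e-22)]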

  7. Getting Started with AppleWorks Data Base. First Edition.

    ERIC Educational Resources Information Center

    Schlenker, Richard M.

    This manual is a hands-on teaching tool for beginning users of the AppleWorks database software. It was developed to allow Apple IIGS users who are generally familiar with their machine and its peripherals to build a simple AppleWorks database file using version 2.0 or 2.1 of the program, and to store, print, and manipulate the file. The materials…

  8. Dynamic Terrain

    DTIC Science & Technology

    1991-12-30

    Excerpts include a reference ([Serway 86]: Raymond Serway, Physics for Scientists and Engineers, 2nd Edition, Saunders College Publishing, Philadelphia, 1986, pp. 200) and table-of-contents fragments covering a physical modeling system, realtime hydrology, soil dynamics and kinematics, database issues (goals, object-oriented databases, distributed databases), an animation system, constraints and physical modeling, the PM physical modeling system, and a simplified model of soil slumping.

  9. Genetic Variation in Cardiomyopathy and Cardiovascular Disorders.

    PubMed

    McNally, Elizabeth M; Puckelwartz, Megan J

    2015-01-01

    With the wider deployment of massively parallel next-generation sequencing, it is now possible to survey human genome data for research and clinical purposes. The reduced cost of producing short-read sequencing has now shifted the burden to data analysis. Analysis of genome sequencing remains challenged by the complexity of the human genome, including redundancy and the repetitive nature of genome elements and the large amount of variation in individual genomes. Public databases of human genome sequences greatly facilitate interpretation of common and rare genetic variation, although linking database sequence information to detailed clinical information is limited by privacy and practical issues. Genetic variation is a rich source of knowledge for cardiovascular disease because many, if not all, cardiovascular disorders are highly heritable. The role of rare genetic variation in predicting risk and complications of cardiovascular diseases has been well established for hypertrophic and dilated cardiomyopathy, where the number of genes that are linked to these disorders is growing. Bolstered by family data, where genetic variants segregate with disease, rare variation can be linked to specific genetic variation that offers profound diagnostic information. Understanding genetic variation in cardiomyopathy is likely to help stratify forms of heart failure and guide therapy. Ultimately, genetic variation may be amenable to gene correction and gene editing strategies.

  10. Models in Translational Oncology: A Public Resource Database for Preclinical Cancer Research.

    PubMed

    Galuschka, Claudia; Proynova, Rumyana; Roth, Benjamin; Augustin, Hellmut G; Müller-Decker, Karin

    2017-05-15

    The devastating diseases of human cancer are mimicked in basic and translational cancer research by a steadily increasing number of tumor models, a situation requiring a platform with standardized reports to share model data. The Models in Translational Oncology (MiTO) database, reviewed here, was developed as a unique Web platform aiming for a comprehensive overview of preclinical models covering genetically engineered organisms, models of transplantation, chemical/physical induction, or spontaneous development. MiTO serves data entry for metastasis profiles and interventions. Moreover, cell lines and animal lines, including tool strains, can be recorded. Hyperlinks for connection with other databases and file uploads as supplementary information are supported. Several communication tools are offered to facilitate the exchange of information. Notably, intellectual property can be protected prior to publication by inventor-defined accessibility of any given model. Data recall is via a highly configurable keyword search. Genome editing is expected to change the spectrum of model organisms, a reason to open MiTO to species-independent data. Registered users may deposit their own model fact sheets (FS). MiTO experts check them for plausibility. Independently, manually curated FS are provided to principal investigators for revision and publication. Importantly, non-editable versions of reviewed FS can be cited in peer-reviewed journals. Cancer Res; 77(10); 2557-63. ©2017 American Association for Cancer Research.

  11. YM500: a small RNA sequencing (smRNA-seq) database for microRNA research

    PubMed Central

    Cheng, Wei-Chung; Chung, I-Fang; Huang, Tse-Shun; Chang, Shih-Ting; Sun, Hsing-Jen; Tsai, Cheng-Fong; Liang, Muh-Lii; Wong, Tai-Tong; Wang, Hsei-Wei

    2013-01-01

    MicroRNAs (miRNAs) are small RNAs ∼22 nt in length that are involved in the regulation of a variety of physiological and pathological processes. Advances in high-throughput small RNA sequencing (smRNA-seq), one of the next-generation sequencing applications, have reshaped the miRNA research landscape. In this study, we established an integrative database, the YM500 (http://ngs.ym.edu.tw/ym500/), containing analysis pipelines and analysis results for 609 human and mouse smRNA-seq results, including public data from the Gene Expression Omnibus (GEO) and some private sources. YM500 collects analysis results for miRNA quantification, for isomiR identification (incl. RNA editing), for arm switching discovery, and, more importantly, for novel miRNA predictions. Wet-lab validation on >100 miRNAs confirmed a high correlation between miRNA profiling and RT-qPCR results (R = 0.84). This database allows researchers to search these four different types of analysis results via our interactive web interface. YM500 allows researchers to define the criteria of isomiRs, and also integrates the information of dbSNP to help researchers distinguish isomiRs from SNPs. A user-friendly interface is provided to integrate miRNA-related information and existing evidence from hundreds of sequencing datasets. The identified novel miRNAs and isomiRs hold the potential for both basic research and biotech applications. PMID:23203880
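
    A toy sketch, not the YM500 pipeline itself, of the dbSNP cross-check mentioned above: candidate isomiR variant positions are discarded when they coincide with known SNP coordinates. All coordinates are invented for illustration.

        # Known SNP positions per chromosome (invented coordinates standing in for dbSNP).
        dbsnp = {"chr9": {96938610, 96938642}}

        # Candidate isomiR variants: (chromosome, position, reference base, observed base).
        candidates = [
            ("chr9", 96938610, "A", "G"),   # coincides with a known SNP -> likely genomic variant
            ("chr9", 96938655, "A", "G"),   # no SNP annotated -> retained as a putative isomiR
        ]

        putative_isomirs = [c for c in candidates if c[1] not in dbsnp.get(c[0], set())]
        print(putative_isomirs)   # [('chr9', 96938655, 'A', 'G')]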

  12. MIRO and IRbase: IT Tools for the Epidemiological Monitoring of Insecticide Resistance in Mosquito Disease Vectors

    PubMed Central

    Dialynas, Emmanuel; Topalis, Pantelis; Vontas, John; Louis, Christos

    2009-01-01

    Background Monitoring of insect vector populations with respect to their susceptibility to one or more insecticides is a crucial element of the strategies used for the control of arthropod-borne diseases. This management task can nowadays be achieved more efficiently when assisted by IT (Information Technology) tools, ranging from modern integrated databases to GIS (Geographic Information System). Here we describe an application ontology that we developed de novo, and a specially designed database that, based on this ontology, can be used for the purpose of controlling mosquitoes and, thus, the diseases that they transmit. Methodology/Principal Findings The ontology, named MIRO for Mosquito Insecticide Resistance Ontology and developed using the OBO-Edit software, describes all pertinent aspects of insecticide resistance, including specific methodology and mode of action. MIRO, then, forms the basis for the design and development of a dedicated database, IRbase, constructed using open source software, which can be used to retrieve data on mosquito populations in a temporally and spatially separate way, as well as to map the output using a Google Earth interface. The dependency of the database on MIRO allows for rational and efficient hierarchical searching. Conclusions/Significance The fact that MIRO complies with the rules set forward by the OBO (Open Biomedical Ontologies) Foundry enables cross-referencing with other biomedical ontologies and, thus, both MIRO and IRbase are suitable as parts of future comprehensive surveillance tools and decision support systems that will be used for the control of vector-borne diseases. MIRO is downloadable from, and IRbase is accessible at, VectorBase, the NIAID-sponsored open access database for arthropod vectors of disease. PMID:19547750

  13. Challenging a dogma; AJCC 8th staging system is not sufficient to predict outcomes of patients with malignant pleural mesothelioma.

    PubMed

    Abdel-Rahman, Omar

    2017-11-01

    The 8th edition of the American Joint Committee on Cancer (AJCC) staging system for malignant pleural mesothelioma (MPM) has been published. The current analysis aims to evaluate its performance in a population-based setting among patients recorded within the Surveillance, Epidemiology and End Results (SEER) database. The SEER database (2004-2013) was accessed through the SEER*Stat program and AJCC 8th edition stage groups were reconstructed. Survival analyses (overall and cancer-specific) were conducted according to the 6th and 8th editions through Kaplan-Meier analysis. A multivariate Cox regression model was also utilized for pairwise comparisons between different prognostic groups for overall and cancer-specific survival. A total of 5382 patients with MPM were identified in the period from 2004 to 2013. According to the 6th edition, significant pairwise P values for overall survival included: IA vs. III: P=0.027; IA vs. IV: P<0.0001; IB vs. IV: P<0.0001; II vs. III: P<0.0001; II vs. IV: P<0.0001; III vs. IV: P<0.0001. According to the 8th edition, significant pairwise P values for overall survival included: all stages vs. IV: P<0.0001; IA vs. II: P=0.046; IA vs. IIIA: P=0.022; IA vs. IIIB: P<0.0001; IB vs. II: P<0.0001; IB vs. IIIB: P<0.0001; II vs. IIIA: P<0.0001; IIIA vs. IIIB: P<0.0001. The C-index for the 6th edition was 0.539 (SE: 0.008; 95% CI: 0.524-0.555), while the C-index for the 8th edition was 0.540 (SE: 0.008; 95% CI: 0.525-0.556). Based on the above findings, a simplified staging system was proposed, and overall and cancer-specific survival were evaluated according to the simplified system. For overall and cancer-specific survival assessment, P values for all pairwise comparisons among different stages were significant (<0.01). The prognostic performance of both the 6th and 8th AJCC editions is unsatisfactory; there is a need for a more practical and prognostically relevant staging system for MPM. Copyright © 2017 Elsevier B.V. All rights reserved.
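
    A schematic sketch of this type of survival analysis, using the lifelines Python library on an invented toy data frame; it is not the authors' SEER*Stat workflow, and the column names, values and stage coding are made up.

        import pandas as pd
        from lifelines import KaplanMeierFitter, CoxPHFitter

        # Invented toy data: months of follow-up, death indicator, numeric stage group.
        df = pd.DataFrame({
            "months": [6, 14, 22, 9, 30, 4, 18, 11],
            "death":  [1, 1, 0, 1, 0, 1, 1, 1],
            "stage":  [4, 2, 1, 3, 1, 4, 3, 2],   # e.g. 1 = IA ... 4 = IV, coded numerically
        })

        # Kaplan-Meier estimate for one stage group.
        km = KaplanMeierFitter()
        sub = df[df["stage"] == 4]
        km.fit(sub["months"], event_observed=sub["death"], label="stage IV")

        # Cox regression with stage as the covariate; the fitted model reports a C-index.
        cph = CoxPHFitter()
        cph.fit(df, duration_col="months", event_col="death")
        print(round(cph.concordance_index_, 3))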

  14. RED: A Java-MySQL Software for Identifying and Visualizing RNA Editing Sites Using Rule-Based and Statistical Filters.

    PubMed

    Sun, Yongmei; Li, Xing; Wu, Di; Pan, Qi; Ji, Yuefeng; Ren, Hong; Ding, Keyue

    2016-01-01

    RNA editing is one of the post- or co-transcriptional processes that can lead to amino acid substitutions in protein sequences, alternative pre-mRNA splicing, and changes in gene expression levels. Although several methods have been suggested to identify RNA editing sites, challenges remain in distinguishing true RNA editing sites from their genomic counterparts and from technical artifacts. In addition, a software framework to identify and visualize potential RNA editing sites has been lacking. Here, we present the software 'RED' (RNA Editing sites Detector) for the identification of RNA editing sites by integrating multiple rule-based and statistical filters. The potential RNA editing sites can be visualized at the genome and site levels through a graphical user interface (GUI). To improve performance, we used the MySQL database management system (DBMS) for high-throughput data storage and querying. We demonstrated the validity and utility of RED by identifying the presence and absence of experimentally validated C→U RNA-editing sites, in comparison with REDItools, a command-line tool for high-throughput investigation of RNA editing. In an analysis of a sample dataset with 28 experimentally validated C→U RNA editing sites, RED had a sensitivity of 0.64 and a specificity of 0.5. In comparison, REDItools had better sensitivity (0.75) but similar specificity (0.5). RED is easy-to-use, platform-independent, Java-based software, and can be applied to RNA-seq data with or without DNA sequencing data. The package is freely available under the GPLv3 license at http://github.com/REDetector/RED or https://sourceforge.net/projects/redetector.
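
    A toy illustration, not RED's actual filter chain, of how rule-based filters of the kind described can be combined over candidate sites; the thresholds, fields and SNP list are invented for the example.

        # Each candidate site: position, reference base, observed base, read coverage,
        # and the fraction of reads supporting the edited base.
        candidates = [
            {"pos": 101, "ref": "C", "alt": "T", "coverage": 42, "edit_ratio": 0.31},
            {"pos": 205, "ref": "C", "alt": "T", "coverage": 5,  "edit_ratio": 0.60},  # low coverage
            {"pos": 330, "ref": "A", "alt": "G", "coverage": 50, "edit_ratio": 0.25},  # not C-to-U
            {"pos": 412, "ref": "C", "alt": "T", "coverage": 60, "edit_ratio": 0.02},  # known SNP, low ratio
        ]
        known_snps = {412}   # invented stand-in for a dbSNP lookup

        def passes_rules(site, min_coverage=10, min_ratio=0.05):
            return (site["ref"] == "C" and site["alt"] == "T"   # C-to-U editing appears as C>T in cDNA reads
                    and site["coverage"] >= min_coverage        # enough supporting reads
                    and site["edit_ratio"] >= min_ratio         # editing level above noise
                    and site["pos"] not in known_snps)          # exclude known genomic SNPs

        print([s["pos"] for s in candidates if passes_rules(s)])   # [101]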

  15. RED: A Java-MySQL Software for Identifying and Visualizing RNA Editing Sites Using Rule-Based and Statistical Filters

    PubMed Central

    Sun, Yongmei; Li, Xing; Wu, Di; Pan, Qi; Ji, Yuefeng; Ren, Hong; Ding, Keyue

    2016-01-01

    RNA editing is one of the post- or co-transcriptional processes that can lead to amino acid substitutions in protein sequences, alternative pre-mRNA splicing, and changes in gene expression levels. Although several methods have been suggested to identify RNA editing sites, challenges remain in distinguishing true RNA editing sites from their genomic counterparts and from technical artifacts. In addition, a software framework to identify and visualize potential RNA editing sites has been lacking. Here, we present the software ‘RED’ (RNA Editing sites Detector) for the identification of RNA editing sites by integrating multiple rule-based and statistical filters. The potential RNA editing sites can be visualized at the genome and site levels through a graphical user interface (GUI). To improve performance, we used the MySQL database management system (DBMS) for high-throughput data storage and querying. We demonstrated the validity and utility of RED by identifying the presence and absence of experimentally validated C→U RNA-editing sites, in comparison with REDItools, a command-line tool for high-throughput investigation of RNA editing. In an analysis of a sample dataset with 28 experimentally validated C→U RNA editing sites, RED had a sensitivity of 0.64 and a specificity of 0.5. In comparison, REDItools had better sensitivity (0.75) but similar specificity (0.5). RED is easy-to-use, platform-independent, Java-based software, and can be applied to RNA-seq data with or without DNA sequencing data. The package is freely available under the GPLv3 license at http://github.com/REDetector/RED or https://sourceforge.net/projects/redetector. PMID:26930599

  16. Conjunctive programming: An interactive approach to software system synthesis

    NASA Technical Reports Server (NTRS)

    Tausworthe, Robert C.

    1992-01-01

    This report introduces a technique of software documentation called conjunctive programming and discusses its role in the development and maintenance of software systems. The report also describes the conjoin tool, an adjunct to assist practitioners. Aimed at supporting software reuse while conforming with conventional development practices, conjunctive programming is defined as the extraction, integration, and embellishment of pertinent information obtained directly from an existing database of software artifacts, such as specifications, source code, configuration data, link-edit scripts, utility files, and other relevant information, into a product that achieves desired levels of detail, content, and production quality. Conjunctive programs typically include automatically generated tables of contents, indexes, cross references, bibliographic citations, tables, and figures (including graphics and illustrations). This report presents an example of conjunctive programming by documenting the use and implementation of the conjoin program.

  17. openBIS ELN-LIMS: an open-source database for academic laboratories.

    PubMed

    Barillari, Caterina; Ottoz, Diana S M; Fuentes-Serna, Juan Mariano; Ramakrishnan, Chandrasekhar; Rinn, Bernd; Rudolf, Fabian

    2016-02-15

    The open-source platform openBIS (open Biology Information System) offers an Electronic Laboratory Notebook and a Laboratory Information Management System (ELN-LIMS) solution suitable for academic life science laboratories. openBIS ELN-LIMS allows researchers to efficiently document their work, to describe materials and methods and to collect raw and analyzed data. The system comes with a user-friendly web interface where data can be added, edited, browsed and searched. The openBIS software, a user guide and a demo instance are available at https://openbis-eln-lims.ethz.ch. The demo instance contains some data from our laboratory as an example to demonstrate the possibilities of the ELN-LIMS (Ottoz et al., 2014). For rapid local testing, a VirtualBox image of the ELN-LIMS is also available. © The Author 2015. Published by Oxford University Press.

  18. The Humanities: A Selective Guide to Information Sources. Fifth Edition. Library and Information Science Text Series.

    ERIC Educational Resources Information Center

    Blazek, Ron; Aversa, Elizabeth

    This book provides a guide to humanities information sources for teachers and students in schools of library and information science, reference librarians, collection development officers in libraries, humanities scholars, and others who have information needs in the broad discipline. This fifth edition represents a more comprehensive and updated…

  19. CRYSTMET—The NRCC Metals Crystallographic Data File

    PubMed Central

    Wood, Gordon H.; Rodgers, John R.; Gough, S. Roger; Villars, Pierre

    1996-01-01

    CRYSTMET is a computer-readable database of critically evaluated crystallographic data for metals (including alloys, intermetallics and minerals) accompanied by pertinent chemical, physical and bibliographic information. It currently contains about 60 000 entries and covers the literature exhaustively from 1913. Scientific editing of the abstracted entries, consisting of numerous automated and manual checks, is done to ensure consistency with related, previously published studies, to assign structure types where necessary and to help guarantee the accuracy of the data and related information. Analyses of the entries and their distribution across key journals as a function of time show interesting trends in the complexity of the compounds studied as well as in the elements they contain. Two applications of CRYSTMET are the identification of unknowns and the prediction of properties of materials. CRYSTMET is available either online or via license of a private copy from the Canadian Scientific Numeric Database Service (CAN/SND). The indexed online search and analysis system is easy and economical to use yet fast and powerful. Development of a new system is under way combining the capabilities of ORACLE with the flexibility of a modern interface based on the Netscape browsing tool. PMID:27805157

  20. Neutron Data Compilation Centre, European Nuclear Energy Agency, Newsletter No. 13

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1972-02-15

    This edition of the newsletter is intended to inform all users of neutron data about the content of the CCDN Experimental Neutron Data Library as of February 1972. It supersedes the last index issue, no. 11, published in October 1969. Since then, the database has been greatly enlarged thanks to the collaboration of neutron data users in the ENEA area (Western Europe plus Japan) and to the truly worldwide cooperation between the four existing data centers: NNCSC at Brookhaven Lab. in Upton, NY, United States, CCDN in Gif-sur-Yvette, France, the Centr po Jadernym Dannym in Obninsk, USSR, and the Nuclear Data Section, IAEA, Vienna, Austria.

  1. 77 FR 72985 - Health Information Technology: Revisions to the 2014 Edition Electronic Health Record...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-07

    ... Technology: Revisions to the 2014 Edition Electronic Health Record Certification Criteria; and Medicare and... National Coordinator for Health Information Technology (ONC) and Centers for Medicare & Medicaid Services... National Coordinator for Health Information Technology, Attention: Steven Posnack, Hubert H. Humphrey...

  2. Information Literacy: Essential Skills for the Information Age, Second Edition

    ERIC Educational Resources Information Center

    Eisenberg, Michael B.; Lowe, Carrie A.; Spitzer, Kathleen L.

    2004-01-01

    This is the definitive work on information literacy. Michael Eisenberg, known worldwide as one of the originators of the innovative Big6 Information Problem Solving Process, and frequent presenters on the subject, Carrie A. Lowe and Kathleen L. Spitzer, have extensively revised and updated this long-awaited second edition. Tracing the history of…

  3. Biological Science: An Ecological Approach. BSCS Green Version. Teacher's Edition. Sixth Edition.

    ERIC Educational Resources Information Center

    Biological Sciences Curriculum Study, Colorado Springs.

    This book is the teacher's edition to the 1987 edition of the Biological Sciences Curriculum Study Green Version textbook. It contains directions for teaching with this version, a description of the accompanying materials, teaching strategies by chapters, lists of useful software, safety guidelines, a materials list, chemical safety information,…

  4. The development of the Project NetWork administrative records database for policy evaluation.

    PubMed

    Rupp, K; Driessen, D; Kornfeld, R; Wood, M

    1999-01-01

    This article describes the development of SSA's administrative records database for the Project NetWork return-to-work experiment targeting persons with disabilities. The article is part of a series of papers on the evaluation of the Project NetWork demonstration. In addition to 8,248 Project NetWork participants randomly assigned to receive case management services and a control group, the simulation identified 138,613 eligible nonparticipants in the demonstration areas. The output data files contain detailed monthly information on Supplemental Security Income (SSI) and Disability Insurance (DI) benefits, annual earnings, and a set of demographic and diagnostic variables. The data allow for the measurement of net outcomes and the analysis of factors affecting participation. The results suggest that it is feasible to simulate complex eligibility rules using administrative records, and create a clean and edited data file for a comprehensive and credible evaluation. The study shows that it is feasible to use administrative records data for selecting control or comparison groups in future demonstration evaluations.
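
    A stylized sketch of what simulating an eligibility rule over monthly administrative records can look like with pandas; the columns and the rule itself are invented for illustration and do not reproduce the Project NetWork criteria.

        import pandas as pd

        # Invented monthly records: one row per person per month.
        records = pd.DataFrame({
            "person_id":    [1, 1, 2, 2, 3, 3],
            "month":        ["1994-01", "1994-02"] * 3,
            "ssi_benefit":  [450.0, 450.0, 0.0, 0.0, 300.0, 0.0],
            "di_benefit":   [0.0, 0.0, 820.0, 820.0, 0.0, 0.0],
            "in_demo_area": [True, True, True, True, False, False],
        })

        # Illustrative rule: eligible if living in a demonstration area and receiving
        # SSI or DI benefits in at least one month of the window.
        received = (records["ssi_benefit"] > 0) | (records["di_benefit"] > 0)
        eligible_ids = sorted(records.loc[received & records["in_demo_area"], "person_id"].unique())
        print(eligible_ids)   # [1, 2]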

  5. Quality of patient health information on the Internet: reviewing a complex and evolving landscape.

    PubMed

    Fahy, Eamonn; Hardikar, Rohan; Fox, Adrian; Mackay, Sean

    2014-01-01

    The popularity of the Internet has enabled unprecedented access to health information. As a largely unregulated source, there is potential for inconsistency in the quality of information that reaches the patient. To review the literature relating to the quality indicators of health information for patients on the Internet. A search of English language literature was conducted using PubMed, Google Scholar and EMBASE databases. Many articles have been published which assess the quality of information relating to specific medical conditions. Indicators of quality have been defined in an attempt to predict higher quality health information on the Internet. Quality evaluation tools are scoring systems based on indicators of quality. Established tools such as the HONcode may help patients navigate to more reliable information. Google and Wikipedia are important emerging sources of patient health information. The Internet is crucial for modern dissemination of health information, but it is clear that quality varies significantly between sources. Quality indicators for web-information have been developed but there is no agreed standard yet. We envisage that reliable rating tools, effective search engine ranking and progress in crowd-edited websites will enhance patient access to health information on the Internet.

  6. 15 CFR 995.4 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE QUALITY ASSURANCE AND CERTIFICATION... of the 1974 SOLAS Convention. Electronic Navigational Chart (ENC) means a database, standardized as... National Oceanic and Atmospheric Administration. NOAA ENC files comply with the IHO S-57 standard, Edition...

  7. 15 CFR 995.4 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE QUALITY ASSURANCE AND CERTIFICATION... of the 1974 SOLAS Convention. Electronic Navigational Chart (ENC) means a database, standardized as... National Oceanic and Atmospheric Administration. NOAA ENC files comply with the IHO S-57 standard, Edition...

  8. 15 CFR 995.4 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE QUALITY ASSURANCE AND CERTIFICATION... of the 1974 SOLAS Convention. Electronic Navigational Chart (ENC) means a database, standardized as... National Oceanic and Atmospheric Administration. NOAA ENC files comply with the IHO S-57 standard, Edition...

  9. 15 CFR 995.4 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE QUALITY ASSURANCE AND CERTIFICATION... of the 1974 SOLAS Convention. Electronic Navigational Chart (ENC) means a database, standardized as... National Oceanic and Atmospheric Administration. NOAA ENC files comply with the IHO S-57 standard, Edition...

  10. Addendum to the Handbook of Accreditation. Second Edition.

    ERIC Educational Resources Information Center

    North Central Association of Colleges and Schools, Chicago, IL. Higher Learning Commission.

    This document supplements information provided in the "Handbook of Accreditation," Second Edition (Commission on Institutions of Higher Education). The Addendum contains the information necessary to keep readers informed of changes in policies and procedures while the Commission is engaged in an initiative to revise its Eligibility…

  11. Local Notice to Mariners

    DOT National Transportation Integrated Search

    1997-08-01

    The monthly edition has information concerning the waterways of the First Coast Guard District. Weekly supplemental editions containing only new information will be sent for the rest of the month. NOTE: Chart corrections and Light List changes are pub...

  12. A comprehensive overview of computational resources to aid in precision genome editing with engineered nucleases.

    PubMed

    Periwal, Vinita

    2017-07-01

    Genome editing with engineered nucleases (zinc finger nucleases, TAL effector nucleases and clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated nucleases) has recently been shown to have great promise in a variety of therapeutic and biotechnological applications. However, their exploitation in genetic analysis and clinical settings largely depends on their specificity for the intended genomic target. Large and complex genomes often contain highly homologous/repetitive sequences, which limits the specificity of genome editing tools and could result in off-target activity. Over the past few years, various computational approaches have been developed to assist the design process and predict/reduce the off-target activity of these nucleases. These tools could be efficiently used to guide the design of constructs for engineered nucleases and evaluate results after genome editing. This review provides a comprehensive overview of various databases, tools, web servers and resources for genome editing and compares their features and functionalities. Additionally, it also describes tools that have been developed to analyse post-genome editing results. The article also discusses important design parameters that could be considered while designing these nucleases. This review is intended to be a quick reference guide for experimentalists as well as computational biologists working in the field of genome editing with engineered nucleases. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
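
    As a naive sketch of the kind of off-target screen such tools automate, the code below slides a guide sequence along a genomic string and reports every site within a mismatch budget. The sequences are invented, and real tools additionally model PAM requirements, bulges and position-dependent mismatch weighting.

        def off_target_hits(guide: str, genome: str, max_mismatches: int = 2):
            """Return (position, mismatches) for every window within the mismatch budget."""
            hits = []
            for start in range(len(genome) - len(guide) + 1):
                window = genome[start:start + len(guide)]
                mismatches = sum(1 for g, w in zip(guide, window) if g != w)
                if mismatches <= max_mismatches:
                    hits.append((start, mismatches))
            return hits

        guide = "GACGTTAC"                          # invented 8-nt guide, shortened for brevity
        genome = "TTGACGTTACGGAACGTTACTT"           # invented genomic fragment
        print(off_target_hits(guide, genome))       # [(2, 0), (12, 1)]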

  13. Introduction to United States Government Information Sources. Fifth Edition. Library and Information Science Text Series.

    ERIC Educational Resources Information Center

    Morehead, Joe

    This book seeks to provide an account of general and specialized sources, in print and non-print formats, that comprise the bibliographic and textual structure of federal government information. This particular edition endeavors to update and broaden discussion of government information available electronically via the Internet. Chapters are: (1)…

  14. The HITRAN2016 Molecular Spectroscopic Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gordon, I. E.; Rothman, L. S.; Hill, C.

    This article describes the contents of the 2016 edition of the HITRAN molecular spectroscopic compilation. The new edition replaces the previous HITRAN edition of 2012 and its updates during the intervening years. The HITRAN molecular absorption compilation is composed of five major components: the traditional line-by-line spectroscopic parameters required for high-resolution radiative-transfer codes, infrared absorption cross-sections for molecules not yet amenable to representation in a line-by-line form, collision-induced absorption data, aerosol indices of refraction, and general tables such as partition sums that apply globally to the data. The new HITRAN is greatly extended in terms of accuracy, spectral coverage, additional absorption phenomena, added line-shape formalisms, and validity. Moreover, molecules, isotopologues, and perturbing gases have been added that address the issues of atmospheres beyond the Earth. Of considerable note, experimental IR cross-sections for almost 300 additional molecules important in different areas of atmospheric science have been added to the database. The compilation can be accessed through www.hitran.org. Most of the HITRAN data have now been cast into an underlying relational database structure that offers many advantages over the long-standing sequential text-based structure. The new structure empowers the user in many ways. It enables the incorporation of an extended set of fundamental parameters per transition, sophisticated line-shape formalisms, easy user-defined output formats, and very convenient searching, filtering, and plotting of data. Finally, a powerful application programming interface making use of structured query language (SQL) features for higher-level applications of HITRAN is also provided.

  15. The HITRAN2016 Molecular Spectroscopic Database

    DOE PAGES

    Gordon, I. E.; Rothman, L. S.; Hill, C.; ...

    2017-07-05

    This article describes the contents of the 2016 edition of the HITRAN molecular spectroscopic compilation. The new edition replaces the previous HITRAN edition of 2012 and its updates during the intervening years. The HITRAN molecular absorption compilation is composed of five major components: the traditional line-by-line spectroscopic parameters required for high-resolution radiative-transfer codes, infrared absorption cross-sections for molecules not yet amenable to representation in a line-by-line form, collision-induced absorption data, aerosol indices of refraction, and general tables such as partition sums that apply globally to the data. The new HITRAN is greatly extended in terms of accuracy, spectral coverage, additional absorption phenomena, added line-shape formalisms, and validity. Moreover, molecules, isotopologues, and perturbing gases have been added that address the issues of atmospheres beyond the Earth. Of considerable note, experimental IR cross-sections for almost 300 additional molecules important in different areas of atmospheric science have been added to the database. The compilation can be accessed through www.hitran.org. Most of the HITRAN data have now been cast into an underlying relational database structure that offers many advantages over the long-standing sequential text-based structure. The new structure empowers the user in many ways. It enables the incorporation of an extended set of fundamental parameters per transition, sophisticated line-shape formalisms, easy user-defined output formats, and very convenient searching, filtering, and plotting of data. Finally, a powerful application programming interface making use of structured query language (SQL) features for higher-level applications of HITRAN is also provided.

  16. Design and Implementation of a Three-Tiered Web-Based Inventory Ordering and Tracking System Prototype Using CORBA and Java

    DTIC Science & Technology

    2000-03-01

    languages yet still be able to access the legacy relational databases that businesses have huge investments in. JDBC is a low-level API designed for... consider the return on investment. The system requirements, discussed in Chapter II, are the main source of input to developing the relational... 1996. Inprise, Gatekeeper Guide, Inprise Corporation, 1999. Kroenke, D., Database Processing: Fundamentals, Design, and Implementation, Sixth Edition.

  17. [DNAStat, version 1.2 -- a software package for processing genetic profile databases and biostatistical calculations].

    PubMed

    Berent, Jarosław

    2007-01-01

    This paper presents the new DNAStat version 1.2 for processing genetic profile databases and biostatistical calculations. This new version contains, besides all the options of its predecessor 1.0, a calculation-results file export option in .xls format for Microsoft Office Excel, as well as the option of importing/exporting the population base of systems as .txt files for processing in Microsoft Notepad or EditPad.

  18. The PartnerWeb Project: a component-based approach to enterprise-wide information integration and dissemination.

    PubMed Central

    Karson, T. H.; Perkins, C.; Dixon, C.; Ehresman, J. P.; Mammone, G. L.; Sato, L.; Schaffer, J. L.; Greenes, R. A.

    1997-01-01

    A component-based health information resource, delivered on an intranet and the Internet, utilizing World Wide Web (WWW) technology, has been built to meet the needs of a large integrated delivery network (IDN). Called PartnerWeb, this resource is intended to provide a variety of health care and reference information to both practitioners and consumers/patients. The initial target audience has been providers. Content management for the numerous departments, divisions, and other organizational entities within the IDN is accomplished by a distributed authoring and editing environment. Structured entry into databases using a set of form tools facilitates consistency of information presentation, while empowering designated authors and editors in the various entities to be responsible for their own materials without requiring them to be technically skilled. Each form tool manages an encapsulated component. The output of each component can be a dynamically generated display on WWW platforms, or an appropriate interface to other presentation environments. The PartnerWeb project lays the foundation for both an internal and external communication infrastructure for the enterprise that can facilitate information dissemination. PMID:9357648

  19. 2014 Edition Release 2 Electronic Health Record (EHR) certification criteria and the ONC HIT Certification Program; regulatory flexibilities, improvements, and enhanced health information exchange. Final rule.

    PubMed

    2014-09-11

    This final rule introduces regulatory flexibilities and general improvements for certification to the 2014 Edition EHR certification criteria (2014 Edition). It also codifies a few revisions and updates to the ONC HIT Certification Program for certification to the 2014 Edition and future editions of certification criteria as well as makes administrative updates to the Code of Federal Regulations.

  20. From BIM to GIS at the Smithsonian Institution

    NASA Astrophysics Data System (ADS)

    Günther-Diringer, Detlef

    2018-05-01

    BIM files (Building Information Models) are, in modern architecture and building management, a basic prerequisite for the successful creation of construction engineering projects. The facilities department of the Smithsonian Institution maintains more than six hundred buildings. All facilities are digitally available in an ESRI ArcGIS environment, with a connection to database information about individual rooms, including their usage and further maintenance information. These data are available organization-wide through an intranet viewer, but only in a two-dimensional representation. The goal of the project carried out was the development of a workflow from the available BIM models to the given GIS structure. The test environment comprised the BIM models of the buildings of the Smithsonian museums along the Washington Mall. Based on new software editions of Autodesk Revit, FME and ArcGIS Pro, the workflow from BIM to the GIS data structure of the Smithsonian was successfully developed and may be applied for the setup of the future 3D intranet viewer.

  1. MetPetDB: A database for metamorphic geochemistry

    NASA Astrophysics Data System (ADS)

    Spear, Frank S.; Hallett, Benjamin; Pyle, Joseph M.; Adalı, Sibel; Szymanski, Boleslaw K.; Waters, Anthony; Linder, Zak; Pearce, Shawn O.; Fyffe, Matthew; Goldfarb, Dennis; Glickenhouse, Nickolas; Buletti, Heather

    2009-12-01

    We present a data model for the initial implementation of MetPetDB, a geochemical database specific to metamorphic rock samples. The database is designed around the concept of preservation of spatial relationships, at all scales, of chemical analyses and their textural setting. Objects in the database (samples) represent physical rock samples; each sample may contain one or more subsamples with associated geochemical and image data. Samples, subsamples, geochemical data, and images are described with attributes (some required, some optional); these attributes also serve as search delimiters. All data in the database are classified as published (i.e., archived or published data), public or private. Public and published data may be freely searched and downloaded. All private data is owned; permission to view, edit, download and otherwise manipulate private data may be granted only by the data owner; all such editing operations are recorded by the database to create a data version log. The sharing of data permissions among a group of collaborators researching a common sample is done by the sample owner through the project manager. User interaction with MetPetDB is hosted by a web-based platform based upon the Java servlet application programming interface, with the PostgreSQL relational database. The database web portal includes modules that allow the user to interact with the database: registered users may save and download public and published data, upload private data, create projects, and assign permission levels to project collaborators. An Image Viewer module provides for spatial integration of image and geochemical data. A toolkit consisting of plotting and geochemical calculation software for data analysis and a mobile application for viewing the public and published data is being developed. Future issues to address include population of the database, integration with other geochemical databases, development of the analysis toolkit, creation of data models for derivative data, and building a community-wide user base. It is believed that this and other geochemical databases will enable more productive collaborations, generate more efficient research efforts, and foster new developments in basic research in the field of solid earth geochemistry.
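
    A compact sketch, using Python dataclasses rather than the project's actual PostgreSQL/Java implementation, of the ownership and visibility relationships the data model describes: samples own subsamples, subsamples carry spatially referenced analyses and images, and private data is visible only to users the owner has granted permission. All field and user names are illustrative.

        from dataclasses import dataclass, field
        from enum import Enum

        class Visibility(Enum):
            PUBLISHED = "published"
            PUBLIC = "public"
            PRIVATE = "private"

        @dataclass
        class Analysis:
            mineral: str
            oxide_wt_percent: dict      # e.g. {"SiO2": 37.1, "Al2O3": 21.0}
            x_um: float                 # spatial position kept with the analysis
            y_um: float

        @dataclass
        class Subsample:
            name: str
            analyses: list = field(default_factory=list)
            image_files: list = field(default_factory=list)

        @dataclass
        class Sample:
            name: str
            owner: str
            visibility: Visibility
            shared_with: set = field(default_factory=set)   # granted by the owner via a project
            subsamples: list = field(default_factory=list)

            def readable_by(self, user: str) -> bool:
                if self.visibility in (Visibility.PUBLISHED, Visibility.PUBLIC):
                    return True
                return user == self.owner or user in self.shared_with

        s = Sample("GAR-04-12", owner="alice", visibility=Visibility.PRIVATE, shared_with={"bob"})
        s.subsamples.append(Subsample("thin section A",
                                      analyses=[Analysis("garnet", {"SiO2": 37.1}, 120.5, 88.0)]))
        print(s.readable_by("bob"), s.readable_by("anonymous"))   # True False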

  2. 5S ribosomal RNA database Y2K

    PubMed Central

    Szymanski, Maciej; Barciszewska, Miroslawa Z.; Barciszewski, Jan; Erdmann, Volker A.

    2000-01-01

    This paper presents the updated version (Y2K) of the database of ribosomal 5S ribonucleic acids (5S rRNA) and their genes (5S rDNA), http://rose.man.poznan.pl/5SData/index.html. This edition of the database contains 1985 primary structures of 5S rRNA and 5S rDNA. They include 60 archaebacterial, 470 eubacterial, 63 plastid, nine mitochondrial and 1383 eukaryotic sequences. The nucleotide sequences of the 5S rRNAs or 5S rDNAs are divided according to the taxonomic position of the source organisms. PMID:10592212

  3. 5S ribosomal RNA database Y2K.

    PubMed

    Szymanski, M; Barciszewska, M Z; Barciszewski, J; Erdmann, V A

    2000-01-01

    This paper presents the updated version (Y2K) of the database of ribosomal 5S ribonucleic acids (5S rRNA) and their genes (5S rDNA), http://rose.man.poznan.pl/5SData/index.html. This edition of the database contains 1985 primary structures of 5S rRNA and 5S rDNA. They include 60 archaebacterial, 470 eubacterial, 63 plastid, nine mitochondrial and 1383 eukaryotic sequences. The nucleotide sequences of the 5S rRNAs or 5S rDNAs are divided according to the taxonomic position of the source organisms.

  4. The presence and accuracy of food and nutrition terms in the Spanish and English editions of Wikipedia: in comparison with the Mini Larousse encyclopaedia.

    PubMed

    Cabrera-Hernández, Laura María; Wanden-Berghe, Carmina; Curbelo Castro, Celeste; Sanz-Valero, Javier

    2014-10-02

    To determine the presence and appropriateness of the terminology concerning Food/Nutrition Science in the Spanish and English editions of Wikipedia and to compare them with that of an encyclopaedia for general use (Mini Larousse). Methods: The terms in the study were taken from the LID dictionary on metabolism and nutrition. The existence and appropriateness of the selected terms were checked through a random sample estimate with no replacement (n=386), using the Spanish and English editions of Wikipedia. The existence of 261 terms in the Spanish edition and 306 in the English edition was determined from the study sample (n=386). Several differences were found between the two editions (p<0.001). There were differences between the two editions in relation to the appropriateness of definitions, though these were not studied in any depth (p<0.001). During the study of the 261 terms in the Spanish version of Wikipedia, 3 entries (1.15%, 95% CI: 0.00-2.44) were found to be lacking in appropriate information; 2 of the 306 entries in the English edition failed to give appropriate information (0.52%, 95% CI: 0.00-1.23). A comparison between the existing entries of the Mini Larousse Encyclopaedia and the Spanish edition of Wikipedia showed Wikipedia (p<0.001) as having a larger number of entries. The terminology under study is present to a lesser extent in the Spanish edition of Wikipedia than in the English edition. The appropriateness of content was greater in the English edition. Both the Spanish and English editions have a greater number of entries and more exact ones than the Mini Larousse. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.

  5. HITRAN2016 Database Part II: Overview of the Spectroscopic Parameters of the Trace Gases

    NASA Astrophysics Data System (ADS)

    Tan, Yan; Gordon, Iouli E.; Rothman, Laurence S.; Kochanov, Roman V.; Hill, Christian

    2017-06-01

    The 2016 edition of the HITRAN database is now available. This new edition takes advantage of the new database structure and can be accessed through HITRANonline (www.hitran.org). The line-by-line lists for almost all of the trace atmospheric species were updated with respect to the previous edition, HITRAN2012. These extensive updates cover not only revisions of selected transitions of certain molecules, but also complete replacements of whole line lists, as well as the introduction of new spectroscopic parameters for non-Voigt line shapes. The new line lists for NH3, HNO3, OCS, HCN, CH3Cl, C2H2, C2H6, PH3, C2H4, CH3CN, CF4, C4H2, and SO3 feature substantial expansion of the spectral and dynamic ranges, in addition to improved accuracy of the parameters for already existing lines. A semi-empirical procedure was developed to update the air-broadening and self-broadening coefficients of N2O, SO2, NH3, CH3Cl, H2S, and HO2. We draw particular attention to flaws in the commonly used expression n_air = 0.79 n_N2 + 0.21 n_O2 for determining the air-broadening temperature-dependence exponent in the power law from those for nitrogen and oxygen broadening; a more meaningful approach will be presented. Semi-empirical line widths, pressure shifts and temperature-dependence exponents of CO, NH3, HF, HCl, OCS, C2H2 and SO2 perturbed by H2, He, and CO2 have been added to the database based on the algorithm described in Wilzewski et al. New spectroscopic parameters for the HT (Hartmann-Tran) line-shape profile were implemented into the database for the hydrogen molecule. The HITRAN database is supported by the NASA AURA program grant NNX14AI55G and NASA PDART grant NNX16AG51G. I. E. Gordon, L. S. Rothman, et al., J Quant Spectrosc Radiat Transf 2017; submitted. Hill C, et al., J Quant Spectrosc Radiat Transf 2013;130:51-61. Wilzewski JS, et al., J Quant Spectrosc Radiat Transf 2016;168:193-206. Wcislo P, et al., J Quant Spectrosc Radiat Transf 2016;177:75-91.
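    For readers unfamiliar with the convention, the temperature-dependence exponent enters HITRAN's power law for the air-broadened half-width, gamma(T) = gamma(T_ref) * (T_ref/T)^n_air with T_ref = 296 K. A minimal sketch of that power law and of the simple nitrogen/oxygen mixing rule criticized above follows; the numerical values are placeholders, not HITRAN data.

    ```python
    # Sketch of the HITRAN power-law convention for air-broadened half-widths and
    # of the simple mixing rule for the exponent that the abstract cautions
    # against. Numbers below are placeholders, not HITRAN parameters.
    T_REF = 296.0  # K, HITRAN reference temperature

    def gamma_air(gamma_ref: float, n_air: float, T: float) -> float:
        """Air-broadened half-width at temperature T:
        gamma(T) = gamma(T_ref) * (T_ref / T) ** n_air."""
        return gamma_ref * (T_REF / T) ** n_air

    def n_air_simple_mix(n_N2: float, n_O2: float) -> float:
        """Commonly used (and, per the abstract, flawed) estimate of the air
        exponent from the nitrogen- and oxygen-broadening exponents."""
        return 0.79 * n_N2 + 0.21 * n_O2

    # Placeholder values: half-width 0.07 cm-1/atm at 296 K, exponents 0.73/0.65.
    print(gamma_air(gamma_ref=0.07, n_air=n_air_simple_mix(0.73, 0.65), T=250.0))
    ```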

  6. Nonlinear dimensionality reduction methods for synthetic biology biobricks' visualization.

    PubMed

    Yang, Jiaoyun; Wang, Haipeng; Ding, Huitong; An, Ning; Alterovitz, Gil

    2017-01-19

    Visualizing data by dimensionality reduction is an important strategy in bioinformatics, which can help to discover hidden data properties and detect data quality issues, e.g. data noise or inappropriately labeled data. As crowdsourcing-based synthetic biology databases face similar data quality issues, we propose to visualize biobricks to tackle them. However, existing dimensionality reduction methods cannot be directly applied to biobrick datasets. We therefore use normalized edit distance to enhance dimensionality reduction methods, including Isomap and Laplacian Eigenmaps. By extracting biobricks from the synthetic biology database Registry of Standard Biological Parts, six combinations of various types of biobricks are tested. The visualization graphs illustrate discriminated biobricks and inappropriately labeled biobricks. The clustering algorithm K-means is adopted to quantify the reduction results. The average clustering accuracies for Isomap and Laplacian Eigenmaps are 0.857 and 0.844, respectively. Moreover, Laplacian Eigenmaps is five times faster than Isomap, and its visualization graph is more concentrated, making it easier to discriminate biobricks. By combining normalized edit distance with Isomap and Laplacian Eigenmaps, synthetic biology biobricks are successfully visualized in two-dimensional space. Various types of biobricks can be discriminated and inappropriately labeled biobricks can be identified, which helps to assess the quality of crowdsourcing-based synthetic biology databases and to guide biobrick selection.
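    A minimal sketch of the described pipeline (normalized edit distance feeding Isomap and Laplacian Eigenmaps, then K-means), not the authors' code. It assumes a scikit-learn version whose Isomap accepts a precomputed distance matrix; the toy sequences are invented, and the distance-to-affinity conversion used for SpectralEmbedding is one common choice, not necessarily the paper's.

    ```python
    # Sketch: embed sequences from a normalized-edit-distance matrix, then cluster.
    import numpy as np
    from sklearn.manifold import Isomap, SpectralEmbedding
    from sklearn.cluster import KMeans

    def edit_distance(a: str, b: str) -> int:
        # classic dynamic-programming Levenshtein distance
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
            prev = cur
        return prev[-1]

    seqs = ["atgcatgc", "atgcaagc", "ttttcccc", "ttttccgc"]  # toy biobrick sequences
    n = len(seqs)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = edit_distance(seqs[i], seqs[j]) / max(len(seqs[i]), len(seqs[j]))
            D[i, j] = D[j, i] = d  # normalized edit distance

    iso_xy = Isomap(n_neighbors=2, n_components=2, metric="precomputed").fit_transform(D)
    lap_xy = SpectralEmbedding(n_components=2, affinity="precomputed").fit_transform(np.exp(-D))
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(lap_xy)
    print(labels)
    ```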

  7. VisANT 3.0: new modules for pathway visualization, editing, prediction and construction.

    PubMed

    Hu, Zhenjun; Ng, David M; Yamada, Takuji; Chen, Chunnuan; Kawashima, Shuichi; Mellor, Joe; Linghu, Bolan; Kanehisa, Minoru; Stuart, Joshua M; DeLisi, Charles

    2007-07-01

    With the integration of the KEGG and Predictome databases as well as two search engines for coexpressed genes/proteins using data sets obtained from the Stanford Microarray Database (SMD) and Gene Expression Omnibus (GEO) database, VisANT 3.0 supports exploratory pathway analysis, which includes multi-scale visualization of multiple pathways, editing and annotating pathways using a KEGG compatible visual notation and visualization of expression data in the context of pathways. Expression levels are represented either by color intensity or by nodes with an embedded expression profile. Multiple experiments can be navigated or animated. Known KEGG pathways can be enriched by querying either coexpressed components of known pathway members or proteins with known physical interactions. Predicted pathways for genes/proteins with unknown functions can be inferred from coexpression or physical interaction data. Pathways produced in VisANT can be saved as computer-readable XML format (VisML), graphic images or high-resolution Scalable Vector Graphics (SVG). Pathways in the format of VisML can be securely shared within an interested group or published online using a simple Web link. VisANT is freely available at http://visant.bu.edu.

  8. Information for Families. Communique Special Edition, Summer 1998.

    ERIC Educational Resources Information Center

    Canter, Andrea, Ed.

    "Communique" is the "official newsletter of the National Association of School Psychologists" (NASP). This "Special Edition" of "Communique" is a compilation of previously published articles all of which bear on information frequently requested by families. NASP is dedicated to building partnerships between…

  9. First Season Catfish Farming. A Workbook for Beginning Pond and Cage Culture of Channel Catfish. Teacher Edition and Student Edition.

    ERIC Educational Resources Information Center

    Oklahoma State Board of Vocational and Technical Education, Stillwater. Curriculum and Instructional Materials Center.

    This workbook, comprised of both the teacher and student editions, presents guidelines useful for first-year catfish farmers in Oklahoma using pond or cage cultures to raise channel catfish. The teacher edition is a set of unit guidelines only. Contents include a list of suggested readings, important addresses with types of information available…

  10. Computerized system for the follow-up of patients with heart valve replacements.

    PubMed

    Bain, W H; Fyfe, I C; Rodger, R A

    1985-04-01

    A system is described which will accept, store, retrieve and analyze information on large numbers of patients who undergo valve replacement surgery. The purpose of the database is to yield readily available facts concerning the patient's clinical course, prosthetic valve function, length of survival, and incidence of complications. The system uses the Apple Macintosh computer, which is one of the current examples of small, desk-top microprocessors. The software for the input, editing and analysis programs has been written by a professional software writer in close collaboration with a cardiac surgeon. Its content is based on 8 years' experience of computer-based valve follow-up. The system is inexpensive and has proved easy to use in practice.

  11. A DNA sequence analysis package for the IBM personal computer.

    PubMed Central

    Lagrimini, L M; Brentano, S T; Donelson, J E

    1984-01-01

    We present here a collection of DNA sequence analysis programs, called "PC Sequence" (PCS), which are designed to run on the IBM Personal Computer (PC). These programs are written in IBM PC compiled BASIC and take full advantage of the IBM PC's speed, error handling, and graphics capabilities. For a modest initial expense in hardware any laboratory can use these programs to quickly perform computer analysis on DNA sequences. They are written with the novice user in mind and require very little training or previous experience with computers. Also provided are a text editing program for creating and modifying DNA sequence files and a communications program which enables the PC to communicate with and collect information from mainframe computers and DNA sequence databases. PMID:6546433

  12. Managing the Incompetent Teacher. Second Edition.

    ERIC Educational Resources Information Center

    Bridges, Edwin M.

    Featuring the same practical guidelines for ridding schools of incompetent teachers as the 1984 edition, this new edition incorporates substantially revised material on three topics: criteria and information sources for evaluating teaching effectiveness, remediation procedures, and grounds for dismissal. The book presents an eight-step systematic,…

  13. The Bowker Annual Library and Book Trade Almanac, 2002. 47th Edition.

    ERIC Educational Resources Information Center

    Bogart, Dave, Ed.

    This 47th edition is a compilation of practical information and informed analysis of interest to the library, information, and book trade worlds. The volume is divided into six parts. Part 1 includes six Special Reports, as well as reports on the year's activities from federal agencies, federal libraries, and national and international library and…

  14. Medical research in Israel and the Israel biomedical database.

    PubMed

    Berns, D S; Rager-Zisman, B

    2000-11-01

    The data collected for the second edition of the Directory of Medical Research in Israel and the Israel Biomedical Database have yielded very relevant information concerning the distribution of investigators, publication activities and funding sources. The aggregate data confirm the findings of the first edition published in 1996 [2]. Those facts endorse the highly concentrated and extensive nature of medical research in the Jerusalem area, which is conducted at the Hebrew University and its affiliated hospitals. In contrast, Tel Aviv University, whose basic research staff is about two-thirds the size of the Hebrew University staff, has a more diffuse relationship with its clinical staff who are located at more than half a dozen hospitals. Ben-Gurion University in Beer Sheva and the Technion in Haifa are smaller in size, but have closer geographic contact between their clinical and basic research staff. Nonetheless, all the medical schools and affiliated hospitals have good publication and funding records. It is important to note that while some aspects of the performance at basic research institutions seem to be somewhat better than at hospitals, the records are actually quite similar despite the greater burden of clinical services at the hospitals as compared to teaching responsibilities in the basic sciences. The survey also indicates the substantial number of young investigators in the latest survey who did not appear in the first survey. While this is certainly encouraging, it is also disturbing that the funding sources are apparently decreasing at a time when young investigators are attempting to become established and the increasing burden of health care costs precludes financial assistance from hospital sources. The intensity and undoubtedly the quality of medical research in Israel remain at a level consistent with many of the more advanced western countries. This conclusion is somewhat mitigated by the fact that there is a decrease in available funding and a measurable decrease in scholarly activity at a time when a new, younger generation of investigators is just beginning to become productive. In closing, we wish to stress that the collection of data for the Biomedical Database is a continuing project and we encourage all medical researchers who may not have contributed relevant information to write to the Office of the Chief Scientist or contact the office by email.

  15. Selected Reference Books of 1993-1994.

    ERIC Educational Resources Information Center

    McIlvaine, Eileen

    1994-01-01

    Offers brief, critical reviews of recent scholarly and general works of interest to reference workers in university libraries. Titles covered include dictionaries, databases, religion, literature, music, dance, art and architecture, business, political science, social issues, and history. Brief descriptions of new editions and supplements for…

  16. Diesel Technology: Workplace Skills. Teacher Edition and Student Edition.

    ERIC Educational Resources Information Center

    Kellum, Mary

    This publication consists of instructional materials to provide secondary and postsecondary students with skills useful in pursuing a career in the diesel industry. Introductory materials in the teacher edition include information on use of the publication, competency profile, instructional/task analysis, related academic and workplace skills…

  17. Evaluating the Potential of Commercial GIS for Accelerator Configuration Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    T.L. Larrieu; Y.R. Roblin; K. White

    2005-10-10

    The Geographic Information System (GIS) is a tool used by industries needing to track information about spatially distributed assets. A water utility, for example, must know not only the precise location of each pipe and pump, but also the respective pressure rating and flow rate of each. In many ways, an accelerator such as CEBAF (Continuous Electron Beam Accelerator Facility) can be viewed as an "electron utility". Whereas the water utility uses pipes and pumps, the "electron utility" uses magnets and RF cavities. At Jefferson Lab we are exploring the possibility of implementing ESRI's ArcGIS as the framework for building an all-encompassing accelerator configuration database that integrates location, configuration, maintenance, and connectivity details of all hardware and software. The possibilities of doing so are intriguing. From the GIS, software such as the model server could always extract the most up-to-date layout information maintained by the Survey & Alignment group for lattice modeling. The Mechanical Engineering department could use ArcGIS tools to generate CAD drawings of machine segments from the same database. Ultimately, the greatest benefit of the GIS implementation could be to liberate operators and engineers from the limitations of the current system-by-system view of machine configuration and allow a more integrated regional approach. The commercial GIS package provides a rich set of tools for database connectivity, versioning, distributed editing, importing and exporting, and graphical analysis and querying, and therefore obviates the need for much custom development. However, formidable challenges to implementation exist, and these challenges are not only technical and manpower issues, but also organizational ones. The GIS approach would crosscut organizational boundaries and require departments, which heretofore have had free rein to manage their own data, to cede some control and agree to a centralized framework.

  18. Education-Stratified Base-Rate Information on Discrepancy Scores Within and Between the Wechsler Adult Intelligence Scale-Third Edition and the Wechsler Memory Scale-Third Edition

    ERIC Educational Resources Information Center

    Dori, Galit A.; Chelune, Gordon J.

    2004-01-01

    The Wechsler Adult Intelligence Scale--Third Edition (WAIS-III; D. Wechsler, 1997a) and the Wechsler Memory Scale--Third Edition (WMS-III; D. Wechsler, 1997b) are 2 of the most frequently used measures in psychology and neuropsychology. To facilitate the diagnostic use of these measures in the clinical decision-making process, this article…

  19. An efficient system for selectively altering genetic information within mRNAs

    PubMed Central

    Montiel-González, Maria Fernanda; Vallecillo-Viejo, Isabel C.; Rosenthal, Joshua J. C.

    2016-01-01

    Site-directed RNA editing (SDRE) is a strategy to precisely alter genetic information within mRNAs. By linking the catalytic domain of the RNA editing enzyme ADAR to an antisense guide RNA, specific adenosines can be converted to inosines, biological mimics of guanosine. Previously, we showed that a genetically encoded iteration of SDRE could target adenosines expressed in human cells, but not efficiently. Here we developed a reporter assay to quantify editing and used it to improve our strategy. By enhancing the linkage between ADAR's catalytic domain and the guide RNA, and by introducing a mutation in the catalytic domain, the efficiency of converting a UAG premature termination codon (PTC) to tryptophan (UGG) was improved from ∼11% to ∼70%. Other PTCs were edited, but less efficiently. Numerous off-target edits were identified in the targeted mRNA, but not in randomly selected endogenous messages. Off-target edits could be eliminated by reducing the amount of guide RNA, albeit with a reduction in on-target editing. The catalytic rate of SDRE was compared with those of human ADARs on various substrates and found to be within an order of magnitude of most. These data underscore the promise of site-directed RNA editing as a therapeutic or experimental tool. PMID:27557710

  20. A proposal for cervical screening information systems in developing countries.

    PubMed

    Marrett, Loraine D; Robles, Sylvia; Ashbury, Fredrick D; Green, Bo; Goel, Vivek; Luciani, Silvana

    2002-11-20

    The effective and efficient delivery of cervical screening programs requires information for planning, management, delivery and evaluation. Specially designed systems are generally required to meet these needs. In many developing countries, lack of information systems constitutes an important barrier to development of comprehensive screening programs and the effective control of cervical cancer. Our report outlines a framework for creating such systems in developing countries and describes a conceptual model for a cervical screening information system. The proposed system is modular, recognizing that there will be considerable between-region heterogeneity in current status and priorities. The proposed system is centered on modules that would allow for the assembly and computerization of data on Pap tests, since these represent the main screening modality at the present time. Additional modules would process data and create and maintain a screening database (e.g., standardize, edit, link and update modules) and allow for the integration of other types of data, such as cervical histopathology results. An open systems development model is proposed, since it is most compatible with the goals of local stakeholder involvement and capacity-building. Copyright 2002 Wiley-Liss, Inc.
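    The article proposes a conceptual model rather than software, but the module idea can be sketched: a raw Pap-test record is standardized, edited (validated) and then linked into a screening history. Everything below (field names, result codes, rules) is invented purely for illustration.

    ```python
    # Illustrative sketch only of the standardize / edit / link-and-update modules
    # described above; not part of the proposed system.
    from dataclasses import dataclass

    @dataclass
    class PapTest:
        woman_id: str
        test_date: str      # ISO date string
        result: str         # e.g. "NEGATIVE", "ASCUS", "LSIL", "HSIL" (invented codes)

    def standardize(raw: dict) -> PapTest:
        # bring identifiers and result codes into a common form
        return PapTest(
            woman_id=raw["id"].strip().upper(),
            test_date=raw["date"],
            result=raw["result"].strip().upper(),
        )

    def edit(rec: PapTest) -> bool:
        # minimal validity checks; a real edit module would apply many more rules
        return bool(rec.woman_id) and rec.result in {"NEGATIVE", "ASCUS", "LSIL", "HSIL"}

    def link_and_update(rec: PapTest, registry: dict) -> None:
        # link on the woman identifier and append to her screening history
        registry.setdefault(rec.woman_id, []).append(rec)

    registry: dict = {}
    raw = {"id": " a123 ", "date": "2002-05-14", "result": "negative"}
    rec = standardize(raw)
    if edit(rec):
        link_and_update(rec, registry)
    ```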

  1. Section 619 Profile, 19th Edition

    ERIC Educational Resources Information Center

    Lazara, A., Ed.; Danaher, J., Ed.; Kraus, R., Ed.; Goode, S., Ed.; Hipps, C., Ed.; Festa, C., Ed.

    2012-01-01

    This 2012 edition of this publication updates information provided by state coordinators on state policies, programs, and practices under the Preschool Grants Program (Section 619 of Part B) of the Individuals with Disabilities Education Act (IDEA). Information includes: (1) program administration; (2) funding; (3) interagency coordination; (4)…

  2. Handbook of Reference Sources. Third Edition.

    ERIC Educational Resources Information Center

    Nichols, Margaret Irby

    This third edition of popular and useful reference works, which emphasizes the needs of small libraries, contains 975 annotated entries and lists 201 additional titles (most with bibliographic and order information) in the annotations, representing an expansion of 30 percent over the second edition. The appendix lists 116 basic or core reference…

  3. Handbook of Research on Teaching the English Language Arts. Second Edition.

    ERIC Educational Resources Information Center

    Flood, James, Ed.; Lapp, Diane, Ed.; Squire, James R., Ed.; Jensen, Julie M., Ed.

    This updated second edition reflects developments in educational research and new information within the areas of language learning and instruction since the publication of the first edition in 1991. Its 75 essays assess the significance of research, evaluate new developments, and examine current conflicts, controversies, and issues, while…

  4. Genome-Independent Identification of RNA Editing by Mutual Information (GIREMI) | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    Identification of single-nucleotide variants in RNA-seq data. Current version focuses on detection of RNA editing sites without requiring genome sequence data. New version is under development to separately identify RNA editing sites and genetic variants using RNA-seq data alone.

  5. Web Thermo Tables (WTT) - Professional Edition

    National Institute of Standards and Technology Data Gateway

    SRD 203 NIST/TRC Web Thermo Tables (WTT) - Professional Edition (Online Subscription)   WTT - Professional Edition, a Web version of the TRC Thermodynamic Tables, represents a complete collection of critically evaluated thermodynamic property data primarily for pure organic compounds. As of Nov. 2011, WTT contains information on 23999 compounds.

  6. Introduction to United States Government Information Sources. Sixth Edition. Library and Information Science Text Series.

    ERIC Educational Resources Information Center

    Morehead, Joe

    This book provides an account of the general and specialized sources, in print and non-print formats, that make up the bibliographic and textual structure of federal government information. This edition has been revised to reflect the many changes that have occurred in the production and dissemination of government products within the last five…

  7. Sub-Saharan Africa Report

    DTIC Science & Technology

    1985-11-14

    official foreign reserves, and the general recognition in the market that there has been a continuous shortage of dollars. Whatever entered the forex ... employer by misusing his privileged access to the payroll system and editing the personnel database on payday to increase his monthly salary by

  8. 78 FR 1562 - Improving Government Regulations; Unified Agenda of Federal Regulatory and Deregulatory Actions

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-08

    ... statutory administration requirements as required. Starting with the fall 2007 edition, the Internet became... Agenda database. Because publication in the Federal Register is mandated for the regulatory flexibility.... Michael L. Rhodes, Director, Administration and Management. Defense Acquisition Regulations Council...

  9. ICD-11 and DSM-5 personality trait domains capture categorical personality disorders: Finding a common ground.

    PubMed

    Bach, Bo; Sellbom, Martin; Skjernov, Mathias; Simonsen, Erik

    2018-05-01

    The five personality disorder trait domains in the proposed International Classification of Diseases, 11th edition and the Diagnostic and Statistical Manual of Mental Disorders, 5th edition are comparable in terms of Negative Affectivity, Detachment, Antagonism/Dissociality and Disinhibition. However, the International Classification of Diseases, 11th edition model includes a separate domain of Anankastia, whereas the Diagnostic and Statistical Manual of Mental Disorders, 5th edition model includes an additional domain of Psychoticism. This study examined associations of International Classification of Diseases, 11th edition and Diagnostic and Statistical Manual of Mental Disorders, 5th edition trait domains, simultaneously, with categorical personality disorders. Psychiatric outpatients (N = 226) were administered the Structured Clinical Interview for DSM-IV Axis II Personality Disorders and the Personality Inventory for DSM-5. International Classification of Diseases, 11th edition and Diagnostic and Statistical Manual of Mental Disorders, 5th edition trait domain scores were obtained using pertinent scoring algorithms for the Personality Inventory for DSM-5. Associations between categorical personality disorders and trait domains were examined using correlation and multiple regression analyses. Both the International Classification of Diseases, 11th edition and the Diagnostic and Statistical Manual of Mental Disorders, 5th edition domain models showed relevant continuity with categorical personality disorders and captured a substantial amount of their information. As expected, the International Classification of Diseases, 11th edition model was superior in capturing obsessive-compulsive personality disorder, whereas the Diagnostic and Statistical Manual of Mental Disorders, 5th edition model was superior in capturing schizotypal personality disorder. These preliminary findings suggest that little information is 'lost' in a transition to trait domain models, and they potentially add to narrowing the gap between the Diagnostic and Statistical Manual of Mental Disorders, 5th edition and the proposed International Classification of Diseases, 11th edition model. Accordingly, the International Classification of Diseases, 11th edition and Diagnostic and Statistical Manual of Mental Disorders, 5th edition domain models may be used to delineate one another as well as features of familiar categorical personality disorder types. A preliminary category-to-domain 'cross walk' is provided in the article.

  10. Efficiently Distributing Component-based Applications Across Wide-Area Environments

    DTIC Science & Technology

    2002-01-01

    a variety of sophisticated network-accessible services such as e-mail, banking, on-line shopping, entertainment, and serving as a data exchange...product database Customer Serves as a façade to Order and Account Stateful Session Beans ShoppingCart Maintains list of items to be bought by customer...Pet Store tests; and JBoss 3.0.3 with Jetty 4.1.0, for the RUBiS tests) and a single database server (Oracle 8.1.7 Enterprise Edition), each running

  11. Global Bathymetry: Machine Learning for Data Editing

    NASA Astrophysics Data System (ADS)

    Sandwell, D. T.; Tea, B.; Freund, Y.

    2017-12-01

    The accuracy of global bathymetry depends primarily on the coverage and accuracy of the sounding data and secondarily on the depth predicted from gravity. A main focus of our research is to add newly available data to the global compilation. Most data sources have 1-12% of erroneous soundings caused by a wide array of blunders and measurement errors. Over the years we have hand-edited these data using undergraduate employees at UCSD (440 million soundings at 500 m resolution). We are developing a machine learning approach to refine the flagging of the older soundings and provide automated editing of newly acquired soundings. The approach has three main steps: 1) Combine the sounding data with additional information that may inform the machine learning algorithm. The additional parameters include: depth predicted from gravity; distance to the nearest sounding from other cruises; seafloor age; spreading rate; sediment thickness; and vertical gravity gradient. 2) Use available edit decisions as training data sets for a boosted tree algorithm with a binary logistic objective function and L2 regularization. Initial results with poor-quality single-beam soundings show that the automated algorithm matches the hand-edited data 89% of the time. The results show that most of the information for detecting outliers comes from predicted depth, with secondary contributions from distance to the nearest sounding and longitude. A similar analysis using very high quality multibeam data shows that the automated algorithm matches the hand-edited data 93% of the time. Again, most of the information for detecting outliers comes from predicted depth, with secondary contributions from distance to the nearest sounding and longitude. 3) The third step in the process is to use the machine learning parameters, derived from the training data, to edit 12 million newly acquired single-beam soundings provided by the National Geospatial-Intelligence Agency. The output of the learning algorithm will be confidence rated, indicating which edits the algorithm is confident about and which it is not. We expect the majority (90%) of edits to be confident and not require human intervention. Human intervention will be required only for the 10% of unconfident decisions, thus reducing the amount of human work by a factor of 10 or more.
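    A hedged sketch of step 2, not the authors' code: a boosted-tree classifier with a binary logistic objective and L2 regularization trained on the listed features, with predicted probabilities reused as confidence ratings. The file, column names and hyperparameters are assumptions; XGBoost is used here as one implementation of such a boosted-tree model.

    ```python
    # Sketch (assumed setup) of flagging bad soundings with a boosted-tree
    # classifier; column and file names are placeholders, not the authors' data.
    import pandas as pd
    from xgboost import XGBClassifier
    from sklearn.model_selection import train_test_split

    feature_cols = ["depth_predicted", "dist_nearest_sounding", "seafloor_age",
                    "spreading_rate", "sediment_thickness", "vgg", "longitude"]

    df = pd.read_csv("soundings_training.csv")   # hand-edited cruises; "flagged" is 0/1
    X_tr, X_te, y_tr, y_te = train_test_split(df[feature_cols], df["flagged"],
                                              test_size=0.2, random_state=0)

    clf = XGBClassifier(objective="binary:logistic",  # binary logistic objective
                        reg_lambda=1.0,               # L2 regularization
                        n_estimators=500, max_depth=6, learning_rate=0.05)
    clf.fit(X_tr, y_tr)

    # Probabilities act as confidence ratings: only low-confidence decisions
    # (probabilities near 0.5) would be passed on for human review.
    proba = clf.predict_proba(X_te)[:, 1]
    needs_review = (proba > 0.1) & (proba < 0.9)
    print(f"fraction needing human review: {needs_review.mean():.2f}")
    ```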

  12. RNA Editing and Its Molecular Mechanism in Plant Organelles

    PubMed Central

    Ichinose, Mizuho; Sugita, Mamoru

    2016-01-01

    RNA editing by cytidine (C) to uridine (U) conversions is widespread in plant mitochondria and chloroplasts. In some plant taxa, “reverse” U-to-C editing also occurs. However, to date, no instance of RNA editing has yet been reported in green algae and the complex thalloid liverworts. RNA editing may have evolved in early land plants 450 million years ago. However, in some plant species, including the liverwort, Marchantia polymorpha, editing may have been lost during evolution. Most RNA editing events can restore the evolutionarily conserved amino acid residues in mRNAs or create translation start and stop codons. Therefore, RNA editing is an essential process to maintain genetic information at the RNA level. Individual RNA editing sites are recognized by plant-specific pentatricopeptide repeat (PPR) proteins that are encoded in the nuclear genome. These PPR proteins are characterized by repeat elements that bind specifically to RNA sequences upstream of target editing sites. In flowering plants, non-PPR proteins also participate in multiple RNA editing events as auxiliary factors. C-to-U editing can be explained by cytidine deamination. The proteins discovered to date are important factors for RNA editing but a bona fide RNA editing enzyme has yet to be identified. PMID:28025543

  13. Compendium of fruit fly host information (CoFFHI), edition 3.0

    USDA-ARS?s Scientific Manuscript database

    The Compendium of Fruit Fly Host Information (CoFFHI), edition 3.0 (available at: https://coffhi.cphst.org/), developed through collaborative efforts of scientists in USDA-APHIS, USDA-ARS, and the Center for Integrated Pest Management (CIPM) of North Carolina State University (NCSU), provides centra...

  14. Compendium of fruit fly host information (CoFFHI), edition 2.0

    USDA-ARS?s Scientific Manuscript database

    The Compendium of Fruit Fly Host Information (CoFFHI), edition 2.0, developed through collaborative efforts of scientists in USDA-APHIS, USDA-ARS, and the Center for Integrated Pest Management (CIPM) of North Carolina State University (NCSU), provides centralized online documentation of what is know...

  15. The Educators' Handbook to Interactive Videodisc. Second Edition.

    ERIC Educational Resources Information Center

    Schwartz, Ed

    Designed to be a source of information for educators about interactive videodiscs, this handbook presents an overview of the technology and offers additional sources to be consulted for more detailed information. It is noted that, although this second edition of a 1985 publication has gone through extensive changes, clarifications, and…

  16. The Complete Learning Disabilities Directory. 2017 Edition

    ERIC Educational Resources Information Center

    Grey House Publishing, 2016

    2016-01-01

    Published for over a decade, this directory continues to be a successful, sought-after resource, providing valuable information to professionals, families, and individuals in the learning disabilities community. Supported by the National Center for Learning Disabilities, this 2017 edition brings together the most up-to-date information on LD…

  17. The Complete Learning Disabilities Directory. 2011 Edition

    ERIC Educational Resources Information Center

    Grey House Publishing, 2010

    2010-01-01

    Published for over a decade, this directory continues to be a successful, sought-after resource, providing valuable information to professionals, families, and individuals in the learning disabilities community. Supported by the National Center for Learning Disabilities, this 2011 edition brings together the most up-to-date information on LD…

  18. Psychometric Properties of Language Assessments for Children Aged 4–12 Years: A Systematic Review

    PubMed Central

    Denman, Deborah; Speyer, Renée; Munro, Natalie; Pearce, Wendy M.; Chen, Yu-Wei; Cordier, Reinie

    2017-01-01

    Introduction: Standardized assessments are widely used by speech pathologists in clinical and research settings to evaluate the language abilities of school-aged children and inform decisions about diagnosis, eligibility for services and intervention. Given the significance of these decisions, it is important that assessments have sound psychometric properties. Objective: The aim of this systematic review was to examine the psychometric quality of currently available comprehensive language assessments for school-aged children and identify assessments with the best evidence for use. Methods: Using the PRISMA framework as a guideline, a search of five databases and a review of websites and textbooks was undertaken to identify language assessments and published material on the reliability and validity of these assessments. The methodological quality of selected studies was evaluated using the COSMIN taxonomy and checklist. Results: Fifteen assessments were evaluated. For most assessments evidence of hypothesis testing (convergent and discriminant validity) was identified; with a smaller number of assessments having some evidence of reliability and content validity. No assessments presented with evidence of structural validity, internal consistency or error measurement. Overall, all assessments were identified as having limitations with regards to evidence of psychometric quality. Conclusions: Further research is required to provide good evidence of psychometric quality for currently available language assessments. Of the assessments evaluated, the Assessment of Literacy and Language, the Clinical Evaluation of Language Fundamentals-5th Edition, the Clinical Evaluation of Language Fundamentals-Preschool: 2nd Edition and the Preschool Language Scales-5th Edition presented with most evidence and are thus recommended for use. PMID:28936189

  19. Gene therapy clinical trials worldwide to 2017: An update.

    PubMed

    Ginn, Samantha L; Amaya, Anais K; Alexander, Ian E; Edelstein, Michael; Abedi, Mohammad R

    2018-03-25

    To date, almost 2600 gene therapy clinical trials have been completed, are ongoing or have been approved worldwide. Our database brings together global information on gene therapy clinical activity from trial databases, official agency sources, published literature, conference presentations and posters kindly provided to us by individual investigators or trial sponsors. This review presents our analysis of clinical trials that, to the best of our knowledge, have been or are being performed worldwide. As of our November 2017 update, we have entries on 2597 trials undertaken in 38 countries. We have analysed the geographical distribution of trials, the disease indications (or other reasons) for trials, the proportions to which different vector types are used, and the genes that have been transferred. Details of the analyses presented, and our searchable database are available via The Journal of Gene Medicine Gene Therapy Clinical Trials Worldwide website at: http://www.wiley.co.uk/genmed/clinical. We also provide an overview of the progress being made in gene therapy clinical trials around the world, and discuss key trends since the previous review, namely the use of chimeric antigen receptor T cells for the treatment of cancer and advancements in genome editing technologies, which have the potential to transform the field moving forward. Copyright © 2018 John Wiley & Sons, Ltd.

  20. [Revision of the TNM Stage Grouping in the Forthcoming Eighth Edition of the TNM Classification for Lung Cancer].

    PubMed

    Ye, Bo; Zhao, Heng

    2016-06-20

    The currently adopted staging system for lung cancer is the seventh edition of the TNM staging, edited by the Union for International Cancer Control (UICC) in January 2009. In recent years, with advances in lung cancer diagnosis and the trend towards precision treatment modalities such as individualized therapy and molecular targeted therapy, the survival and prognosis of lung cancer have significantly improved. The old staging standard has difficulty satisfying current, rapidly developing clinical needs. Therefore, the International Association for the Study of Lung Cancer (IASLC) updated the staging of lung cancer in 2015, and the forthcoming eighth edition of the TNM Classification for Lung Cancer, which will be formally adopted in January 2017, has been published in the Journal of Thoracic Oncology. The new staging system draws on 35 databases from 16 countries, including 94,708 cases treated between 1999 and 2010. The advantages of the new staging lie in its higher prognostic prediction and clinical guidance value.

  1. Assessing availability of scientific journals, databases, and health library services in Canadian health ministries: a cross-sectional study.

    PubMed

    Léon, Grégory; Ouimet, Mathieu; Lavis, John N; Grimshaw, Jeremy; Gagnon, Marie-Pierre

    2013-03-21

    Evidence-informed health policymaking logically depends on timely access to research evidence. To our knowledge, despite the substantial political and societal pressure to enhance the use of the best available research evidence in public health policy and program decision making, there is no study addressing availability of peer-reviewed research in Canadian health ministries. To assess availability of (1) a purposive sample of high-ranking scientific journals, (2) bibliographic databases, and (3) health library services in the fourteen Canadian health ministries. From May to October 2011, we conducted a cross-sectional survey among librarians employed by Canadian health ministries to collect information relative to availability of scientific journals, bibliographic databases, and health library services. Availability of scientific journals in each ministry was determined using a sample of 48 journals selected from the 2009 Journal Citation Reports (Sciences and Social Sciences Editions). Selection criteria were: relevance for health policy based on scope note information about subject categories and journal popularity based on impact factors. We found that the majority of Canadian health ministries did not have subscription access to key journals and relied heavily on interlibrary loans. Overall, based on a sample of high-ranking scientific journals, availability of journals through interlibrary loans, online and print-only subscriptions was estimated at 63%, 28% and 3%, respectively. Health Canada had a 2.3-fold higher number of journal subscriptions than that of the provincial ministries' average. Most of the organisations provided access to numerous discipline-specific and multidisciplinary databases. Many organisations provided access to the library resources described through library partnerships or consortia. No professionally led health library environment was found in four out of fourteen Canadian health ministries (i.e. Manitoba Health, Northwest Territories Department of Health and Social Services, Nunavut Department of Health and Social Services and Yukon Department of Health and Social Services). There is inequity in availability of peer-reviewed research in the fourteen Canadian health ministries. This inequity could present a problem, as each province and territory is responsible for formulating and implementing evidence-informed health policies and services for the benefit of its population.

  2. Assessing availability of scientific journals, databases, and health library services in Canadian health ministries: a cross-sectional study

    PubMed Central

    2013-01-01

    Background Evidence-informed health policymaking logically depends on timely access to research evidence. To our knowledge, despite the substantial political and societal pressure to enhance the use of the best available research evidence in public health policy and program decision making, there is no study addressing availability of peer-reviewed research in Canadian health ministries. Objectives To assess availability of (1) a purposive sample of high-ranking scientific journals, (2) bibliographic databases, and (3) health library services in the fourteen Canadian health ministries. Methods From May to October 2011, we conducted a cross-sectional survey among librarians employed by Canadian health ministries to collect information relative to availability of scientific journals, bibliographic databases, and health library services. Availability of scientific journals in each ministry was determined using a sample of 48 journals selected from the 2009 Journal Citation Reports (Sciences and Social Sciences Editions). Selection criteria were: relevance for health policy based on scope note information about subject categories and journal popularity based on impact factors. Results We found that the majority of Canadian health ministries did not have subscription access to key journals and relied heavily on interlibrary loans. Overall, based on a sample of high-ranking scientific journals, availability of journals through interlibrary loans, online and print-only subscriptions was estimated at 63%, 28% and 3%, respectively. Health Canada had a 2.3-fold higher number of journal subscriptions than that of the provincial ministries’ average. Most of the organisations provided access to numerous discipline-specific and multidisciplinary databases. Many organisations provided access to the library resources described through library partnerships or consortia. No professionally led health library environment was found in four out of fourteen Canadian health ministries (i.e. Manitoba Health, Northwest Territories Department of Health and Social Services, Nunavut Department of Health and Social Services and Yukon Department of Health and Social Services). Conclusions There is inequity in availability of peer-reviewed research in the fourteen Canadian health ministries. This inequity could present a problem, as each province and territory is responsible for formulating and implementing evidence-informed health policies and services for the benefit of its population. PMID:23514333

  3. Read Code Quality Assurance

    PubMed Central

    Schulz, Erich; Barrett, James W.; Price, Colin

    1998-01-01

    As controlled clinical vocabularies assume an increasing role in modern clinical information systems, so the issue of their quality demands greater attention. In order to meet the resulting stringent criteria for completeness and correctness, a quality assurance system comprising a database of more than 500 rules is being developed and applied to the Read Thesaurus. The authors discuss the requirement to apply quality assurance processes to their dynamic editing database in order to ensure the quality of exported products. Sources of errors include human, hardware, and software factors as well as new rules and transactions. The overall quality strategy includes prevention, detection, and correction of errors. The quality assurance process encompasses simple data specification, internal consistency, inspection procedures and, eventually, field testing. The quality assurance system is driven by a small number of tables and UNIX scripts, with “business rules” declared explicitly as Structured Query Language (SQL) statements. Concurrent authorship, client-server technology, and an initial failure to implement robust transaction control have all provided valuable lessons. The feedback loop for error management needs to be short. PMID:9670131

  4. Sprawl in European urban areas

    NASA Astrophysics Data System (ADS)

    Prastacos, Poulicos; Lagarias, Apostolos

    2016-08-01

    In this paper the 2006 edition of the Urban Atlas database is used to tabulate areas of low development density, usually referred to as "sprawl", for many European cities. The Urban Atlas database contains information on the land use distribution in the 305 largest European cities. Twenty different land use types are recognized, with six of them representing urban fabric. Urban fabric classes are residential areas differentiated by the density of development, which is measured by the sealing degree parameter that ranges from 0% to 100% (non-developed, fully developed). Analysis is performed on the distribution of the middle to low density areas defined as those with sealing degree less than 50%. Seven different country groups in which urban areas have similar sprawl characteristics are identified and some key characteristics of sprawl are discussed. Population of an urban area is another parameter considered in the analysis. Two spatial metrics, average patch size and mean distance to the nearest neighboring patch of the same class, are used to describe proximity/separation characteristics of sprawl in the urban areas of the seven groups.
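    The two spatial metrics named above are straightforward to compute from the polygon layer; the sketch below is illustrative only. The file name, the hypothetical numeric sealing-degree attribute used to select the sub-50% classes, and the assumption of a metric CRS are all guesses, not documented facts about the Urban Atlas delivery.

    ```python
    # Hedged sketch: mean patch size and mean nearest-neighbour distance for
    # low-density (sealing degree < 50%) urban fabric polygons in one city.
    import geopandas as gpd

    ua = gpd.read_file("urban_atlas_city.gpkg")          # placeholder city layer
    low_density = ua[ua["sealing_pct"] < 50].copy()      # hypothetical attribute name

    # metric 1: average patch size in hectares (assumes a metric CRS)
    low_density["area_ha"] = low_density.geometry.area / 10_000
    mean_patch_ha = low_density["area_ha"].mean()

    # metric 2: mean distance from each patch to its nearest neighbouring patch
    def mean_nearest_neighbour_distance(gdf):
        dists = []
        for idx, geom in gdf.geometry.items():
            others = gdf.geometry.drop(idx)              # all other patches
            dists.append(others.distance(geom).min())    # distance to the closest one
        return sum(dists) / len(dists)

    mean_nn_m = mean_nearest_neighbour_distance(low_density)
    print(f"mean patch size: {mean_patch_ha:.1f} ha, mean NN distance: {mean_nn_m:.0f} m")
    ```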

  5. Read Code quality assurance: from simple syntax to semantic stability.

    PubMed

    Schulz, E B; Barrett, J W; Price, C

    1998-01-01

    As controlled clinical vocabularies assume an increasing role in modern clinical information systems, so the issue of their quality demands greater attention. In order to meet the resulting stringent criteria for completeness and correctness, a quality assurance system comprising a database of more than 500 rules is being developed and applied to the Read Thesaurus. The authors discuss the requirement to apply quality assurance processes to their dynamic editing database in order to ensure the quality of exported products. Sources of errors include human, hardware, and software factors as well as new rules and transactions. The overall quality strategy includes prevention, detection, and correction of errors. The quality assurance process encompasses simple data specification, internal consistency, inspection procedures and, eventually, field testing. The quality assurance system is driven by a small number of tables and UNIX scripts, with "business rules" declared explicitly as Structured Query Language (SQL) statements. Concurrent authorship, client-server technology, and an initial failure to implement robust transaction control have all provided valuable lessons. The feedback loop for error management needs to be short.
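    The rule-driven design (quality rules declared as SQL statements and executed against the editing database) can be illustrated with a toy example. The tables, rules and the use of sqlite3 below are invented for the sketch; the real system used its own tables and UNIX scripts.

    ```python
    # Illustrative sketch only: "business rules" stored as SQL statements that each
    # return violating rows, run against an editing database.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE concept (code TEXT PRIMARY KEY, preferred_term TEXT, status TEXT);
    INSERT INTO concept VALUES ('G30..', 'Acute myocardial infarction', 'current');
    INSERT INTO concept VALUES ('X1234', '', 'current');        -- violates rule 1
    INSERT INTO concept VALUES ('H33..', 'Asthma', 'retird');   -- violates rule 2
    """)

    rules = {
        "every current concept has a preferred term":
            "SELECT code FROM concept WHERE status = 'current' AND preferred_term = ''",
        "status must be a known value":
            "SELECT code FROM concept WHERE status NOT IN ('current', 'retired')",
    }

    for name, sql in rules.items():
        violations = [row[0] for row in con.execute(sql)]
        if violations:
            print(f"RULE FAILED: {name}: {violations}")
    ```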

  6. Financing Education in a Climate of Change. Eighth Edition.

    ERIC Educational Resources Information Center

    Brimley, Vern, Jr.; Garfield, Rulon R.

    Since the publication of the seventh edition of this textbook in 1999, there have been many new developments in the education finance arena. Those changes are discussed in this eighth edition. Additional new material includes Internet resources, new exercises for further "laboratory" work, updated figures and tables, and fresh information on court…

  7. Minimum Qualifications for Faculty and Administrators in California Community Colleges. [Fifth Edition

    ERIC Educational Resources Information Center

    California Community Colleges, Sacramento. Office of the Chancellor.

    This document is the fifth edition of Minimum Qualifications for Faculty and Administrators in California Community Colleges and it updates information presented in the last edition. The document is divided into the following sections: disciplines requiring a Master's degree, disciplines in which a Master's degree is not generally expected for…

  8. The Handbook of Literacy Assessment and Evaluation. Second Edition.

    ERIC Educational Resources Information Center

    Harp, Bill

    This handbook gives teachers, reading specialists, administrators, or students concise, up-to-date information on the most popular assessment and evaluation tools in literacy. This second edition retains many of the tools reviewed in the first edition and adds 12 new tools. The first section reviews 24 tools that are teacher-made. The second…

  9. Data Input for Libraries: State-of-the-Art Report.

    ERIC Educational Resources Information Center

    Buckland, Lawrence F.

    This brief overview of new manuscript preparation methods which allow authors and editors to set their own type discusses the advantages and disadvantages of optical character recognition (OCR), microcomputers and personal computers, minicomputers, and word processors for editing and database entry. Potential library applications are also…

  10. University of Iowa at TREC 2008 Legal and Relevance Feedback Tracks

    DTIC Science & Technology

    2008-11-01

    Fellbaum, C. [ed.]. Wordnet: An Electronic Lexical Database. Cambridge: MIT Press, 1998. [3] Salton, G. (ed) (1971), The SMART Retrieval System...learning tools and techniques. 2nd Edition. San Francisco: Morgan Kaufmann, 2005. [5] Platt, J. Machines using Sequential Minimal Optimization. [ed.] B

  11. Far infrared supplement. Third edition: Catalog of infrared observations (lambda greater than or equal to 4.6 micrometers)

    NASA Technical Reports Server (NTRS)

    Gezari, Daniel Y.; Schmitz, Marion; Pitts, Patricia S.; Mead, Jaylee M.

    1993-01-01

    The Far Infrared Supplement contains a subset of the data in the full Catalog of Infrared Observations (all observations at wavelengths greater than 4.6 microns). The Catalog of Infrared Observations (CIO), NASA RP-1294, is a compilation of infrared astronomical observational data obtained from an extensive literature search of scientific journals and major astronomical catalogs and surveys. The literature search is complete for years 1965 through 1990 in this third edition. The catalog contains about 210,000 observations of roughly 20,000 individual sources, and supporting appendices. The expanded third edition contains coded IRAS 4-band data for all CIO sources detected by IRAS. The appendices include an atlas of infrared source positions (also included in this volume), two bibliographies of catalog listings, and an atlas of infrared spectral ranges. The complete CIO database is available to qualified users in printed, microfiche, and magnetic tape formats.

  12. Catalog of Infrared Observations, Third Edition

    NASA Technical Reports Server (NTRS)

    Gezari, Daniel Y.; Schmitz, Marion; Pitts, Patricia S.; Mead, Jaylee M.

    1993-01-01

    The Far Infrared Supplement contains a subset of the data in the full Catalog of Infrared Observations (all observations at wavelengths greater than 4.6 microns). The Catalog of Infrared Observations (CIO), NASA RP-1294, is a compilation of infrared astronomical observational data obtained from an extensive literature search of scientific journals and major astronomical catalogs and surveys. The literature search is complete for years 1965 through 1990 in this Third Edition. The Catalog contains about 210,000 observations of roughly 20,000 individual sources and supporting appendices. The expanded Third Edition contains coded IRAS 4-band data for all CIO sources detected by IRAS. The appendices include an atlas of infrared source positions (also included in this volume), two bibliographies of Catalog listings, and an atlas of infrared spectral ranges. The complete CIO database is available to qualified users in printed, microfiche, and magnetic-tape formats.

  13. Apollo: a community resource for genome annotation editing

    PubMed Central

    Lee, Ed; Harris, Nomi; Gibson, Mark; Chetty, Raymond; Lewis, Suzanna

    2009-01-01

    Summary: Apollo is a genome annotation-editing tool with an easy to use graphical interface. It is a component of the GMOD project, with ongoing development driven by the community. Recent additions to the software include support for the generic feature format version 3 (GFF3), continuous transcriptome data, a full Chado database interface, integration with remote services for on-the-fly BLAST and Primer BLAST analyses, graphical interfaces for configuring user preferences and full undo of all edit operations. Apollo's user community continues to grow, including its use as an educational tool for college and high-school students. Availability: Apollo is a Java application distributed under a free and open source license. Installers for Windows, Linux, Unix, Solaris and Mac OS X are available at http://apollo.berkeleybop.org, and the source code is available from the SourceForge CVS repository at http://gmod.cvs.sourceforge.net/gmod/apollo. Contact: elee@berkeleybop.org PMID:19439563

  14. Apollo: a community resource for genome annotation editing.

    PubMed

    Lee, Ed; Harris, Nomi; Gibson, Mark; Chetty, Raymond; Lewis, Suzanna

    2009-07-15

    Apollo is a genome annotation-editing tool with an easy to use graphical interface. It is a component of the GMOD project, with ongoing development driven by the community. Recent additions to the software include support for the generic feature format version 3 (GFF3), continuous transcriptome data, a full Chado database interface, integration with remote services for on-the-fly BLAST and Primer BLAST analyses, graphical interfaces for configuring user preferences and full undo of all edit operations. Apollo's user community continues to grow, including its use as an educational tool for college and high-school students. Apollo is a Java application distributed under a free and open source license. Installers for Windows, Linux, Unix, Solaris and Mac OS X are available at http://apollo.berkeleybop.org, and the source code is available from the SourceForge CVS repository at http://gmod.cvs.sourceforge.net/gmod/apollo.
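    As a reminder of the data format mentioned above, a GFF3 feature line consists of nine tab-separated columns with a final key=value attribute field; the short sketch below parses one such line. The example record is generic and not taken from any Apollo distribution.

    ```python
    # Minimal GFF3 line parser: nine tab-separated columns, attributes as key=value
    # pairs separated by semicolons. The example line is invented.
    line = "chr1\texample\tgene\t1300\t9000\t.\t+\t.\tID=gene00001;Name=EDEN"

    cols = line.rstrip("\n").split("\t")
    seqid, source, ftype, start, end, score, strand, phase, attrs = cols
    attributes = dict(kv.split("=", 1) for kv in attrs.split(";") if kv)

    print(ftype, int(start), int(end), strand, attributes["ID"])
    ```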

  15. Data entry module and manuals for the Land Treatment Digital Library

    USGS Publications Warehouse

    Welty, Justin L.; Pilliod, David S.

    2013-01-01

    Across the country, public land managers make decisions each year that influence landscapes and ecosystems within their jurisdictions. Many of these decisions involve vegetation manipulations, which often are referred to as land treatments. These treatments include removal or alteration of plant biomass, seeding of burned areas, application of herbicides, and other activities. Data documenting these land treatments usually are stored at local management offices in various formats. Therefore, anyone interested in the types and effects of land treatments across multiple jurisdictions must first assemble the information, which can be difficult if data discovery and organization involve multiple local offices. A centralized system for storing and accessing the data helps inform land managers when making policy and management considerations and assists scientists in developing sampling designs and studies. The Land Treatment Digital Library (LTDL) was created by the U.S. Geological Survey (USGS) as a comprehensive database incorporating tabular data, documentation, photographs, and spatial data about land treatments in a single system. It was developed over a period of several years and refined based on feedback from partner agencies and stakeholders. Currently, Bureau of Land Management (BLM) land treatment data are being entered by USGS personnel as part of a memorandum of understanding between the USGS and BLM. The LTDL has a website maintained by the USGS Forest and Rangeland Ecosystem Science Center where LTDL data can be viewed http://ltdl.wr.usgs.gov/. The resources and information provided in this data series allow other agencies, organizations, and individuals to download an empty, stand-alone LTDL database to individual or networked computers. Data entered in these databases may be submitted to the USGS for possible inclusion in the online LTDL. Multiple computer programs are used to accomplish the objective of the LTDL. The support of an information-technology specialist or professionals familiar with Microsoft Access™, ESRI’s ArcGIS™, Python, Adobe Acrobat Professional™, and computer settings is essential when installing and operating the LTDL. After the program is operational, a critical element for successful data entry is an understanding of the difference between database tables and forms, and how to edit data in both formats. Complete instructions accompany the program, and they should be followed carefully to ensure the setup and operation of the database goes smoothly.

  16. LISTA, LISTA-HOP and LISTA-HON: a comprehensive compilation of protein encoding sequences and its associated homology databases from the yeast Saccharomyces.

    PubMed Central

    Dölz, R; Mossé, M O; Slonimski, P P; Bairoch, A; Linder, P

    1996-01-01

    We continued our effort to make a comprehensive database (LISTA) for the yeast Saccharomyces cerevisiae. As in previous editions, the genetic names are consistently associated with each sequence with a known and confirmed ORF. If necessary, synonyms are given in the case of allelic duplicated sequences. Although the first publication of a sequence gives, according to our rules, the genetic name of a gene, in some instances more commonly used names are given to avoid nomenclature problems and the use of ancient designations which are no longer in use. In these cases the old designation is given as a synonym. Thus sequences can be found either by the name or by synonyms given in LISTA. Each entry contains the genetic name, the mnemonic from the EMBL data bank, the codon bias, the reference of the publication of the sequence, the chromosomal location as far as known, and SWISSPROT and EMBL accession numbers. New entries will also contain the name from the systematic sequencing efforts. Since the release of LISTA 4.1 we have updated the database continuously. To obtain more information on the included sequences, each entry has been screened against non-redundant nucleotide and protein data bank collections, resulting in LISTA-HON and LISTA-HOP. This release includes reports from full Smith and Waterman peptide-level searches against a non-redundant protein sequence database. The LISTA database can be linked to the associated data sets or to nucleotide and protein banks by the Sequence Retrieval System (SRS). The database is available by FTP and on the World Wide Web. PMID:8594599

  17. [Presence and adequacy of the nutritional and eating disorders terminology in the Spanish and English editions of the Wikipedia].

    PubMed

    Sanz-Valero, J; Guardiola-Wanden-Berghe, R; Castiel, L D

    2012-11-01

    To determine the presence and to assess the adequacy of the nutritional and eating disorders descriptors in the English and Spanish Wikipedia. The terms were obtained from the thesauri Medical Subject Headings (MeSH) and APA-Terms. The existence of the terms was confirmed by accessing the Spanish and English editions of Wikipedia via the Internet (http://es.wikipedia.org/). The last date for consultation and calculations was June 8, 2012. A total of 89 descriptors were identified, 56 (62.92%) of which were present as terms in the Wikipedia: 42 (47.19%) in the Spanish edition and 56 (62.92%) in English. Significant differences between the two editions were found (chi-square = 9.41, df = 1, P < 0.001). Differences between the editions were also observed in the number of references given for each term (Student's t = -2.43, df = 84.87, P = 0.017). However, there were no differences in whether the information was up to date or obsolete, nor in the number of queries. The entries related to nutritional and eating disorders terms have not yet reached an optimum level. Differences between the English and Spanish Wikipedia editions relate more to criteria of content (term existence) than to adequacy of information. The English edition of Wikipedia has stronger scientific endorsement, through the references cited, than the Spanish edition.
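
    As a rough illustration of the kind of comparison reported above, the sketch below runs a chi-square test on the 2x2 presence/absence table implied by the counts in the abstract. The scipy dependency and the test variant (unpaired, no continuity correction) are assumptions, so the statistic computed here need not match the reported value of 9.41.

      # Illustrative only: chi-square test on the 2x2 presence/absence table
      # implied by the abstract (89 descriptors; 42 present in Spanish, 56 in
      # English). The exact test variant used in the study is not specified,
      # so the statistic computed here need not equal the reported 9.41.
      from scipy.stats import chi2_contingency

      total = 89
      present_spanish, present_english = 42, 56
      table = [
          [present_spanish, total - present_spanish],   # Spanish: present, absent
          [present_english, total - present_english],   # English: present, absent
      ]

      chi2, p_value, dof, _ = chi2_contingency(table, correction=False)
      print(f"chi-square = {chi2:.2f}, df = {dof}, P = {p_value:.4f}")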

  18. Making Your News Service More Effective. Second Edition.

    ERIC Educational Resources Information Center

    Berger, Joel S., Ed.

    This handbook is intended as a reference tool for use in the college or university's public information office, news bureau, information services, or public relations office. An update of an earlier edition, it presents articles by a number of authors, some from CASE Currents, bulletin of the Council for Advancement and Support of Education. It…

  19. State Postsecondary Education Profiles Handbook. 1978 Edition. Report No. 88

    ERIC Educational Resources Information Center

    Education Commission of the States, Denver, CO.

    The third edition of the Profiles Handbook presents information about postsecondary education in the 50 states and the District of Columbia. Information about each state is organized into four main parts as follows: Part I contains a narrative description of the state-level coordinating or governing agency, institutional governing boards, master…

  20. Grants for Libraries & Information Services. 2012 Digital Edition

    ERIC Educational Resources Information Center

    Foundation Center, 2011

    2011-01-01

    This publication is only available as a downloadable file. See who's giving and getting grants in your field. Strengthen your search for funds with the Foundation Center's digital edition of "Grants for Libraries & Information Services." This new "Grant Guide" reveals the scope of current foundation giving in the field. You'll find descriptions of…

  1. Motion Pattern Encapsulation for Data-Driven Constraint-Based Motion Editing

    NASA Astrophysics Data System (ADS)

    Carvalho, Schubert R.; Boulic, Ronan; Thalmann, Daniel

    The growth of motion capture systems has contributed to the proliferation of human motion databases, mainly because human motion is important in many applications, ranging from games, entertainment and films to sports and medicine. However, captured motions normally serve specific needs. As an effort toward adapting and reusing captured human motions in new tasks and environments and improving the animator's work, we present and discuss a new data-driven constraint-based animation system for interactive human motion editing. This method offers the compelling advantage that it provides faster deformations and more natural-looking motion results compared to goal-directed constraint-based methods found in the literature.

  2. Engineered Viruses as Genome Editing Devices.

    PubMed

    Chen, Xiaoyu; Gonçalves, Manuel A F V

    2016-03-01

    Genome editing based on sequence-specific designer nucleases, also known as programmable nucleases, seeks to modify in a targeted and precise manner the genetic information content of living cells. Delivering into cells designer nucleases alone or together with donor DNA templates, which serve as surrogate homologous recombination (HR) substrates, can result in gene knockouts or gene knock-ins, respectively. As engineered replication-defective viruses, viral vectors are having an increasingly important role as delivery vehicles for donor DNA templates and designer nucleases, namely, zinc-finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs) and clustered, regularly interspaced, short palindromic repeats (CRISPR)-associated Cas9 (CRISPR-Cas9) nucleases, also known as RNA-guided nucleases (RGNs). We review this dual role played by engineered viral particles on genome editing while focusing on their main scaffolds, consisting of lentiviruses, adeno-associated viruses, and adenoviruses. In addition, the coverage of the growing body of research on the repurposing of viral vectors as delivery systems for genome editing tools is complemented with information regarding their main characteristics, pros, and cons. Finally, this information is framed by a concise description of the chief principles, tools, and applications of the genome editing field as a whole.

  3. Engineered Viruses as Genome Editing Devices

    PubMed Central

    Chen, Xiaoyu; Gonçalves, Manuel A F V

    2016-01-01

    Genome editing based on sequence-specific designer nucleases, also known as programmable nucleases, seeks to modify in a targeted and precise manner the genetic information content of living cells. Delivering into cells designer nucleases alone or together with donor DNA templates, which serve as surrogate homologous recombination (HR) substrates, can result in gene knockouts or gene knock-ins, respectively. As engineered replication-defective viruses, viral vectors are having an increasingly important role as delivery vehicles for donor DNA templates and designer nucleases, namely, zinc-finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs) and clustered, regularly interspaced, short palindromic repeats (CRISPR)-associated Cas9 (CRISPR−Cas9) nucleases, also known as RNA-guided nucleases (RGNs). We review this dual role played by engineered viral particles on genome editing while focusing on their main scaffolds, consisting of lentiviruses, adeno-associated viruses, and adenoviruses. In addition, the coverage of the growing body of research on the repurposing of viral vectors as delivery systems for genome editing tools is complemented with information regarding their main characteristics, pros, and cons. Finally, this information is framed by a concise description of the chief principles, tools, and applications of the genome editing field as a whole. PMID:26336974

  4. New Directions in the NOAO Observing Proposal System

    NASA Astrophysics Data System (ADS)

    Gasson, David; Bell, Dave

    For the past eight years NOAO has been refining its on-line observing proposal system. Virtually all related processes are now handled electronically. Members of the astronomical community can submit proposals through email, web form, or via the Gemini Phase I Tool. NOAO staff can use the system to do administrative tasks, scheduling, and compilation of various statistics. In addition, all information relevant to the TAC process is made available on-line, including the proposals themselves (in HTML, PDF and PostScript) and technical comments. Grades and TAC comments are entered and edited through web forms, and can be sorted and filtered according to specified criteria. Current developments include a move away from proprietary solutions, toward open standards such as SQL (in the form of the MySQL relational database system), Perl, PHP and XML.

  5. CD-ROM-aided Databases

    NASA Astrophysics Data System (ADS)

    Keiji, Ogawa

    Toppan Printing Co., Ltd. has played a pioneering role in developing CTS (Computerized Typesetting System) over the past twenty years and has accumulated a great deal of technical know-how. The company intends to integrate this accumulated information into multimedia products. In the case of CD-ROM, it has pursued development aggressively, from planning through data input and data processing. Recently, under the guidance of a research group on molecular design, it has developed a CD-ROM system to support research and development in the field of organic chemistry. The system is built mainly on the data in “Organic Syntheses”, a standard reference among organic chemists. This paper describes the structure of the files, the indexes that are key to retrieval, the flow of the retrieval process, and the editing processes.

  6. 2015 Edition Health Information Technology (Health IT) Certification Criteria, 2015 Edition Base Electronic Health Record (EHR) Definition, and ONC Health IT Certification Program Modifications. Final rule.

    PubMed

    2015-10-16

    This final rule finalizes a new edition of certification criteria (the 2015 Edition health IT certification criteria or "2015 Edition'') and a new 2015 Edition Base Electronic Health Record (EHR) definition, while also modifying the ONC Health IT Certification Program to make it open and accessible to more types of health IT and health IT that supports various care and practice settings. The 2015 Edition establishes the capabilities and specifies the related standards and implementation specifications that Certified Electronic Health Record Technology (CEHRT) would need to include to, at a minimum, support the achievement of meaningful use by eligible professionals (EPs), eligible hospitals, and critical access hospitals (CAHs) under the Medicare and Medicaid EHR Incentive Programs (EHR Incentive Programs) when such edition is required for use under these programs.

  7. EBMPracticeNet: A Bilingual National Electronic Point-Of-Care Project for Retrieval of Evidence-Based Clinical Guideline Information and Decision Support

    PubMed Central

    2013-01-01

    Background In Belgium, the construction of a national electronic point-of-care information service, EBMPracticeNet, was initiated in 2011 to optimize quality of care by promoting evidence-based decision-making. The collaboration of the government, health care providers, evidence-based medicine (EBM) partners, and vendors of electronic health records (EHR) is unique to this project. All Belgian health care professionals get free access to an up-to-date database of validated Belgian and nearly 1000 international guidelines, incorporated in a portal that also provides EBM information from other sources than guidelines, including computerized clinical decision support that is integrated in the EHRs. Objective The objective of this paper was to describe the development strategy, the overall content, and the management of EBMPracticeNet which may be of relevance to other health organizations creating national or regional electronic point-of-care information services. Methods Several candidate providers of comprehensive guideline solutions were evaluated and one database was selected. Translation of the guidelines to Dutch and French was done with translation software, post-editing by translators and medical proofreading. A strategy is determined to adapt the guideline content to the Belgian context. Acceptance of the computerized clinical decision support tool has been tested and a randomized controlled trial is planned to evaluate the effect on process and patient outcomes. Results Currently, EBMPracticeNet is in "work in progress" state. Reference is made to the results of a pilot study and to further planned research including a randomized controlled trial. Conclusions The collaboration of government, health care providers, EBM partners, and vendors of EHRs is unique. The potential value of the project is great. The link between all the EHRs from different vendors and a national database held on a single platform that is controlled by all EBM organizations in Belgium are the strengths of EBMPracticeNet. PMID:23842038

  8. C-to-U editing and site-directed RNA editing for the correction of genetic mutations.

    PubMed

    Vu, Luyen Thi; Tsukahara, Toshifumi

    2017-07-24

    Cytidine to uridine (C-to-U) editing is one type of substitutional RNA editing. It occurs in both mammals and plants. The molecular mechanism of C-to-U editing involves the hydrolytic deamination of a cytosine to a uracil base. C-to-U editing is mediated by RNA-specific cytidine deaminases and several complementation factors, which have not been completely identified. Here, we review recent findings related to the regulation and enzymatic basis of C-to-U RNA editing. More importantly, when C-to-U editing occurs in coding regions, it has the power to reprogram genetic information on the RNA level, therefore it has great potential for applications in transcript repair (diseases related to thymidine to cytidine (T>C) or adenosine to guanosine (A>G) point mutations). If it is possible to manipulate or mimic C-to-U editing, T>C or A>G genetic mutation-related diseases could be treated. Enzymatic and non-enzymatic site-directed RNA editing are two different approaches for mimicking C-to-U editing. For enzymatic site-directed RNA editing, C-to-U editing has not yet been successfully performed, and in theory, adenosine to inosine (A-to-I) editing involves the same strategy as C-to-U editing. Therefore, in this review, for applications in transcript repair, we will provide a detailed overview of enzymatic site-directed RNA editing, with a focus on A-to-I editing and non-enzymatic site-directed C-to-U editing.
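
    As a toy illustration of how a single targeted C-to-U change can restore the information altered by a T>C mutation at the transcript level, the sketch below edits one cytidine in a short invented RNA string. The sequences, the position, and the helper function are hypothetical; this is not a model of any published editing protocol.

      # Toy illustration of site-directed C-to-U editing: one cytidine at a chosen
      # zero-based position is "deaminated" to uridine, restoring the codon that a
      # hypothetical T>C mutation had altered. Sequences and the position are
      # invented for illustration; this is not a published editing protocol.

      def c_to_u_edit(rna: str, position: int) -> str:
          """Return a copy of `rna` with the C at `position` replaced by U."""
          if rna[position] != "C":
              raise ValueError(f"expected C at position {position}, found {rna[position]}")
          return rna[:position] + "U" + rna[position + 1:]

      mutant_transcript = "AUGCGGAAA"        # Met-Arg-Lys: Trp codon lost to a T>C mutation
      repaired = c_to_u_edit(mutant_transcript, 3)
      print(repaired)                        # AUGUGGAAA: Met-Trp-Lys restored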

  9. Fast online and index-based algorithms for approximate search of RNA sequence-structure patterns

    PubMed Central

    2013-01-01

    Background It is well known that the search for homologous RNAs is more effective if both sequence and structure information is incorporated into the search. However, current tools for searching with RNA sequence-structure patterns cannot fully handle mutations occurring on both these levels or are simply not fast enough for searching large sequence databases because of the high computational costs of the underlying sequence-structure alignment problem. Results We present new fast index-based and online algorithms for approximate matching of RNA sequence-structure patterns supporting a full set of edit operations on single bases and base pairs. Our methods efficiently compute semi-global alignments of structural RNA patterns and substrings of the target sequence whose costs satisfy a user-defined sequence-structure edit distance threshold. For this purpose, we introduce a new computing scheme to optimally reuse the entries of the required dynamic programming matrices for all substrings and combine it with a technique for avoiding the alignment computation of non-matching substrings. Our new index-based methods exploit suffix arrays preprocessed from the target database and achieve running times that are sublinear in the size of the searched sequences. To support the description of RNA molecules that fold into complex secondary structures with multiple ordered sequence-structure patterns, we use fast algorithms for the local or global chaining of approximate sequence-structure pattern matches. The chaining step removes spurious matches from the set of intermediate results, in particular of patterns with little specificity. In benchmark experiments on the Rfam database, our improved online algorithm is faster than the best previous method by up to factor 45. Our best new index-based algorithm achieves a speedup of factor 560. Conclusions The presented methods achieve considerable speedups compared to the best previous method. This, together with the expected sublinear running time of the presented index-based algorithms, allows for the first time approximate matching of RNA sequence-structure patterns in large sequence databases. Beyond the algorithmic contributions, we provide with RaligNAtor a robust and well documented open-source software package implementing the algorithms presented in this manuscript. The RaligNAtor software is available at http://www.zbh.uni-hamburg.de/ralignator. PMID:23865810
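
    For orientation, the sequence-level core of such methods is the classic edit-distance dynamic program. The sketch below is a deliberate simplification that ignores base-pair operations, semi-global alignment and the index structures described above; it only computes the plain unit-cost edit distance between two short RNA strings.

      # Deliberately simplified: plain unit-cost edit distance between two RNA
      # strings, using a single rolling row of the dynamic-programming matrix.
      # The published methods additionally handle base-pair edit operations,
      # semi-global alignment and suffix-array indexes, none of which is shown.

      def edit_distance(a: str, b: str) -> int:
          m, n = len(a), len(b)
          dp = list(range(n + 1))            # distances for the previous row
          for i in range(1, m + 1):
              prev_diag, dp[0] = dp[0], i
              for j in range(1, n + 1):
                  cost = 0 if a[i - 1] == b[j - 1] else 1
                  prev_diag, dp[j] = dp[j], min(dp[j] + 1,         # deletion
                                                dp[j - 1] + 1,     # insertion
                                                prev_diag + cost)  # (mis)match
          return dp[n]

      print(edit_distance("GCAUCG", "GCUUCG"))   # 1: one base substitution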

  10. Techniques for Generating Objects in a Three-Dimensional CAD System.

    ERIC Educational Resources Information Center

    Goss, Larry D.

    1987-01-01

    Discusses coordinate systems, units of measure, scaling and levels as they relate to a database generated by a computer in a spatial rather than planar location. Describes geometric-oriented input, direct coordinates, transformations, annotation, editing and patterns. Stresses that hand drafting emulation is a short-sighted approach to…

  11. Re-examination of service-sire conception rates in the United States

    USDA-ARS?s Scientific Manuscript database

    Until recently, sire conception rates (SCRs) in the United States had been published only for bulls from artificial-insemination (AI) organizations that paid dairy records processing centers a fee for editing the data and forwarding it to the national dairy database of the Council on Dairy Cattle Bre...

  12. A Practical Guide for Translators (Third Revised Edition). Topics in Translation 13.

    ERIC Educational Resources Information Center

    Samuelsson-Brown, Geoffrey

    This third edition of a guide for translators contains more information than the second edition and looks at translation as a business as well as an occupation, focusing on marketing and quality control. It is designed for people with little or no practical experience with translation in a commercial environment. The 16 chapters examine these…

  13. Introduction to Library Public Services. Sixth Edition. Library and Information Science Text Series.

    ERIC Educational Resources Information Center

    Evans, G. Edward; Amodeo, Anthony J.; Carter, Thomas L.

    This book covers the role, purpose, and philosophy related to each of the major functional areas of library public service. This sixth edition, on the presumption that most people know the basic facts about computer hardware, does not include the chapter (in the previous edition) on computer basics, and instead integrated specific technological…

  14. Digest of Education Statistics 2016, 52nd Edition. NCES 2017-094

    ERIC Educational Resources Information Center

    Snyder, Thomas D.; de Brey, Cristobal; Dillow, Sally A.

    2018-01-01

    The 2016 edition of the "Digest of Education Statistics" is the 52nd in a series of publications initiated in 1962. The "Digest" has been issued annually except for combined editions for the years 1977-78, 1983-84, and 1985-86. Its primary purpose is to provide a compilation of statistical information covering the broad field…

  15. Peer-Editing Practice in the Writing Classroom: Benefits and Drawbacks

    ERIC Educational Resources Information Center

    Deni, Ann Rosnida Md.; Zainal, Zainor Izat

    2011-01-01

    Small scale studies have shown that peer-editing is beneficial to students as it increases their awareness of the complex process of writing, it improves their knowledge of and skills in writing and helps them become more autonomous in learning. Teachers too may benefit from peer-editing as this practice discloses invaluable information on…

  16. Digest of Education Statistics 2015, 51st Edition. NCES 2016-014

    ERIC Educational Resources Information Center

    Snyder, Thomas D.; de Brey, Cristobal; Dillow, Sally A.

    2016-01-01

    The 2015 edition of the "Digest of Education Statistics" is the 51st in a series of publications initiated in 1962. The "Digest" has been issued annually except for combined editions for the years 1977-78, 1983-84, and 1985-86. Its primary purpose is to provide a compilation of statistical information covering the broad field…

  17. Designing Successful Transitions: A Guide for Orienting Students to College. 3rd Edition. The First-Year Experience Monograph Series No. 13

    ERIC Educational Resources Information Center

    Ward-Roof, Jeanine A., Ed.

    2010-01-01

    The 2010 edition of this monograph addresses many topics (e.g., administration of orientation programs, family involvement, student characteristics and needs, assessment, and orientation for specific student populations and institutional types) that were included in previous editions but approaches them with new information, updated data, and…

  18. Foundations of Psychological Testing: A Practical Approach. Second Edition

    ERIC Educational Resources Information Center

    McIntire, Sandra A.; Miller, Leslie A.

    2006-01-01

    The second edition of "Foundations of Psychological Testing: A Practical Approach" is a text for undergraduate students new to the field of psychological testing. Using a conversational format, the authors aim to prepare students to be informed consumers as test users or test takers. Features new to the second edition include: (1) New Content; (2)…

  19. Evaluation of the 8th AJCC staging system for pathologically versus clinically staged pancreatic adenocarcinoma: A time to revisit a dogma?

    PubMed

    Abdel-Rahman, Omar

    2018-02-01

    The 8th edition of the American Joint Committee on Cancer (AJCC) staging system for pancreatic exocrine adenocarcinoma has been released. The current study seeks to assess the 7th and 8th editions among patients registered within the Surveillance, Epidemiology, and End Results (SEER) database. The SEER database (2010-2013) was accessed through the SEER*Stat program, and AJCC 8th edition stages were reconstructed utilizing the collaborative stage descriptions. Kaplan-Meier analyses of overall survival and pancreatic cancer-specific survival (according to both the 7th and 8th editions, and according to whether pathological or clinical staging was conducted) were performed. Multivariate analysis of factors affecting pancreatic cancer-specific survival was also conducted through a Cox proportional hazards model. A total of 18,948 patients with pancreatic adenocarcinoma were identified in the period from 2010 to 2013. Pancreatic cancer-specific survival among pathologically staged patients according to the 8th edition showed significant differences for all pairwise comparisons among different stages (P < 0.0001), except for the comparison between stage IA and stage IB (P = 0.307) and the comparison between stage IB and stage IIA (P = 0.116); the P value for stage IA vs IIA was 0.014. Pancreatic cancer-specific survival according to the 7th edition among pathologically staged patients showed significant differences for all pairwise comparisons among different stages (P < 0.0001), except for the comparisons between IA and IB (P = 0.072), between stage IIA and stage IIB (P = 0.065), between stage IIA and stage III (P = 0.059), and between IIB and III (P = 0.595). Among clinically staged patients (i.e., those who did not undergo initial radical surgery), the prognostic performance of both the 7th and 8th edition stages for both overall survival and pancreatic cancer-specific survival was limited. There is clearly a need for two staging systems for pancreatic adenocarcinoma: a pathological and a clinical staging system. Copyright © 2018 First Affiliated Hospital, Zhejiang University School of Medicine in China. Published by Elsevier B.V. All rights reserved.
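
    The survival comparisons above rest on the Kaplan-Meier product-limit estimator. As a self-contained illustration only (the follow-up records below are invented, not SEER data, and this is not the authors' SEER*Stat workflow), the estimator can be computed as follows.

      # Self-contained Kaplan-Meier product-limit estimator on invented follow-up
      # data (months, event flag with 1 = cancer death); not SEER records and not
      # the authors' SEER*Stat workflow.

      def kaplan_meier(times, events):
          """Return [(t, S(t))] at each event time, with S(t) = prod(1 - d_i / n_i)."""
          data = sorted(zip(times, events))
          at_risk, survival, curve, i = len(data), 1.0, [], 0
          while i < len(data):
              t = data[i][0]
              same_t = [e for tt, e in data if tt == t]
              deaths = sum(same_t)
              if deaths:
                  survival *= 1.0 - deaths / at_risk
                  curve.append((t, survival))
              at_risk -= len(same_t)
              i += len(same_t)
          return curve

      # Hypothetical stage IIA vs stage IIB cohorts.
      print(kaplan_meier([6, 11, 14, 20, 27, 33], [1, 0, 1, 0, 1, 0]))
      print(kaplan_meier([3, 5, 9, 12, 16, 25], [1, 1, 1, 0, 1, 1]))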

  20. Pharmacology Portal: An Open Database for Clinical Pharmacologic Laboratory Services.

    PubMed

    Karlsen Bjånes, Tormod; Mjåset Hjertø, Espen; Lønne, Lars; Aronsen, Lena; Andsnes Berg, Jon; Bergan, Stein; Otto Berg-Hansen, Grim; Bernard, Jean-Paul; Larsen Burns, Margrete; Toralf Fosen, Jan; Frost, Joachim; Hilberg, Thor; Krabseth, Hege-Merete; Kvan, Elena; Narum, Sigrid; Austgulen Westin, Andreas

    2016-01-01

    More than 50 Norwegian public and private laboratories provide one or more analyses for therapeutic drug monitoring or testing for drugs of abuse. Practices differ among laboratories, and analytical repertoires can change rapidly as new substances become available for analysis. The Pharmacology Portal was developed to provide an overview of these activities and to standardize the practices and terminology among laboratories. The Pharmacology Portal is a modern dynamic web database comprising all available analyses within therapeutic drug monitoring and testing for drugs of abuse in Norway. Content can be retrieved by using the search engine or by scrolling through substance lists. The core content is a substance registry updated by a national editorial board of experts within the field of clinical pharmacology. This ensures quality and consistency regarding substance terminologies and classification. All laboratories publish their own repertoires in a user-friendly workflow, adding laboratory-specific details to the core information in the substance registry. The user management system ensures that laboratories are restricted from editing content in the database core or in repertoires within other laboratory subpages. The portal is for nonprofit use, and has been fully funded by the Norwegian Medical Association, the Norwegian Society of Clinical Pharmacology, and the 8 largest pharmacologic institutions in Norway. The database server runs an open-source content management system that ensures flexibility with respect to further development projects, including the potential expansion of the Pharmacology Portal to other countries. Copyright © 2016 Elsevier HS Journals, Inc. All rights reserved.

  1. Tripal: a construction toolkit for online genome databases.

    PubMed

    Ficklin, Stephen P; Sanderson, Lacey-Anne; Cheng, Chun-Huai; Staton, Margaret E; Lee, Taein; Cho, Il-Hyung; Jung, Sook; Bett, Kirstin E; Main, Doreen

    2011-01-01

    As the availability, affordability and magnitude of genomics and genetics research increases so does the need to provide online access to resulting data and analyses. Availability of a tailored online database is the desire for many investigators or research communities; however, managing the Information Technology infrastructure needed to create such a database can be an undesired distraction from primary research or potentially cost prohibitive. Tripal provides simplified site development by merging the power of Drupal, a popular web Content Management System with that of Chado, a community-derived database schema for storage of genomic, genetic and other related biological data. Tripal provides an interface that extends the content management features of Drupal to the data housed in Chado. Furthermore, Tripal provides a web-based Chado installer, genomic data loaders, web-based editing of data for organisms, genomic features, biological libraries, controlled vocabularies and stock collections. Also available are Tripal extensions that support loading and visualizations of NCBI BLAST, InterPro, Kyoto Encyclopedia of Genes and Genomes and Gene Ontology analyses, as well as an extension that provides integration of Tripal with GBrowse, a popular GMOD tool. An Application Programming Interface is available to allow creation of custom extensions by site developers, and the look-and-feel of the site is completely customizable through Drupal-based PHP template files. Addition of non-biological content and user-management is afforded through Drupal. Tripal is an open source and freely available software package found at http://tripal.sourceforge.net.

  2. Tripal: a construction toolkit for online genome databases

    PubMed Central

    Sanderson, Lacey-Anne; Cheng, Chun-Huai; Staton, Margaret E.; Lee, Taein; Cho, Il-Hyung; Jung, Sook; Bett, Kirstin E.; Main, Doreen

    2011-01-01

    As the availability, affordability and magnitude of genomics and genetics research increases so does the need to provide online access to resulting data and analyses. Availability of a tailored online database is the desire for many investigators or research communities; however, managing the Information Technology infrastructure needed to create such a database can be an undesired distraction from primary research or potentially cost prohibitive. Tripal provides simplified site development by merging the power of Drupal, a popular web Content Management System with that of Chado, a community-derived database schema for storage of genomic, genetic and other related biological data. Tripal provides an interface that extends the content management features of Drupal to the data housed in Chado. Furthermore, Tripal provides a web-based Chado installer, genomic data loaders, web-based editing of data for organisms, genomic features, biological libraries, controlled vocabularies and stock collections. Also available are Tripal extensions that support loading and visualizations of NCBI BLAST, InterPro, Kyoto Encyclopedia of Genes and Genomes and Gene Ontology analyses, as well as an extension that provides integration of Tripal with GBrowse, a popular GMOD tool. An Application Programming Interface is available to allow creation of custom extensions by site developers, and the look-and-feel of the site is completely customizable through Drupal-based PHP template files. Addition of non-biological content and user-management is afforded through Drupal. Tripal is an open source and freely available software package found at http://tripal.sourceforge.net PMID:21959868

  3. Illuminating the Depths of the MagIC (Magnetics Information Consortium) Database

    NASA Astrophysics Data System (ADS)

    Koppers, A. A. P.; Minnett, R.; Jarboe, N.; Jonestrask, L.; Tauxe, L.; Constable, C.

    2015-12-01

    The Magnetics Information Consortium (http://earthref.org/MagIC/) is a grass-roots cyberinfrastructure effort envisioned by the paleo-, geo-, and rock magnetic scientific community. Its mission is to archive their wealth of peer-reviewed raw data and interpretations from magnetics studies on natural and synthetic samples. Many of these valuable data are legacy datasets that were never published in their entirety, some resided in other databases that are no longer maintained, and others were never digitized from the field notebooks and lab work. Due to the volume of data collected, most studies, modern and legacy, only publish the interpreted results and, occasionally, a subset of the raw data. MagIC is making an extraordinary effort to archive these data in a single data model, including the raw instrument measurements if possible. This facilitates the reproducibility of the interpretations, the re-interpretation of the raw data as the community introduces new techniques, and the compilation of heterogeneous datasets that are otherwise distributed across multiple formats and physical locations. MagIC has developed tools to assist the scientific community in many stages of their workflow. Contributors easily share studies (in a private mode if so desired) in the MagIC Database with colleagues and reviewers prior to publication, publish the data online after the study is peer reviewed, and visualize their data in the context of the rest of the contributions to the MagIC Database. From organizing their data in the MagIC Data Model with an online editable spreadsheet, to validating the integrity of the dataset with automated plots and statistics, MagIC is continually lowering the barriers to transforming dark data into transparent and reproducible datasets. Additionally, this web application generalizes to other databases in MagIC's umbrella website (EarthRef.org) so that the Geochemical Earth Reference Model (http://earthref.org/GERM/) portal, Seamount Biogeosciences Network (http://earthref.org/SBN/), EarthRef Digital Archive (http://earthref.org/ERDA/) and EarthRef Reference Database (http://earthref.org/ERR/) benefit from its development.

  4. The IASLC Lung Cancer Staging Project: A Renewed Call to Participation.

    PubMed

    Giroux, Dorothy J; Van Schil, Paul; Asamura, Hisao; Rami-Porta, Ramón; Chansky, Kari; Crowley, John J; Rusch, Valerie W; Kernstine, Kemp

    2018-06-01

    Over the past two decades, the International Association for the Study of Lung Cancer (IASLC) Staging Project has been a steady source of evidence-based recommendations for the TNM classification for lung cancer published by the Union for International Cancer Control and the American Joint Committee on Cancer. The Staging and Prognostic Factors Committee of the IASLC is now issuing a call for participation in the next phase of the project, which is designed to inform the ninth edition of the TNM classification for lung cancer. Following the case recruitment model for the eighth edition database, volunteer site participants are asked to submit data on patients whose lung cancer was diagnosed between January 1, 2011, and December 31, 2019, to the project by means of a secure, electronic data capture system provided by Cancer Research And Biostatistics in Seattle, Washington. Alternatively, participants may transfer existing data sets. The continued success of the IASLC Staging Project in achieving its objectives will depend on the extent of international participation, the degree to which cases are entered directly into the electronic data capture system, and how closely externally submitted cases conform to the data elements for the project. Copyright © 2018 International Association for the Study of Lung Cancer. Published by Elsevier Inc. All rights reserved.

  5. A National-Level Validation of the New American Joint Committee on Cancer 8th Edition Subclassification of Stage IIA and B Anal Squamous Cell Cancer.

    PubMed

    Goffredo, Paolo; Garancini, Mattia; Robinson, Timothy J; Frakes, Jessica; Hoshi, Hisakazu; Hassan, Imran

    2018-06-01

    The 8th edition of the American Joint Committee on Cancer (AJCC) updated the staging system of anal squamous cell cancer (ASCC) by subdividing stage II into A (T2N0M0) and B (T3N0M0) based on a secondary analysis of the RTOG 98-11 trial. We aimed to validate this new subclassification utilizing two nationally representative databases. The National Cancer Database (NCDB) [2004-2014] and the Surveillance, Epidemiology, and End Results (SEER) database [1988-2013] were queried to identify patients with stage II ASCC. A total of 6651 and 2579 stage IIA (2-5 cm) and 1777 and 641 stage IIB (> 5 cm) patients were identified in the NCDB and SEER databases, respectively. Compared with stage IIB patients, stage IIA patients within the NCDB were more often females with fewer comorbidities. No significant differences were observed between age, race, receipt of chemotherapy and radiation, and mean radiation dose. Demographic, clinical, and pathologic characteristics were comparable between patients in both datasets. The 5-year OS was 72% and 69% for stage IIA versus 57% and 50% for stage IIB in the NCDB and SEER databases, respectively (p < 0.001). After adjustment for available demographic and clinical confounders, stage IIB was significantly associated with worse survival in both cohorts (hazard ratio 1.58 and 2.01, both p < 0.001). This study validates the new AJCC subclassification of stage II anal cancer into A and B based on size (2-5 cm vs. > 5 cm) in the general ASCC population. AJCC stage IIB patients represent a higher risk category that should be targeted with more aggressive/novel therapies.

  6. The Guide to Simulations/Games For Education and Training. Second Edition.

    ERIC Educational Resources Information Center

    Zuckerman, David W.; Horn, Robert E.

    This guide gives complete information on 613 games and simulation games. In addition, there is a supplementary list of 473 more items which are in development, discontinued, or about which more information is needed. The book is edited for the potential game user, rather than the theoretician or creator. Games are organized by subject, including…

  7. Encountering the Chinese: A Guide for Americans. Second Edition. The InterAct Series.

    ERIC Educational Resources Information Center

    Wenzhong, Hu; Grove, Cornelius L.

    This book provides a practical and culturally sensitive guide to Chinese culture along with insights into how to communicate with and interact with Chinese people. This edition contains information on economic changes and the gradual demise of state owned companies in addition to information about basic Chinese values, cultural norms, and…

  8. Geologic Map of the Mount Trumbull 30' X 60' Quadrangle, Mohave and Coconino Counties, Northwestern Arizona

    USGS Publications Warehouse

    Billingsley, George H.; Wellmeyer, Jessica L.

    2003-01-01

    The geologic map of the Mount Trumbull 30' x 60' quadrangle is a cooperative product of the U.S. Geological Survey, the National Park Service, and the Bureau of Land Management that provides geologic map coverage and regional geologic information for visitor services and resource management of Grand Canyon National Park, Lake Mead Recreational Area, and Grand Canyon Parashant National Monument, Arizona. This map is a compilation of previous and new geologic mapping that encompasses the Mount Trumbull 30' x 60' quadrangle of Arizona. This digital database, a compilation of previous and new geologic mapping, contains geologic data used to produce the 100,000-scale Geologic Map of the Mount Trumbull 30' x 60' Quadrangle, Mohave and Coconino Counties, Northwestern Arizona. The geologic features that were mapped as part of this project include: geologic contacts and faults, bedrock and surficial geologic units, structural data, fold axes, karst features, mines, and volcanic features. This map was produced using 1:24,000-scale 1976 infrared aerial photographs followed by extensive field checking. Volcanic rocks were mapped as separate units when identified on aerial photographs as mappable and distinctly separate units associated with one or more pyroclastic cones and flows. Many of the Quaternary alluvial deposits that have similar lithology but different geomorphic characteristics were mapped almost entirely by photogeologic methods. Stratigraphic position and amount of erosional degradation were used to determine relative ages of alluvial deposits having similar lithologies. Each map unit and structure was investigated in detail in the field to ensure accuracy of description. Punch-registered mylar sheets were scanned at the Flagstaff Field Center using an Optronics 5040 raster scanner at a resolution of 50 microns (508 dpi). The scans were output in .rle format, converted to .rlc, and then converted to ARC/INFO grids. A tic file was created in geographic coordinates and projected into the base map projection (Polyconic) using a central meridian of -113.500. The tic file was used to transform the grid into Universal Transverse Mercator projection. The linework was vectorized using gridline. Scanned lines were edited interactively in ArcEdit. Polygons were attributed in ArcEdit and all artifacts and scanning errors visible at 1:100,000 were removed. Point data were digitized onscreen. Due to the discovery of digital and geologic errors on the original files, the ARC/INFO coverages were converted to a personal geodatabase and corrected in ArcMap. The feature classes which define the geologic units, lines and polygons, are topologically related and maintained in the geodatabase by a set of validation rules. The internal database structure and feature attributes were then modified to match other geologic map databases being created for the Grand Canyon region. Faults were edited with the downthrown block, if known, on the 'right side' of the line. The 'right' and 'left' sides of a line are determined from 'starting' at the line's 'from node' and moving to the line's end or 'to node'.
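
    One step in the workflow above is reprojecting coordinates from geographic latitude/longitude into a Universal Transverse Mercator grid. A minimal sketch of that kind of transformation is given below; the pyproj library and the EPSG codes (WGS84 geographic to NAD83 / UTM zone 12N, a plausible zone for this part of Arizona) are assumptions for illustration, since the original map was produced with the ARC/INFO and ArcGIS tools described above.

      # Hedged sketch of a geographic-to-UTM reprojection like the one applied to
      # the map's tic file. pyproj and the EPSG codes (WGS84 geographic to
      # NAD83 / UTM zone 12N) are assumptions for illustration; the original map
      # was produced with ARC/INFO and ArcGIS, not with this code.
      from pyproj import Transformer

      to_utm = Transformer.from_crs("EPSG:4326", "EPSG:26912", always_xy=True)

      lon, lat = -113.5, 36.4                      # a point near the quadrangle
      easting, northing = to_utm.transform(lon, lat)
      print(f"UTM zone 12N: {easting:.1f} m E, {northing:.1f} m N")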

  9. Materials data handbook: Aluminum alloy 2014, 2nd edition

    NASA Technical Reports Server (NTRS)

    Muraca, R. F.; Whittick, J. S.

    1972-01-01

    A revised edition of the materials data handbook on the aluminum alloy 2014 is presented. The scope of the information presented includes physical and mechanical property data at cryogenic, ambient and elevated temperatures, supplemented with useful information in such areas as material procurement, metallurgy of the alloy, corrosion, environmental effects, fabrication and joining techniques. Design data are presented, as available, and these data are complemented with information on the typical behavior of the alloy.

  10. The 2003 edition of geisa: a spectroscopic database system for the second generation vertical sounders radiance simulation

    NASA Astrophysics Data System (ADS)

    Jacquinet-Husson, N.; Lmd Team

    The GEISA (Gestion et Etude des Informations Spectroscopiques Atmosphériques: Management and Study of Atmospheric Spectroscopic Information) computer-accessible database system, in its former 1997 and 2001 versions, was updated in 2003 (GEISA-03). It has been developed by the ARA (Atmospheric Radiation Analysis) group at LMD (Laboratoire de Météorologie Dynamique, France) since 1974. This early effort implemented the so-called "line-by-line and layer-by-layer" approach for forward radiative transfer modelling. The GEISA 2003 system comprises three databases with their associated management software: (i) a database of spectroscopic parameters required to describe adequately the individual spectral lines belonging to 42 molecules (96 isotopic species), located in a spectral range from the microwave to the limit of the visible; the featured molecules are of interest in studies of the terrestrial as well as the other planetary atmospheres, especially those of the Giant Planets; (ii) a database of absorption cross-sections of molecules, such as chlorofluorocarbons, which exhibit unresolvable spectra; and (iii) a database of refractive indices of basic atmospheric aerosol components. Illustrations will be given of the GEISA-03 data archiving method, contents, management software and Web access facilities at http://ara.lmd.polytechnique.fr. The performance of instruments like AIRS (Atmospheric Infrared Sounder; http://www-airs.jpl.nasa.gov) in the USA, and IASI (Infrared Atmospheric Sounding Interferometer; http://smsc.cnes.fr/IASI/index.htm) in Europe, which have a better vertical resolution and accuracy compared to the presently existing satellite infrared vertical sounders, is directly related to the quality of the spectroscopic parameters of the optically active gases, since these are essential input in the forward models used to simulate recorded radiance spectra. For these upcoming atmospheric sounders, the so-called GEISA/IASI sub-database system has been elaborated from GEISA. Its content will be described as well. This work is ongoing, with the purpose of assessing the IASI measurement capabilities and the spectroscopic information quality, within the ISSWG (IASI Sounding Science Working Group), in the frame of the CNES (Centre National d'Etudes Spatiales, France)/EUMETSAT (EUropean organization for the exploitation of METeorological SATellites) Polar System (EPS) project, by simulating high-resolution radiances and/or using experimental data. EUMETSAT will implement GEISA/IASI into the EPS ground segment. The IASI sounding spectroscopic data archive requirements will be discussed in the context of comparisons between recorded and calculated experimental spectra, using the ARA/4A forward line-by-line radiative transfer modelling code in its latest version.

  11. ZINC: A Free Tool to Discover Chemistry for Biology

    PubMed Central

    2012-01-01

    ZINC is a free public resource for ligand discovery. The database contains over twenty million commercially available molecules in biologically relevant representations that may be downloaded in popular ready-to-dock formats and subsets. The Web site also enables searches by structure, biological activity, physical property, vendor, catalog number, name, and CAS number. Small custom subsets may be created, edited, shared, docked, downloaded, and conveyed to a vendor for purchase. The database is maintained and curated for a high purchasing success rate and is freely available at zinc.docking.org. PMID:22587354

  12. Management system for the SND experiments

    NASA Astrophysics Data System (ADS)

    Pugachev, K.; Korol, A.

    2017-09-01

    A new management system for the SND detector experiments (at the VEPP-2000 collider in Novosibirsk) has been developed. We describe here the interaction between a user and the SND databases. These databases contain experiment configuration, conditions and metadata. The new system is designed in a client-server architecture. It has several logical layers corresponding to the users' roles. A new template engine has been created. A web application is implemented using the Node.js framework. At present the application provides: showing and editing the configuration; showing experiment metadata and the experiment conditions data index; and showing the SND log (prototype).

  13. Libraries for People with Handicaps: A Directory of Public Library Resources and Services in Ohio. Second edition.

    ERIC Educational Resources Information Center

    Ohio State Library, Columbus.

    This second edition of the directory contains information collected from 249 public libraries for use by the handicapped, i.e., blind, physically disabled, aged, shut-in, and institutionalized persons. Several changes in format have been made in response to users' reactions to the first edition. For quick reference to library services, materials…

  14. Face to Face: A Sourcebook of Individual Consultation Techniques for Faculty/Instructional Developers. New Revised Edition.

    ERIC Educational Resources Information Center

    Lewis, Karron G., Ed.; Lunde, Joyce T. Povlacs, Ed.

    Chapters in this edition contain many different approaches and strategies that can be used in individuals to improve teaching. The selections give the reader a variety of perspectives. The chapters from the first edition of this sourcebook have been updated or rewritten, and seven chapters have been added to provide additional information. The…

  15. Digest of Education Statistics 2014, 50th Edition. NCES 2016-006

    ERIC Educational Resources Information Center

    Snyder, Thomas D.; de Brey, Cristobal; Dillow, Sally A.

    2016-01-01

    The 2014 edition of the "Digest of Education Statistics" is the 50th in a series of publications initiated in 1962. The Digest has been issued annually except for combined editions for the years 1977-78, 1983-84, and 1985-86. Its primary purpose is to provide a compilation of statistical information covering the broad field of American…

  16. Systematic review of the effectiveness of training programs in writing for scholarly publication, journal editing, and manuscript peer review (protocol).

    PubMed

    Galipeau, James; Moher, David; Skidmore, Becky; Campbell, Craig; Hendry, Paul; Cameron, D William; Hébert, Paul C; Palepu, Anita

    2013-06-17

    An estimated $100 billion is lost to 'waste' in biomedical research globally, annually, much of which comes from the poor quality of published research. One area of waste involves bias in reporting research, which compromises the usability of published reports. In response, there has been an upsurge in interest and research in the scientific process of writing, editing, peer reviewing, and publishing (that is, journalology) of biomedical research. One reason for bias in reporting and the problem of unusable reports could be due to authors lacking knowledge or engaging in questionable practices while designing, conducting, or reporting their research. Another might be that the peer review process for journal publication has serious flaws, including possibly being ineffective, and having poorly trained and poorly motivated reviewers. Similarly, many journal editors have limited knowledge related to publication ethics. This can ultimately have a negative impact on the healthcare system. There have been repeated calls for better, more numerous training opportunities in writing for publication, peer review, and publishing. However, little research has taken stock of journalology training opportunities or evaluations of their effectiveness. We will conduct a systematic review to synthesize studies that evaluate the effectiveness of training programs in journalology. A comprehensive three-phase search approach will be employed to identify evaluations of training opportunities, involving: 1) forward-searching using the Scopus citation database, 2) a search of the MEDLINE In-Process and Non-Indexed Citations, MEDLINE, Embase, ERIC, and PsycINFO databases, as well as the databases of the Cochrane Library, and 3) a grey literature search. This project aims to provide evidence to help guide the journalological training of authors, peer reviewers, and editors. While there is ample evidence that many members of these groups are not getting the necessary training needed to excel at their respective journalology-related tasks, little is known about the characteristics of existing training opportunities, including their effectiveness. The proposed systematic review will provide evidence regarding the effectiveness of training, therefore giving potential trainees, course designers, and decision-makers evidence to help inform their choices and policies regarding the merits of specific training opportunities or types of training.

  17. Fluctuations in Wikipedia access-rate and edit-event data

    NASA Astrophysics Data System (ADS)

    Kämpf, Mirko; Tismer, Sebastian; Kantelhardt, Jan W.; Muchnik, Lev

    2012-12-01

    Internet-based social networks often reflect extreme events in nature and society by drastic increases in user activity. We study and compare the dynamics of the two major complex processes necessary for information spread via the online encyclopedia ‘Wikipedia’, i.e., article editing (information upload) and article access (information viewing) based on article edit-event time series and (hourly) user access-rate time series for all articles. Daily and weekly activity patterns occur in addition to fluctuations and bursting activity. The bursts (i.e., significant increases in activity for an extended period of time) are characterized by a power-law distribution of durations of increases and decreases. For describing the recurrence and clustering of bursts we investigate the statistics of the return intervals between them. We find stretched exponential distributions of return intervals in access-rate time series, while edit-event time series yield simple exponential distributions. To characterize the fluctuation behavior we apply detrended fluctuation analysis (DFA), finding that most article access-rate time series are characterized by strong long-term correlations with fluctuation exponents α≈0.9. The results indicate significant differences in the dynamics of information upload and access and help in understanding the complex process of collecting, processing, validating, and distributing information in self-organized social networks.
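
    A compact illustration of the fluctuation analysis mentioned above: a textbook first-order DFA applied to synthetic white noise, for which the expected exponent is close to 0.5. The numpy dependency and the synthetic series are assumptions; this is not the authors' analysis code or data.

      # Textbook first-order detrended fluctuation analysis (DFA1) on synthetic
      # white noise (expected exponent close to 0.5). numpy and the synthetic
      # series are assumptions; this is not the authors' analysis code.
      import numpy as np

      def dfa(signal, scales):
          """Return F(s) for each window size s; alpha is the slope of log F vs log s."""
          profile = np.cumsum(signal - np.mean(signal))     # integrated, mean-removed series
          fluctuations = []
          for s in scales:
              n_windows = len(profile) // s
              segments = profile[: n_windows * s].reshape(n_windows, s)
              x = np.arange(s)
              sq_res = []
              for seg in segments:
                  trend = np.polyval(np.polyfit(x, seg, 1), x)   # local linear trend
                  sq_res.append(np.mean((seg - trend) ** 2))
              fluctuations.append(np.sqrt(np.mean(sq_res)))
          return np.array(fluctuations)

      rng = np.random.default_rng(0)
      scales = np.array([16, 32, 64, 128, 256, 512])
      F = dfa(rng.normal(size=10_000), scales)
      alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
      print(f"estimated fluctuation exponent alpha = {alpha:.2f}")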

  18. Official Study Guide for the Certified Park and Recreation Professional. Fourth Edition

    ERIC Educational Resources Information Center

    Mulvaney, Michael A.; Hurd, Amy R.

    2013-01-01

    The "Official Study Guide for the CPRP Examination" provides up-to-date information in this new edition to assist the park and recreation professional in preparing for the CPRP examination. The study guide serves as an excellent source of information for any individual who works directly or indirectly in the field of park and recreation services.

  19. An International Training Program in Library and Information Science: Looking Backward and Forward

    ERIC Educational Resources Information Center

    Nieuwenhuysen, Paul

    2011-01-01

    The aim of this study is to improve the subsequent editions of an international training program in information management. Up to now 15 editions have been organized, coordinated by the author of this paper. Most participants work in developing countries, mainly in Africa and Asia. Each program takes place mainly in Brussels, Belgium, for about…

  20. Business, Government, and Law on the Internet. A Hands-On Second Edition. Workshop. Internet Workshop Series Number 3.

    ERIC Educational Resources Information Center

    Peete, Gary R.

    This "workshop-in-a-book" is a much-expanded second edition designed for the businessperson, legal researcher, information specialist, consumer, student, or scholar wanting to discover information in three overlapping areas: business, government, and law. The book is divided into two modules: (1) "The World Wide Web: Your Entree to…

  1. Effects of technical editing in biomedical journals: a systematic review.

    PubMed

    Wager, Elizabeth; Middleton, Philippa

    2002-06-05

    Technical editing supposedly improves the accuracy and clarity of journal articles. We examined evidence of its effects on research reports in biomedical journals. Subset of a systematic review using Cochrane methods, searching MEDLINE, EMBASE, and other databases from earliest entries to February 2000 by using inclusive search terms; hand-searching relevant journals. We selected comparative studies of the effects of editorial processes on original research articles between acceptance and publication in biomedical journals. Two reviewers assessed each study and performed independent data extraction. The 11 studies on technical editing indicate that it improves the readability of articles slightly (as measured by Gunning Fog and Flesch reading ease scores), may improve other aspects of their quality, can increase the accuracy of references and quotations, and raises the quality of abstracts. Supplying authors with abstract preparation instructions had no discernible effect. Considering the time and resources devoted to technical editing, remarkably little is known about its effects or the effects of imposing different house styles. Studies performed at 3 journals employing relatively large numbers of professional technical editors suggest that their editorial processes are associated with increases in readability and quality of articles, but these findings may not be generalizable to other journals.

  2. Criminal Justice Research in Libraries and on the Internet.

    ERIC Educational Resources Information Center

    Nelson, Bonnie R.

    In addition to covering the enduring elements of traditional research on criminal justice, this new edition provides full coverage on research using the World Wide Web, hypertext documents, computer indexes, and other online resources. It gives an in-depth explanation of such concepts as databases, networks, and full text, and covers the Internet…

  3. Helping Students Succeed. Annual Report, 2010

    ERIC Educational Resources Information Center

    New Mexico Higher Education Department, 2010

    2010-01-01

    This annual report contains postsecondary data that has been collected and analyzed using the New Mexico Higher Education Department's Data Editing and Reporting (DEAR) database, unless otherwise noted. The purpose of the DEAR system is to increase the reliability in the data and to make more efficient efforts by institutions and the New Mexico…

  4. Working with Computers: Computer Orientation for Foreign Students.

    ERIC Educational Resources Information Center

    Barlow, Michael

    Designed as a resource for foreign students, this book includes instructions not only on how to use computers, but also on how to use them to complete academic work more efficiently. Part I introduces the basic operations of mainframes and microcomputers and the major areas of computing, i.e., file management, editing, communications, databases,…

  5. Changing genetic information through RNA editing

    NASA Technical Reports Server (NTRS)

    Maas, S.; Rich, A.

    2000-01-01

    RNA editing, the post-transcriptional alteration of a gene-encoded sequence, is a widespread phenomenon in eukaryotes. As a consequence of RNA editing, functionally distinct proteins can be produced from a single gene. The molecular mechanisms involved include single or multiple base insertions or deletions as well as base substitutions. In mammals, one type of substitutional RNA editing, characterized by site-specific base-modification, was shown to modulate important physiological processes. The underlying reaction mechanism of substitutional RNA editing involves hydrolytic deamination of cytosine or adenosine bases to uracil or inosine, respectively. Protein factors have been characterized that are able to induce RNA editing in vitro. A supergene family of RNA-dependent deaminases has emerged with the recent addition of adenosine deaminases specific for tRNA. Here we review the developments that have substantially increased our understanding of base-modification RNA editing over the past few years, with an emphasis on mechanistic differences, evolutionary aspects and the first insights into the regulation of editing activity.

  6. Advanced Hepatocellular Carcinoma: Which Staging Systems Best Predict Prognosis?

    PubMed Central

    Huitzil-Melendez, Fidel-David; Capanu, Marinela; O'Reilly, Eileen M.; Duffy, Austin; Gansukh, Bolorsukh; Saltz, Leonard L.; Abou-Alfa, Ghassan K.

    2010-01-01

    Purpose The purpose of cancer staging systems is to accurately predict patient prognosis. The outcome of advanced hepatocellular carcinoma (HCC) depends on both the cancer stage and the extent of liver dysfunction. Many staging systems that include both aspects have been developed. It remains unknown, however, which of these systems is optimal for predicting patient survival. Patients and Methods Patients with advanced HCC treated over a 5-year period at Memorial Sloan-Kettering Cancer Center were identified from an electronic medical record database. Patients with sufficient data for utilization in all staging systems were included. TNM sixth edition, Okuda, Barcelona Clinic Liver Cancer (BCLC), Cancer of the Liver Italian Program (CLIP), Chinese University Prognostic Index (CUPI), Japan Integrated Staging (JIS), and Groupe d'Etude et de Traitement du Carcinome Hepatocellulaire (GETCH) systems were ranked on the basis of their accuracy at predicting survival by using concordance index (c-index). Other independent prognostic variables were also identified. Results Overall, 187 eligible patients were identified and were staged by using the seven staging systems. CLIP, CUPI, and GETCH were the three top-ranking staging systems. BCLC and TNM sixth edition lacked any meaningful prognostic discrimination. Performance status, AST, abdominal pain, and esophageal varices improved the discriminatory ability of CLIP. Conclusion In our selected patient population, CLIP, CUPI, and GETCH were the most informative staging systems in predicting survival in patients with advanced HCC. Prospective validation is required to determine if they can be accurately used to stratify patients in clinical trials and to direct the appropriate need for systemic therapy versus best supportive care. BCLC and TNM sixth edition were not helpful in predicting survival outcome, and their use is not supported by our data. PMID:20458042

  7. Book review: Birds of Prey: Health & Disease, Third Edition

    USGS Publications Warehouse

    Olsen, Glenn H.

    2009-01-01

    Even though this book is billed as the third edition, it is, in the words of Patrick T. Redig, author of its Foreword, "a seriously reinvented book." Originally published in 1978 under the title Veterinary Aspects of Captive Birds of Prey, this new edition, with its new title, could stand alone and need not have been tagged with the "third edition." Much has changed in the world of avian medicine in the 30 yr since the publishing of the original tome, and this new volume brings the latest information on raptor medicine to the reader. Review info: Birds of Prey: Health & Disease, Third Edition. Edited by John E. Cooper. Blackwell Sciences, Ltd., Oxford, UK. 2002. 345 pp. ISBN 978-0-63205-115-1.

  8. FDDI information management system for centralizing interactive, computerized multimedia clinical experiences in pediatric rheumatology/Immunology.

    PubMed

    Rouhani, R; Cronenberger, H; Stein, L; Hannum, W; Reed, A M; Wilhelm, C; Hsiao, H

    1995-01-01

    This paper describes the design, authoring, and development of interactive, computerized, multimedia clinical simulations in pediatric rheumatology/immunology and related musculoskeletal diseases, the development and implementation of a high speed information management system for their centralized storage and distribution, and analytical methods for evaluating the total system's educational impact on medical students and pediatric residents. An FDDI fiber optic network with client/server/host architecture is the core. The server houses digitized audio, still-image video clips and text files. A host station houses the DB2/2 database containing case-associated labels and information. Cases can be accessed from any workstation via a customized interface in AVA/2 written specifically for this application. OS/2 Presentation Manager controls, written in C, are incorporated into the interface. This interface allows SQL searches and retrievals of cases and case materials. In addition to providing user-directed clinical experiences, this centralized information management system provides designated faculty with the ability to add audio notes and visual pointers to image files. Users may browse through case materials, mark selected ones and download them for utilization in lectures or for editing and converting into 35mm slides.

  9. Effects of Contributor Experience on the Quality of Health-Related Wikipedia Articles

    PubMed Central

    Fetahu, Besnik; Kimmerle, Joachim

    2018-01-01

    Background Consulting the Internet for health-related information is a common and widespread phenomenon, and Wikipedia is arguably one of the most important resources for health-related information. Therefore, it is relevant to identify factors that have an impact on the quality of health-related Wikipedia articles. Objective In our study we have hypothesized a positive effect of contributor experience on the quality of health-related Wikipedia articles. Methods We mined the edit history of all (as of February 2017) 18,805 articles that were listed in the categories on the portal health & fitness in the English language version of Wikipedia. We identified tags within the articles’ edit histories, which indicated potential issues with regard to the respective article’s quality or neutrality. Of all of the sampled articles, 99 (99/18,805, 0.53%) articles had at some point received at least one such tag. In our analysis we only considered those articles with a minimum of 10 edits (10,265 articles in total; 96 tagged articles, 0.94%). Additionally, to test our hypothesis, we constructed contributor profiles, where a profile consisted of all the articles edited by a contributor and the corresponding number of edits contributed. We did not differentiate between rollbacks and edits with novel content. Results Nonparametric Mann-Whitney U-tests indicated a higher number of previously edited articles for editors of the nontagged articles (mean rank tagged 2348.23, mean rank nontagged 5159.29; U=9.25, P<.001). However, we did not find a significant difference for the contributors’ total number of edits (mean rank tagged 4872.85, mean rank nontagged 5135.48; U=0.87, P=.39). Using logistic regression analysis with the respective article’s number of edits and number of editors as covariates, only the number of edited articles yielded a significant effect on the article’s status as tagged versus nontagged (dummy-coded; Nagelkerke R2 for the full model=.17; B [SE B]=-0.001 [0.00]; Wald χ2 [1]=19.70; P<.001), whereas we again found no significant effect for the mere number of edits (Nagelkerke R2 for the full model=.15; B [SE B]=0.000 [0.01]; Wald χ2 [1]=0.01; P=.94). Conclusions Our findings indicate an effect of contributor experience on the quality of health-related Wikipedia articles. However, only the number of previously edited articles was a predictor of the articles’ quality but not the mere volume of edits. More research is needed to disentangle the different aspects of contributor experience. We have discussed the implications of our findings with respect to ensuring the quality of health-related information in collaborative knowledge-building platforms. PMID:29748161
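
    For readers unfamiliar with the analyses named above, here is a hedged sketch (toy data and hypothetical column names, not the study's dataset or code) of a Mann-Whitney U test comparing contributor experience between tagged and non-tagged articles, followed by a logistic regression that predicts tagged status while controlling for the article's edit count.

    ```python
    # Toy reproduction of the style of analysis described above; the data are
    # simulated and the column names are assumptions.
    import numpy as np
    import pandas as pd
    from scipy.stats import mannwhitneyu
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n = 500
    df = pd.DataFrame({
        "tagged": rng.integers(0, 2, n),                  # 1 = quality/neutrality tag
        "mean_articles_edited": rng.gamma(2.0, 50.0, n),  # contributor experience
        "n_edits": rng.poisson(80, n),                    # article edit count
    })

    # Mann-Whitney U test on contributor experience, tagged vs. non-tagged
    tagged = df.loc[df.tagged == 1, "mean_articles_edited"]
    untagged = df.loc[df.tagged == 0, "mean_articles_edited"]
    u_stat, p_val = mannwhitneyu(tagged, untagged, alternative="two-sided")
    print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_val:.3f}")

    # Logistic regression: tagged status ~ contributor experience + edit count
    X = sm.add_constant(df[["mean_articles_edited", "n_edits"]])
    model = sm.Logit(df["tagged"], X).fit(disp=False)
    print(model.summary())
    ```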

  10. The Really Useful Book of Learning & Earning for Young Adult Carers. Third Edition

    ERIC Educational Resources Information Center

    Learning and Work Institute, 2016

    2016-01-01

    "The Really Useful Book of Learning and Earning for Young Adult Carers" is aimed at young adults (aged 16-25) in England who are looking after somebody else. The first edition of the book was printed in 2011. This third edition is full of new and up-to-date useful information about looking after your health and wellbeing, job hunting,…

  11. DNA and RNA editing of retrotransposons accelerate mammalian genome evolution.

    PubMed

    Knisbacher, Binyamin A; Levanon, Erez Y

    2015-04-01

    Genome evolution is commonly viewed as a gradual process that is driven by random mutations that accumulate over time. However, DNA- and RNA-editing enzymes have been identified that can accelerate evolution by actively modifying the genomically encoded information. The apolipoprotein B mRNA editing enzymes, catalytic polypeptide-like (APOBECs), are potent restriction factors that can inhibit retroelements by cytosine-to-uridine editing of retroelement DNA after reverse transcription. In some cases, a retroelement may successfully integrate into the genome despite being hypermutated. Such events introduce unique sequences into the genome and are thus a source of genomic innovation. Adenosine deaminases that act on RNA (ADARs) catalyze adenosine-to-inosine editing in double-stranded RNA, commonly formed by oppositely oriented retroelements. The RNA editing confers plasticity to the transcriptome by generating many transcript variants from a single genomic locus. If the editing produces a beneficial variant, the genome may maintain the locus that produces the RNA-edited transcript for its novel function. Here, we discuss how these two powerful editing mechanisms, which both target inserted retroelements, facilitate expedited genome evolution. © 2015 New York Academy of Sciences.

  12. Genome editing of Ralstonia eutropha using an electroporation-based CRISPR-Cas9 technique.

    PubMed

    Xiong, Bin; Li, Zhongkang; Liu, Li; Zhao, Dongdong; Zhang, Xueli; Bi, Changhao

    2018-01-01

    Ralstonia eutropha is an important bacterium for the study of polyhydroxyalkanoate (PHA) synthesis and CO2 fixation, which makes it a potential strain for industrial PHA production and an attractive host for CO2 conversion. Although the bacterium is not recalcitrant to genetic manipulation, current methods for genome editing based on group II introns or single crossover integration of a suicide plasmid are inefficient and time-consuming, which limits the genetic engineering of this organism. Thus, developing an efficient and convenient method for R. eutropha genome editing is imperative. An efficient genome editing method for R. eutropha was developed using an electroporation-based CRISPR-Cas9 technique. In our study, the electroporation efficiency of R. eutropha was found to be limited by its restriction-modification (RM) systems. By searching for putative RM systems in R. eutropha H16 using the REBASE database and comparing them with those in E. coli MG1655, five putative restriction endonuclease genes related to the RM systems in R. eutropha were predicted and disrupted. It was found that deletion of H16_A0006 and H16_A0008-9 increased the electroporation efficiency 1658-fold and 4-fold, respectively. Fructose was found to reduce the leaky expression of the arabinose-inducible pBAD promoter, which was used to optimize the expression of cas9, enabling genome editing via homologous recombination based on CRISPR-Cas9 in R. eutropha. A total of five genes were edited with efficiencies ranging from 78.3 to 100%. The CRISPR-Cpf1 system and the non-homologous end joining mechanism were also investigated, but failed to yield edited strains. We present the first genome editing method for R. eutropha using an electroporation-based CRISPR-Cas9 approach, which significantly increases the efficiency and decreases the time needed to manipulate this facultative chemolithoautotrophic microbe. The novel technique will facilitate more advanced research on and applications of R. eutropha for PHA production and CO2 conversion.

  13. Assessment of the American Joint Commission on Cancer 8th Edition Staging System for Patients with Pancreatic Neuroendocrine Tumors: A Surveillance, Epidemiology, and End Results analysis.

    PubMed

    Li, Xiaogang; Gou, Shanmiao; Liu, Zhiqiang; Ye, Zeng; Wang, Chunyou

    2018-03-01

    Although several staging systems have been proposed for pancreatic neuroendocrine tumors (pNETs), the optimal staging system remains unclear. Here, we aimed to assess the application of the newly revised 8th edition American Joint Committee on Cancer (AJCC) staging system for exocrine pancreatic carcinoma (EPC) to pNETs, in comparison with that of other staging systems. We identified pNETs patients from the Surveillance, Epidemiology, and End Results (SEER) database (2004-2014). Overall survival was analyzed using Kaplan-Meier curves with the log-rank test. The predictive accuracy of each staging system was assessed by the concordance index (c-index). Cox proportional hazards regression was conducted to calculate the impact of different stages. In total, 2424 patients with pNETs, including 2350 who underwent resection, were identified using SEER data. Patients with different stages were evenly stratified based on the 8th edition AJCC staging system for EPC. Kaplan-Meier curves were well separated in all patients and patients with resection using the 8th edition AJCC staging system for EPC. Moreover, the hazard ratio increased with worsening disease stage. The c-index of the 8th edition AJCC staging system for EPC was similar to that of the other systems. For pNETs patients, the 8th edition AJCC staging system for EPC exhibits good prognostic discrimination among different stages in both all patients and those with resection. © 2018 The Authors. Cancer Medicine published by John Wiley & Sons Ltd.
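
    As an illustration of how staging systems can be ranked by predictive accuracy, here is a hedged sketch (simulated data, hypothetical column names, and the lifelines library as an assumed tool, not the authors' SEER analysis) that computes a concordance index for two staging scores against overall survival.

    ```python
    # Illustrative c-index comparison on simulated survival data; higher stage is
    # assumed to mean shorter survival, so the score is negated before scoring.
    import numpy as np
    import pandas as pd
    from lifelines.utils import concordance_index

    rng = np.random.default_rng(1)
    n = 300
    df = pd.DataFrame({
        "months": rng.exponential(20, n),          # observed survival time
        "death": rng.integers(0, 2, n),            # 1 = death observed, 0 = censored
        "stage_system_A": rng.integers(0, 7, n),   # e.g. a CLIP-like 0-6 score
        "stage_system_B": rng.integers(1, 5, n),   # e.g. a TNM-like stage I-IV
    })

    for system in ["stage_system_A", "stage_system_B"]:
        c = concordance_index(df["months"], -df[system], df["death"])
        print(f"{system}: c-index = {c:.2f}")
    ```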

  14. [Systems, boundaries and resources: the lexicographer Gerhard Wahrig (1923-1978) and the genesis of his project "dictionary as database"].

    PubMed

    Wahrig-Burfeind, Renate; Wahrig, Bettina

    2014-09-01

    Gerhard Wahrig's private archive has recently been retrieved by the authors and their siblings. We undertake a first survey of the unpublished material and concentrate on those aspects of Wahrig's bio-ergography which stand in relation to his life project "dictionary as database", realised shortly before his death. We argue that this project was conceived in the 1950s, while Wahrig was writing and editing dictionaries and encyclopedias for the Bibliographisches Institut in Leipzig. Wahrig, who had been a wireless operator in WWII, was well informed about the development of computers in West Germany. He was influenced both by Ferdinand de Saussure and by the discussion on language and structure in the Soviet Union. When he crossed the German/German border in 1959, he experienced mechanisms of exclusion before he could establish himself in the West as a lexicographer. We argue that the transfer of symbolic and human capital was problematic due to the cultural differences between the two Germanies. In the 1970s, he became a professor of General and Applied Linguistics. The project of a "dictionary as database" was intended both as a basis for extensive empirical research on the semantic structure of natural languages and as a working tool for the average user of the German language. Due to his untimely death, he could not pursue his idea of exploring semantic networks.

  15. Integrated database for rapid mass movements in Norway

    NASA Astrophysics Data System (ADS)

    Jaedicke, C.; Lied, K.; Kronholm, K.

    2009-03-01

    Rapid gravitational slope mass movements include all kinds of short-term relocation of geological material, snow or ice. Traditionally, information about such events is collected separately in different databases covering selected geographical regions and types of movement. In Norway the terrain is susceptible to all types of rapid gravitational slope mass movements, ranging from single rocks hitting roads and houses to large snow avalanches and rock slides where entire mountainsides collapse into fjords, creating flood waves and endangering large areas. In addition, quick clay slides occur in desalinated marine sediments in South Eastern and Mid Norway. For the authorities and inhabitants of endangered areas, the type of threat is of minor importance, and mitigation measures have to consider several types of rapid mass movements simultaneously. An integrated national database for all types of rapid mass movements, built around individual events, has been established. Only three data entries are mandatory: time, location and type of movement. The remaining optional parameters enable recording of detailed information about the terrain, materials involved and damage caused. Pictures, movies and other documentation can be uploaded into the database. A web-based graphical user interface has been developed allowing new events to be entered, as well as editing and querying of all events. An integration of the database into a GIS system is currently under development. Datasets from various national sources, such as the road authorities and the Geological Survey of Norway, were imported into the database. Today, the database contains 33 000 rapid mass movement events from the last five hundred years covering the entire country. A first analysis of the data shows that rock slides and snow avalanches are the most frequently recorded types of rapid mass movement, followed by debris slides in third place. Most events are recorded in the steep fjord terrain of the Norwegian west coast, but major events are recorded all over the country. Snow avalanches account for most fatalities, while large rock slides causing flood waves and huge quick clay slides are the most damaging individual events in terms of damage to infrastructure and property and for causing multiple fatalities. The quality of the data is strongly influenced by the personal engagement of local observers and varying observation routines. This database is a unique source for statistical analysis, including risk analysis and the relation between rapid mass movements and climate. The database of rapid mass movement events will also facilitate validation of national hazard and risk maps.
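
    As a rough illustration of the event-centred data model described above (field names below are assumptions, not the Norwegian database schema), a record can be built around the three mandatory entries, time, location and type of movement, with everything else optional.

    ```python
    # Hedged sketch of an event record with three mandatory fields and optional
    # detail parameters; the field names are illustrative only.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List, Optional

    @dataclass
    class RapidMassMovementEvent:
        time: datetime                       # mandatory
        latitude: float                      # mandatory (location)
        longitude: float                     # mandatory (location)
        movement_type: str                   # mandatory: "snow avalanche", "rock slide", ...
        volume_m3: Optional[float] = None    # optional detail parameters
        fatalities: Optional[int] = None
        damage_description: Optional[str] = None
        attachments: List[str] = field(default_factory=list)  # pictures, movies, documents

    event = RapidMassMovementEvent(
        time=datetime(2008, 3, 14, 6, 30),
        latitude=62.10,
        longitude=7.15,
        movement_type="snow avalanche",
    )
    print(event)
    ```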

  16. National Geochronological Database

    USGS Publications Warehouse

    Revised by Sloan, Jan; Henry, Christopher D.; Hopkins, Melanie; Ludington, Steve; Original database by Zartman, Robert E.; Bush, Charles A.; Abston, Carl

    2003-01-01

    The National Geochronological Data Base (NGDB) was established by the United States Geological Survey (USGS) to collect and organize published isotopic (also known as radiometric) ages of rocks in the United States. The NGDB (originally known as the Radioactive Age Data Base, RADB) was started in 1974. A committee appointed by the Director of the USGS was given the mission to investigate the feasibility of compiling the published radiometric ages for the United States into a computerized data bank for ready access by the user community. A successful pilot program, which was conducted in 1975 and 1976 for the State of Wyoming, led to a decision to proceed with the compilation of the entire United States. For each dated rock sample reported in published literature, a record containing information on sample location, rock description, analytical data, age, interpretation, and literature citation was constructed and included in the NGDB. The NGDB was originally constructed and maintained on a mainframe computer, and later converted to a Helix Express relational database maintained on an Apple Macintosh desktop computer. The NGDB and a program to search the data files were published and distributed on Compact Disc-Read Only Memory (CD-ROM) in standard ISO 9660 format as USGS Digital Data Series DDS-14 (Zartman and others, 1995). As of May 1994, the NGDB consisted of more than 18,000 records containing over 30,000 individual ages, which is believed to represent approximately one-half the number of ages published for the United States through 1991. Because the organizational unit responsible for maintaining the database was abolished in 1996, and because we wanted to provide the data in more usable formats, we have reformatted the data, checked and edited the information in some records, and provided this online version of the NGDB. This report describes the changes made to the data and formats, and provides instructions for the use of the database in geographic information system (GIS) applications. The data are provided in .mdb (Microsoft Access), .xls (Microsoft Excel), and .txt (tab-separated value) formats. We also provide a single non-relational file that contains a subset of the data for ease of use.
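
    As a hedged example of pulling the released data into a GIS workflow, the sketch below loads a tab-separated NGDB export into a GeoDataFrame and writes a shapefile; the file name, the latitude/longitude and age column names, and the age threshold are assumptions rather than the actual field names of the release.

    ```python
    # Hypothetical loading of the tab-separated NGDB export for GIS use.
    import pandas as pd
    import geopandas as gpd

    ngdb = pd.read_csv("ngdb.txt", sep="\t")              # tab-separated export
    gdf = gpd.GeoDataFrame(
        ngdb,
        geometry=gpd.points_from_xy(ngdb["LONGITUDE"], ngdb["LATITUDE"]),
        crs="EPSG:4326",                                  # assumed geographic CRS
    )
    # Example query: all dated samples older than 1000 Ma (column name assumed)
    old = gdf[gdf["AGE_MA"] > 1000]
    old.to_file("ngdb_precambrian.shp")                   # shapefile for GIS software
    ```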

  17. Structure and data consistency of a GIS database for geological risk analysis in S. Miguel Island (Azores)

    NASA Astrophysics Data System (ADS)

    Queiroz, G.; Goulart, C.; Gaspar, J. L.; Gomes, A.; Resendes, J. P.; Marques, R.; Gonçalves, P.; Silveira, D.; Valadão, P.

    2003-04-01

    Geographic Information Systems (GIS) are becoming a major tool in the domain of geological hazard assessment and risk mitigation. When available, hazard and vulnerability data can easily be represented in a GIS, and a great diversity of risk maps can be produced following the implementation of specific predictive models. A major difficulty for those who deal with GIS is obtaining high-quality, well geo-referenced and validated data. This situation is particularly evident in the scope of risk analysis due to the diversity of data that need to be considered. In order to develop a coherent database for the geological risk analysis of the Azores archipelago, it was decided to use the digital maps edited in 2001 by the Instituto Geográfico do Exército de Portugal (scale 1:25000), comprising altimetry, urban areas, roads and the stream network. For the particular case of S. Miguel Island, the information contained in these layers was revised and rectifications were made whenever needed. Moreover, basic additional layers were added to the system, including county and parish administrative limits, agriculture and forested areas. For detailed studies all the edifices (e.g. houses, public buildings, monuments) are being individualized and characterized taking into account several parameters that can be crucial for assessing their direct vulnerability to geological hazards (e.g. type of construction, number of floors, roof stability). Geological data obtained (1) through the interpretation of historical documents, (2) during recent fieldwork campaigns (e.g. mapping of volcanic centres and associated deposits, faults, dikes, soil degassing anomalies, landslides) and (3) by the existing monitoring networks (e.g. seismic, geodetic, fluid geochemistry) are also being digitised. The acquisition, storage and maintenance of all this information following the same quality criteria are critical to guarantee the accuracy and consistency of the GIS database through time. In this work we describe the GIS-based methodologies used to ensure the development of a GIS database for geological risk analysis on S. Miguel Island. In a long-term programme the same strategy is being extended to the other Azorean islands.

  18. Sequence History Update Tool

    NASA Technical Reports Server (NTRS)

    Khanampompan, Teerapat; Gladden, Roy; Fisher, Forest; DelGuercio, Chris

    2008-01-01

    The Sequence History Update Tool performs Web-based sequence statistics archiving for Mars Reconnaissance Orbiter (MRO). Using a single UNIX command, the software takes advantage of sequencing conventions to automatically extract the needed statistics from multiple files. This information is then used to populate a PHP database, which is seamlessly formatted into a dynamic Web page. This tool replaces a previous, tedious and error-prone process of manually editing HTML code to construct a Web-based table. Because the tool manages all of the statistics gathering and file delivery to and from multiple data sources spread across multiple servers, there are also considerable time and effort savings. With the Sequence History Update Tool, what previously took minutes is now done in less than 30 seconds, and the tool provides a more accurate archival record of the sequence commanding for MRO.

  19. [Bibliometric indexes of the editions, publishing the articles on the problems of morphology, and some bibliometric parameters of the authors of morphological publications].

    PubMed

    Shevliuk, N N

    2013-01-01

    The article presents a comparative assessment of some bibliometric parameters of national journals that publish articles on the problems of the morphological scientific disciplines, together with a concise analysis of the publication activity of morphologists. Data are given on the application of bibliometric indexes for evaluating the scientific contribution of national researchers to the field of morphology. The analysis was based on information contained in the national database (the Russian Index of Scientific Citation) and on information collected by means of a selective overview of national and foreign medical and biological journals that have published articles on various problems of the morphological sciences during the last 20 years. It is noted that authors should consider the bibliometric indexes of the journals to which they submit their articles.

  20. A postmortem and future look at the personality disorders in DSM-5.

    PubMed

    Widiger, Thomas A

    2013-10-01

    It might seem difficult to describe the outcome of the proposals by the American Psychiatric Association's (APA) Personality and Personality Disorders Work Group (PPDWG) as a success, given that all of the proposals were ultimately rejected. Nevertheless, one can interpret the result as a step forward, because the final outcome might not have been much different if a more conservative approach had been adopted at the outset. The PPDWG did make a significant contribution to the field through the provision of proposals that will likely generate a considerable body of informative research. The Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) effort and suggestions for the future are discussed with respect to magnitude of change, documentation of empirical support, and addressing opposition. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  1. Chado Controller: advanced annotation management with a community annotation system

    PubMed Central

    Guignon, Valentin; Droc, Gaëtan; Alaux, Michael; Baurens, Franc-Christophe; Garsmeur, Olivier; Poiron, Claire; Carver, Tim; Rouard, Mathieu; Bocs, Stéphanie

    2012-01-01

    Summary: We developed a controller that is compliant with the Chado database schema, GBrowse and genome annotation-editing tools such as Artemis and Apollo. It enables the management of public and private data, monitors manual annotation (with controlled vocabularies, structural and functional annotation controls) and stores versions of annotation for all modified features. The Chado controller uses PostgreSQL and Perl. Availability: The Chado Controller package is available for download at http://www.gnpannot.org/content/chado-controller and runs on any Unix-like operating system; documentation is available at http://www.gnpannot.org/content/chado-controller-doc. The system can be tested using the GNPAnnot Sandbox at http://www.gnpannot.org/content/gnpannot-sandbox-form. Contact: valentin.guignon@cirad.fr; stephanie.sidibe-bocs@cirad.fr. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22285827

  2. Education-stratified base-rate information on discrepancy scores within and between the Wechsler Adult Intelligence Scale--Third Edition and the Wechsler Memory Scale--Third Edition.

    PubMed

    Dori, Galit A; Chelune, Gordon J

    2004-06-01

    The Wechsler Adult Intelligence Scale--Third Edition (WAIS-III; D. Wechsler, 1997a) and the Wechsler Memory Scale--Third Edition (WMS-III; D. Wechsler, 1997b) are 2 of the most frequently used measures in psychology and neuropsychology. To facilitate the diagnostic use of these measures in the clinical decision-making process, this article provides information on education-stratified, directional prevalence rates (i.e., base rates) of discrepancy scores between the major index scores for the WAIS-III, the WMS-III, and between the WAIS-III and WMS-III. To illustrate how such base-rate data can be clinically used, this article reviews the relative risk (i.e., odds ratio) of empirically defined "rare" cognitive deficits in 2 of the clinical samples presented in the WAIS-III--WMS-III Technical Manual (The Psychological Corporation, 1997). ((c) 2004 APA, all rights reserved)
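
    As a worked illustration of the relative-risk idea mentioned above, the arithmetic below computes an odds ratio from a hypothetical 2x2 table of "rare" discrepancy scores in a clinical versus a standardization sample; all counts are invented for illustration only.

    ```python
    # Hypothetical 2x2 table: rows are samples, columns are presence/absence of
    # an empirically "rare" WAIS-III/WMS-III discrepancy score.
    a, b = 18, 42    # clinical sample: rare discrepancy present / absent
    c, d = 5, 95     # standardization sample: rare discrepancy present / absent

    odds_ratio = (a * d) / (b * c)     # (18 * 95) / (42 * 5) = 8.14...
    print(f"odds ratio = {odds_ratio:.2f}")
    # An odds ratio well above 1 indicates the rare discrepancy occurs far more
    # often in the clinical group than in the standardization sample.
    ```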

  3. The LHEA PDP 11/70 graphics processing facility users guide

    NASA Technical Reports Server (NTRS)

    1978-01-01

    A compilation of the necessary and useful information that allows the inexperienced user to program on the PDP 11/70. Information regarding the use of editing and file manipulation utilities, as well as operational procedures, is included. The inexperienced user is taken through the process of creating, editing, compiling, task building and debugging his/her FORTRAN program. Documentation on additional software is also included.

  4. Welcome to the Land of the Navajo. A Book of Information about the Navajo Indians. Third Edition, 1972.

    ERIC Educational Resources Information Center

    Correll, J. Lee, Ed.; Watson, Editha L., Ed.

    Compiled and edited by the Museum and Research Department of the Navajo Tribe in 1972, the text provides information about the Navajo Indians and their vast reservation. Major areas covered include Navajo history and customs, religion, arts and crafts, Navajo tribal government and programs, Navajoland and places to go, 7 wonders of the Navajo…

  5. HIV Molecular Immunology 2014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yusim, Karina; Korber, Bette Tina Marie; Barouch, Dan

    HIV Molecular Immunology is a companion volume to HIV Sequence Compendium. This publication, the 2014 edition, is the PDF version of the web-based HIV Immunology Database (http://www.hiv.lanl.gov/content/immunology/). The web interface for this relational database has many search options, as well as interactive tools to help immunologists design reagents and interpret their results. In the HIV Immunology Database, HIV-specific B-cell and T-cell responses are summarized and annotated. Immunological responses are divided into three parts: CTL, T helper, and antibody. Within these parts, defined epitopes are organized by protein and binding sites within each protein, moving from left to right through the coding regions spanning the HIV genome. We include human responses to natural HIV infections, as well as vaccine studies in a range of animal models and human trials. Responses that are not specifically defined, such as responses to whole proteins or monoclonal antibody responses to discontinuous epitopes, are summarized at the end of each protein section. Studies describing general HIV responses to the virus, but not to any specific protein, are included at the end of each part. The annotation includes information such as crossreactivity, escape mutations, antibody sequence, TCR usage, functional domains that overlap with an epitope, immune response associations with rates of progression and therapy, and how specific epitopes were experimentally defined. Basic information such as HLA specificities for T-cell epitopes, isotypes of monoclonal antibodies, and epitope sequences are included whenever possible. All studies that we can find that incorporate the use of a specific monoclonal antibody are included in the entry for that antibody. A single T-cell epitope can have multiple entries, generally one entry per study. Finally, maps of all defined linear epitopes relative to the HXB2 reference proteins are provided.

  6. An offline-online Web-GIS Android application for fast data acquisition of landslide hazard and risk

    NASA Astrophysics Data System (ADS)

    Olyazadeh, Roya; Sudmeier-Rieux, Karen; Jaboyedoff, Michel; Derron, Marc-Henri; Devkota, Sanjaya

    2017-04-01

    Regional landslide assessments and mapping have been effectively pursued by research institutions, national and local governments, non-governmental organizations (NGOs), and different stakeholders for some time, and a wide range of methodologies and technologies have consequently been proposed. Land-use mapping and hazard event inventories are mostly created from remote-sensing data and are subject to difficulties, such as accessibility and terrain, that need to be overcome. Likewise, landslide data acquisition during field navigation can improve the accuracy of databases and analyses. Open-source Web and mobile GIS tools can be used for improved ground-truthing of critical areas to improve the analysis of hazard patterns and triggering factors. This paper reviews the implementation and selected results of a secure mobile-map application called ROOMA (Rapid Offline-Online Mapping Application) for the rapid data collection of landslide hazard and risk. This prototype assists in the quick creation of landslide inventory maps (LIMs) by collecting information on the type, feature, volume, date, and patterns of landslides using open-source Web-GIS technologies such as Leaflet maps, Cordova, GeoServer, PostgreSQL as the DBMS (database management system), and PostGIS as its plug-in for spatial database management. This application comprises Leaflet maps coupled with satellite images as a base layer, drawing tools, geolocation (using GPS and the Internet), photo mapping, and event clustering. All the features and information are recorded into a GeoJSON text file in the offline version (Android) and subsequently uploaded to the online mode (accessible from any browser) once an Internet connection is available. Finally, the events can be accessed and edited after approval by an administrator and then be visualized by the general public.
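
    As a hedged sketch of the offline-to-online flow described above (not ROOMA's actual code; the table name, column names and connection details are assumptions), a landslide event recorded as a GeoJSON feature on the device could be pushed into a PostGIS-enabled PostgreSQL table once connectivity returns.

    ```python
    # Illustrative only: one GeoJSON landslide feature inserted into PostGIS.
    import json
    import psycopg2

    feature = {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [85.3240, 27.7172]},
        "properties": {"type": "debris slide", "volume_m3": 1200,
                       "date": "2016-07-28", "photo": "slide_042.jpg"},
    }

    conn = psycopg2.connect("dbname=rooma user=editor")   # connection details assumed
    with conn, conn.cursor() as cur:
        cur.execute(
            """
            INSERT INTO landslide_events (properties, geom)
            VALUES (%s, ST_SetSRID(ST_GeomFromGeoJSON(%s), 4326))
            """,
            (json.dumps(feature["properties"]), json.dumps(feature["geometry"])),
        )
    ```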

  7. A data model and database for high-resolution pathology analytical image informatics.

    PubMed

    Wang, Fusheng; Kong, Jun; Cooper, Lee; Pan, Tony; Kurc, Tahsin; Chen, Wenjin; Sharma, Ashish; Niedermayr, Cristobal; Oh, Tae W; Brat, Daniel; Farris, Alton B; Foran, David J; Saltz, Joel

    2011-01-01

    The systematic analysis of imaged pathology specimens often results in a vast amount of morphological information at both the cellular and sub-cellular scales. While microscopy scanners and computerized analysis are capable of capturing and analyzing data rapidly, microscopy image data remain underutilized in research and clinical settings. One major obstacle that tends to limit wider adoption of these new technologies throughout the clinical and scientific communities is the challenge of managing, querying, and integrating the vast amounts of data resulting from the analysis of large digital pathology datasets. This paper presents a data model, which addresses these challenges, and demonstrates its implementation in a relational database system. This paper describes a data model, referred to as Pathology Analytic Imaging Standards (PAIS), and a database implementation, which are designed to support the data management and query requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines on whole-slide images and tissue microarrays (TMAs). (1) Development of a data model capable of efficiently representing and storing virtual slide related image, annotation, markup, and feature information. (2) Development of a database, based on the data model, capable of supporting queries for data retrieval based on analysis and image metadata, queries for comparison of results from different analyses, and spatial queries on segmented regions, features, and classified objects. The work described in this paper is motivated by the challenges associated with characterization of micro-scale features for comparative and correlative analyses involving whole-slide tissue images and TMAs. Technologies for digitizing tissues have advanced significantly in the past decade. Slide scanners are capable of producing high-magnification, high-resolution images from whole slides and TMAs within several minutes. Hence, it is becoming increasingly feasible for basic, clinical, and translational research studies to produce thousands of whole-slide images. Systematic analysis of these large datasets requires efficient data management support for representing and indexing results from hundreds of interrelated analyses generating very large volumes of quantifications such as shape and texture and of classifications of the quantified features. We have designed a data model and a database to address the data management requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines. The data model represents virtual slide related image, annotation, markup and feature information. The database supports a wide range of metadata and spatial queries on images, annotations, markups, and features. We currently have three databases running on a Dell PowerEdge T410 server with CentOS 5.5 Linux operating system. The database server is IBM DB2 Enterprise Edition 9.7.2. The set of databases consists of 1) a TMA database containing image analysis results from 4740 cases of breast cancer, with 641 MB storage size; 2) an algorithm validation database, which stores markups and annotations from two segmentation algorithms and two parameter sets on 18 selected slides, with 66 GB storage size; and 3) an in silico brain tumor study database comprising results from 307 TCGA slides, with 365 GB storage size. The latter two databases also contain human-generated annotations and markups for regions and nuclei. Modeling and managing pathology image analysis results in a database provides immediate benefits for the value and usability of data in a research study. The database provides powerful query capabilities, which are otherwise difficult or cumbersome to support by other approaches such as programming languages. Standardized, semantically annotated data representations and interfaces also make it possible to share image data and analysis results more efficiently.
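
    To illustrate the kind of spatial query such a database supports, the sketch below shows a generic parameterized query that retrieves segmented nuclei whose markup polygons fall inside a rectangular region of one whole-slide image; the table, column and spatial-function names are hypothetical and do not reflect the actual PAIS/DB2 schema.

    ```python
    # Generic spatial SQL of the kind described above; schema names are invented.
    REGION_QUERY = """
    SELECT m.markup_id, m.area, m.mean_intensity
    FROM   markup_polygon AS m
    JOIN   image          AS i ON i.image_id = m.image_id
    WHERE  i.slide_barcode = ?
      AND  ST_Within(m.geom, ST_MakeEnvelope(?, ?, ?, ?))  -- x_min, y_min, x_max, y_max
    """

    def fetch_nuclei(cursor, slide_barcode, bbox):
        """Run the region query through a DB-API 2.0 cursor (qmark paramstyle)."""
        cursor.execute(REGION_QUERY, (slide_barcode, *bbox))
        return cursor.fetchall()
    ```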

  8. Marfan Database (second edition): software and database for the analysis of mutations in the human FBN1 gene.

    PubMed Central

    Collod-Béroud, G; Béroud, C; Adès, L; Black, C; Boxer, M; Brock, D J; Godfrey, M; Hayward, C; Karttunen, L; Milewicz, D; Peltonen, L; Richards, R I; Wang, M; Junien, C; Boileau, C

    1997-01-01

    Fibrillin is the major component of extracellular microfibrils. Mutations in the fibrillin gene on chromosome 15 (FBN1) were first described in the heritable connective tissue disorder, Marfan syndrome (MFS). More recently, FBN1 has also been shown to harbor mutations related to a spectrum of conditions phenotypically related to MFS. These mutations are private, essentially missense, generally non-recurrent and widely distributed throughout the gene. To date no clear genotype/phenotype relationship has been observed except for the localization of neonatal mutations in a cluster between exons 24 and 32. The second version of the computerized Marfan database contains 89 entries. The software has been modified to accommodate new functions and routines. PMID:9016526

  9. CERES ERBE-like Level 3 Edition 3

    Atmospheric Science Data Center

    2013-07-10

    ... calibration information collected up to this point. The primary goal of this edition is to provide the most accurate and consistent ... for ground to flight beginning-of-mission spectral response function and radiometric gains calibration coefficients. Establishment of a ...

  10. Comparing NetCDF and SciDB on managing and querying 5D hydrologic dataset

    NASA Astrophysics Data System (ADS)

    Liu, Haicheng; Xiao, Xiao

    2016-11-01

    Efficiently extracting information from high-dimensional hydro-meteorological modelling datasets requires smart solutions. Traditional methods are mostly file based; files can be edited and accessed conveniently but suffer from poor efficiency due to their contiguous storage structure. Databases have been proposed as an alternative because of advantages such as native functionality for manipulating multidimensional (MD) arrays, smart caching strategies, and scalability. In this research, NetCDF file-based solutions and the multidimensional array database management system (DBMS) SciDB, which applies a chunked storage structure, are benchmarked to determine the best solution for storing and querying a large 5D hydrologic modelling dataset. The effect of data storage configurations, including chunk size, dimension order and compression, on query performance is explored. Results indicate that the dimension order used to organize storage of 5D data has a significant influence on query performance if the chunk size is very large, but the effect becomes insignificant when the chunk size is properly set. SciDB compression mostly has a negative influence on query performance. Caching is an advantage but may be affected by the execution of different query processes. On the whole, the NetCDF solution without compression is in general more efficient than the SciDB DBMS.
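
    The influence of chunk shape can be illustrated with a small, self-contained experiment; the sketch below (toy sizes and hypothetical dimension names, not the paper's benchmark) writes the same 5D NetCDF-4 variable with two chunk layouts and times a fixed-location time-series read against each.

    ```python
    # Toy chunking experiment with netCDF4; sizes and names are illustrative.
    import time
    import numpy as np
    from netCDF4 import Dataset

    DIMS = {"run": 2, "scenario": 3, "time": 365, "y": 50, "x": 50}

    def build(path, chunks):
        """Write one 5D float variable with the given chunk shape."""
        with Dataset(path, "w", format="NETCDF4") as nc:
            for name, size in DIMS.items():
                nc.createDimension(name, size)
            var = nc.createVariable("discharge", "f4", tuple(DIMS), chunksizes=chunks)
            var[:] = np.random.rand(*DIMS.values()).astype("f4")

    def time_query(path):
        """Time reading the full time series at one (run, scenario, y, x) cell."""
        t0 = time.perf_counter()
        with Dataset(path) as nc:
            _ = nc.variables["discharge"][0, 0, :, 30, 30]
        return time.perf_counter() - t0

    build("chunk_time.nc", (1, 1, 365, 1, 1))    # chunks aligned with the query
    build("chunk_space.nc", (1, 1, 1, 50, 50))   # chunks aligned with spatial slices
    print("time-aligned chunks:  %.4f s" % time_query("chunk_time.nc"))
    print("space-aligned chunks: %.4f s" % time_query("chunk_space.nc"))
    ```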

  11. An Evolutionary Landscape of A-to-I RNA Editome across Metazoan Species

    PubMed Central

    Hung, Li-Yuan; Chen, Yen-Ju; Mai, Te-Lun; Chen, Chia-Ying; Yang, Min-Yu; Chiang, Tai-Wei; Wang, Yi-Da

    2018-01-01

    Adenosine-to-inosine (A-to-I) editing is widespread across the kingdom Metazoa. However, owing to the lack of comprehensive analyses in nonmodel animals, the evolutionary history of A-to-I editing remains largely unexplored. Here, we detect high-confidence editing sites using clustering and conservation strategies based on RNA sequencing data alone, without using single-nucleotide polymorphism information or genome sequencing data from the same sample. We thereby unveil the first evolutionary landscape of A-to-I editing maps across 20 metazoan species (from worm to human), providing unprecedented evidence on how the editing mechanism gradually expands its territory and increases its influence over the course of evolution. Our results revealed that highly clustered and conserved editing sites tended to have a higher editing level and a stronger ADAR motif. The ratio of the frequency of nonsynonymous editing to that of synonymous editing increased markedly with the conservation level of A-to-I editing. These results thus suggest a potential functional benefit of highly clustered and conserved editing sites. In addition, spatiotemporal dynamics analyses reveal a conserved enrichment of editing and ADAR expression in the central nervous system throughout more than 300 Myr of divergent evolution in complex animals, and the comparability of editing patterns between invertebrates and between vertebrates during development. This study provides evolutionary and dynamic aspects of the A-to-I editome across metazoan species, expanding this important but understudied class of nongenomically encoded events for comprehensive characterization. PMID:29294013
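
    As a minimal illustration of the per-site editing level referred to above (a standard definition, not the authors' pipeline), the editing level at an A-to-I candidate site can be computed as the fraction of RNA-seq reads carrying G among reads carrying A or G.

    ```python
    # Toy editing-level calculation at one candidate A-to-I (read as A-to-G) site.
    def editing_level(base_counts):
        """base_counts: dict mapping base -> read count at one candidate site."""
        a, g = base_counts.get("A", 0), base_counts.get("G", 0)
        return g / (a + g) if (a + g) else 0.0

    print(editing_level({"A": 37, "G": 13, "C": 0, "T": 1}))   # 13 / 50 = 0.26
    ```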

  12. Science and Technology Text Mining: Origins of Database Tomography and Multi-Word Phrase Clustering

    DTIC Science & Technology

    2003-08-15

    six decades to the pioneering work in: 1) lexicography of Hornby [1942] to account for co-occurrence knowledge, and 2) linguistics of De Saussure ...of Development in a Research Field," Scientometrics, Vol.19, No.1, 1990b. De Saussure, F., "Cours de Linguistique Generale," 4eme Edition, Librairie

  13. Rules for Merging MELVYL Records. Technical Report No. 6. Revised.

    ERIC Educational Resources Information Center

    Coyle, Karen

    The University of California Catalog and Periodicals databases each have over 20 separately contributing libraries, and records for the same work can enter the MELVYL system from different campus libraries. MELVYL's goal is to have one union record for each distinct edition of a work. To promote this goal, the University's Division of Library…

  14. Adverse drug reactions and adverse events of 33 varieties of traditional Chinese medicine injections on National Essential medicines List (2004 edition) of China: an overview on published literatures.

    PubMed

    Wang, Li; Yuan, Qiang; Marshall, Gareth; Cui, Xiaohua; Cheng, Lan; Li, Yuanyuan; Shang, Hongcai; Zhang, Boli; Li, Youping

    2010-05-01

    We conducted a literature review on adverse drug reactions (ADRs) related to 33 kinds of traditional Chinese medicine injections (CMIs) on China's National Essential medicines List (2004 edition). We aimed to retrieve basic ADR information, identify trends related to CMIs, and provide evidence for the research, development, and application of CMIs. We electronically searched the Chinese Biomedical Literature Database (CBM, January 1978-April 2009), the China National Knowledge Infrastructure Database (CNKI, January 1979-April 2009), the Chinese Science and Technology Periodical Database (January 1989-April 2009) and the Traditional Chinese Medicine Database (January 1984-April 2009). We searched using the terms 'adverse drug reaction', 'adverse event', 'side effects', 'side reaction', 'toxicity', and 'Chinese medicine injections', as well as the names of the 33 CMIs. We also collected CMI-related ADR reports and regulations from the Chinese Food and Drug Administration's 'Newsletter of Adverse Drug Reactions' (Issue 1 to 22). Then we descriptively analyzed all the articles by year published, periodical, and study design. We also analyzed regulations relevant to ADRs. (1) We found 5405 relevant citations, of which 1010 studies met the eligibility criteria. (2) The rate of publication of research articles on CMI-linked ADRs has risen over time. (3) The included 1010 articles were scattered among 297 periodicals. Of these, 55 journals on pharmaceutical medicine accounted for 39.5% of the total (399/1010); the 64 journals on traditional Chinese medicine accounted for only 19.5% (197/1010). Only 22 periodicals with relevant articles were included in the Beijing University core journals list (2008 edition); these published 129 articles (12.8% of the included articles). (4) The relevant articles consisted of 348 case reports (34.5%), 254 case series (25.2%), 119 reviews (11.8%), 116 randomized controlled trials (11.5%), 78 cross-sectional studies (7.7%), 61 literature analyses of ADR (6.0%), and 28 non-randomized controlled clinical studies (2.8%). (5) Three journals, Adverse Drug Reactions Journal, China Medical Herald, and Chinese Pharmaceuticals, together published 12.3% of the included literature. (6) The most commonly-reported CMI-related ADRs were to Shuanghuanglian, Qingkailing, and Yuxingcao injections, each of which had ADRs mentioned in more than 200 articles. Four of the five CMIs with the most ADR reports (Shuanghuanglian, Ciwujia, Yuxingcao, and Yinzhihuang injections) had been suspended from use or sale on the market. (1) Articles published on CMI-related ADRs increased over time, but overall the research is of low quality and is scattered through a large number of sources. (2) Four CMIs (Shuanghuanglian, Ciwujia, Yuxingcao, and Yinzhihuang injections) had been suspended for clinical use or sale. (3) There is an urgent need for a clear standard to grade ADRs of CMIs in order to better manage risk. (4) It is necessary to continually re-evaluate the safety of CMIs and to promote rational use of CMIs. © 2010 Blackwell Publishing Asia Pty Ltd and Chinese Cochrane Center, West China Hospital of Sichuan University.

  15. Geologic map of the eastern part of the Challis National Forest and vicinity, Idaho

    USGS Publications Warehouse

    Wilson, A.B.; Skipp, B.A.

    1994-01-01

    The paper version of the Geologic Map of the eastern part of the Challis National Forest and vicinity, Idaho was compiled by Anna Wilson and Betty Skipp in 1994. The geology was compiled on a 1:250,000 scale topographic base map. TechniGraphic System, Inc., of Fort Collins, Colorado, digitized this map under contract for N. Shock. G. Green edited and prepared the digital version for publication as a GIS database. The digital geologic map database can be queried in many ways to produce a variety of geologic maps.

  16. GROWTH OF THE INTERNATIONAL CRITICALITY SAFETY AND REACTOR PHYSICS EXPERIMENT EVALUATION PROJECTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. Blair Briggs; John D. Bess; Jim Gulliford

    2011-09-01

    Since the International Conference on Nuclear Criticality Safety (ICNC) 2007, the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) have continued to expand their efforts and broaden their scope. Eighteen countries participated in the ICSBEP in 2007. Now, there are 20, with recent contributions from Sweden and Argentina. The IRPhEP has also expanded from eight contributing countries in 2007 to 16 in 2011. Since ICNC 2007, the contents of the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments' [1] have increased from 442 evaluations (38000 pages), containing benchmark specifications for 3955 critical or subcritical configurations, to 516 evaluations (nearly 55000 pages), containing benchmark specifications for 4405 critical or subcritical configurations in the 2010 Edition of the ICSBEP Handbook. The contents of the Handbook have also increased from 21 to 24 criticality-alarm-placement/shielding configurations with multiple dose points for each, and from 20 to 200 configurations categorized as fundamental physics measurements relevant to criticality safety applications. Approximately 25 new evaluations and 150 additional configurations are expected to be added to the 2011 edition of the Handbook. Since ICNC 2007, the contents of the 'International Handbook of Evaluated Reactor Physics Benchmark Experiments' [2] have increased from 16 different experimental series that were performed at 12 different reactor facilities to 53 experimental series that were performed at 30 different reactor facilities in the 2011 edition of the Handbook. Considerable effort has also been made to improve the functionality of the searchable database, DICE (Database for the International Criticality Benchmark Evaluation Project), and to verify the accuracy of the data contained therein. DICE will be discussed in separate papers at ICNC 2011. The status of the ICSBEP and the IRPhEP will be discussed in the full paper, selected benchmarks that have been added to the ICSBEP Handbook will be highlighted, and a preview of the new benchmarks that will appear in the September 2011 edition of the Handbook will be provided. Accomplishments of the IRPhEP will also be highlighted and the future of both projects will be discussed. REFERENCES (1) International Handbook of Evaluated Criticality Safety Benchmark Experiments, NEA/NSC/DOC(95)03/I-IX, Organisation for Economic Co-operation and Development-Nuclear Energy Agency (OECD-NEA), September 2010 Edition, ISBN 978-92-64-99140-8. (2) International Handbook of Evaluated Reactor Physics Benchmark Experiments, NEA/NSC/DOC(2006)1, Organisation for Economic Co-operation and Development-Nuclear Energy Agency (OECD-NEA), March 2011 Edition, ISBN 978-92-64-99141-5.

  17. RNA editing site recognition in heterologous plant mitochondria.

    PubMed

    Choury, David; Araya, Alejandro

    2006-12-01

    RNA editing is a process that modifies the information content of mitochondrial messenger RNAs in flowering plants changing specific cytosine residues into uridine. To gain insight into editing site recognition, we used electroporation to introduce engineered wheat (Triticum aestivum) or potato (Solanum tuberosum) mitochondrial cox2 genes, and an atp9-containing chimeric gene, into non-cognate mitochondria, and observed the efficiency of editing in these contexts. Both wheat and potato mitochondria were able to express "foreign" constructs, and their products were properly spliced. Seventeen and twelve editing sites are present in the coding regions of wheat and potato cox2 transcripts, respectively. Eight are common to both plants, whereas nine are specific to wheat, and four to potato. An analogous situation is found for the atp9 mRNA coding regions from these species. We found that both mitochondria were able to recognize sites that are already present as T at the genomic level, making RNA editing unnecessary for that specific residue in the cognate organelle. Our results demonstrate that non-cognate mitochondria are able to edit residues that are not edited in their own transcripts, and support the hypothesis that the same trans-acting factor may recognize several editing sites.

  18. Primordial germ cell-mediated transgenesis and genome editing in birds.

    PubMed

    Han, Jae Yong; Park, Young Hyun

    2018-01-01

    Transgenesis and genome editing in birds are based on a unique germline transmission system using primordial germ cells (PGCs), which is quite different from the mammalian transgenic and genome editing system. PGCs are progenitor cells of gametes that can deliver genetic information to the next generation. Since avian PGCs were first discovered in the nineteenth century, there have been numerous efforts to reveal their origin, specification, and unique migration pattern, and to improve germline transmission efficiency. Recent advances in the isolation and in vitro culture of avian PGCs with genetic manipulation and genome editing tools enable the development of valuable avian models that were unavailable before. However, many challenges remain in the production of transgenic and genome-edited birds, including the precise control of germline transmission, introduction of exogenous genes, and genome editing in PGCs. Therefore, establishing reliable germline-competent PGCs and applying precise genome editing systems are critical current issues in the production of avian models. Here, we introduce a historical overview of avian PGCs and their application, including improved techniques and methodologies in the production of transgenic and genome-edited birds, and we discuss the future potential applications of transgenic and genome-edited birds to provide opportunities and benefits for humans.

  19. ES4 Terra+Aqua Ed4

    Atmospheric Science Data Center

    2018-06-26

    ... Detailed CERES ERBElike Level 3 (ES-4/ES-9) Product Information Collection Guide:  ES4 CG R1V1  (PDF) Data ... Edition2 for TRMM; and Edition1 for NPP) are approved for science publications.  SCAR-B Block:  ...

  20. ES4 Aqua-Xtrk Ed4

    Atmospheric Science Data Center

    2018-06-27

    ... Detailed CERES ERBElike Level 3 (ES-4/ES-9) Product Information Collection Guide:  ES4 CG R1V1  (PDF) Data ... Edition2 for TRMM; and Edition1 for NPP) are approved for science publications.  SCAR-B Block:  ...

  1. ES4 Terra-Xtrk Ed4

    Atmospheric Science Data Center

    2018-06-27

    ... Detailed CERES ERBElike Level 3 (ES-4/ES-9) Product Information Collection Guide:  ES4 CG R1V1  (PDF) Data ... Edition2 for TRMM; and Edition1 for NPP) are approved for science publications.  SCAR-B Block:  ...

  2. ES9 Terra-Xtrk Ed4

    Atmospheric Science Data Center

    2018-06-27

    ... Detailed CERES ERBElike Level 3 (ES-4/ES-9) Product Information Collection Guide:  ES9 CG R1V1  (PDF) Data ... Edition2 for TRMM; and Edition1 for NPP) are approved for science publications. SCAR-B Block:  ...

  3. ES9 Aqua-Xtrk Ed4

    Atmospheric Science Data Center

    2018-06-27

    ... Detailed CERES ERBElike Level 3 (ES-4/ES-9) Product Information Collection Guide:  ES9 CG R1V1  (PDF) Data ... Edition2 for TRMM; and Edition1 for NPP) are approved for science publications. SCAR-B Block:  ...

  4. CERES ERBE-like Level 2 ES-8 Edition3

    Atmospheric Science Data Center

    2013-07-10

    ... calibration information collected up to this point. The primary goal of this edition is to provide the most accurate and consistent ... the ground to flight beginning-of-mission spectral response function and radiometric gains calibration coefficients. Establishment of a ...

  5. Digital bedrock mapping at the Geological Survey of Norway: BGS SIGMA tool and in-house database structure

    NASA Astrophysics Data System (ADS)

    Gasser, Deta; Viola, Giulio; Bingen, Bernard

    2016-04-01

    Since 2010, the Geological Survey of Norway has been implementing and continuously developing a digital workflow for geological bedrock mapping in Norway, from fieldwork to final product. Our workflow is based on the ESRI ArcGIS platform, and we use rugged Windows computers in the field. Three different hardware solutions have been tested over the past 5 years (2010-2015): (1) Panasonic Toughbook CE-19 (2.3 kg), (2) Panasonic Toughbook CF H2 Field (1.6 kg), and (3) Motion MC F5t tablet (1.5 kg). For collection of point observations in the field we mainly use the SIGMA Mobile application in ESRI ArcGIS developed by the British Geological Survey, which allows the mappers to store georeferenced comments, structural measurements, sample information, photographs, sketches, log information etc. in a Microsoft Access database. The application is freely downloadable from the BGS websites. For line- and polygon work we use our in-house database, which is currently under revision. Our line database consists of three feature classes: (1) bedrock boundaries, (2) bedrock lineaments, and (3) bedrock lines, with each feature class having up to 24 different attribute fields. Our polygon database consists of one feature class with 38 attribute fields enabling storage of various information concerning lithology, stratigraphic order, age, metamorphic grade and tectonic subdivision. The polygon and line databases are coupled via topology in ESRI ArcGIS, which allows us to edit them simultaneously. This approach has been applied in two large-scale 1:50 000 bedrock mapping projects, one in the Kongsberg domain of the Sveconorwegian orogen, and the other in the greater Trondheim area (Orkanger) in the Caledonian belt. The mapping projects combined collection of high-resolution geophysical data, digital acquisition of field data, and collection of geochronological, geochemical and petrological data. During the Kongsberg project, some 25000 field observation points were collected by eight geologists. For the Orkanger project, some 2100 field observation points were collected by three geologists. Several advantages of the applied digital approach became clear during these projects: (1) The systematic collection of geological field data in a common format allows easy access and exchange of data among different geologists, (2) Easier access to background information such as geophysics and DEMs in the field, (3) Faster workflow from field data collection to final map product. Obvious disadvantages include: (1) Heavy(ish) and expensive hardware, (2) Battery life and other technical issues in the field, (3) Need for central in-house storage of field observation points (large amounts of data!), and (4) Acceptance of, and training in, a common workflow by all involved geologists.

  6. Effects of Contributor Experience on the Quality of Health-Related Wikipedia Articles.

    PubMed

    Holtz, Peter; Fetahu, Besnik; Kimmerle, Joachim

    2018-05-10

    Consulting the Internet for health-related information is a common and widespread phenomenon, and Wikipedia is arguably one of the most important resources for health-related information. Therefore, it is relevant to identify factors that have an impact on the quality of health-related Wikipedia articles. In our study we have hypothesized a positive effect of contributor experience on the quality of health-related Wikipedia articles. We mined the edit history of all (as of February 2017) 18,805 articles that were listed in the categories on the portal health & fitness in the English language version of Wikipedia. We identified tags within the articles' edit histories, which indicated potential issues with regard to the respective article's quality or neutrality. Of all of the sampled articles, 99 (99/18,805, 0.53%) articles had at some point received at least one such tag. In our analysis we only considered those articles with a minimum of 10 edits (10,265 articles in total; 96 tagged articles, 0.94%). Additionally, to test our hypothesis, we constructed contributor profiles, where a profile consisted of all the articles edited by a contributor and the corresponding number of edits contributed. We did not differentiate between rollbacks and edits with novel content. Nonparametric Mann-Whitney U-tests indicated a higher number of previously edited articles for editors of the nontagged articles (mean rank tagged 2348.23, mean rank nontagged 5159.29; U=9.25, P<.001). However, we did not find a significant difference for the contributors' total number of edits (mean rank tagged 4872.85, mean rank nontagged 5135.48; U=0.87, P=.39). Using logistic regression analysis with the respective article's number of edits and number of editors as covariates, only the number of edited articles yielded a significant effect on the article's status as tagged versus nontagged (dummy-coded; Nagelkerke R² for the full model=.17; B [SE B]=-0.001 [0.00]; Wald χ² [1]=19.70; P<.001), whereas we again found no significant effect for the mere number of edits (Nagelkerke R² for the full model=.15; B [SE B]=0.000 [0.01]; Wald χ² [1]=0.01; P=.94). Our findings indicate an effect of contributor experience on the quality of health-related Wikipedia articles. However, only the number of previously edited articles was a predictor of the articles' quality but not the mere volume of edits. More research is needed to disentangle the different aspects of contributor experience. We have discussed the implications of our findings with respect to ensuring the quality of health-related information in collaborative knowledge-building platforms. ©Peter Holtz, Besnik Fetahu, Joachim Kimmerle. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 10.05.2018.
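
    The analysis described above pairs a nonparametric group comparison with a logistic regression. As a rough, hedged illustration only, the sketch below runs the same two kinds of tests on invented contributor-profile data with scipy and statsmodels; the column names and toy data are assumptions, not the study's dataset.

```python
# Hedged sketch (not the authors' code): compare contributor experience between
# tagged and untagged articles, then fit a logistic regression as in the abstract.
# Column names and the toy data are hypothetical.
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    # 1 = article carries a quality/neutrality tag, 0 = untagged
    "tagged": rng.integers(0, 2, size=200),
    # mean number of articles previously edited by the article's contributors
    "n_edited_articles": rng.poisson(40, size=200),
    # mean total number of edits made by the article's contributors
    "n_total_edits": rng.poisson(300, size=200),
})

# Nonparametric comparison of contributor experience between the two groups
u, p = mannwhitneyu(df.loc[df.tagged == 1, "n_edited_articles"],
                    df.loc[df.tagged == 0, "n_edited_articles"],
                    alternative="two-sided")
print(f"Mann-Whitney U = {u:.2f}, p = {p:.3f}")

# Logistic regression: tagged status predicted by contributor experience
X = sm.add_constant(df[["n_edited_articles", "n_total_edits"]])
result = sm.Logit(df["tagged"], X).fit(disp=False)
print(result.summary())
```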

  7. A-to-I RNA editing occurs at over a hundred million genomic sites, located in a majority of human genes.

    PubMed

    Bazak, Lily; Haviv, Ami; Barak, Michal; Jacob-Hirsch, Jasmine; Deng, Patricia; Zhang, Rui; Isaacs, Farren J; Rechavi, Gideon; Li, Jin Billy; Eisenberg, Eli; Levanon, Erez Y

    2014-03-01

    RNA molecules transmit the information encoded in the genome and generally reflect its content. Adenosine-to-inosine (A-to-I) RNA editing by ADAR proteins converts a genomically encoded adenosine into inosine. It is known that most RNA editing in human takes place in the primate-specific Alu sequences, but the extent of this phenomenon and its effect on transcriptome diversity are not yet clear. Here, we analyzed large-scale RNA-seq data and detected ∼1.6 million editing sites. As detection sensitivity increases with sequencing coverage, we performed ultradeep sequencing of selected Alu sequences and showed that the scope of editing is much larger than anticipated. We found that virtually all adenosines within Alu repeats that form double-stranded RNA undergo A-to-I editing, although most sites exhibit editing at only low levels (<1%). Moreover, using high coverage sequencing, we observed editing of transcripts resulting from residual antisense expression, doubling the number of edited sites in the human genome. Based on bioinformatic analyses and deep targeted sequencing, we estimate that there are over 100 million human Alu RNA editing sites, located in the majority of human genes. These findings set the stage for exploring how this primate-specific massive diversification of the transcriptome is utilized.
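
    Because inosine is reverse-transcribed and sequenced as guanosine, the basic signal behind such surveys is the fraction of G-calling reads at a genomic adenosine. The function below is a minimal, hypothetical illustration of that per-site calculation; the coverage and noise thresholds are invented, and real pipelines add alignment filters, SNP masking and strand checks.

```python
# Minimal sketch of the core A-to-I detection idea: inosine is read as guanosine,
# so at a reference adenosine the fraction of G-calling reads estimates the editing
# level. Function name and thresholds are illustrative assumptions.
from collections import Counter

def editing_level(ref_base, read_bases, min_coverage=20):
    """Return the A-to-I editing level at one site, or None if not assessable."""
    if ref_base.upper() != "A" or len(read_bases) < min_coverage:
        return None
    counts = Counter(b.upper() for b in read_bases)
    a, g = counts.get("A", 0), counts.get("G", 0)
    other = len(read_bases) - a - g
    # Sites dominated by non-A/G mismatches look like sequencing error or SNVs, not editing
    if other / len(read_bases) > 0.05:
        return None
    return g / (a + g) if (a + g) else None

# Example: a weakly edited Alu site (editing levels below 1% are common per the abstract)
print(editing_level("A", ["A"] * 99 + ["G"]))   # 0.01
```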

  8. Clinical and Pathological Staging Validation in the Eighth Edition of the TNM Classification for Lung Cancer: Correlation between Solid Size on Thin-Section Computed Tomography and Invasive Size in Pathological Findings in the New T Classification.

    PubMed

    Aokage, Keiju; Miyoshi, Tomohiro; Ishii, Genichiro; Kusumoto, Masahiro; Nomura, Shogo; Katsumata, Shinya; Sekihara, Keigo; Hishida, Tomoyuki; Tsuboi, Masahiro

    2017-09-01

    The aim of this study was to validate the new eighth edition of the TNM classification and to elucidate whether radiological solid size corresponds to pathological invasive size incorporated in this T factor. We analyzed the data on 1792 patients who underwent complete resection from 2003 to 2011 at the National Cancer Center Hospital East, Japan. We reevaluated preoperative thin-section computed tomography (TSCT) to determine solid size and pathological invasive size using the fourth edition of the WHO classification and reclassified them according to the new TNM classification. The discriminative power of survival curves by the seventh edition was compared with that by the eighth edition by using concordance probability estimates and Akaike's information criteria calculated using a univariable Cox regression model. Pearson's correlation coefficient was calculated to elucidate the correlation between radiological solid size using TSCT and pathological invasive size. The overall survival curves in the eighth edition were well distinct at each clinical and pathological stage. The 5-year survival rates of patients with clinical and pathological stage 0 newly defined were both 100%. The concordance probability estimate and Akaike's information criterion values of the eighth edition were higher than those of the seventh edition in discriminatory power for overall survival. Solid size on TSCT scan and pathological invasive size showed a positive linear relationship, and Pearson's correlation coefficient was calculated as 0.83, which indicated strong correlation. This TNM classification will be feasible regarding patient survival, and radiological solid size correlates significantly with pathological invasive size as a new T factor. Copyright © 2017 International Association for the Study of Lung Cancer. Published by Elsevier Inc. All rights reserved.

  9. User's Guide to the Water-Analysis Screening Tool (WAST): A Tool for Assessing Available Water Resources in Relation to Aquatic-Resource Uses

    USGS Publications Warehouse

    Stuckey, Marla H.; Kiesler, James L.

    2008-01-01

    A water-analysis screening tool (WAST) was developed by the U.S. Geological Survey, in partnership with the Pennsylvania Department of Environmental Protection, to provide an initial screening of areas in the state where potential problems may exist related to the availability of water resources to meet current and future water-use demands. The tool compares water-use information to an initial screening criteria of the 7-day, 10-year low-flow statistic (7Q10) resulting in a screening indicator for influences of net withdrawals (withdrawals minus discharges) on aquatic-resource uses. This report is intended to serve as a guide for using the screening tool. The WAST can display general basin characteristics, water-use information, and screening-indicator information for over 10,000 watersheds in the state. The tool includes 12 primary functions that allow the user to display watershed information, edit water-use and water-supply information, observe effects downstream from edited water-use information, reset edited values to baseline, load new water-use information, save and retrieve scenarios, and save output as a Microsoft Excel spreadsheet.
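
    The core screening comparison, net withdrawal (withdrawals minus discharges) against the 7Q10 low-flow statistic, can be illustrated in a few lines. The function below is a hypothetical sketch, not the WAST implementation; the 0.5 flagging threshold and parameter names are assumptions.

```python
# Hedged sketch of the screening comparison described above: net withdrawal
# (withdrawals minus discharges) is compared with the 7Q10 low-flow statistic.
# The threshold-based flag and parameter names are illustrative, not WAST's rules.
def screening_indicator(withdrawals_mgd, discharges_mgd, q7_10_mgd):
    """Return net withdrawal as a fraction of 7Q10 and a coarse screening flag."""
    net = withdrawals_mgd - discharges_mgd
    if q7_10_mgd <= 0:
        return None, "not assessable (7Q10 <= 0)"
    ratio = net / q7_10_mgd
    flag = "potential problem" if ratio > 0.5 else "no screening concern"
    return ratio, flag

print(screening_indicator(withdrawals_mgd=3.2, discharges_mgd=1.1, q7_10_mgd=2.8))
# -> (0.75, 'potential problem')
```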

  10. Randomized Approaches for Nearest Neighbor Search in Metric Space When Computing the Pairwise Distance Is Extremely Expensive

    NASA Astrophysics Data System (ADS)

    Wang, Lusheng; Yang, Yong; Lin, Guohui

    Finding the closest object for a query in a database is a classical problem in computer science. For some modern biological applications, computing the similarity between two objects might be very time consuming. For example, it takes a long time to compute the edit distance between two whole chromosomes and the alignment cost of two 3D protein structures. In this paper, we study the nearest neighbor search problem in metric space, where the pairwise distance between two objects in the database is known and we want to minimize the number of distances computed on-line between the query and objects in the database in order to find the closest object. We have designed two randomized approaches for indexing metric space databases, where objects are purely described by their distances to each other. Analysis and experiments show that our approaches only need to compute distances to O(log n) objects in order to find the closest object, where n is the total number of objects in the database.
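
    The paper's two randomized indexing schemes are not reproduced here, but the general pivot-based idea they build on, using precomputed distances and the triangle inequality to avoid most on-line distance computations, can be sketched as follows. All function names and the toy metric are illustrative assumptions.

```python
# Illustrative sketch (not the authors' algorithm) of pivot-based metric-space search:
# precompute distances between random pivots and all database objects, then at query
# time use triangle-inequality lower bounds to skip most on-line distance computations.
import random

def build_index(objects, dist, n_pivots=8, seed=0):
    rng = random.Random(seed)
    pivots = rng.sample(objects, n_pivots)
    # offline table: distance from every pivot to every database object
    table = {id(o): [dist(p, o) for p in pivots] for o in objects}
    return pivots, table

def nearest(query, objects, dist, pivots, table):
    d_qp = [dist(query, p) for p in pivots]          # on-line distances to pivots only
    # lower bound on d(query, o): max_i |d(q, p_i) - d(p_i, o)|
    lb = {id(o): max(abs(dq - dpo) for dq, dpo in zip(d_qp, table[id(o)]))
          for o in objects}
    best, best_d = None, float("inf")
    for o in sorted(objects, key=lambda o: lb[id(o)]):  # most promising first
        if lb[id(o)] >= best_d:
            break                                       # nothing left can be closer
        d = dist(query, o)                              # the expensive on-line computation
        if d < best_d:
            best, best_d = o, d
    return best, best_d

# Toy usage with the expensive metric replaced by absolute difference on integers
objects = list(range(0, 1000, 7))
dist = lambda a, b: abs(a - b)
pivots, table = build_index(objects, dist)
print(nearest(500, objects, dist, pivots, table))
```

    Visiting candidates in order of increasing lower bound lets the loop stop as soon as the bound exceeds the best distance found, which is what keeps the number of on-line distance computations small.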

  11. Neuronal Adaptive Mechanisms Underlying Intelligent Information Processing

    DTIC Science & Technology

    1981-05-01

    Physiol. 134: 451-470, 1956. J. Freud, S., Unpublished, untitled paper (1895) subsequently published in Freud, Sigmund - Standard Edition...of the Complete Psychological Works of Freud, edited by J. Strachey. New York, Macmillan 1: 281-287, 1964. Gallagher, J.P. and Shinnick-Gallagher

  12. Educators Resource Directory. 2005/06 Edition

    ERIC Educational Resources Information Center

    Grey House Publishing, 2005

    2005-01-01

    This updated edition of "Educators Resource Directory" has hundreds of new listings and thousands of updates and enhancements to existing listings. Plus, the Statistics & Rankings section has been updated with the most current information. "Educators Resource Directory" is designed to provide both educators and education…

  13. Indexing of Iranian Publications in Well-known Endodontic Textbooks: A Scientometric Analysis.

    PubMed

    Kakooei, Sina; Mostafavi, Mahshid; Parirokh, Masoud; Asgary, Saeed

    2016-01-01

    Quoting an article in well-known textbooks is held as a credit for that paper. The numbers of Iranian publications mentioned in endodontic textbooks have increased during recent years. The aim of this investigation was to evaluate the number of Iranian articles quoted in eminent endodontic textbooks. Three well-known textbooks (Ingle's Endodontics, Seltzer and Bender's Dental Pulp and Cohen's Pathways of the Pulp) were chosen and all the editions of the textbooks since 2000 were investigated for quoted Iranian publications. Only Iranian authors with affiliations from a domestic university were chosen. All references at the end of each chapter were read by hand searching, and results were noted. The trend and percentage of Iranian publications in different editions of the textbooks were also calculated. The number of citations of these publications in the Google Scholar and Scopus databases was also obtained. The number of Iranian publications in all well-known textbooks has notably increased since 2000. The number and percentage of Iranian publications in the latest edition of Cohen's Pathways of the Pulp were higher than in the other textbooks as well as in the previous edition of the same text. The number and percentage of Iranian publications in the field of endodontics in all three textbooks have remarkably increased since 2000.

  14. The Art of Electronics - 2nd Edition

    NASA Astrophysics Data System (ADS)

    Horowitz, Paul; Hill, Winfield

    1989-09-01

    This is the thoroughly revised and updated second edition of the hugely successful The Art of Electronics. Widely accepted as the single authoritative text and reference on electronic circuit design, both analog and digital, the original edition sold over 125,000 copies worldwide and was translated into eight languages. The book revolutionized the teaching of electronics by emphasizing the methods actually used by circuit designers - a combination of some basic laws, rules of thumb, and a largely nonmathematical treatment that encourages estimation of circuit values and performance. The new Art of Electronics retains the feeling of informality and easy access that helped make the first edition so successful and popular. It is an ideal first textbook on electronics for scientists and engineers and an indispensable reference for anyone, professional or amateur, who works with electronic circuits. The best self-teaching book and reference book in electronics. Simply indispensable, packed with essential information for all scientists and engineers who build electronic circuits. Totally rewritten chapters on microcomputers and microprocessors. The first edition of this book sold over 100,000 copies in seven years and has a market in virtually all research centres where electronics is important.

  15. DynGO: a tool for visualizing and mining of Gene Ontology and its associations

    PubMed Central

    Liu, Hongfang; Hu, Zhang-Zhi; Wu, Cathy H

    2005-01-01

    Background A large volume of data and information about genes and gene products has been stored in various molecular biology databases. A major challenge for knowledge discovery using these databases is to identify related genes and gene products in disparate databases. The development of Gene Ontology (GO) as a common vocabulary for annotation allows integrated queries across multiple databases and identification of semantically related genes and gene products (i.e., genes and gene products that have similar GO annotations). Meanwhile, dozens of tools have been developed for browsing, mining or editing GO terms, their hierarchical relationships, or their "associated" genes and gene products (i.e., genes and gene products annotated with GO terms). Tools that allow users to directly search and inspect relations among all GO terms and their associated genes and gene products from multiple databases are needed. Results We present a standalone package called DynGO, which provides several advanced functionalities in addition to the standard browsing capability of the official GO browsing tool (AmiGO). DynGO allows users to conduct batch retrieval of GO annotations for a list of genes and gene products, and semantic retrieval of genes and gene products sharing similar GO annotations. The results are shown in an association tree organized according to GO hierarchies and supported with many dynamic display options such as sorting tree nodes or changing orientation of the tree. For GO curators and frequent GO users, DynGO provides fast and convenient access to GO annotation data. DynGO is generally applicable to any data set where the records are annotated with GO terms, as illustrated by two examples. Conclusion We have presented a standalone package DynGO that provides functionalities to search and browse GO and its association databases as well as several additional functions such as batch retrieval and semantic retrieval. The complete documentation and software are freely available for download from the website. PMID:16091147
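
    The "semantic retrieval" capability described above amounts to ranking genes by how similar their GO annotation sets are to those of a query gene. The sketch below illustrates that idea with a flat Jaccard overlap on invented annotations; DynGO itself works over the GO hierarchy and real association files, so this is only a simplified stand-in.

```python
# Minimal sketch of semantic retrieval: rank genes by overlap of their GO annotation
# sets with a query gene. The gene-to-term assignments here are toy examples.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

annotations = {                      # hypothetical gene -> GO term IDs
    "TP53": {"GO:0006915", "GO:0006974", "GO:0003677"},
    "MDM2": {"GO:0006915", "GO:0003677", "GO:0016567"},
    "ACTB": {"GO:0005856", "GO:0005524"},
}

def semantic_retrieval(query_gene, top_n=5):
    q = annotations[query_gene]
    hits = [(g, jaccard(q, terms))
            for g, terms in annotations.items() if g != query_gene]
    return sorted(hits, key=lambda x: x[1], reverse=True)[:top_n]

print(semantic_retrieval("TP53"))    # MDM2 ranks above ACTB (more shared terms)
```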

  16. Comprehensive Common Operating Picture (COP) for Disaster Response

    DTIC Science & Technology

    2012-05-17

    socio-cultural influences such as beliefs and values to name a few. In his book Beyond the Information Given, Jerome Bruner discusses veridicality...information needs, which is done by 45 Jerome S. Bruner, Selected, edited, and introduced by Jeremy M. Anglin, Contributors with Jerome S. Bruner to papers...Port_Angeles_CAN.ppt (accessed December 20, 2011). Selected, Jerome S. Bruner, edited, and introduced by Jeremy M. Anglin. Contributors with Jerome S. Bruner

  17. Forty-first supplement to the American Ornithologists' Union Check-list of North American birds

    USGS Publications Warehouse

    Banks, R.C.; Fitzpatrick, J.W.; Howell, T.R.; Johnson, N.K.; Monroe, B.L.; Ouellet, H.; Remsen, J.V.; Storer, R.W.

    1997-01-01

    This seventh supplement after the publication of the 6th edition (1983) of the AOU Check-list of North American Birds includes taxonomic and nomenclatural changes adopted by the Committee on Classification and Nomenclature between 15 March 1995 and 15 March 1997. Because this will be the last supplement before the publication of the 7th edition of the Check-list, it also summarizes other decisions made by the Committee since 1983 that were not intended to affect the 6th edition but rather were to lay the foundation for its successor. Most of those decisions relate to sequence or rank of certain taxonomic categories. The Committee believes that compendia such as the Check-list are not appropriate places for the first appearance of novel taxonomic treatments or rearrangements. Therefore, we take the opportunity of this supplement to inform you of the ways in which the 7th edition will differ from the 6th. The style of this supplement differs from that of the previous six because they were designed to provide detailed changes to the text in the 6th edition; this one also is to provide information on how the 7th edition will differ from the 6th. Many details on reasons for the change will be discussed in the Preface or text of the new volume.

  18. A Web Tool for Generating High Quality Machine-readable Biological Pathways.

    PubMed

    Ramirez-Gaona, Miguel; Marcu, Ana; Pon, Allison; Grant, Jason; Wu, Anthony; Wishart, David S

    2017-02-08

    PathWhiz is a web server built to facilitate the creation of colorful, interactive, visually pleasing pathway diagrams that are rich in biological information. The pathways generated by this online application are machine-readable and fully compatible with essentially all web-browsers and computer operating systems. It uses a specially developed, web-enabled pathway drawing interface that permits the selection and placement of different combinations of pre-drawn biological or biochemical entities to depict reactions, interactions, transport processes and binding events. This palette of entities consists of chemical compounds, proteins, nucleic acids, cellular membranes, subcellular structures, tissues, and organs. All of the visual elements in it can be interactively adjusted and customized. Furthermore, because this tool is a web server, all pathways and pathway elements are publicly accessible. This kind of pathway "crowd sourcing" means that PathWhiz already contains a large and rapidly growing collection of previously drawn pathways and pathway elements. Here we describe a protocol for the quick and easy creation of new pathways and the alteration of existing pathways. To further facilitate pathway editing and creation, the tool contains replication and propagation functions. The replication function allows existing pathways to be used as templates to create or edit new pathways. The propagation function allows one to take an existing pathway and automatically propagate it across different species. Pathways created with this tool can be "re-styled" into different formats (KEGG-like or text-book like), colored with different backgrounds, exported to BioPAX, SBGN-ML, SBML, or PWML data exchange formats, and downloaded as PNG or SVG images. The pathways can easily be incorporated into online databases, integrated into presentations, posters or publications, or used exclusively for online visualization and exploration. This protocol has been successfully applied to generate over 2,000 pathway diagrams, which are now found in many online databases including HMDB, DrugBank, SMPDB, and ECMDB.

  19. GEOGRAPHIC INFORMATION SYSTEM APPROACH FOR PLAY PORTFOLIOS TO IMPROVE OIL PRODUCTION IN THE ILLINOIS BASIN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beverly Seyler; John Grube

    2004-12-10

    Oil and gas have been commercially produced in Illinois for over 100 years. Existing commercial production is from more than fifty-two named pay horizons in Paleozoic rocks ranging in age from Middle Ordovician to Pennsylvanian. Over 3.2 billion barrels of oil have been produced. Recent calculations indicate that remaining mobile resources in the Illinois Basin may be on the order of several billion barrels. Thus, large quantities of oil, potentially recoverable using current technology, remain in Illinois oil fields despite a century of development. Many opportunities for increased production may have been missed due to complex development histories, multiple stacked pays, and commingled production which makes thorough exploitation of pays and the application of secondary or improved/enhanced recovery strategies difficult. Access to data, and the techniques required to evaluate and manage large amounts of diverse data are major barriers to increased production of critical reserves in the Illinois Basin. These constraints are being alleviated by the development of a database access system using a Geographic Information System (GIS) approach for evaluation and identification of underdeveloped pays. The Illinois State Geological Survey has developed a methodology that is being used by industry to identify underdeveloped areas (UDAs) in and around petroleum reservoirs in Illinois using a GIS approach. This project utilizes a statewide oil and gas Oracle® database to develop a series of Oil and Gas Base Maps with well location symbols that are color-coded by producing horizon. Producing horizons are displayed as layers and can be selected as separate or combined layers that can be turned on and off. Map views can be customized to serve individual needs and page size maps can be printed. A core analysis database with over 168,000 entries has been compiled and assimilated into the ISGS Enterprise Oracle database. Maps of wells with core data have been generated. Data from over 1,700 Illinois waterflood units and waterflood areas have been entered into an Access® database. The waterflood area data has also been assimilated into the ISGS Oracle database for mapping and dissemination on the ArcIMS website. Formation depths for the Beech Creek Limestone, Ste. Genevieve Limestone and New Albany Shale in all of the oil producing region of Illinois have been calculated and entered into a digital database. Digital contoured structure maps have been constructed, edited and added to the ILoil website as map layers. This technology/methodology addresses the long-standing constraints related to information access and data management in Illinois by significantly simplifying the laborious process that industry presently must use to identify underdeveloped pay zones in Illinois.

  20. Prediction of constitutive A-to-I editing sites from human transcriptomes in the absence of genomic sequences

    PubMed Central

    2013-01-01

    Background Adenosine-to-inosine (A-to-I) RNA editing is recognized as a cellular mechanism for generating both RNA and protein diversity. Inosine base pairs with cytidine during reverse transcription and therefore appears as guanosine during sequencing of cDNA. Current approaches of RNA editing identification largely depend on the comparison between transcriptomes and genomic DNA (gDNA) sequencing datasets from the same individuals, and it has been challenging to identify editing candidates from transcriptomes in the absence of gDNA information. Results We have developed a new strategy to accurately predict constitutive RNA editing sites from publicly available human RNA-seq datasets in the absence of relevant genomic sequences. Our approach establishes new parameters to increase the ability to map mismatches and to minimize sequencing/mapping errors and unreported genome variations. We identified 695 novel constitutive A-to-I editing sites that appear in clusters (named “editing boxes”) in multiple samples and which exhibit spatial and dynamic regulation across human tissues. Some of these editing boxes are enriched in non-repetitive regions lacking inverted repeat structures and contain an extremely high conversion frequency of As to Is. We validated a number of editing boxes in multiple human cell lines and confirmed that ADAR1 is responsible for the observed promiscuous editing events in non-repetitive regions, further expanding our knowledge of the catalytic substrate of A-to-I RNA editing by ADAR enzymes. Conclusions The approach we present here provides a novel way of identifying A-to-I RNA editing events by analyzing only RNA-seq datasets. This method has allowed us to gain new insights into RNA editing and should also aid in the identification of more constitutive A-to-I editing sites from additional transcriptomes. PMID:23537002

  1. Development of management information system for land in mine area based on MapInfo

    NASA Astrophysics Data System (ADS)

    Wang, Shi-Dong; Liu, Chuang-Hua; Wang, Xin-Chuang; Pan, Yan-Yu

    2008-10-01

    MapInfo is currently a popular GIS software package. This paper introduces the characteristics of MapInfo and the secondary GIS development methods it offers, namely approaches based on MapBasic, OLE Automation, and the MapX control. Taking the development of a land management information system for a mining area as an example, the paper discusses the method of developing GIS applications based on MapX and describes the development of the system in detail, including the development environment, overall design, design and realization of every function module, and simple applications of the system. The system uses MapX 5.0 and Visual Basic 6.0 as the development platform, takes SQL Server 2005 as the back-end database, and adopts Matlab 6.5 for back-end numerical calculation. On the basis of an integrated design, the system comprises eight modules: start-up, layer control, spatial query, spatial analysis, data editing, application models, document management, and results output. The system can be used in mining areas for cadastral management, land use structure optimization, land reclamation, land evaluation, analysis and forecasting of land and environmental disruption, thematic mapping, and so on.

  2. Data set of world phosphate mines, deposits, and occurrences: Part A. geologic data; Part B. location and mineral economic data

    USGS Publications Warehouse

    Chernoff, Carlotta B.; Orris, G.J.

    2002-01-01

    An inventory of more than 1,600 world phosphate mines, deposits, and occurrences was compiled from smaller data sets collected as part of multiple research efforts by Carlotta Chernoff, University of Arizona, and Greta Orris, U.S. Geological Survey. These data have been utilized during studies of black shale depositional environments and to construct phosphate deposit models. The compiled data have been edited for consistency and additional location information has been added where possible. The database of compiled phosphate information is being released in two sections: the geologic data in one section and the location and mineral economic data in the second. This report, U.S. Geological Survey Open-File Report 02–156–A, contains the geologic data and is best used with the complementary data contained in Open-File Report 02–156–B. U.S. Geological Survey Open-File Report 02–156–B contains commodity data, location and analytical data, a variety of mineral economic data, reference information, and pointers to related records in the U.S. Geological Survey National mineral databases—MASMILS and MRDS.

  3. PDB Editor: a user-friendly Java-based Protein Data Bank file editor with a GUI.

    PubMed

    Lee, Jonas; Kim, Sung Hou

    2009-04-01

    The Protein Data Bank file format is the format most widely used by protein crystallographers and biologists to disseminate and manipulate protein structures. Despite this, there are few user-friendly software packages available to efficiently edit and extract raw information from PDB files. This limitation often leads to many protein crystallographers wasting significant time manually editing PDB files. PDB Editor, written in Java Swing GUI, allows the user to selectively search, select, extract and edit information in parallel. Furthermore, the program is a stand-alone application written in Java which frees users from the hassles associated with platform/operating system-dependent installation and usage. PDB Editor can be downloaded from http://sourceforge.net/projects/pdbeditorjl/.
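
    The kind of selective extraction and editing described above relies on the fixed-column layout of PDB ATOM/HETATM records. The sketch below, which is not the PDB Editor program itself, shows a minimal parser and a chain-filtering edit; the file names are hypothetical.

```python
# Hedged sketch of selective extraction/editing of PDB records using the fixed-column
# layout of ATOM/HETATM lines. Illustration only, not the PDB Editor application.
def parse_atoms(pdb_path, chain=None):
    """Yield (serial, atom_name, res_name, chain_id, res_seq, x, y, z) for ATOM/HETATM records."""
    with open(pdb_path) as fh:
        for line in fh:
            if not line.startswith(("ATOM", "HETATM")):
                continue
            chain_id = line[21]
            if chain is not None and chain_id != chain:
                continue
            yield (int(line[6:11]), line[12:16].strip(), line[17:20].strip(),
                   chain_id, int(line[22:26]),
                   float(line[30:38]), float(line[38:46]), float(line[46:54]))

if __name__ == "__main__":
    # Inspect a few atoms from chain A (file name is hypothetical)
    for atom in list(parse_atoms("1abc.pdb", chain="A"))[:5]:
        print(atom)
    # Example edit: keep only chain A coordinate records and write the selection back out
    with open("1abc.pdb") as src, open("1abc_chainA.pdb", "w") as dst:
        for line in src:
            if line.startswith(("ATOM", "HETATM", "TER")) and line[21:22] != "A":
                continue
            dst.write(line)
```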

  4. Strengthening Family Resilience, Second Edition

    ERIC Educational Resources Information Center

    Walsh, Froma

    2006-01-01

    In a fully revised, updated, and expanded second edition, this informative clinical resource and text presents Froma Walsh's family resilience framework for intervention and prevention with clients dealing with adversity. Drawing on extensive research and clinical experience, the author describes key processes in resilience for practitioners to…

  5. College Student Press Law. Second Edition.

    ERIC Educational Resources Information Center

    Trager, Robert; Dickerson, Donna L.

    This second edition of a monograph provides updated information on court decisions concerning college student publications and underground newspapers to acquaint advisers, administrators, and students with college student press law. Chapters of the monograph examine freedom of speech on the college campus; the relationship between colleges and…

  6. The Belgian Union Catalogue of Periodicals

    ERIC Educational Resources Information Center

    Goedeme, G.; And Others

    1976-01-01

    Describes the edition, on computer output microfiche, of the supplement to the 1965 Union catalogue of foreign periodicals in Belgian and Luxemburgian libraries and documentation centers. The microfiches contain location information of 28,000 periodicals in 300 libraries and are edited in a rich typography. (Author)

  7. Content-based Music Search and Recommendation System

    NASA Astrophysics Data System (ADS)

    Takegawa, Kazuki; Hijikata, Yoshinori; Nishida, Shogo

    Recently, the volume of music data on the Internet has increased rapidly. This has increased the user's cost of finding music data suiting their preference in such a large data set. We propose a content-based music search and recommendation system. This system has an interface for searching and finding music data and an interface for editing a user profile, which is necessary for music recommendation. By exploiting the visualization of the feature space of music and the visualization of the user profile, the user can search music data and edit the user profile. Furthermore, by exploiting the information which can be acquired from each visualized object in a mutually complementary manner, we make it easier for the user to search music data and edit the user profile. Concretely, the system presents to the user information obtained from the user profile when searching music data, and information obtained from the feature space of music when editing the user profile.
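
    A common way to realize this kind of content-based matching, offered here only as an illustration of the general approach rather than the authors' system, is to represent each track as an audio-feature vector, build the user profile as the mean of liked tracks, and rank candidates by cosine similarity:

```python
# Illustrative sketch of content-based matching: tracks as feature vectors, the user
# profile as the mean of liked tracks, candidates ranked by cosine similarity.
# Feature names and values are hypothetical.
import numpy as np

tracks = {                       # track -> [tempo, energy, acousticness] (toy features)
    "track_a": np.array([0.8, 0.9, 0.1]),
    "track_b": np.array([0.3, 0.2, 0.9]),
    "track_c": np.array([0.7, 0.8, 0.2]),
}
liked = ["track_a"]

profile = np.mean([tracks[t] for t in liked], axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

recommendations = sorted(
    ((t, cosine(profile, v)) for t, v in tracks.items() if t not in liked),
    key=lambda x: x[1], reverse=True)
print(recommendations)           # track_c (similar to the liked track) ranks first
```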

  8. Schools and Data: The Educator's Guide for Using Data to Improve Decision Making

    ERIC Educational Resources Information Center

    Creighton, Theodore B.

    2006-01-01

    Since the first edition of "Schools and Data", the No Child Left Behind Act has swept the country, and data-based decision making is no longer an option for educators. Today's educational climate makes it imperative for all schools to collect data and use statistical analysis to help create clear goals and recognize strategies for…

  9. The Wiki as a Virtual Space for Qualitative Data Collection

    ERIC Educational Resources Information Center

    Castanos, Carolina; Piercy, Fred P.

    2010-01-01

    The authors make a case for using wiki technology in qualitative research. A wiki is an online database that allows users to create, edit, and/or reflect on the content of a web page. Thus, wiki technology can support qualitative research that attempts to understand the shared thinking of participants. To illustrate the use of the wiki for this…

  10. Do-It-Yourself: A Special Library's Approach to Creating Dynamic Web Pages Using Commercial Off-The-Shelf Applications

    NASA Technical Reports Server (NTRS)

    Steeman, Gerald; Connell, Christopher

    2000-01-01

    Many librarians may feel that dynamic Web pages are out of their reach, financially and technically. Yet we are reminded in library and Web design literature that static home pages are a thing of the past. This paper describes how librarians at the Institute for Defense Analyses (IDA) library developed a database-driven, dynamic intranet site using commercial off-the-shelf applications. Administrative issues include surveying a library users group for interest and needs evaluation; outlining metadata elements; and committing resources, from managing time to populate the database to training in Microsoft FrontPage and Web-to-database design. Technical issues covered include Microsoft Access database fundamentals, lessons learned in the Web-to-database process (including setting up Data Source Names (DSNs), redesigning queries to accommodate the Web interface, and understanding Access 97 query language vs. Structured Query Language (SQL)). This paper also offers tips on editing Active Server Pages (ASP) scripting to create desired results. A how-to annotated resource list closes out the paper.

  11. [The long pilgrimage of Spanish biomedical journals toward excellence. Who helps? Quality, impact and research merit].

    PubMed

    Alfonso, Fernando

    2010-03-01

    Biomedical journals must adhere to strict standards of editorial quality. In a globalized academic scenario, biomedical journals must compete firstly to publish the most relevant original research and secondly to obtain the broadest possible visibility and the widest dissemination of their scientific contents. The cornerstone of the scientific process is still the peer-review system but additional quality criteria should be met. Recently access to medical information has been revolutionized by electronic editions. Bibliometric databases such as MEDLINE, the ISI Web of Science and Scopus offer comprehensive online information on medical literature. Classically, the prestige of biomedical journals has been measured by their impact factor but, recently, other indicators such as SCImago SJR or the Eigenfactor are emerging as alternative indices of a journal's quality. Assessing the scholarly impact of research and the merits of individual scientists remains a major challenge. Allocation of authorship credit also remains controversial. Furthermore, in our Kafkaesque world, we prefer to count rather than read the articles we judge. Quantitative publication metrics (research output) and citations analyses (scientific influence) are key determinants of the scientific success of individual investigators. However, academia is embracing new objective indicators (such as the "h" index) to evaluate scholarly merit. The present review discusses some editorial issues affecting biomedical journals, currently available bibliometric databases, bibliometric indices of journal quality and, finally, indicators of research performance and scientific success. Copyright 2010 SEEN. Published by Elsevier Espana. All rights reserved.
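
    Of the indicators mentioned, the h index is simple enough to state exactly: it is the largest h such that the author has h papers with at least h citations each. A small worked example, with invented citation counts:

```python
# Worked example of the h index mentioned above: the largest h such that the author
# has h papers with at least h citations each. Citation counts are invented.
def h_index(citations):
    cites = sorted(citations, reverse=True)
    # with counts sorted in descending order, the indicator (c >= rank) is monotone,
    # so summing it gives the largest rank at which it still holds
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

print(h_index([10, 9, 7, 5, 4, 2, 1]))   # -> 4 (four papers with at least 4 citations each)
```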

  12. General Chemistry Collection for Students (CD-ROM), Abstract of Special Issue 16, 4th Edition

    NASA Astrophysics Data System (ADS)

    2000-07-01

    The General Chemistry Collection contains both new and previously published JCE Software programs that are intended for use by introductory-level chemistry students. These peer-reviewed programs for Macintosh and for Windows are available on a single CD-ROM for convenient distribution to and access by students, and the CD may be adopted for students to purchase as they would a textbook. General Chemistry Collection covers a broad range of topics providing students with interesting information, tutorials, and simulations that will be useful to them as they study chemistry for the first time. There are 22 programs included in the General Chemistry Collection 4th Edition. Their titles and the general chemistry topics they cover are listed in Table 1. Features in This Edition: General Chemistry Collection, 4th edition includes:

    • Lessons for Introductory Chemistry and INQUAL-S, two new programs not previously published by JCE Software (abstracts appear below)
    • Writing Electron Dot Structures (1) and Viscosity Measurement: A Virtual Experiment for Windows (2), two programs published individually by JCE Software
    • Periodic Table Live! LE, a limited edition of Periodic Table Live!, 2nd Edition (3) (this replaces Chemistry Navigator (4) and Illustrated Periodic Table (5))
    • Many of the programs from previous editions (6)
    Hardware and Software Requirements: System requirements are given in Table 2. Some programs have additional requirements. See the individual program abstracts at JCE Online, or documentation included on the CD-ROM for more specific information.
    Licensing and Discounts for Adoptions: The General Chemistry Collection is intended for use by individual students. Institutions and faculty members may adopt General Chemistry Collection 4th Edition as they would a textbook. We can arrange for CDs to be packaged with laboratory manuals or other course materials or to be sold for direct distribution to students through the campus bookstore. The cost per CD can be quite low when large numbers are ordered (as little as $3 each), making this a cost-effective method of allowing students access to the software they need whenever and wherever they desire. Other JCE Software CDs can also be adopted. Network licenses to distribute the software to your students via your local campus network can also be arranged. Contact us for details on purchasing multiple user licenses.
    Price and Ordering: An order form is inserted in this issue that provides prices and other ordering information. If this card is not available or if you need additional information, contact: JCE Software, University of Wisconsin-Madison, 1101 University Avenue, Madison, WI 53706-1396; phone: 608/262-5153 or 800/991-5534; fax: 608/265-8094; email: jcesoft@chem.wisc.edu.
    Table 1. Contents of the General Chemistry Collection, 4th Edition

  13. Profile: the Philippine Population Information Network.

    PubMed

    1991-06-01

    The profile of Philippine Population Information Network (POPIN) is described in this article as having changed management structure from the Population Center Foundation to the Government's Population Commission, Information Management and Research Division (IMRD) in 1989. This restructuring resulted in the transfer in 1990 of the Department of Social Welfare and Development to the Office of the President. POPIN also serves Asia/Pacific POPIN. POPCOM makes policy and coordinates and monitors population activities. POPIN's goal is to improve the flow and utilization of population information nationwide. The National Population Library was moved in 1989 to the POPCOM Central Office Building and became the Philippine Information Center. The collection includes 6000 books, 400 research reports, and 4000 other documents (brochures, reprints, conference materials, and so on); 42 video tapes about the Philippine population program and a cassette player are available. In 1989, 14 regional centers were set up in POPCOM regional offices and designated Regional Population Information Centers. There are also school-based information centers operating as satellite information centers. The Regional and school-based centers serve the purpose of providing technical information through collection development, cataloguing, classification, storage and retrieval, and circulation. The target users are policy makers, government and private research agencies, researchers, and faculty and students. Publications developed and produced by the Center include the 3rd Supplement of the Union Catalogue of Population Literature, the 1987-88 Annotated Bibliography of Philippine Population Literature (PPL), the forthcoming 1989-90 edition of the Annotated Bibliography of PPL, and a biyearly newsletter, POPINEWS. Microcomputers have been acquired for the Regional Centers, with the idea of computerizing POPIN. Computer upgrading is also being done within the IMRD to provide POPLINE CD-ROM capability. Central and regional staff have also had their skills upgraded; e.g., IMRD's staff in the use of Micro-ISIS software, which is used for developing databases and directories. Training is being conducted in the ESCAP database and directory grant program, and in information center management and desktop publishing. Linkages have been made with the local networks, which have contributed to the upgrading effort.

  14. SUMO: operation and maintenance management web tool for astronomical observatories

    NASA Astrophysics Data System (ADS)

    Mujica-Alvarez, Emma; Pérez-Calpena, Ana; García-Vargas, María. Luisa

    2014-08-01

    SUMO is an Operation and Maintenance Management web tool that allows managing the operation and maintenance activities and resources required for the exploitation of a complex facility. SUMO's main capabilities are: information repository, asset and stock control, task scheduler, executed-task archive, configuration and anomaly control and notification, and user management. The information needed to operate and maintain the system must initially be stored in the tool's database. SUMO automatically schedules the periodical tasks and facilitates the searching and programming of the non-periodical tasks. Task planning can be visualized in different formats and dynamically edited to adjust to the available resources, anomalies, dates and other constraints that can arise during daily operation. SUMO provides warnings notifying users of potential conflicts related to the required personnel availability or the spare stock for the scheduled tasks. To conclude, SUMO has been designed as a tool to help during the operation management of a scientific facility, and in particular an astronomical observatory. This is done by controlling all operating parameters: personnel, assets, spare and supply stocks, tasks and time constraints.
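
    As a rough illustration of the periodic-task scheduling and conflict-warning behaviour described above (not SUMO's actual logic), the sketch below expands task periods into due dates over a planning horizon and flags days where the required staff exceeds availability; the task list and availability rule are invented.

```python
# Hedged sketch of periodic maintenance scheduling with a simple conflict check.
# Tasks, periods, staffing numbers and the availability rule are hypothetical.
from datetime import date, timedelta
from collections import defaultdict

tasks = [  # (name, period in days, technicians required)
    ("telescope mirror inspection", 30, 2),
    ("UPS battery check", 7, 1),
    ("dome drive lubrication", 90, 3),
]
AVAILABLE_STAFF = 3

def schedule(start, horizon_days):
    load = defaultdict(int)
    for name, period, staff in tasks:
        due = start + timedelta(days=period)
        while (due - start).days <= horizon_days:
            load[due] += staff
            due += timedelta(days=period)
    # return only the days where scheduled work exceeds available personnel
    return {d: n for d, n in sorted(load.items()) if n > AVAILABLE_STAFF}

# Days within the next quarter where scheduled work would exceed available staff
print(schedule(date(2014, 8, 1), horizon_days=90))
```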

  15. Reference Guide to Non-combustion Technologies for Remediation of Persistent Organic Pollutants in Soil, Second Edition - 2010

    EPA Pesticide Factsheets

    This report is the second edition of the U.S. Environmental Protection Agency's (US EPA's) 2005 report and provides a high level summary of information on the applicability of existing and emerging noncombustion technologies for the remediation of...

  16. Teaching Reading Sourcebook, Second Edition

    ERIC Educational Resources Information Center

    Honig, Bill; Diamond, Linda; Gutlohn, Linda

    2008-01-01

    The "Teaching Reading Sourcebook, Second Edition" is a comprehensive reference about reading instruction. Organized according to the elements of explicit instruction (what? why? when? and how?), the "Sourcebook" includes both a research-informed knowledge base and practical sample lesson models. It teaches the key elements of an effective reading…

  17. The Community College Story. Third Edition

    ERIC Educational Resources Information Center

    Vaughan, George B.

    2006-01-01

    This concise history of community colleges touches on major themes, including open access and equity, comprehensiveness, community-based philosophy, commitment to teaching, and lifelong learning. The third edition includes revised text as well as updated statistical information, time line, reading list, and Internet resources. In the more than a…

  18. Genome editing comes of age.

    PubMed

    Kim, Jin-Soo

    2016-09-01

    Genome editing harnesses programmable nucleases to cut and paste genetic information in a targeted manner in living cells and organisms. Here, I review the development of programmable nucleases, including zinc finger nucleases (ZFNs), TAL (transcription-activator-like) effector nucleases (TALENs) and CRISPR (cluster of regularly interspaced palindromic repeats)-Cas9 (CRISPR-associated protein 9) RNA-guided endonucleases (RGENs). I specifically highlight the key advances that set the foundation for the rapid and widespread implementation of CRISPR-Cas9 genome editing approaches that has revolutionized the field.

  19. Health information needs, source preferences and engagement behaviours of women with metastatic breast cancer across the care continuum: protocol for a scoping review.

    PubMed

    Tucker, Carol A; Martin, M Pilar; Jones, Ray B

    2017-02-17

    The health information needs, information source preferences and engagement behaviours of women with metastatic breast cancer (mBC) depend on personal characteristics such as education level, prior knowledge, clinical complications, comorbidities and where they are in the cancer journey. A thorough understanding of the information behaviours of women living with mBC is essential to the provision of optimal care. A preliminary literature review suggests that there is little research on this topic, but that there may be lessons from a slightly broader literature. This review will identify what is known and what is not known about the health information needs, acquisition and influences of women with mBC across the care continuum. Findings will help to identify research needs and specific areas where in-depth systematic reviews may be feasible, as well as inform evidence-based interventions to address the health information needs of female patients with mBC with different demographics and characteristics and across the mBC journey. A scoping review will be performed using the guidelines of Arksey and O'Malley as updated by subsequent authors to systematically search scientific and grey literature for articles in English that discuss the health information needs, source preferences, engagement styles, and associated personal and medical attributes of women ≥18 years living with mBC at different stages of the disease course. A variety of databases (including Cumulative Index to Nursing and Allied Health Literature (CINAHL), PubMed, Excerpta Medica Database (EMBASE), Academic Search Premier, Cochrane Database of Systematic Reviews, PsycINFO, Health Source: Nursing/Academic Edition, and PQDT Open), oncology, patient advocacy and governmental websites will be searched from inception to present day. Research and non-research literature will be included; no study designs will be excluded. The six-stage Arksey and O'Malley scoping review methodological framework involves: (1) identifying the research question; (2) searching for relevant studies; (3) selecting studies; (4) charting the data; (5) collating, summarising and reporting the results; and (6) consulting with stakeholders to inform or validate study findings (optional). Data will be extracted and analysed using a thematic chart and descriptive content analysis. Being a secondary analysis, this research will not require ethics approval. Results will be disseminated through patient support organisations and websites and publications targeting healthcare professionals, advocates and patients. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  20. Model for Codon Position Bias in RNA Editing

    NASA Astrophysics Data System (ADS)

    Liu, Tsunglin; Bundschuh, Ralf

    2005-08-01

    RNA editing can be crucial for the expression of genetic information via inserting, deleting, or substituting a few nucleotides at specific positions in an RNA sequence. Within coding regions in an RNA sequence, editing usually occurs with a certain bias in choosing the positions of the editing sites. In the mitochondrial genes of Physarum polycephalum, many more editing events have been observed at the third codon position than at the first and second, while in some plant mitochondria the second codon position dominates. Here we propose an evolutionary model that explains this bias as the basis of selection at the protein level. The model predicts a distribution of the three positions rather close to the experimental observation in Physarum. This suggests that the codon position bias in Physarum is mainly a consequence of selection at the protein level.
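
    The selection argument can be made concrete without the authors' full evolutionary model: under the standard genetic code, single-nucleotide changes at the third codon position are far more often synonymous than changes at the first or second position, so protein-level selection tolerates them more readily. The sketch below computes those synonymous fractions; it is an illustration of the constraint, not the model proposed in the paper.

```python
# Illustrative calculation (not the authors' model): for each codon position, the
# fraction of single-nucleotide changes that leave the encoded amino acid unchanged.
from itertools import product

BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {"".join(c): AA[i] for i, c in enumerate(product(BASES, repeat=3))}

def synonymous_fraction(position):
    syn = total = 0
    for codon, aa in CODON_TABLE.items():
        for b in BASES:
            if b == codon[position]:
                continue
            mutant = codon[:position] + b + codon[position + 1:]
            total += 1
            syn += CODON_TABLE[mutant] == aa
    return syn / total

for pos in range(3):
    print(f"position {pos + 1}: {synonymous_fraction(pos):.2f} of changes are synonymous")
# Third-position changes are far more often silent, so edits there are least constrained
# at the protein level, consistent with the bias discussed above.
```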

  1. A model for codon position bias in RNA editing

    NASA Astrophysics Data System (ADS)

    Bundschuh, Ralf; Liu, Tsunglin

    2006-03-01

    RNA editing can be crucial for the expression of genetic information via inserting, deleting, or substituting a few nucleotides at specific positions in an RNA sequence. Within coding regions in an RNA sequence, editing usually occurs with a certain bias in choosing the positions of the editing sites. In the mitochondrial genes of Physarum polycephalum, many more editing events have been observed at the third codon position than at the first and second, while in some plant mitochondria the second codon position dominates. Here we propose an evolutionary model that explains this bias as the basis of selection at the protein level. The model predicts a distribution of the three positions rather close to the experimental observation in Physarum. This suggests that the codon position bias in Physarum is mainly a consequence of selection at the protein level.
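    Both records attribute the observed bias to selection at the protein level. A quick way to see why such selection favours third-position edits is to count, using the standard genetic code, how often a single-base change at each codon position leaves the encoded amino acid unchanged. The sketch below performs only this back-of-the-envelope calculation; it is not the authors' evolutionary model.

    ```python
    # Back-of-the-envelope check (not the authors' model): using the standard
    # genetic code, count how often a single-base change at each codon position
    # is synonymous, i.e. leaves the encoded amino acid unchanged.
    bases = "TCAG"
    aa_string = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
    codons = [a + b + c for a in bases for b in bases for c in bases]
    code = dict(zip(codons, aa_string))

    synonymous = [0, 0, 0]
    total = [0, 0, 0]
    for codon in codons:
        if code[codon] == "*":                      # skip stop codons
            continue
        for pos in range(3):
            for alt in bases:
                if alt == codon[pos]:
                    continue
                mutant = codon[:pos] + alt + codon[pos + 1:]
                total[pos] += 1
                if code[mutant] == code[codon]:
                    synonymous[pos] += 1

    for pos in range(3):
        frac = synonymous[pos] / total[pos]
        print(f"codon position {pos + 1}: {frac:.0%} of single-base changes are silent")
    ```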

  2. The Magnetics Information Consortium (MagIC) Online Database: Uploading, Searching and Visualizing Paleomagnetic and Rock Magnetic Data

    NASA Astrophysics Data System (ADS)

    Minnett, R.; Koppers, A.; Tauxe, L.; Constable, C.; Pisarevsky, S. A.; Jackson, M.; Solheid, P.; Banerjee, S.; Johnson, C.

    2006-12-01

    The Magnetics Information Consortium (MagIC) is commissioned to implement and maintain an online portal to a relational database populated by both rock and paleomagnetic data. The goal of MagIC is to archive all measurements and the derived properties for studies of paleomagnetic directions (inclination, declination) and intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). MagIC is hosted under EarthRef.org at http://earthref.org/MAGIC/ and has two search nodes, one for paleomagnetism and one for rock magnetism. Both nodes provide query building based on location, reference, methods applied, material type and geological age, as well as a visual map interface to browse and select locations. The query result set is displayed in a digestible tabular format allowing the user to descend through hierarchical levels such as from locations to sites, samples, specimens, and measurements. At each stage, the result set can be saved and, if supported by the data, can be visualized by plotting global location maps, equal area plots, or typical Zijderveld, hysteresis, and various magnetization and remanence diagrams. User contributions to the MagIC database are critical to achieving a useful research tool. We have developed a standard data and metadata template (Version 2.1) that can be used to format and upload all data at the time of publication in Earth Science journals. Software tools are provided to facilitate population of these templates within Microsoft Excel. These tools allow for the import/export of text files and provide advanced functionality to manage and edit the data, and to perform various internal checks to maintain data integrity and prepare for uploading. The MagIC Contribution Wizard at http://earthref.org/MAGIC/upload.htm executes the upload and takes only a few minutes to process several thousand data records. The standardized MagIC template files are stored in the digital archives of EarthRef.org where they remain available for download by the public (in both text and Excel format). Finally, the contents of these template files are automatically parsed into the online relational database, making the data available for online searches in the paleomagnetic and rock magnetic search nodes. The MagIC database contains all data transferred from the IAGA paleomagnetic poles database (GPMDB), the lava flow paleosecular variation database (PSVRL), lake sediment database (SECVR) and the PINT database. Additionally, a substantial number of data compiled under the Time Averaged Field Investigations project is now included plus a significant fraction of the data collected at SIO and the IRM. Ongoing additions of legacy data include over 40 papers from studies on the Hawaiian Islands and Mexico, data compilations from archeomagnetic studies and updates to the lake sediment dataset.

  3. Key Data on Education in Europe 2009

    ERIC Educational Resources Information Center

    Ranguelov, Stanislav; de Coster, Isabelle; Forsthuber, Bernadette; Noorani, Sogol; Ruffio, Philippe

    2009-01-01

    This seventh edition of "Key Data on Education in Europe" retains its main special feature which is the combination of statistical data and qualitative information to describe the organisation and functioning of education systems in Europe. The present 2009 edition maintains the subject-based structure defined by the previous one but…

  4. 49 CFR 395.15 - Automatic on-board recording devices.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... information concerning on-board system sensor failures and identification of edited data. Such support systems... driving today; (iv) Total hours on duty for the 7 consecutive day period, including today; (v) Total hours...-driver operation; (7) The on-board recording device/system identifies sensor failures and edited data...

  5. Theoretical Models and Processes of Reading. Fifth Edition

    ERIC Educational Resources Information Center

    Ruddell, Robert B., Ed.; Unrau, Norman J., Ed.

    2004-01-01

    For years this landmark book has helped educators, graduate students, and researchers shape their curriculum and stay informed about the latest developments in literacy research and instruction. This fifth edition continues the book's tradition of exemplary scholarship and remains a resource for the most innovative thinking in the field. Although…

  6. Economic Education Programs and Resources Directory. Second Edition.

    ERIC Educational Resources Information Center

    National Association of Manufacturers, Washington, DC.

    This directory provides a selective listing of information about economic education programs and resource activities of 299 corporations, organizations, universities, and colleges in the United States. This second edition of the directory is intended to stimulate interaction between business firms and schools and to help educators, members of the…

  7. Food Production, Management, and Services. Baking. Teacher Edition. Second Edition.

    ERIC Educational Resources Information Center

    Gibson, LeRoy

    These instructional materials are intended for a course on food production, management, and services involved in baking. The following introductory information is included: use of this publication; competency profile; instructional/task analysis; related academic and workplace skills list; tools, materials, and equipment list; 13 references; and a…

  8. Program Description: EDIT Program and Vendor Master Update, SWRL Financial System.

    ERIC Educational Resources Information Center

    Ikeda, Masumi

    Computer routines to edit input data for the Southwest Regional Laboratory's (SWRL) Financial System are described. The program is responsible for validating input records, generating records for further system processing, and updating the Vendor Master File--a file containing the information necessary to support the accounts payable and…

  9. Educational Programs That Work. Sixth Edition, Fall 1979.

    ERIC Educational Resources Information Center

    Far West Lab. for Educational Research and Development, San Francisco, CA.

    Intended to stimulate communication among the federal, state, intermediate, local, and postsecondary agencies that share responsibility for the improvement of education, this Department of Education catalog of exemplary educational programs describes all the projects dealt with in previous editions as well as providing information on more than 30…

  10. Great Decisions [and] Great Decisions Activity Book. 1994 Edition.

    ERIC Educational Resources Information Center

    Hoepli, Nancy L., Ed.

    This book discusses foreign policy issues and provides background information on current topics. This edition examines the following major issues: (1) "Conflict in Former Yugoslavia: Quest for Solutions" (Susan L. Woodward); (2) "South Africa: Forging a Democratic Union" (Jean Herskovits); (3) "Environmental Crisis in Former Soviet Bloc: Whose…

  11. New Literacies: Everyday Practices and Classroom Learning

    ERIC Educational Resources Information Center

    Lankshear, Colin; Knobel, Michelle

    2006-01-01

    The first edition of this popular book examined new literacies and new kinds of knowledge and classroom practices in the context of the massive growth of electronic information and communication technologies. This timely second edition discusses a fresh range of practices like blogging, fanfiction, mobile/wireless communications, and fan practices…

  12. Air Pollution Manual, Part 1--Evaluation. Second Edition.

    ERIC Educational Resources Information Center

    Giever, Paul M., Ed.

    Due to the great increase in technical knowledge and improvement in procedures, this second edition has been prepared to update existing information. Air pollution legislation is reviewed. Sources of air pollution are examined extensively. They are treated in terms of natural sources, man-made sources, metropolitan regional emissions, emission…

  13. A Short History of the Movies. Second Edition.

    ERIC Educational Resources Information Center

    Mast, Gerald

    This second edition of "A Short History of the Movies" includes expanded information and judgments about some of the "old masters" of film; an updated discussion of the "new masters"; and new sections on Japanese, Indian, and Czech cinema. In general, this history reveals significant trends in film history. American…

  14. Beyond the Wasteland: The Criticism of Broadcasting. Revised Edition.

    ERIC Educational Resources Information Center

    Smith, Robert Rutherford

    This second edition of "Beyond the Wasteland" provides an updated review of broadcast criticism. It offers teachers information to sharpen their own critical capacities and help for exploring media criticism with their students. It also provides suggestions and contextual insights for the mass media researcher. The book is divided into…

  15. Integrated editing system for Japanese text and image information "Linernote"

    NASA Astrophysics Data System (ADS)

    Tanaka, Kazuto

    The integrated Japanese text editing system "Linernote", developed by Toyo Industries Co., is explained. The system was developed on the concept of electronic publishing and is composed of an NEC PC-9801 VX personal computer and peripherals. Text, drawing and image data are input and edited under the system's integrated operating environment, and the final text is printed on a laser printer. The handling efficiency of time-consuming work such as pattern input or page make-up has been improved by a draft-image display method on the CRT. It is the latest DTP system equipped with three major functions, namely typesetting for high-quality text editing, easy drawing/tracing, and high-speed image processing.

  16. Designing stereoscopic information visualization for 3D-TV: What can we learn from S3D gaming?

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Masuch, Maic

    2012-03-01

    This paper explores graphical design and spatial alignment of visual information and graphical elements into stereoscopically filmed content, e.g. captions, subtitles, and especially more complex elements in 3D-TV productions. The method used is a descriptive analysis of existing computer- and video games that have been adapted for stereoscopic display using semi-automatic rendering techniques (e.g. Nvidia 3D Vision) or games which have been specifically designed for stereoscopic vision. Digital games often feature compelling visual interfaces that combine high usability with creative visual design. We explore selected examples of game interfaces in stereoscopic vision regarding their stereoscopic characteristics, how they draw attention, how we judge effect and comfort and where the interfaces fail. As a result, we propose a list of five aspects which should be considered when designing stereoscopic visual information: explicit information, implicit information, spatial reference, drawing attention, and vertical alignment. We discuss possible consequences, opportunities and challenges for integrating visual information elements into 3D-TV content. This work shall further help to improve current editing systems and identifies a need for future editing systems for 3DTV, e.g., live editing and real-time alignment of visual information into 3D footage.

  17. Blended Synchronous Delivery Mode in Graduate Programs: A Literature Review and Its Implementation in the Master Teacher Program

    ERIC Educational Resources Information Center

    Lakhal, Sawsen; Bateman, Dianne; Bédard, Janie

    2017-01-01

    The aim of this study is to present a narrative literature review of advantages, challenges, and conditions for the success of blended synchronous course delivery mode. For this purpose, we searched the database EditLib and analyzed 16 existing papers from 2001 to 2016. The conditions for success were operationalized in the Master Teacher Program…

  18. IUCN/SSC Invasive Species Specialist Group (ISSG)

    Science.gov Websites

    Cited publications include Barrios, V., Darwall, W.R.T. and Numa, C. (Editors), 2014, … in the Eastern Mediterranean (Cambridge, UK), and the proceedings of the International Conference on Island Invasives, edited by C. R. Veitch, M. N. Clout, and D. R. Towns. The site describes combining and harmonizing data on IAS from a wide range of different databases and networks.

  19. Melanoma staging: Evidence-based changes in the American Joint Committee on Cancer eighth edition cancer staging manual.

    PubMed

    Gershenwald, Jeffrey E; Scolyer, Richard A; Hess, Kenneth R; Sondak, Vernon K; Long, Georgina V; Ross, Merrick I; Lazar, Alexander J; Faries, Mark B; Kirkwood, John M; McArthur, Grant A; Haydu, Lauren E; Eggermont, Alexander M M; Flaherty, Keith T; Balch, Charles M; Thompson, John F

    2017-11-01

    To update the melanoma staging system of the American Joint Committee on Cancer (AJCC) a large database was assembled comprising >46,000 patients from 10 centers worldwide with stages I, II, and III melanoma diagnosed since 1998. Based on analyses of this new database, the existing seventh edition AJCC stage IV database, and contemporary clinical trial data, the AJCC Melanoma Expert Panel introduced several important changes to the Tumor, Nodes, Metastasis (TNM) classification and stage grouping criteria. Key changes in the eighth edition AJCC Cancer Staging Manual include: 1) tumor thickness measurements to be recorded to the nearest 0.1 mm, not 0.01 mm; 2) definitions of T1a and T1b are revised (T1a, <0.8 mm without ulceration; T1b, 0.8-1.0 mm with or without ulceration or <0.8 mm with ulceration), with mitotic rate no longer a T category criterion; 3) pathological (but not clinical) stage IA is revised to include T1b N0 M0 (formerly pathologic stage IB); 4) the N category descriptors "microscopic" and "macroscopic" for regional node metastasis are redefined as "clinically occult" and "clinically apparent"; 5) prognostic stage III groupings are based on N category criteria and T category criteria (ie, primary tumor thickness and ulceration) and increased from 3 to 4 subgroups (stages IIIA-IIID); 6) definitions of N subcategories are revised, with the presence of microsatellites, satellites, or in-transit metastases now categorized as N1c, N2c, or N3c based on the number of tumor-involved regional lymph nodes, if any; 7) descriptors are added to each M1 subcategory designation for lactate dehydrogenase (LDH) level (LDH elevation no longer upstages to M1c); and 8) a new M1d designation is added for central nervous system metastases. This evidence-based revision of the AJCC melanoma staging system will guide patient treatment, provide better prognostic estimates, and refine stratification of patients entering clinical trials. CA Cancer J Clin 2017;67:472-492. © 2017 American Cancer Society.

  20. Melanoma Staging: Evidence-Based Changes in the American Joint Committee on Cancer Eighth Edition Cancer Staging Manual

    PubMed Central

    Gershenwald, Jeffrey E.; Scolyer, Richard A.; Hess, Kenneth R.; Sondak, Vernon K.; Long, Georgina V.; Ross, Merrick I.; Lazar, Alexander J.; Faries, Mark B.; Kirkwood, John M.; McArthur, Grant A.; Haydu, Lauren E.; Eggermont, Alexander M. M.; Flaherty, Keith T.; Balch, Charles M.; Thompson, John F.

    2018-01-01

    To update the melanoma staging system of the American Joint Committee on Cancer (AJCC) a large database was assembled comprising >46,000 patients from 10 centers worldwide with stages I, II, and III melanoma diagnosed since 1998. Based on analyses of this new database, the existing seventh edition AJCC stage IV database, and contemporary clinical trial data, the AJCC Melanoma Expert Panel introduced several important changes to the Tumor, Nodes, Metastasis (TNM) classification and stage grouping criteria. Key changes in the eighth edition AJCC Cancer Staging Manual include: 1) tumor thickness measurements to be recorded to the nearest 0.1 mm, not 0.01 mm; 2) definitions of T1a and T1b are revised (T1a, <0.8 mm without ulceration; T1b, 0.8–1.0 mm with or without ulceration or <0.8 mm with ulceration), with mitotic rate no longer a T category criterion; 3) pathological (but not clinical) stage IA is revised to include T1b N0 M0 (formerly pathologic stage IB); 4) the N category descriptors “microscopic” and “macroscopic” for regional node metastasis are redefined as “clinically occult” and “clinically apparent”; 5) prognostic stage III groupings are based on N category criteria and T category criteria (ie, primary tumor thickness and ulceration) and increased from 3 to 4 subgroups (stages IIIA–IIID); 6) definitions of N subcategories are revised, with the presence of microsatellites, satellites, or in-transit metastases now categorized as N1c, N2c, or N3c based on the number of tumor-involved regional lymph nodes, if any; 7) descriptors are added to each M1 subcategory designation for lactate dehydrogenase (LDH) level (LDH elevation no longer upstages to M1c); and 8) a new M1d designation is added for central nervous system metastases. This evidence-based revision of the AJCC melanoma staging system will guide patient treatment, provide better prognostic estimates, and refine stratification of patients entering clinical trials. PMID:29028110
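    The revised T1 definitions quoted above amount to a small decision rule. The sketch below encodes that rule as stated in the abstract (T1a, <0.8 mm without ulceration; T1b, 0.8-1.0 mm with or without ulceration, or <0.8 mm with ulceration); the function name is chosen here for illustration and the code is not a clinical staging tool.

    ```python
    def t1_subcategory(thickness_mm: float, ulcerated: bool) -> str:
        """Illustrative encoding of the eighth-edition T1a/T1b rule quoted above.

        Thickness is the recorded value (nearest 0.1 mm); mitotic rate is no
        longer a T1 criterion.  Hypothetical helper, not a clinical tool.
        """
        if thickness_mm > 1.0:
            raise ValueError("thicker than 1.0 mm: beyond the T1 category")
        if thickness_mm < 0.8 and not ulcerated:
            return "T1a"
        # 0.8-1.0 mm with or without ulceration, or <0.8 mm with ulceration
        return "T1b"

    print(t1_subcategory(0.7, ulcerated=False))   # T1a
    print(t1_subcategory(0.9, ulcerated=False))   # T1b
    ```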

  1. The Red Book through the ages.

    PubMed

    Pickering, Larry K; Peter, Georges; Shulman, Stanford T

    2013-11-01

    The first edition of the Red Book was published in 1938. Since then, there have been numerous advances in the fields of infectious diseases and public health that have decreased morbidity and mortality of infants, children, and adolescents. Over the years, emerging pathogens and disease complexes have been described, sophisticated diagnostic techniques developed, advances in antimicrobial therapy have occurred, and immunizations have been implemented to prevent previously deadly diseases. Of the 18 diseases or organisms in the 1938 edition, 13 are now vaccine-preventable. Since inception of the Red Book, the aims of the editors have been to keep pace with these innovations and to continue to inform the medical community. These goals have made the Red Book a fundamental resource for pediatricians and other health care professionals in terms of guiding diagnosis, therapy, and prevention of infectious diseases. The list of 18 diseases or organisms originally described in the 1938 Red Book has expanded to include over 160 diseases or organisms in the 2012 edition. The pace of biomedical discovery, as well as the amount of information available and the number of methods for its delivery, will continue to accelerate in the future. Integration of information into future editions of the Red Book will ensure that practitioners continue to rely on the Red Book in its various electronic formats for clinical guidance and support.

  2. RNA Editome in Rhesus Macaque Shaped by Purifying Selection

    PubMed Central

    Yang, Xin-Zhuang; Tan, Bertrand Chin-Ming; Fang, Huaying; Liu, Chu-Jun; Shi, Mingming; Ye, Zhi-Qiang; Zhang, Yong E.; Deng, Minghua; Zhang, Xiuqin; Li, Chuan-Yun

    2014-01-01

    Understanding of the RNA editing process has been broadened considerably by the next generation sequencing technology; however, several issues regarding this regulatory step remain unresolved – the strategies to accurately delineate the editome, the mechanism by which its profile is maintained, and its evolutionary and functional relevance. Here we report an accurate and quantitative profile of the RNA editome for rhesus macaque, a close relative of human. By combining genome and transcriptome sequencing of multiple tissues from the same animal, we identified 31,250 editing sites, of which 99.8% are A-to-G transitions. We verified 96.6% of editing sites in coding regions and 97.5% of randomly selected sites in non-coding regions, as well as the corresponding levels of editing by multiple independent means, demonstrating the feasibility of our experimental paradigm. Several lines of evidence supported the notion that the adenosine deamination is associated with the macaque editome – A-to-G editing sites were flanked by sequences with the attributes of ADAR substrates, and both the sequence context and the expression profile of ADARs are relevant factors in determining the quantitative variance of RNA editing across different sites and tissue types. In support of the functional relevance of some of these editing sites, substitution valley of decreased divergence was detected around the editing site, suggesting the evolutionary constraint in maintaining some of these editing substrates with their double-stranded structure. These findings thus complement the “continuous probing” model that postulates tinkering-based origination of a small proportion of functional editing sites. In conclusion, the macaque editome reported here highlights RNA editing as a widespread functional regulation in primate evolution, and provides an informative framework for further understanding RNA editing in human. PMID:24722121
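    The editome was delineated by combining genome and transcriptome sequencing from the same animal, so the defining signal for a candidate site is a genomic A with an appreciable fraction of G calls in the RNA reads. The sketch below is a toy version of that comparison; the data structures, thresholds and function name are assumptions made here, and real pipelines typically add mapping-quality, strand, SNP and coverage filters.

    ```python
    # Toy sketch of A-to-G editing-site calling: a position is a candidate when
    # the genome shows 'A' but a fraction of the RNA-seq reads show 'G'.
    # Thresholds and data structures are assumptions for illustration.
    def call_editing_sites(genome_base, rna_pileup, min_coverage=10, min_level=0.05):
        """genome_base: dict position -> reference base
           rna_pileup:  dict position -> list of bases observed in RNA reads"""
        sites = {}
        for pos, reads in rna_pileup.items():
            if genome_base.get(pos) != "A" or len(reads) < min_coverage:
                continue
            level = reads.count("G") / len(reads)   # editing level = G / total reads
            if level >= min_level:
                sites[pos] = level
        return sites

    genome = {101: "A", 102: "C", 103: "A"}
    pileup = {101: list("AAAAAGGGGGGA"), 102: list("CCCCCCCCCCCC"), 103: list("AAAAAAAAAAAA")}
    print(call_editing_sites(genome, pileup))      # {101: 0.5}
    ```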

  3. Gene and protein nomenclature in public databases

    PubMed Central

    Fundel, Katrin; Zimmer, Ralf

    2006-01-01

    Background Frequently, several alternative names are in use for biological objects such as genes and proteins. Applications like manual literature search, automated text-mining, named entity identification, gene/protein annotation, and linking of knowledge from different information sources require the knowledge of all used names referring to a given gene or protein. Various organism-specific or general public databases aim at organizing knowledge about genes and proteins. These databases can be used for deriving gene and protein name dictionaries. So far, little is known about the differences between databases in terms of size, ambiguities and overlap. Results We compiled five gene and protein name dictionaries for each of the five model organisms (yeast, fly, mouse, rat, and human) from different organism-specific and general public databases. We analyzed the degree of ambiguity of gene and protein names within and between dictionaries, to a lexicon of common English words and domain-related non-gene terms, and we compared different data sources in terms of size of extracted dictionaries and overlap of synonyms between those. The study shows that the number of genes/proteins and synonyms covered in individual databases varies significantly for a given organism, and that the degree of ambiguity of synonyms varies significantly between different organisms. Furthermore, it shows that, despite considerable efforts of co-curation, the overlap of synonyms in different data sources is rather moderate and that the degree of ambiguity of gene names with common English words and domain-related non-gene terms varies depending on the considered organism. Conclusion In conclusion, these results indicate that the combination of data contained in different databases allows the generation of gene and protein name dictionaries that contain significantly more used names than dictionaries obtained from individual data sources. Furthermore, curation of combined dictionaries considerably increases size and decreases ambiguity. The entries of the curated synonym dictionary are available for manual querying, editing, and PubMed- or Google-search via the ProThesaurus-wiki. For automated querying via custom software, we offer a web service and an exemplary client application. PMID:16899134
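    Two of the reported measures, within-dictionary ambiguity of synonyms and between-dictionary overlap, are easy to make concrete. The sketch below computes both on two made-up toy dictionaries; the dictionary contents and helper names are illustrative assumptions, not data from the study.

    ```python
    from collections import defaultdict

    # Made-up toy dictionaries: gene identifier -> set of synonyms.
    dict_a = {"TP53": {"p53", "TRP53"}, "CDKN2A": {"p16", "INK4A"}}
    dict_b = {"TP53": {"p53"}, "CDKN1A": {"p21", "CIP1"}}

    def ambiguity(dictionary):
        """Fraction of synonyms that refer to more than one gene in a dictionary."""
        owners = defaultdict(set)
        for gene, names in dictionary.items():
            for name in names:
                owners[name.lower()].add(gene)
        return sum(1 for genes in owners.values() if len(genes) > 1) / len(owners)

    def synonym_overlap(d1, d2):
        """Jaccard overlap of the synonym vocabularies of two dictionaries."""
        s1 = {n.lower() for names in d1.values() for n in names}
        s2 = {n.lower() for names in d2.values() for n in names}
        return len(s1 & s2) / len(s1 | s2)

    print(ambiguity(dict_a), synonym_overlap(dict_a, dict_b))
    ```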

  4. RISSC: a novel database for ribosomal 16S–23S RNA genes spacer regions

    PubMed Central

    García-Martínez, Jesús; Bescós, Ignacio; Rodríguez-Sala, Jesús Javier; Rodríguez-Valera, Francisco

    2001-01-01

    A novel database, under the acronym RISSC (Ribosomal Intergenic Spacer Sequence Collection), has been created. It compiles more than 1600 entries of edited DNA sequence data from the 16S–23S ribosomal spacers present in most prokaryotes and organelles (e.g. mitochondria and chloroplasts) and is accessible through the Internet (http://ulises.umh.es/RISSC), where systematic searches for specific words can be conducted, as well as BLAST-type sequence searches. Additionally, a characteristic feature of this region, the presence/absence and nature of tRNA genes within the spacer, is included in all the entries, even when not previously indicated in the original database. All these combined features could provide a useful documentation tool for studies on evolution, identification, typing and strain characterization, among others. PMID:11125084

  5. Cross-cultural validity of standardized motor development screening and assessment tools: a systematic review.

    PubMed

    Mendonça, Bianca; Sargent, Barbara; Fetters, Linda

    2016-12-01

    To investigate whether standardized motor development screening and assessment tools that are used to evaluate motor abilities of children aged 0 to 2 years are valid in cultures other than those in which the normative sample was established. This was a systematic review in which six databases were searched. Studies were selected based on inclusion/exclusion criteria and appraised for evidence level and quality. Study variables were extracted. Twenty-three studies representing six motor development screening and assessment tools in 16 cultural contexts met the inclusion criteria: Alberta Infant Motor Scale (n=7), Ages and Stages Questionnaire, 3rd edition (n=2), Bayley Scales of Infant and Toddler Development, 3rd edition (n=8), Denver Developmental Screening Test, 2nd edition (n=4), Harris Infant Neuromotor Test (n=1), and Peabody Developmental Motor Scales, 2nd edition (n=1). Thirteen studies found significant differences between the cultural context and normative sample. Two studies established reliability and/or validity of standardized motor development assessments in high-risk infants from different cultural contexts. Five studies established new population norms. Eight studies described the cross-cultural adaptation of a standardized motor development assessment. Standardized motor development assessments have limited validity in cultures other than that in which the normative sample was established. Their use can result in under- or over-referral for services. © 2016 Mac Keith Press.

  6. New spectroscopy in the HITRAN2016 database and its impact on atmospheric retrievals

    NASA Astrophysics Data System (ADS)

    Gordon, I.; Rothman, L. S.; Kochanov, R. V.; Tan, Y.; Toon, G. C.

    2017-12-01

    The HITRAN spectroscopic database is a backbone of the interpretation of atmospheric spectral retrievals and an important input to radiative transfer codes. The database has served the atmospheric community for nearly half a century, with a new edition released every four years. The most recent release is HITRAN2016 [1]. It consists of line-by-line lists, experimental absorption cross-sections, collision-induced absorption data and aerosol indices of refraction. This presentation stresses the importance of using the most recent edition of the database in radiative transfer codes. The line-by-line lists for most of the HITRAN molecules were updated (and two new molecules added) relative to the previous compilation, HITRAN2012 [2], which has been in use, along with some intermediate updates, since 2012. The extent of the updates ranges from revising a few lines of certain molecules to complete replacements of the lists and the introduction of additional isotopologues. In addition, the number of molecules in the cross-sectional part of the database has increased dramatically from nearly 50 to over 300. The molecules covered by the HITRAN database are important in planetary remote sensing, environmental monitoring (in particular, biomass burning detection), climate applications, industrial pollution tracking, astrophysics, and more. Taking advantage of the new structure and interface available at www.hitran.org [3] and the HITRAN Application Programming Interface [4], the number of parameters has also been significantly increased, now incorporating, for instance, non-Voigt line profiles [5], broadening by gases other than air and "self" [6], and other phenomena, including line mixing. These novelties need to be properly introduced into radiative transfer codes in order to advance accurate interpretation of remote sensing retrievals. This work is supported by the NASA PDART (NNX16AG51G) and AURA (NNX17AI78G) programs. References: [1] I.E. Gordon et al., JQSRT, in press (2017), http://doi.org/10.1016/j.jqsrt.2017.06.038. [2] L.S. Rothman et al., JQSRT 130, 4 (2013). [3] C. Hill et al., JQSRT 177, 4 (2016). [4] R.V. Kochanov et al., JQSRT 177, 15 (2016). [5] P. Wcisło et al., JQSRT 177, 75 (2016). [6] J.S. Wilzewski et al., JQSRT 168, 193 (2016).
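    The abstract points to the HITRAN Application Programming Interface [4] as the route for pulling the current line lists into radiative transfer calculations. The sketch below shows what a minimal HAPI session might look like, assuming the hapi Python package is installed; the function names (db_begin, fetch, absorptionCoefficient_Voigt) and keyword arguments follow the commonly documented HAPI interface but should be verified against the release in use.

    ```python
    # Hedged sketch of a HAPI session (function names as documented for HAPI;
    # verify against the installed release).  Downloads H2O line-by-line data
    # for a wavenumber window and computes a Voigt absorption coefficient.
    from hapi import db_begin, fetch, absorptionCoefficient_Voigt

    db_begin("hitran_data")              # local folder for downloaded line lists
    fetch("H2O", 1, 1, 3400, 4100)       # molecule 1 (H2O), isotopologue 1, 3400-4100 cm-1

    # Assumed keyword arguments for roughly ambient conditions (296 K, 1 atm).
    nu, coef = absorptionCoefficient_Voigt(SourceTables="H2O",
                                           Environment={"T": 296.0, "p": 1.0})
    print(nu[:3], coef[:3])
    ```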

  7. Problems experienced by informal caregivers of individuals with heart failure: An integrative review.

    PubMed

    Grant, Joan S; Graven, Lucinda J

    2018-04-01

    The purpose of this review was to examine and synthesize recent literature regarding problems experienced by informal caregivers when providing care for individuals with heart failure in the home. Integrative literature review. A review of current empirical literature was conducted utilizing PubMed, CINAHL, Embase, Sociological Abstracts, Social Sciences Full Text, PsycARTICLES, PsycINFO, Health Source: Nursing/Academic Edition, and Cochrane computerized databases. 19 qualitative, 16 quantitative, and 2 mixed methods studies met the inclusion criteria for review. Computerized databases were searched for a combination of subject terms (i.e., MeSH) and keywords related to informal caregivers, problems, and heart failure. The title and abstract of identified articles and reference lists were reviewed. Studies were included if they were published in English between January 2000 and December 2016 and examined problems experienced by informal caregivers in providing care for individuals with heart failure in the home. Studies were excluded if not written in English or if elements of caregiving in heart failure were not present in the title, abstract, or text. Unpublished and duplicate empirical literature as well as articles related to specific end-stage heart failure populations also were excluded. Methodology described by Cooper and others for integrative reviews of quantitative and qualitative research was used. Quality appraisal of the included studies was evaluated using the Joanna Briggs Institute critical appraisal tools for cross-sectional quantitative and qualitative studies. Informal caregivers experienced four key problems when providing care for individuals with heart failure in the home, including performing multifaceted activities and roles that evolve around daily heart failure demands; maintaining caregiver physical, emotional, social, spiritual, and financial well-being; having insufficient caregiver support; and performing caregiving with uncertainty and inadequate knowledge. Informal caregivers of individuals with heart failure experience complex problems in the home when providing care which impact all aspects of their lives. Incorporating advice from informal caregivers of individuals with heart failure will assist in the development of interventions to reduce negative caregiver outcomes. Given the complex roles in caring for individuals with heart failure, multicomponent interventions are potentially promising in assisting informal caregivers in performing these roles. Published by Elsevier Ltd.

  8. Guide RNA selection for CRISPR-Cas9 transfections in Plasmodium falciparum.

    PubMed

    Ribeiro, Jose M; Garriga, Meera; Potchen, Nicole; Crater, Anna K; Gupta, Ankit; Ito, Daisuke; Desai, Sanjay A

    2018-06-12

    CRISPR-Cas9 mediated genome editing is addressing key limitations in the transfection of malaria parasites. While this method has already simplified the needed molecular cloning and reduced the time required to generate mutants in the human pathogen Plasmodium falciparum, optimal selection of required guide RNAs and guidelines for successful transfections have not been well characterized, leading workers to use time-consuming trial and error approaches. We used a genome-wide computational approach to create a comprehensive and publicly accessible database of possible guide RNA sequences in the P. falciparum genome. For each guide, we report on-target efficiency and specificity scores as well as information about the genomic site relevant to optimal design of CRISPR-Cas9 transfections to modify, disrupt, or conditionally knockdown any gene. As many antimalarial drug and vaccine targets are encoded by multigene families, we also developed a new paralog specificity score that should facilitate modification of either a single family member of interest or multiple paralogs that serve overlapping roles. Finally, we tabulated features of successful transfections in our laboratory, providing broadly useful guidelines for parasite transfections. Molecular studies aimed at understanding parasite biology or characterizing drug and vaccine targets in P. falciparum should be facilitated by this comprehensive database. Published by Elsevier Ltd.
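    The paralog specificity score is intended to distinguish guides that target a single family member from guides that hit several paralogs at once. The toy sketch below illustrates only the underlying idea, counting exact matches of a candidate guide across a small set of made-up paralog sequences; it is not the scoring scheme used by the database.

    ```python
    # Toy illustration of paralog specificity: count how many members of a gene
    # family contain a candidate guide exactly.  Sequences are made up.
    def paralog_hits(guide, paralog_seqs):
        return sum(1 for seq in paralog_seqs.values() if guide in seq)

    family = {
        "paralog_1": "ATGGCTTTGACCAAGGTTACGGATCCA",
        "paralog_2": "ATGGCTTTGACCAAGGTAACGGATCCA",   # one mismatch vs paralog_1
    }
    for guide in ("GCTTTGACCAAGGTTACGG", "GCTTTGACCAAGG"):
        hits = paralog_hits(guide, family)
        kind = "single-paralog" if hits == 1 else "multi-paralog"
        print(f"{guide}: matches {hits} paralog(s) -> {kind}")
    ```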

  9. An integrated tool for the diagnosis of voice disorders.

    PubMed

    Godino-Llorente, Juan I; Sáenz-Lechón, Nicolás; Osma-Ruiz, Víctor; Aguilera-Navarro, Santiago; Gómez-Vilda, Pedro

    2006-04-01

    A PC-based integrated aid tool has been developed for the analysis and screening of pathological voices. With it, the user can simultaneously record speech, electroglottographic (EGG), and videoendoscopic signals, and synchronously edit them to select the most significant segments. These multimedia data are stored in a relational database, together with a patient's personal information, anamnesis, diagnosis, visits, explorations and any other comment the specialist may wish to include. The speech and EGG waveforms are analysed by means of temporal representations, and quantitative parameters such as spectrograms, frequency and amplitude perturbation measures, harmonic energy and noise are calculated using digital signal processing techniques, giving an idea of the degree of hoarseness and the quality of the voice register. Within this framework, the system uses a standard protocol to evaluate and build complete databases of voice disorders. The target users of this system are speech and language therapists and ear, nose and throat (ENT) clinicians. The application can be easily configured to cover the needs of both groups of professionals. The software has a user-friendly Windows-style interface. The PC should be equipped with standard sound and video capture cards. Signals are captured using common transducers: a microphone, an electroglottograph and a fiberscope or telelaryngoscope. The clinical usefulness of the system is addressed in a comprehensive evaluation section.
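    Frequency perturbation (jitter) is one of the quantitative measures mentioned. A common formulation is local jitter: the mean absolute difference between consecutive pitch periods divided by the mean period. The sketch below implements that textbook formula on a made-up period sequence; the paper does not state which jitter variant the tool computes, so this is illustrative only.

    ```python
    # Local jitter (%): mean absolute difference between consecutive pitch
    # periods divided by the mean period.  Assumes the periods (in seconds)
    # have already been extracted from the speech signal.
    def local_jitter_percent(periods):
        if len(periods) < 2:
            raise ValueError("need at least two pitch periods")
        diffs = [abs(b - a) for a, b in zip(periods, periods[1:])]
        return 100.0 * (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

    periods = [0.0100, 0.0102, 0.0099, 0.0101, 0.0100]   # roughly a 100 Hz voice
    print(f"jitter = {local_jitter_percent(periods):.2f}%")
    ```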

  10. Java-based PACS and reporting system for nuclear medicine

    NASA Astrophysics Data System (ADS)

    Slomka, Piotr J.; Elliott, Edward; Driedger, Albert A.

    2000-05-01

    In medical imaging practice, images and reports often need to be reviewed and edited from many locations. We have designed and implemented a Java-based Remote Viewing and Reporting System (JaRRViS) for a nuclear medicine department, which is deployed as a web service at a fraction of the cost of dedicated PACS systems. The system can be extended to other imaging modalities. JaRRViS interfaces to the clinical patient databases of imaging workstations. Specialized nuclear medicine applets support interactive displays of data such as 3-D gated SPECT with all the necessary options such as cine, filtering, dynamic lookup tables, and reorientation. The reporting module is implemented as a separate applet using the Java Foundation Classes (JFC) Swing Editor Kit and allows composition of multimedia reports after selection and annotation of appropriate images. The reports are stored on the server in HTML format. JaRRViS uses Java Servlets for the preparation and storage of final reports. The http links to the reports or to the patient's raw images with applets can be obtained from JaRRViS by any Hospital Information System (HIS) via standard queries. Such links can be sent via e-mail or included as text fields in any HIS database, providing direct access to the patient reports and images via standard web browsers.

  11. Therapeutic Uses of Music with Older Adults. Second Edition

    ERIC Educational Resources Information Center

    Clair, Alicia Ann; Memmott, Jenny

    2008-01-01

    In this comprehensively updated second edition, written by Alicia Ann Clair and Jenny Memmott, the extraordinary benefits of music therapy for older adults are detailed. "Therapeutic Uses of Music with Older Adults" not only examines these benefits but also clarifies the reasons that music is beneficial. This important book shows both informal and…

  12. Schools without Drugs. What Works. Revised Edition.

    ERIC Educational Resources Information Center

    Department of Education, Washington, DC.

    This revised edition focuses on the prevention of drug use among school students, with increased attention to alcohol, tobacco, and steroids. The handbook, which begins with an introduction by Secretary of Education, Lauro F. Cavazos, provides new information about the effects of alcohol on young people; statistics on the harm it causes; and…

  13. A Resource Guide to Assistive Technology for Memory and Organization. Second Edition.

    ERIC Educational Resources Information Center

    McHale, Kathy; McHale, Sara, Ed.

    The second edition of this guide to assistive technology for memory and organization is intended for professionals working with people who have learning disabilities, attention deficit disorders, neurological conditions, and psychological problems. It contains expanded and new appendices as well as new information about free Internet resources,…

  14. Gas Metal Arc Welding and Flux-Cored Arc Welding. Teacher Edition. Second Edition.

    ERIC Educational Resources Information Center

    Fortney, Clarence; Gregory, Mike

    These instructional materials are designed to improve instruction in Gas Metal Arc Welding (GMAW) and Flux-Cored Arc Welding (FCAW). The following introductory information is included: use of this publication; competency profile; instructional/task analysis; related academic and workplace skills list; tools, materials, and equipment list; and…

  15. Professional School Counseling: A Handbook of Theories, Programs, and Practices. Third Edition

    ERIC Educational Resources Information Center

    Erford, Bradley T., Ed.

    2016-01-01

    "Professional School Counseling" is a comprehensive, single source for information about the critical issues facing school counselors today. This third edition of the Handbook integrates and expands on the changes brought about by the ASCA National Model. Revisions to each chapter reflect the influence of the model. Several new chapters…

  16. The Economy of Energy Conservation in Educational Facilities. A Report. Revised Edition.

    ERIC Educational Resources Information Center

    Educational Facilities Labs., Inc., New York, NY.

    This is an update of the 1973 edition of a guide for energy conservation in schools. This Educational Facilities Laboratories publication is an information source for teachers, school administrators, school maintenance personnel, school designers, or anyone interested in conserving energy in schools. Topics discussed include: (1) life-cycle…

  17. Health Activities Project (HAP), Trial Edition III.

    ERIC Educational Resources Information Center

    Buller, Dave; And Others

    Contained within this Health Activities Project (HAP) trial edition (set III) are a teacher information folio and numerous student activity folios which center around the idea that students in grades 5-8 can control their own health and safety. Each student folio is organized into an Overview, Health Background, Materials, Setting Up, and…

  18. Health Activities Project (HAP), Trial Edition II.

    ERIC Educational Resources Information Center

    Buller, Dave; And Others

    Contained within this Health Activities Project (HAP) trial edition (set II) are a teacher information folio and numerous student activity folios which center around the idea that students in grades 5-8 can control their own health and safety. Each student folio is organized into a Synopsis, Health Background, Materials, Setting Up, and Activities…

  19. National Profile of Community Colleges: Trends and Statistics, 4th Edition

    ERIC Educational Resources Information Center

    Phillippe, Kent A.; Sullivan, Leila Gonzalez

    2005-01-01

    This book offers a national view of trends and statistics related to today's community colleges. The new edition includes completely revised text as well as updates to charts and tables on topics such as enrollment, student outcomes, population, curriculum, faculty, workforce, and financial aid. Informative narrative introduces and provides…

  20. Psychology Teacher's Resource Book. First Course, Third Edition.

    ERIC Educational Resources Information Center

    Johnson, Margo, Ed.; Wertheimer, Michael, Ed.

    Now in its third edition, this book contains background materials and resources for teaching introductory high school psychology. There are 11 chapters. Textbooks appropriate for introductory courses are reviewed in the first chapter. Books of reading which are a potentially valuable source of information to both student and teacher are listed in…

  1. Multimedia Projects in Education: Designing, Producing, and Assessing, Third Edition

    ERIC Educational Resources Information Center

    Ivers, Karen S.; Barron, Ann E.

    2005-01-01

    Building on the materials in the two previous successful editions, this book features approximately 40% all new material and updates the previous information. The authors use the DDD-E model (Decide, Design, Develop--Evaluate) to show how to select and plan multimedia projects, use presentation and development tools, manage graphics, audio, and…

  2. Dental Hygiene Program Clinic Manual, Fall 1997. Fourth Edition.

    ERIC Educational Resources Information Center

    Errico, Mary; Cama, Christine; Pastoriza-Maldonado, Alida

    This is the fourth edition of the Clinic Manual for the Dental Hygiene Program at Eugenio Maria de Hostos Community College in the Bronx (New York). It contains general information, grading procedures, performance guides, and clinical forms related to the program. Section 1 provides an introduction to clinic philosophy, policies, goals and…

  3. Introduction to Technical Services. Seventh Edition. Library and Information Science Text Series.

    ERIC Educational Resources Information Center

    Evans, G. Edward; Intner, Sheila S.; Weihs, Jean

    This updated edition covers all aspects of library technical services--from acquisitions to managing the cataloging department--with new emphasis on automation as it affects technical services work and those skills that can be developed through work experience or classroom instruction. Part One, General Background, consists of four chapters that…

  4. Tests: A Comprehensive Reference for Assessments in Psychology, Education, and Business. Fourth Edition.

    ERIC Educational Resources Information Center

    Maddox, Taddy, Ed.

    The fourth edition of this reference guide contains information on thousands of assessment instruments published by 221 publishers and available for use by psychologists, educators, and human resources personnel. The assessments described are organized according to a system of primary classification and cross referencing intended to make the…

  5. Requirements for Certification for Elementary Schools, Secondary Schools, Junior Colleges: Teachers, Counselors, Librarians, Administrators. Forty-Ninth Edition, 1984-85.

    ERIC Educational Resources Information Center

    Burks, Mary P.

    This edition of "Requirements for Certification" updates pertinent information on certification requirements for teachers, administrators, librarians, counselors, and other school personnel in each state in the United States. Outlines are provided of recommendations on certification by the following regional and national associations: Middle…

  6. Hematopathology, 2nd Edition | Center for Cancer Research

    Cancer.gov

    The world's leading reference in hematopathology returns with this completely updated second edition. Authored by international experts in the field, it covers a broad range of hematologic disorders -- both benign and malignant -- with information on the pathogenesis, clinical and pathologic diagnosis, and treatment for each. Comprehensive in scope, it's a must-have resource

  7. Part C Updates: 9th Edition

    ERIC Educational Resources Information Center

    Danaher, Joan; Goode, Sue; Lazara, Alex

    2007-01-01

    "Part C Updates" is a compilation of information on various aspects of the Early Intervention Program for Infants and Toddlers with Disabilities (Part C) of the Individuals with Disabilities Education Act (IDEA). This is the ninth volume in a series of compilations, which included two editions of Part H Updates, the former name of the…

  8. 76 FR 25710 - Comment Request for Information Collection for Employment and Training (ET) Handbook 336, 18th...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-05

    ...) Handbook 336, 18th Edition: ``Unemployment Insurance (UI) State Quality Service Plan Planning (SQSP) and... proposed extension to ET Handbook 336, 18th Edition: ``Unemployment Insurance (UI) State Quality Service.... ADDRESSES: Submit written comments to the Employment and Training Administration, Office of Unemployment...

  9. Transition Toolkit 3.0: Meeting the Educational Needs of Youth Exposed to the Juvenile Justice System. Third Edition

    ERIC Educational Resources Information Center

    Clark, Heather Griller; Mathur, Sarup; Brock, Leslie; O'Cummings, Mindee; Milligan, DeAngela

    2016-01-01

    The third edition of the National Technical Assistance Center for the Education of Neglected or Delinquent Children and Youth's (NDTAC's) "Transition Toolkit" provides updated information on existing policies, practices, strategies, and resources for transition that build on field experience and research. The "Toolkit" offers…

  10. Adviser's Manual of Federal Regulations Affecting Foreign Students and Scholars. 1993 Edition.

    ERIC Educational Resources Information Center

    Bedrosian, Alex, Ed.

    This manual constitutes a reference source on federal regulations for all those concerned with international educational exchange. This year's edition adds to the usual range of topics with more detailed information on immigrant status and related areas. Thirteen sections treat the following topics: (1) introduction to immigration law; (2) the…

  11. Estimating for building and civil engineering works. Eighth edition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geddes, S.; Chrystal-Smith, G.; Jolly, P.

    1986-01-01

    This new edition of Spence Geddes' classic work has been revised and updated to take into account changes since the seventh edition of 1981. It remains a standard reference work which combines a step-by-step guide to the preparation of estimates from tendering stage with a fully representative selection of labour and material constants and worked examples of actual calculations. The estimating information is tabulated as hour constants which are unaffected by fluctuations of labour and plant hire costs. Two new sections have been included. In previous editions dayworks received a few brief notes only, but as so much daywork is carried out on both large and small contracts, and as it can frequently give rise to misunderstanding, a fuller explanation was thought helpful. Landscaping, once the province of the gardener, is now very often an integral part of building and civil engineering contracts and a new chapter has therefore been added. With these additions and the careful updating, the book is an indispensable source of reference for the estimator and a valuable source of information for architects, engineers and surveyors.

  12. Producing Videotape Programs for Computer Training: An Example with AMA/NET

    PubMed Central

    Novey, Donald W.

    1990-01-01

    To facilitate user proficiency with AMA/Net, an 80-minute training videotape has been produced. The production was designed to use videotape's advantages, where information and emotion are combined; and to accommodate its chief disadvantage, lack of resolution for fine text, with close-ups and graphics. Content of the videotape was conceived, outlined, demonstrated with simultaneous text capture, edited into script form, narration added, and scripts marked for videotaping and narrating. Videotaping was performed with actual keyboard sounds for realism. The recording was divided into four areas: office mock-up, keyboard close-ups, scan-conversion and screen close-ups. Once the footage was recorded, it was logged and rough-edited. Care was taken to balance the pace of the program with visual stimulation and amount of narration. The final edit was performed as a culmination of all scripts, video materials and rough edit, with graphics and steady change of visual information offsetting the static nature of the screen display. Carefully planned video programs can be a useful and economical adjunct in the training process for online services.

  13. Producing Videotape Programs for Computer Training: An Example with AMA/NET

    PubMed Central

    Novey, Donald W.

    1990-01-01

    To facilitate user proficiency with AMA/Net, an 80-minute training videotape has been produced. The production was designed to use videotape's advantages, where information and emotion are combined; and to accommodate its chief disadvantage, lack of resolution for fine text, with close-ups and graphics. Content of the videotape was conceived, outlined, demonstrated with simultaneous text capture, edited into script form, narration added, and scripts marked for videotaping and narrating. Videotaping was performed with actual keyboard sounds for realism. The recording was divided into four areas: office mock-up, keyboard close-ups, scan-conversion and screen close-ups. Once the footage was recorded, it was logged and rough-edited. Care was taken to balance the pace of the program with visual stimulation and amount of narration. The final edit was performed as a culmination of all scripts, video materials and rough edit, with graphics and steady change of visual information offsetting the static nature of the screen display. Carefully planned video programs can be a useful and economical adjunct in the training process for online services.

  14. The IASLC Lung Cancer Staging Project: Background Data and Proposals for the Classification of Lung Cancer with Separate Tumor Nodules in the Forthcoming Eighth Edition of the TNM Classification for Lung Cancer.

    PubMed

    Detterbeck, Frank C; Bolejack, Vanessa; Arenberg, Douglas A; Crowley, John; Donington, Jessica S; Franklin, Wilbur A; Girard, Nicolas; Marom, Edith M; Mazzone, Peter J; Nicholson, Andrew G; Rusch, Valerie W; Tanoue, Lynn T; Travis, William D; Asamura, Hisao; Rami-Porta, Ramón

    2016-05-01

    Separate tumor nodules with the same histologic appearance occur in the lungs in a small proportion of patients with primary lung cancer. This article addresses how such tumors can be classified to inform the eighth edition of the anatomic classification of lung cancer. Separate tumor nodules should be distinguished from second primary lung cancer, multifocal ground glass/lepidic tumors, and pneumonic-type lung cancer, which are addressed in separate analyses. Survival of patients with separate tumor nodules in the International Association for the Study of Lung Cancer database were analyzed. This was compared with a systematic literature review. Survival of clinically staged patients decreased according to the location of the separate tumor nodule relative to the index tumor (same lobe > same side > other side) in N0 and N-any cohorts (all M0 except possible other-side nodules). However, there was also a decrease in the proportion of patients resected; among only surgically resected or among nonresected patients no survival differences were noted. There were no survival differences between patients with same-lobe nodules and those with other T3 tumors, between patients with same-side nodules and those with T4 tumors, and patients with other-side nodules and those with other M1a tumors. The data correlated with those identified in a literature review. Tumors with same-lobe separate tumor nodules (with the same histologic appearance) are recommended to be classified as T3, same-side nodules as T4, and other-side nodules as M1a. Thus, there is no recommended change between the seventh and eighth edition of the TNM classification of lung cancer. Copyright © 2016 International Association for the Study of Lung Cancer. Published by Elsevier Inc. All rights reserved.
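    The recommendation reduces to a small mapping from nodule location to descriptor: same lobe as the primary tumor is T3, same side but a different lobe is T4, and the opposite lung is M1a. The sketch below encodes that mapping as summarized in the abstract; it is an illustration rather than a staging calculator, and the location labels are informal choices made here.

    ```python
    # Separate tumor nodule (same histologic appearance as the primary):
    # location -> eighth-edition descriptor, as summarized in the abstract.
    # Illustration only; full TNM staging depends on many other factors.
    NODULE_DESCRIPTOR = {
        "same lobe": "T3",
        "same side, different lobe": "T4",
        "opposite side": "M1a",
    }

    def nodule_descriptor(location: str) -> str:
        try:
            return NODULE_DESCRIPTOR[location]
        except KeyError:
            raise ValueError(f"unknown nodule location: {location!r}") from None

    print(nodule_descriptor("same lobe"))        # T3
    print(nodule_descriptor("opposite side"))    # M1a
    ```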

  15. Protein synthesis editing by a DNA aptamer.

    PubMed Central

    Hale, S P; Schimmel, P

    1996-01-01

    Potential errors in decoding genetic information are corrected by tRNA-dependent amino acid recognition processes manifested through editing reactions. One example is the rejection of difficult-to-discriminate misactivated amino acids by tRNA synthetases through hydrolytic reactions. Although several crystal structures of tRNA synthetases and synthetase-tRNA complexes exist, none of them have provided insight into the editing reactions. Other work suggested that editing required active amino acid acceptor hydroxyl groups at the 3' end of a tRNA effector. We describe here the isolation of a DNA aptamer that specifically induced hydrolysis of a misactivated amino acid bound to a tRNA synthetase. The aptamer had no effect on the stability of the correctly activated amino acid and was almost as efficient as the tRNA for inducing editing activity. The aptamer has no sequence similarity to that of the tRNA effector and cannot be folded into a tRNA-like structure. These and additional data show that active acceptor hydroxyl groups in a tRNA effector and a tRNA-like structure are not essential for editing. Thus, specific bases in a nucleic acid effector trigger the editing response. PMID:8610114

  16. Publication Manual of the American Psychological Association--5th edition: a review of additions and changes in style requirements.

    PubMed

    Russell, Cynthia L; Aud, Myra A

    2002-01-01

    The purpose of this article is to highlight the substantive changes and enhancements between the 4th edition and new 5th edition of the Publication Manual of the American Psychological Association so that modifications and enhancements are more easily incorporated into the reader's writing and editing practice. The 4th and new 5th editions of the manual were compared and substantive changes are presented. The following text style requirements are addressed: (a) use of parentheses to enclose statistical values, (b) use of italics, and (c) presentation of statistical data. Reference citation style changes include: (a) use of italics instead of underlining, (b) use of et al., and (c) use of the hanging indent. With the explosion of electronic media use, guidelines for documenting these sources are reviewed. Appropriate use of adverbs and research subject descriptors, submission of manuscripts on disks or files, responsibilities of corresponding authors, and converting the dissertation into a journal article are addressed. This information should assist the reader to quickly and accurately focus upon the enhancements and changes in the recently released 5th edition. Hopefully these changes will easily be incorporated into the readers' editing responsibilities, manuscripts, and subsequent publications.

  17. TRedD—A database for tandem repeats over the edit distance

    PubMed Central

    Sokol, Dina; Atagun, Firat

    2010-01-01

    A ‘tandem repeat’ in DNA is a sequence of two or more contiguous, approximate copies of a pattern of nucleotides. Tandem repeats are common in the genomes of both eukaryotic and prokaryotic organisms. They are significant markers for human identity testing, disease diagnosis, sequence homology and population studies. In this article, we describe a new database, TRedD, which contains the tandem repeats found in the human genome. The database is publicly available online, and the software for locating the repeats is also freely available. The definition of tandem repeats used by TRedD is a new and innovative definition based upon the concept of ‘evolutive tandem repeats’. In addition, we have developed a tool, called TandemGraph, to graphically depict the repeats occurring in a sequence. This tool can be coupled with any repeat finding software, and it should greatly facilitate analysis of results. Database URL: http://tandem.sci.brooklyn.cuny.edu/ PMID:20624712
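
    Since the repeat definition here is based on the edit distance between approximate copies, a minimal Python sketch of the standard dynamic-programming edit (Levenshtein) distance between two DNA strings may help; it is illustrative only and is not TRedD's own implementation.

    ```python
    def edit_distance(a: str, b: str) -> int:
        """Classic Levenshtein distance via dynamic programming.

        Counts the minimum number of substitutions, insertions and
        deletions needed to turn string `a` into string `b`.
        """
        m, n = len(a), len(b)
        prev = list(range(n + 1))  # distances for the previous row
        for i in range(1, m + 1):
            curr = [i] + [0] * n
            for j in range(1, n + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                curr[j] = min(prev[j] + 1,        # deletion
                              curr[j - 1] + 1,    # insertion
                              prev[j - 1] + cost) # substitution
            prev = curr
        return prev[n]

    if __name__ == "__main__":
        # Two approximate copies of a short tandem-repeat unit.
        print(edit_distance("ACGTACGT", "ACGTTCGT"))  # -> 1
    ```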

  18. Biological knowledge bases using Wikis: combining the flexibility of Wikis with the structure of databases.

    PubMed

    Brohée, Sylvain; Barriot, Roland; Moreau, Yves

    2010-09-01

    In recent years, the number of knowledge bases developed using Wiki technology has exploded. Unfortunately, next to their numerous advantages, classical Wikis present a critical limitation: the invaluable knowledge they gather is represented as free text, which hinders its computational exploitation. This is in sharp contrast with the current practice for biological databases, where the data are made available in a structured way. Here, we present WikiOpener, an extension for the classical MediaWiki engine that augments Wiki pages by allowing on-the-fly querying and formatting of resources external to the Wiki. Those resources may provide data extracted from databases or DAS tracks, or even results returned by local or remote bioinformatics analysis tools. This also implies that structured data can be edited via dedicated forms. Hence, this generic resource combines the structure of biological databases with the flexibility of collaborative Wikis. The source code and its documentation are freely available on the MediaWiki website: http://www.mediawiki.org/wiki/Extension:WikiOpener.
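
    The general idea described above, querying a structured source and rendering the result as wiki markup on the fly, can be illustrated with a small Python sketch; it stands in for, and does not reproduce, WikiOpener's actual MediaWiki extension hooks, and the table contents are invented.

    ```python
    import sqlite3

    def render_wikitable(rows, headers):
        """Render query results as MediaWiki table markup."""
        lines = ['{| class="wikitable"', "! " + " !! ".join(headers)]
        for row in rows:
            lines.append("|-")
            lines.append("| " + " || ".join(str(v) for v in row))
        lines.append("|}")
        return "\n".join(lines)

    # Toy structured source standing in for an external biological database.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE genes (symbol TEXT, organism TEXT)")
    con.executemany("INSERT INTO genes VALUES (?, ?)",
                    [("ADAR1", "H. sapiens"), ("ADAR2", "H. sapiens")])
    rows = con.execute("SELECT symbol, organism FROM genes").fetchall()
    print(render_wikitable(rows, ["symbol", "organism"]))
    ```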

  19. Cpf1-Database: web-based genome-wide guide RNA library design for gene knockout screens using CRISPR-Cpf1.

    PubMed

    Park, Jeongbin; Bae, Sangsu

    2018-03-15

    Following the type II CRISPR-Cas9 system, type V CRISPR-Cpf1 endonucleases have been found to be applicable for genome editing in various organisms in vivo. However, there are as yet no web-based tools capable of optimally selecting guide RNAs (gRNAs) among all possible genome-wide target sites. Here, we present Cpf1-Database, a genome-wide gRNA library design tool for LbCpf1 and AsCpf1, which have DNA recognition sequences of 5'-TTTN-3' at the 5' ends of target sites. Cpf1-Database provides a sophisticated but simple way to design gRNAs for AsCpf1 nucleases on the genome scale. The data can be accessed through a straightforward web interface, and the powerful collections feature allows gRNAs to be designed for thousands of genes in a short time. Free access at http://www.rgenome.net/cpf1-database/. Contact: sangsubae@hanyang.ac.kr.
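
    As a rough illustration of the target-site definition used here (a 5'-TTTN-3' recognition sequence at the 5' end of each target), the sketch below scans a sequence for TTTN motifs and reports the downstream protospacer. The fixed 20-nt spacer length and the forward-strand-only scan are simplifying assumptions and do not reproduce Cpf1-Database's actual selection criteria.

    ```python
    import re

    def find_cpf1_sites(seq: str, spacer_len: int = 20):
        """Yield (position, PAM, protospacer) for forward-strand TTTN sites.

        Assumes a fixed spacer length; real tools also scan the reverse
        strand and apply additional filters.
        """
        seq = seq.upper()
        for m in re.finditer(r"(?=(TTT[ACGT]))", seq):
            start = m.start()
            spacer = seq[start + 4 : start + 4 + spacer_len]
            if len(spacer) == spacer_len:
                yield start, m.group(1), spacer

    if __name__ == "__main__":
        demo = "AATTTACGTACGTACGTACGTACGTGGCTTTGAA"  # invented sequence
        for pos, pam, spacer in find_cpf1_sites(demo):
            print(pos, pam, spacer)
    ```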

  20. GIS Database and Google Map of the Population at Risk of Cholangiocarcinoma in Mueang Yang District, Nakhon Ratchasima Province of Thailand.

    PubMed

    Kaewpitoon, Soraya J; Rujirakul, Ratana; Joosiri, Apinya; Jantakate, Sirinun; Sangkudloa, Amnat; Kaewthani, Sarochinee; Chimplee, Kanokporn; Khemplila, Kritsakorn; Kaewpitoon, Natthawut

    2016-01-01

    Cholangiocarcinoma (CCA) is a serious problem in Thailand, particularly in the northeastern and northern regions. A database of the population at risk is required for monitoring, surveillance, home health care, and home visits. Therefore, this study aimed to develop a geographic information system (GIS) database and Google map of the population at risk of CCA in Mueang Yang district, Nakhon Ratchasima province, northeastern Thailand, from June to October 2015. Populations at risk were screened using the Korat CCA verbal screening test (KCVST). Software included Microsoft Excel, ArcGIS, and Google Maps. The secondary data, including village points, sub-district boundaries, district boundaries, and the hospital location in Mueang Yang district, were used to create the spatial database, while the populations at risk for CCA and opisthorchiasis were used to create the attribute database. Coordinates were transformed to WGS84 UTM Zone 48 and, after conversion, all of the data were imported into Google Earth through the online web page www.earthpoint.us. Of the 4,800 people screened, 222 constituted a high-risk group for CCA. The geo-visual display is available at www.google.com/maps/d/u/0/edit?mid=zPxtcHv_iDLo.kvPpxl5mAs90&hl=th. The display comprises five layers: layer 1, village locations and the number of people at risk for CCA; layer 2, sub-district health promotion hospitals in Mueang Yang district and the number of opisthorchiasis cases; layer 3, sub-districts and the number of people at risk for CCA; layer 4, the district hospital and the numbers of people at risk for CCA and of opisthorchiasis cases; and layer 5, the district and the numbers of people at risk for CCA and of opisthorchiasis cases. This GIS database and Google map production process is suitable for further monitoring, surveillance, and home health care for CCA sufferers.
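
    The spreadsheet-to-Google-Earth step described above can be illustrated with a minimal KML export written in Python; the village names, coordinates, and counts below are placeholders, and the actual study used the www.earthpoint.us converter rather than hand-written KML.

    ```python
    # Minimal KML writer for point data, standing in for the
    # spreadsheet-to-Google-Earth step described above.
    KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
    <kml xmlns="http://www.opengis.net/kml/2.2">
      <Document>
    {placemarks}
      </Document>
    </kml>"""

    def placemark(name: str, lat: float, lon: float, at_risk: int) -> str:
        # KML expects lon,lat,altitude order in <coordinates>.
        return (f"    <Placemark><name>{name}</name>"
                f"<description>population at risk: {at_risk}</description>"
                f"<Point><coordinates>{lon},{lat},0</coordinates></Point>"
                f"</Placemark>")

    # Hypothetical village points (WGS84 lat/lon), for illustration only.
    villages = [("Village A", 15.40, 102.90, 12), ("Village B", 15.45, 102.95, 7)]
    body = "\n".join(placemark(*v) for v in villages)
    with open("cca_risk_points.kml", "w", encoding="utf-8") as fh:
        fh.write(KML_TEMPLATE.format(placemarks=body))
    ```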

  1. A Proposed Resident's Operative Case Tracking and Evaluation System.

    PubMed

    Sehli, Deema N; Esene, Ignatius N; Baeesa, Saleh S

    2016-03-01

    Neurosurgery program trainers are continuously searching for new methods to evaluate trainees' competency beyond the number of cases and the training duration. Recently, efforts have been made to develop reliable methods to teach competency and valid methods to measure teaching efficacy. Herein, we propose the "Resident's Operative Case Tracking and Evaluation System" (ROCTES) for the assessment and monitoring of the resident's performance quality during each procedure. We developed a database-backed website and smartphone application for neurosurgical attending physicians, residents, and resident review committees in our accredited neurosurgical institutions. ROCTES runs through five steps: Login (Resident), Case Entry, Login (Attending Physician), Case Approval and Evaluation, and Report. The Resident enters each case record in the "Case Entry" field and can "save," "edit," or "submit" the case data to the Attending Physician. The Attending Physician, from his or her login profile, can "approve and evaluate" the resident's "knowledge," "skills," and "attitude," each rated from 1 to 15 for that particular case; add comments; and then "save," "edit," or "submit" the data, which can be viewed by users as a "report." Program Directors can also log in to monitor the resident's progress. The implementation of this communication tool should enable the filtering and retrieval of the information needed for better assessment and monitoring of residents' exposure to a variety of cases in each training center. This proposed evaluation system will provide a transparent assessment for residency training programs and should help develop trainees into competent neurosurgeons. Copyright © 2016 Elsevier Inc. All rights reserved.
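
    The save/edit/submit workflow outlined above can be sketched as a small state model. The class below is illustrative only and is not the actual ROCTES schema; it follows the description in scoring "knowledge," "skills," and "attitude" from 1 to 15.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class OperativeCase:
        resident: str
        procedure: str
        status: str = "saved"              # saved -> submitted -> approved
        scores: dict = field(default_factory=dict)

        def submit(self):
            # Resident sends the saved record to the attending physician.
            self.status = "submitted"

        def evaluate(self, knowledge: int, skills: int, attitude: int, comment: str = ""):
            # Attending physician approves and scores the case (1 to 15 each).
            for name, value in (("knowledge", knowledge), ("skills", skills), ("attitude", attitude)):
                if not 1 <= value <= 15:
                    raise ValueError(f"{name} score must be between 1 and 15")
                self.scores[name] = value
            self.scores["comment"] = comment
            self.status = "approved"

    case = OperativeCase("Resident X", "Lumbar discectomy")  # hypothetical case
    case.submit()
    case.evaluate(knowledge=12, skills=11, attitude=14, comment="Good exposure")
    print(case.status, case.scores)
    ```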

  2. HIV Molecular Immunology 2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yusim, Karina; Korber, Bette Tina; Brander, Christian

    The scope and purpose of the HIV molecular immunology database: HIV Molecular Immunology is a companion volume to HIV Sequence Compendium. This publication, the 2015 edition, is the PDF version of the web-based HIV Immunology Database (http://www.hiv.lanl.gov/content/immunology/). The web interface for this relational database has many search options, as well as interactive tools to help immunologists design reagents and interpret their results. In the HIV Immunology Database, HIV-specific B-cell and T-cell responses are summarized and annotated. Immunological responses are divided into three parts: CTL, T helper, and antibody. Within these parts, defined epitopes are organized by protein and binding sites within each protein, moving from left to right through the coding regions spanning the HIV genome. We include human responses to natural HIV infections, as well as vaccine studies in a range of animal models and human trials. Responses that are not specifically defined, such as responses to whole proteins or monoclonal antibody responses to discontinuous epitopes, are summarized at the end of each protein section. Studies describing general HIV responses to the virus, but not to any specific protein, are included at the end of each part. The annotation includes information such as cross-reactivity, escape mutations, antibody sequence, TCR usage, functional domains that overlap with an epitope, immune response associations with rates of progression and therapy, and how specific epitopes were experimentally defined. Basic information such as HLA specificities for T-cell epitopes, isotypes of monoclonal antibodies, and epitope sequences is included whenever possible. All studies that we can find that incorporate the use of a specific monoclonal antibody are included in the entry for that antibody. A single T-cell epitope can have multiple entries, generally one entry per study. Finally, maps of all defined linear epitopes relative to the HXB2 reference proteins are provided. Alignments of CTL, helper T-cell, and antibody epitopes are available through the search interface on our web site at http://www.hiv.lanl.gov/content/immunology.
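
    The epitope maps mentioned in this record locate defined linear epitopes within the HXB2 reference proteins; as a toy version of that lookup, the sketch below returns 1-based coordinates of an epitope by substring search. The protein and epitope strings are invented for illustration and are not actual HXB2 sequence.

    ```python
    def map_epitope(protein_seq: str, epitope: str):
        """Return 1-based (start, end) of a linear epitope in a reference
        protein sequence, or None if the epitope is not found."""
        idx = protein_seq.find(epitope)
        if idx == -1:
            return None
        return idx + 1, idx + len(epitope)

    # Invented reference protein and epitope, for illustration only.
    reference = "MAGTEDKLRPQWNSFVAYHIKC"
    print(map_epitope(reference, "LRPQWNSF"))  # -> (8, 15)
    ```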

  3. Positive correlation between ADAR expression and its targets suggests a complex regulation mediated by RNA editing in the human brain

    PubMed Central

    Liscovitch, Noa; Bazak, Lily; Levanon, Erez Y; Chechik, Gal

    2014-01-01

    A-to-I RNA editing by adenosine deaminases acting on RNA is a post-transcriptional modification that is crucial for normal life and development in vertebrates. RNA editing has been shown to be very abundant in the human transcriptome, specifically at the primate-specific Alu elements. The functional role of this widespread effect is still not clear; it is believed that editing of transcripts is a mechanism for their down-regulation via processes such as nuclear retention or RNA degradation. Here we combine 2 neural gene expression datasets with genome-level editing information to examine the relation between the expression of ADAR genes and the expression of their target genes. Specifically, we computed the spatial correlation across structures of post-mortem human brains between ADAR and a large set of targets that were found to be edited in their Alu repeats. Surprisingly, we found that a large fraction of the edited genes are positively correlated with ADAR, opposing the assumption that editing would reduce expression. When considering the correlations between ADAR and its targets over development, 2 gene subsets emerge, positively and negatively correlated with ADAR expression. Specifically, at embryonic time points, ADAR is positively correlated with many genes related to RNA processing and regulation of gene expression. These findings imply that the suggested mechanism of regulation of expression by editing is probably not a global one; ADAR expression does not have a genome-wide effect reducing the expression of editing targets. It is possible, however, that RNA editing by ADAR in non-coding regions of the gene might be part of a more complex expression regulation mechanism. PMID:25692240
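
    The core computation in this study, the spatial correlation between ADAR and each edited target across brain structures, amounts to a per-gene correlation over matched expression vectors. The sketch below is a minimal illustration using NumPy; the gene names and expression values are invented, and the published analysis was run on far larger datasets.

    ```python
    import numpy as np

    # Hypothetical expression levels across six brain structures.
    adar = np.array([5.1, 6.3, 4.8, 7.0, 6.1, 5.5])
    targets = {
        "GENE_A": np.array([2.0, 2.9, 1.8, 3.3, 2.7, 2.2]),  # tracks ADAR
        "GENE_B": np.array([7.9, 6.5, 8.2, 5.9, 6.6, 7.4]),  # anti-correlated
    }

    for gene, expr in targets.items():
        # Pearson correlation between ADAR and the target across structures.
        r = np.corrcoef(adar, expr)[0, 1]
        print(f"{gene}: r = {r:+.2f}")
    ```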

  4. Positive correlation between ADAR expression and its targets suggests a complex regulation mediated by RNA editing in the human brain.

    PubMed

    Liscovitch, Noa; Bazak, Lily; Levanon, Erez Y; Chechik, Gal

    2014-01-01

    A-to-I RNA editing by adenosine deaminases acting on RNA is a post-transcriptional modification that is crucial for normal life and development in vertebrates. RNA editing has been shown to be very abundant in the human transcriptome, specifically at the primate-specific Alu elements. The functional role of this widespread effect is still not clear; it is believed that editing of transcripts is a mechanism for their down-regulation via processes such as nuclear retention or RNA degradation. Here we combine 2 neural gene expression datasets with genome-level editing information to examine the relation between the expression of ADAR genes and the expression of their target genes. Specifically, we computed the spatial correlation across structures of post-mortem human brains between ADAR and a large set of targets that were found to be edited in their Alu repeats. Surprisingly, we found that a large fraction of the edited genes are positively correlated with ADAR, opposing the assumption that editing would reduce expression. When considering the correlations between ADAR and its targets over development, 2 gene subsets emerge, positively and negatively correlated with ADAR expression. Specifically, at embryonic time points, ADAR is positively correlated with many genes related to RNA processing and regulation of gene expression. These findings imply that the suggested mechanism of regulation of expression by editing is probably not a global one; ADAR expression does not have a genome-wide effect reducing the expression of editing targets. It is possible, however, that RNA editing by ADAR in non-coding regions of the gene might be part of a more complex expression regulation mechanism.

  5. Uploading, Searching and Visualizing of Paleomagnetic and Rock Magnetic Data in the Online MagIC Database

    NASA Astrophysics Data System (ADS)

    Minnett, R.; Koppers, A.; Tauxe, L.; Constable, C.; Donadini, F.

    2007-12-01

    The Magnetics Information Consortium (MagIC) is commissioned to implement and maintain an online portal to a relational database populated by both rock and paleomagnetic data. The goal of MagIC is to archive all available measurements and derived properties from paleomagnetic studies of directions and intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). MagIC is hosted under EarthRef.org at http://earthref.org/MAGIC/ and will soon implement two search nodes, one for paleomagnetism and one for rock magnetism. Currently the PMAG node is operational. Both nodes provide query building based on location, reference, methods applied, material type and geological age, as well as a visual map interface to browse and select locations. Users can also browse the database by data type or by data compilation to view all contributions associated with well known earlier collections like PINT, GMPDB or PSVRL. The query result set is displayed in a digestible tabular format allowing the user to descend from locations to sites, samples, specimens and measurements. At each stage, the result set can be saved and, where appropriate, can be visualized by plotting global location maps, equal area, XY, age, and depth plots, or typical Zijderveld, hysteresis, magnetization and remanence diagrams. User contributions to the MagIC database are critical to achieving a useful research tool. We have developed a standard data and metadata template (version 2.3) that can be used to format and upload all data at the time of publication in Earth Science journals. Software tools are provided to facilitate population of these templates within Microsoft Excel. These tools allow for the import/export of text files and provide advanced functionality to manage and edit the data, and to perform various internal checks to maintain data integrity and prepare for uploading. The MagIC Contribution Wizard at http://earthref.org/MAGIC/upload.htm executes the upload and takes only a few minutes to process tens of thousands of data records. The standardized MagIC template files are stored in the digital archives of EarthRef.org where they remain available for download by the public (in both text and Excel format). Finally, the contents of these template files are automatically parsed into the online relational database, making the data available for online searches in the paleomagnetic and rock magnetic search nodes. During the upload process the owner has the option of keeping the contribution private so it can be viewed in the context of other data sets and visualized using the suite of MagIC plotting tools. Alternatively, the new data can be password protected and shared with a group of users at the contributor's discretion. Once they are published and the owner is comfortable making the upload publicly accessible, the MagIC Editing Committee reviews the contribution for adherence to the MagIC data model and conventions to ensure a high level of data integrity.
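
    Template-based uploading of the kind described above ultimately reduces to checking that a tab-delimited contribution carries the expected columns before it is parsed into the relational database. The sketch below is a generic validation example; the column names are placeholders and do not reproduce the actual MagIC (version 2.3) data model.

    ```python
    import csv
    import io

    # Placeholder required columns; the real MagIC templates define their own.
    REQUIRED = {"er_location_name", "average_lat", "average_lon"}

    def check_template(text: str):
        """Report required columns missing from a tab-delimited template."""
        reader = csv.DictReader(io.StringIO(text), delimiter="\t")
        missing = REQUIRED - set(reader.fieldnames or [])
        return missing, list(reader)

    sample = "er_location_name\taverage_lat\taverage_lon\nSite 1\t33.0\t-117.3\n"
    missing, rows = check_template(sample)
    print("missing columns:", missing or "none")
    print("rows parsed:", len(rows))
    ```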

  6. CERES ERBE-like Monthly Regional Averages (ES-9) in HDF (CER_ES9_Aqua-FM3_Edition1)

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A. (Principal Investigator)

    The ERBE-like Monthly Regional Averages (ES-9) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The ES-9 is also produced for combinations of scanner instruments. All instantaneous shortwave and longwave fluxes at the Top-of-the-Atmosphere (TOA) from the CERES ES-8 product for a month are sorted by 2.5-degree spatial regions, by day number, and by the local hour of observation. The mean of the instantaneous fluxes for a given region-day-hour bin is determined and recorded on the ES-9 along with other flux statistics and scene information. For each region, the daily average flux is estimated from an algorithm that uses the available hourly data, scene identification data, and diurnal models. This algorithm is 'like' the algorithm used for the Earth Radiation Budget Experiment (ERBE). The ES-9 also contains hourly average fluxes for the month and an overall monthly average for each region. These average fluxes are given for both clear-sky and total-sky scenes. The following CERES ES9 data sets are currently available: CER_ES9_FM1+FM2_Edition1 CER_ES9_PFM+FM1+FM2_Edition1 CER_ES9_PFM+FM1+FM2_Edition2 CER_ES9_PFM+FM1_Edition1 CER_ES9_PFM+FM2_Edition1 CER_ES9_PFM+FM1_Edition2 CER_ES9_PFM+FM2_Edition2 CER_ES9_TRMM-PFM_Edition1 CER_ES9_TRMM-PFM_Edition2 CER_ES9_Terra-FM1_Edition1 CER_ES9_Terra-FM2_Edition1 CER_ES9_FM1+FM2_Edition2 CER_ES9_Terra-FM1_Edition2 CER_ES9_Terra-FM2_Edition2 CER_ES9_Aqua-FM3_Edition1 CER_ES9_Aqua-FM4_Edition1 CER_ES9_FM1+FM2+FM3+FM4_Edition1 CER_ES9_Aqua-FM3_Edition2 CER_ES9_Aqua-FM4_Edition2 CER_ES9_FM1+FM3_Edition2 CER_ES9_FM1+FM4_Edition2 CER_ES9_Aqua-FM3_Edition1-CV CER_ES9_Aqua-FM4_Edition1-CV CER_ES9_Terra-FM1_Edition1-CV CER_ES9_Terra-FM2_Edition1-CV. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-01-01; Stop_Date=2005-10-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=2.5 degree; Longitude_Resolution=2.5 degree; Horizontal_Resolution_Range=250 km - < 500 km or approximately 2.5 degrees - < 5.0 degrees; Temporal_Resolution=hourly, daily, monthly; Temporal_Resolution_Range=Hourly - < Daily, Daily - < Weekly, Monthly - < Annual].
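
    The averaging scheme described in these ES-9 records, instantaneous TOA fluxes sorted into 2.5-degree region, day-number, and local-hour bins and then averaged, can be sketched as follows; the observations are placeholders and the sketch omits the diurnal-model step used to estimate daily averages in the real algorithm.

    ```python
    from collections import defaultdict

    def region_index(lat: float, lon: float, cell: float = 2.5):
        """Map a lat/lon to a 2.5-degree regional bin index."""
        return int((lat + 90) // cell), int((lon + 180) // cell)

    def bin_fluxes(observations):
        """Average instantaneous fluxes by (region, day, local hour) bin.

        `observations` is an iterable of (lat, lon, day, local_hour, flux).
        """
        sums = defaultdict(lambda: [0.0, 0])
        for lat, lon, day, hour, flux in observations:
            key = (region_index(lat, lon), day, hour)
            sums[key][0] += flux
            sums[key][1] += 1
        return {key: total / count for key, (total, count) in sums.items()}

    # Invented observations for illustration only.
    obs = [(10.2, 100.1, 1, 13, 241.0), (10.9, 100.8, 1, 13, 245.0),
           (10.2, 100.1, 1, 14, 250.0)]
    print(bin_fluxes(obs))
    ```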

  7. CERES ERBE-like Monthly Regional Averages (ES-9) in HDF (CER_ES9_FM1+FM4_Edition2)

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A. (Principal Investigator)

    The ERBE-like Monthly Regional Averages (ES-9) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The ES-9 is also produced for combinations of scanner instruments. All instantaneous shortwave and longwave fluxes at the Top-of-the-Atmosphere (TOA) from the CERES ES-8 product for a month are sorted by 2.5-degree spatial regions, by day number, and by the local hour of observation. The mean of the instantaneous fluxes for a given region-day-hour bin is determined and recorded on the ES-9 along with other flux statistics and scene information. For each region, the daily average flux is estimated from an algorithm that uses the available hourly data, scene identification data, and diurnal models. This algorithm is 'like' the algorithm used for the Earth Radiation Budget Experiment (ERBE). The ES-9 also contains hourly average fluxes for the month and an overall monthly average for each region. These average fluxes are given for both clear-sky and total-sky scenes. The following CERES ES9 data sets are currently available: CER_ES9_FM1+FM2_Edition1 CER_ES9_PFM+FM1+FM2_Edition1 CER_ES9_PFM+FM1+FM2_Edition2 CER_ES9_PFM+FM1_Edition1 CER_ES9_PFM+FM2_Edition1 CER_ES9_PFM+FM1_Edition2 CER_ES9_PFM+FM2_Edition2 CER_ES9_TRMM-PFM_Edition1 CER_ES9_TRMM-PFM_Edition2 CER_ES9_Terra-FM1_Edition1 CER_ES9_Terra-FM2_Edition1 CER_ES9_FM1+FM2_Edition2 CER_ES9_Terra-FM1_Edition2 CER_ES9_Terra-FM2_Edition2 CER_ES9_Aqua-FM3_Edition1 CER_ES9_Aqua-FM4_Edition1 CER_ES9_FM1+FM2+FM3+FM4_Edition1 CER_ES9_Aqua-FM3_Edition2 CER_ES9_Aqua-FM4_Edition2 CER_ES9_FM1+FM3_Edition2 CER_ES9_FM1+FM4_Edition2 CER_ES9_Aqua-FM3_Edition1-CV CER_ES9_Aqua-FM4_Edition1-CV CER_ES9_Terra-FM1_Edition1-CV CER_ES9_Terra-FM2_Edition1-CV. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-01-01; Stop_Date=2005-03-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=2.5 degree; Longitude_Resolution=2.5 degree; Horizontal_Resolution_Range=250 km - < 500 km or approximately 2.5 degrees - < 5.0 degrees; Temporal_Resolution=hourly, daily, monthly; Temporal_Resolution_Range=Hourly - < Daily, Daily - < Weekly, Monthly - < Annual].

  8. CERES ERBE-like Monthly Regional Averages (ES-9) in HDF (CER_ES9_Aqua-FM4_Edition1-CV)

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A. (Principal Investigator)

    The ERBE-like Monthly Regional Averages (ES-9) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The ES-9 is also produced for combinations of scanner instruments. All instantaneous shortwave and longwave fluxes at the Top-of-the-Atmosphere (TOA) from the CERES ES-8 product for a month are sorted by 2.5-degree spatial regions, by day number, and by the local hour of observation. The mean of the instantaneous fluxes for a given region-day-hour bin is determined and recorded on the ES-9 along with other flux statistics and scene information. For each region, the daily average flux is estimated from an algorithm that uses the available hourly data, scene identification data, and diurnal models. This algorithm is 'like' the algorithm used for the Earth Radiation Budget Experiment (ERBE). The ES-9 also contains hourly average fluxes for the month and an overall monthly average for each region. These average fluxes are given for both clear-sky and total-sky scenes. The following CERES ES9 data sets are currently available: CER_ES9_FM1+FM2_Edition1 CER_ES9_PFM+FM1+FM2_Edition1 CER_ES9_PFM+FM1+FM2_Edition2 CER_ES9_PFM+FM1_Edition1 CER_ES9_PFM+FM2_Edition1 CER_ES9_PFM+FM1_Edition2 CER_ES9_PFM+FM2_Edition2 CER_ES9_TRMM-PFM_Edition1 CER_ES9_TRMM-PFM_Edition2 CER_ES9_Terra-FM1_Edition1 CER_ES9_Terra-FM2_Edition1 CER_ES9_FM1+FM2_Edition2 CER_ES9_Terra-FM1_Edition2 CER_ES9_Terra-FM2_Edition2 CER_ES9_Aqua-FM3_Edition1 CER_ES9_Aqua-FM4_Edition1 CER_ES9_FM1+FM2+FM3+FM4_Edition1 CER_ES9_Aqua-FM3_Edition2 CER_ES9_Aqua-FM4_Edition2 CER_ES9_FM1+FM3_Edition2 CER_ES9_FM1+FM4_Edition2 CER_ES9_Aqua-FM3_Edition1-CV CER_ES9_Aqua-FM4_Edition1-CV CER_ES9_Terra-FM1_Edition1-CV CER_ES9_Terra-FM2_Edition1-CV. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-01-01; Stop_Date=2005-03-29] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=2.5 degree; Longitude_Resolution=2.5 degree; Horizontal_Resolution_Range=250 km - < 500 km or approximately 2.5 degrees - < 5.0 degrees; Temporal_Resolution=hourly, daily, monthly; Temporal_Resolution_Range=Hourly - < Daily, Daily - < Weekly, Monthly - < Annual].

  9. CERES ERBE-like Monthly Regional Averages (ES-9) in HDF (CER_ES9_Terra-FM1_Edition1-CV)

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A. (Principal Investigator)

    The ERBE-like Monthly Regional Averages (ES-9) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The ES-9 is also produced for combinations of scanner instruments. All instantaneous shortwave and longwave fluxes at the Top-of-the-Atmosphere (TOA) from the CERES ES-8 product for a month are sorted by 2.5-degree spatial regions, by day number, and by the local hour of observation. The mean of the instantaneous fluxes for a given region-day-hour bin is determined and recorded on the ES-9 along with other flux statistics and scene information. For each region, the daily average flux is estimated from an algorithm that uses the available hourly data, scene identification data, and diurnal models. This algorithm is 'like' the algorithm used for the Earth Radiation Budget Experiment (ERBE). The ES-9 also contains hourly average fluxes for the month and an overall monthly average for each region. These average fluxes are given for both clear-sky and total-sky scenes. The following CERES ES9 data sets are currently available: CER_ES9_FM1+FM2_Edition1 CER_ES9_PFM+FM1+FM2_Edition1 CER_ES9_PFM+FM1+FM2_Edition2 CER_ES9_PFM+FM1_Edition1 CER_ES9_PFM+FM2_Edition1 CER_ES9_PFM+FM1_Edition2 CER_ES9_PFM+FM2_Edition2 CER_ES9_TRMM-PFM_Edition1 CER_ES9_TRMM-PFM_Edition2 CER_ES9_Terra-FM1_Edition1 CER_ES9_Terra-FM2_Edition1 CER_ES9_FM1+FM2_Edition2 CER_ES9_Terra-FM1_Edition2 CER_ES9_Terra-FM2_Edition2 CER_ES9_Aqua-FM3_Edition1 CER_ES9_Aqua-FM4_Edition1 CER_ES9_FM1+FM2+FM3+FM4_Edition1 CER_ES9_Aqua-FM3_Edition2 CER_ES9_Aqua-FM4_Edition2 CER_ES9_FM1+FM3_Edition2 CER_ES9_FM1+FM4_Edition2 CER_ES9_Aqua-FM3_Edition1-CV CER_ES9_Aqua-FM4_Edition1-CV CER_ES9_Terra-FM1_Edition1-CV CER_ES9_Terra-FM2_Edition1-CV. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-01-01; Stop_Date=2006-09-30] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=2.5 degree; Longitude_Resolution=2.5 degree; Horizontal_Resolution_Range=250 km - < 500 km or approximately 2.5 degrees - < 5.0 degrees; Temporal_Resolution=hourly, daily, monthly; Temporal_Resolution_Range=Hourly - < Daily, Daily - < Weekly, Monthly - < Annual].

  10. CERES ERBE-like Monthly Regional Averages (ES-9) in HDF (CER_ES9_TRMM-PFM_Edition1)

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A. (Principal Investigator)

    The ERBE-like Monthly Regional Averages (ES-9) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The ES-9 is also produced for combinations of scanner instruments. All instantaneous shortwave and longwave fluxes at the Top-of-the-Atmosphere (TOA) from the CERES ES-8 product for a month are sorted by 2.5-degree spatial regions, by day number, and by the local hour of observation. The mean of the instantaneous fluxes for a given region-day-hour bin is determined and recorded on the ES-9 along with other flux statistics and scene information. For each region, the daily average flux is estimated from an algorithm that uses the available hourly data, scene identification data, and diurnal models. This algorithm is 'like' the algorithm used for the Earth Radiation Budget Experiment (ERBE). The ES-9 also contains hourly average fluxes for the month and an overall monthly average for each region. These average fluxes are given for both clear-sky and total-sky scenes. The following CERES ES9 data sets are currently available: CER_ES9_FM1+FM2_Edition1 CER_ES9_PFM+FM1+FM2_Edition1 CER_ES9_PFM+FM1+FM2_Edition2 CER_ES9_PFM+FM1_Edition1 CER_ES9_PFM+FM2_Edition1 CER_ES9_PFM+FM1_Edition2 CER_ES9_PFM+FM2_Edition2 CER_ES9_TRMM-PFM_Edition1 CER_ES9_TRMM-PFM_Edition2 CER_ES9_Terra-FM1_Edition1 CER_ES9_Terra-FM2_Edition1 CER_ES9_FM1+FM2_Edition2 CER_ES9_Terra-FM1_Edition2 CER_ES9_Terra-FM2_Edition2 CER_ES9_Aqua-FM3_Edition1 CER_ES9_Aqua-FM4_Edition1 CER_ES9_FM1+FM2+FM3+FM4_Edition1 CER_ES9_Aqua-FM3_Edition2 CER_ES9_Aqua-FM4_Edition2 CER_ES9_FM1+FM3_Edition2 CER_ES9_FM1+FM4_Edition2 CER_ES9_Aqua-FM3_Edition1-CV CER_ES9_Aqua-FM4_Edition1-CV CER_ES9_Terra-FM1_Edition1-CV CER_ES9_Terra-FM2_Edition1-CV. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-01-01; Stop_Date=1998-08-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=2.5 degree; Longitude_Resolution=2.5 degree; Horizontal_Resolution_Range=250 km - < 500 km or approximately 2.5 degrees - < 5.0 degrees; Temporal_Resolution=hourly, daily, monthly; Temporal_Resolution_Range=Hourly - < Daily, Daily - < Weekly, Monthly - < Annual].

  11. CERES ERBE-like Monthly Regional Averages (ES-9) in HDF (CER_ES9_Aqua-FM4_Edition1)

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A. (Principal Investigator)

    The ERBE-like Monthly Regional Averages (ES-9) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The ES-9 is also produced for combinations of scanner instruments. All instantaneous shortwave and longwave fluxes at the Top-of-the-Atmosphere (TOA) from the CERES ES-8 product for a month are sorted by 2.5-degree spatial regions, by day number, and by the local hour of observation. The mean of the instantaneous fluxes for a given region-day-hour bin is determined and recorded on the ES-9 along with other flux statistics and scene information. For each region, the daily average flux is estimated from an algorithm that uses the available hourly data, scene identification data, and diurnal models. This algorithm is 'like' the algorithm used for the Earth Radiation Budget Experiment (ERBE). The ES-9 also contains hourly average fluxes for the month and an overall monthly average for each region. These average fluxes are given for both clear-sky and total-sky scenes. The following CERES ES9 data sets are currently available: CER_ES9_FM1+FM2_Edition1 CER_ES9_PFM+FM1+FM2_Edition1 CER_ES9_PFM+FM1+FM2_Edition2 CER_ES9_PFM+FM1_Edition1 CER_ES9_PFM+FM2_Edition1 CER_ES9_PFM+FM1_Edition2 CER_ES9_PFM+FM2_Edition2 CER_ES9_TRMM-PFM_Edition1 CER_ES9_TRMM-PFM_Edition2 CER_ES9_Terra-FM1_Edition1 CER_ES9_Terra-FM2_Edition1 CER_ES9_FM1+FM2_Edition2 CER_ES9_Terra-FM1_Edition2 CER_ES9_Terra-FM2_Edition2 CER_ES9_Aqua-FM3_Edition1 CER_ES9_Aqua-FM4_Edition1 CER_ES9_FM1+FM2+FM3+FM4_Edition1 CER_ES9_Aqua-FM3_Edition2 CER_ES9_Aqua-FM4_Edition2 CER_ES9_FM1+FM3_Edition2 CER_ES9_FM1+FM4_Edition2 CER_ES9_Aqua-FM3_Edition1-CV CER_ES9_Aqua-FM4_Edition1-CV CER_ES9_Terra-FM1_Edition1-CV CER_ES9_Terra-FM2_Edition1-CV. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-01-01; Stop_Date=2005-03-29] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=2.5 degree; Longitude_Resolution=2.5 degree; Horizontal_Resolution_Range=250 km - < 500 km or approximately 2.5 degrees - < 5.0 degrees; Temporal_Resolution=hourly, daily, monthly; Temporal_Resolution_Range=Hourly - < Daily, Daily - < Weekly, Monthly - < Annual].

  12. CERES ERBE-like Monthly Regional Averages (ES-9) in HDF (CER_ES9_PFM+FM2_Edition1)

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A. (Principal Investigator)

    The ERBE-like Monthly Regional Averages (ES-9) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The ES-9 is also produced for combinations of scanner instruments. All instantaneous shortwave and longwave fluxes at the Top-of-the-Atmosphere (TOA) from the CERES ES-8 product for a month are sorted by 2.5-degree spatial regions, by day number, and by the local hour of observation. The mean of the instantaneous fluxes for a given region-day-hour bin is determined and recorded on the ES-9 along with other flux statistics and scene information. For each region, the daily average flux is estimated from an algorithm that uses the available hourly data, scene identification data, and diurnal models. This algorithm is 'like' the algorithm used for the Earth Radiation Budget Experiment (ERBE). The ES-9 also contains hourly average fluxes for the month and an overall monthly average for each region. These average fluxes are given for both clear-sky and total-sky scenes. The following CERES ES9 data sets are currently available: CER_ES9_FM1+FM2_Edition1 CER_ES9_PFM+FM1+FM2_Edition1 CER_ES9_PFM+FM1+FM2_Edition2 CER_ES9_PFM+FM1_Edition1 CER_ES9_PFM+FM2_Edition1 CER_ES9_PFM+FM1_Edition2 CER_ES9_PFM+FM2_Edition2 CER_ES9_TRMM-PFM_Edition1 CER_ES9_TRMM-PFM_Edition2 CER_ES9_Terra-FM1_Edition1 CER_ES9_Terra-FM2_Edition1 CER_ES9_FM1+FM2_Edition2 CER_ES9_Terra-FM1_Edition2 CER_ES9_Terra-FM2_Edition2 CER_ES9_Aqua-FM3_Edition1 CER_ES9_Aqua-FM4_Edition1 CER_ES9_FM1+FM2+FM3+FM4_Edition1 CER_ES9_Aqua-FM3_Edition2 CER_ES9_Aqua-FM4_Edition2 CER_ES9_FM1+FM3_Edition2 CER_ES9_FM1+FM4_Edition2 CER_ES9_Aqua-FM3_Edition1-CV CER_ES9_Aqua-FM4_Edition1-CV CER_ES9_Terra-FM1_Edition1-CV CER_ES9_Terra-FM2_Edition1-CV. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-01-01; Stop_Date=2000-03-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=2.5 degree; Longitude_Resolution=2.5 degree; Horizontal_Resolution_Range=250 km - < 500 km or approximately 2.5 degrees - < 5.0 degrees; Temporal_Resolution=hourly, daily, monthly; Temporal_Resolution_Range=Hourly - < Daily, Daily - < Weekly, Monthly - < Annual].

  13. CERES ERBE-like Monthly Regional Averages (ES-9) in HDF (CER_ES9_Terra-FM1_Edition1)

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A. (Principal Investigator)

    The ERBE-like Monthly Regional Averages (ES-9) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The ES-9 is also produced for combinations of scanner instruments. All instantaneous shortwave and longwave fluxes at the Top-of-the-Atmosphere (TOA) from the CERES ES-8 product for a month are sorted by 2.5-degree spatial regions, by day number, and by the local hour of observation. The mean of the instantaneous fluxes for a given region-day-hour bin is determined and recorded on the ES-9 along with other flux statistics and scene information. For each region, the daily average flux is estimated from an algorithm that uses the available hourly data, scene identification data, and diurnal models. This algorithm is 'like' the algorithm used for the Earth Radiation Budget Experiment (ERBE). The ES-9 also contains hourly average fluxes for the month and an overall monthly average for each region. These average fluxes are given for both clear-sky and total-sky scenes. The following CERES ES9 data sets are currently available: CER_ES9_FM1+FM2_Edition1 CER_ES9_PFM+FM1+FM2_Edition1 CER_ES9_PFM+FM1+FM2_Edition2 CER_ES9_PFM+FM1_Edition1 CER_ES9_PFM+FM2_Edition1 CER_ES9_PFM+FM1_Edition2 CER_ES9_PFM+FM2_Edition2 CER_ES9_TRMM-PFM_Edition1 CER_ES9_TRMM-PFM_Edition2 CER_ES9_Terra-FM1_Edition1 CER_ES9_Terra-FM2_Edition1 CER_ES9_FM1+FM2_Edition2 CER_ES9_Terra-FM1_Edition2 CER_ES9_Terra-FM2_Edition2 CER_ES9_Aqua-FM3_Edition1 CER_ES9_Aqua-FM4_Edition1 CER_ES9_FM1+FM2+FM3+FM4_Edition1 CER_ES9_Aqua-FM3_Edition2 CER_ES9_Aqua-FM4_Edition2 CER_ES9_FM1+FM3_Edition2 CER_ES9_FM1+FM4_Edition2 CER_ES9_Aqua-FM3_Edition1-CV CER_ES9_Aqua-FM4_Edition1-CV CER_ES9_Terra-FM1_Edition1-CV CER_ES9_Terra-FM2_Edition1-CV. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-01-01; Stop_Date=2005-10-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=2.5 degree; Longitude_Resolution=2.5 degree; Horizontal_Resolution_Range=250 km - < 500 km or approximately 2.5 degrees - < 5.0 degrees; Temporal_Resolution=hourly, daily, monthly; Temporal_Resolution_Range=Hourly - < Daily, Daily - < Weekly, Monthly - < Annual].

  14. CERES ERBE-like Monthly Regional Averages (ES-9) in HDF (CERES:CER_ES9_PFM+FM1_Edition2)

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A. (Principal Investigator)

    The ERBE-like Monthly Regional Averages (ES-9) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The ES-9 is also produced for combinations of scanner instruments. All instantaneous shortwave and longwave fluxes at the Top-of-the-Atmosphere (TOA) from the CERES ES-8 product for a month are sorted by 2.5-degree spatial regions, by day number, and by the local hour of observation. The mean of the instantaneous fluxes for a given region-day-hour bin is determined and recorded on the ES-9 along with other flux statistics and scene information. For each region, the daily average flux is estimated from an algorithm that uses the available hourly data, scene identification data, and diurnal models. This algorithm is 'like' the algorithm used for the Earth Radiation Budget Experiment (ERBE). The ES-9 also contains hourly average fluxes for the month and an overall monthly average for each region. These average fluxes are given for both clear-sky and total-sky scenes. The following CERES ES9 data sets are currently available: CER_ES9_FM1+FM2_Edition1 CER_ES9_PFM+FM1+FM2_Edition1 CER_ES9_PFM+FM1+FM2_Edition2 CER_ES9_PFM+FM1_Edition1 CER_ES9_PFM+FM2_Edition1 CER_ES9_PFM+FM1_Edition2 CER_ES9_PFM+FM2_Edition2 CER_ES9_TRMM-PFM_Edition1 CER_ES9_TRMM-PFM_Edition2 CER_ES9_Terra-FM1_Edition1 CER_ES9_Terra-FM2_Edition1 CER_ES9_FM1+FM2_Edition2 CER_ES9_Terra-FM1_Edition2 CER_ES9_Terra-FM2_Edition2 CER_ES9_Aqua-FM3_Edition1 CER_ES9_Aqua-FM4_Edition1 CER_ES9_FM1+FM2+FM3+FM4_Edition1 CER_ES9_Aqua-FM3_Edition2 CER_ES9_Aqua-FM4_Edition2 CER_ES9_FM1+FM3_Edition2 CER_ES9_FM1+FM4_Edition2 CER_ES9_Aqua-FM3_Edition1-CV CER_ES9_Aqua-FM4_Edition1-CV CER_ES9_Terra-FM1_Edition1-CV CER_ES9_Terra-FM2_Edition1-CV. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-01-01; Stop_Date=2000-03-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=2.5 degree; Longitude_Resolution=2.5 degree; Horizontal_Resolution_Range=250 km - < 500 km or approximately 2.5 degrees - < 5.0 degrees; Temporal_Resolution=hourly, daily, monthly; Temporal_Resolution_Range=Hourly - < Daily, Daily - < Weekly, Monthly - < Annual].

  15. CERES ERBE-like Monthly Regional Averages (ES-9) in HDF (CER_ES9_Aqua-FM4_Edition2)

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A. (Principal Investigator)

    The ERBE-like Monthly Regional Averages (ES-9) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The ES-9 is also produced for combinations of scanner instruments. All instantaneous shortwave and longwave fluxes at the Top-of-the-Atmosphere (TOA) from the CERES ES-8 product for a month are sorted by 2.5-degree spatial regions, by day number, and by the local hour of observation. The mean of the instantaneous fluxes for a given region-day-hour bin is determined and recorded on the ES-9 along with other flux statistics and scene information. For each region, the daily average flux is estimated from an algorithm that uses the available hourly data, scene identification data, and diurnal models. This algorithm is 'like' the algorithm used for the Earth Radiation Budget Experiment (ERBE). The ES-9 also contains hourly average fluxes for the month and an overall monthly average for each region. These average fluxes are given for both clear-sky and total-sky scenes. The following CERES ES9 data sets are currently available: CER_ES9_FM1+FM2_Edition1 CER_ES9_PFM+FM1+FM2_Edition1 CER_ES9_PFM+FM1+FM2_Edition2 CER_ES9_PFM+FM1_Edition1 CER_ES9_PFM+FM2_Edition1 CER_ES9_PFM+FM1_Edition2 CER_ES9_PFM+FM2_Edition2 CER_ES9_TRMM-PFM_Edition1 CER_ES9_TRMM-PFM_Edition2 CER_ES9_Terra-FM1_Edition1 CER_ES9_Terra-FM2_Edition1 CER_ES9_FM1+FM2_Edition2 CER_ES9_Terra-FM1_Edition2 CER_ES9_Terra-FM2_Edition2 CER_ES9_Aqua-FM3_Edition1 CER_ES9_Aqua-FM4_Edition1 CER_ES9_FM1+FM2+FM3+FM4_Edition1 CER_ES9_Aqua-FM3_Edition2 CER_ES9_Aqua-FM4_Edition2 CER_ES9_FM1+FM3_Edition2 CER_ES9_FM1+FM4_Edition2 CER_ES9_Aqua-FM3_Edition1-CV CER_ES9_Aqua-FM4_Edition1-CV CER_ES9_Terra-FM1_Edition1-CV CER_ES9_Terra-FM2_Edition1-CV. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-01-01; Stop_Date=2005-03-29] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=2.5 degree; Longitude_Resolution=2.5 degree; Horizontal_Resolution_Range=250 km - < 500 km or approximately 2.5 degrees - < 5.0 degrees; Temporal_Resolution=hourly, daily, monthly; Temporal_Resolution_Range=Hourly - < Daily, Daily - < Weekly, Monthly - < Annual].

  16. CERES ERBE-like Monthly Regional Averages (ES-9) in HDF (CER_ES9_PFM+FM1_Edition1)

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A. (Principal Investigator)

    The ERBE-like Monthly Regional Averages (ES-9) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The ES-9 is also produced for combinations of scanner instruments. All instantaneous shortwave and longwave fluxes at the Top-of-the-Atmosphere (TOA) from the CERES ES-8 product for a month are sorted by 2.5-degree spatial regions, by day number, and by the local hour of observation. The mean of the instantaneous fluxes for a given region-day-hour bin is determined and recorded on the ES-9 along with other flux statistics and scene information. For each region, the daily average flux is estimated from an algorithm that uses the available hourly data, scene identification data, and diurnal models. This algorithm is 'like' the algorithm used for the Earth Radiation Budget Experiment (ERBE). The ES-9 also contains hourly average fluxes for the month and an overall monthly average for each region. These average fluxes are given for both clear-sky and total-sky scenes. The following CERES ES9 data sets are currently available: CER_ES9_FM1+FM2_Edition1 CER_ES9_PFM+FM1+FM2_Edition1 CER_ES9_PFM+FM1+FM2_Edition2 CER_ES9_PFM+FM1_Edition1 CER_ES9_PFM+FM2_Edition1 CER_ES9_PFM+FM1_Edition2 CER_ES9_PFM+FM2_Edition2 CER_ES9_TRMM-PFM_Edition1 CER_ES9_TRMM-PFM_Edition2 CER_ES9_Terra-FM1_Edition1 CER_ES9_Terra-FM2_Edition1 CER_ES9_FM1+FM2_Edition2 CER_ES9_Terra-FM1_Edition2 CER_ES9_Terra-FM2_Edition2 CER_ES9_Aqua-FM3_Edition1 CER_ES9_Aqua-FM4_Edition1 CER_ES9_FM1+FM2+FM3+FM4_Edition1 CER_ES9_Aqua-FM3_Edition2 CER_ES9_Aqua-FM4_Edition2 CER_ES9_FM1+FM3_Edition2 CER_ES9_FM1+FM4_Edition2 CER_ES9_Aqua-FM3_Edition1-CV CER_ES9_Aqua-FM4_Edition1-CV CER_ES9_Terra-FM1_Edition1-CV CER_ES9_Terra-FM2_Edition1-CV. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-01-01; Stop_Date=2000-03-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=2.5 degree; Longitude_Resolution=2.5 degree; Horizontal_Resolution_Range=250 km - < 500 km or approximately 2.5 degrees - < 5.0 degrees; Temporal_Resolution=hourly, daily, monthly; Temporal_Resolution_Range=Hourly - < Daily, Daily - < Weekly, Monthly - < Annual].

  17. CERES ERBE-like Monthly Regional Averages (ES-9) in HDF (CER_ES9_Aqua-FM3_Edition1-CV)

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A. (Principal Investigator)

    The ERBE-like Monthly Regional Averages (ES-9) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The ES-9 is also produced for combinations of scanner instruments. All instantaneous shortwave and longwave fluxes at the Top-of-the-Atmosphere (TOA) from the CERES ES-8 product for a month are sorted by 2.5-degree spatial regions, by day number, and by the local hour of observation. The mean of the instantaneous fluxes for a given region-day-hour bin is determined and recorded on the ES-9 along with other flux statistics and scene information. For each region, the daily average flux is estimated from an algorithm that uses the available hourly data, scene identification data, and diurnal models. This algorithm is 'like' the algorithm used for the Earth Radiation Budget Experiment (ERBE). The ES-9 also contains hourly average fluxes for the month and an overall monthly average for each region. These average fluxes are given for both clear-sky and total-sky scenes. The following CERES ES9 data sets are currently available: CER_ES9_FM1+FM2_Edition1 CER_ES9_PFM+FM1+FM2_Edition1 CER_ES9_PFM+FM1+FM2_Edition2 CER_ES9_PFM+FM1_Edition1 CER_ES9_PFM+FM2_Edition1 CER_ES9_PFM+FM1_Edition2 CER_ES9_PFM+FM2_Edition2 CER_ES9_TRMM-PFM_Edition1 CER_ES9_TRMM-PFM_Edition2 CER_ES9_Terra-FM1_Edition1 CER_ES9_Terra-FM2_Edition1 CER_ES9_FM1+FM2_Edition2 CER_ES9_Terra-FM1_Edition2 CER_ES9_Terra-FM2_Edition2 CER_ES9_Aqua-FM3_Edition1 CER_ES9_Aqua-FM4_Edition1 CER_ES9_FM1+FM2+FM3+FM4_Edition1 CER_ES9_Aqua-FM3_Edition2 CER_ES9_Aqua-FM4_Edition2 CER_ES9_FM1+FM3_Edition2 CER_ES9_FM1+FM4_Edition2 CER_ES9_Aqua-FM3_Edition1-CV CER_ES9_Aqua-FM4_Edition1-CV CER_ES9_Terra-FM1_Edition1-CV CER_ES9_Terra-FM2_Edition1-CV. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-01-01; Stop_Date=2006-10-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=2.5 degree; Longitude_Resolution=2.5 degree; Horizontal_Resolution_Range=250 km - < 500 km or approximately 2.5 degrees - < 5.0 degrees; Temporal_Resolution=hourly, daily, monthly; Temporal_Resolution_Range=Hourly - < Daily, Daily - < Weekly, Monthly - < Annual].

  18. CERES ERBE-like Monthly Regional Averages (ES-9) in HDF (CER_ES9_Terra-FM2_Edition1)

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A. (Principal Investigator)

    The ERBE-like Monthly Regional Averages (ES-9) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The ES-9 is also produced for combinations of scanner instruments. All instantaneous shortwave and longwave fluxes at the Top-of-the-Atmosphere (TOA) from the CERES ES-8 product for a month are sorted by 2.5-degree spatial regions, by day number, and by the local hour of observation. The mean of the instantaneous fluxes for a given region-day-hour bin is determined and recorded on the ES-9 along with other flux statistics and scene information. For each region, the daily average flux is estimated from an algorithm that uses the available hourly data, scene identification data, and diurnal models. This algorithm is 'like' the algorithm used for the Earth Radiation Budget Experiment (ERBE). The ES-9 also contains hourly average fluxes for the month and an overall monthly average for each region. These average fluxes are given for both clear-sky and total-sky scenes. The following CERES ES9 data sets are currently available: CER_ES9_FM1+FM2_Edition1 CER_ES9_PFM+FM1+FM2_Edition1 CER_ES9_PFM+FM1+FM2_Edition2 CER_ES9_PFM+FM1_Edition1 CER_ES9_PFM+FM2_Edition1 CER_ES9_PFM+FM1_Edition2 CER_ES9_PFM+FM2_Edition2 CER_ES9_TRMM-PFM_Edition1 CER_ES9_TRMM-PFM_Edition2 CER_ES9_Terra-FM1_Edition1 CER_ES9_Terra-FM2_Edition1 CER_ES9_FM1+FM2_Edition2 CER_ES9_Terra-FM1_Edition2 CER_ES9_Terra-FM2_Edition2 CER_ES9_Aqua-FM3_Edition1 CER_ES9_Aqua-FM4_Edition1 CER_ES9_FM1+FM2+FM3+FM4_Edition1 CER_ES9_Aqua-FM3_Edition2 CER_ES9_Aqua-FM4_Edition2 CER_ES9_FM1+FM3_Edition2 CER_ES9_FM1+FM4_Edition2 CER_ES9_Aqua-FM3_Edition1-CV CER_ES9_Aqua-FM4_Edition1-CV CER_ES9_Terra-FM1_Edition1-CV CER_ES9_Terra-FM2_Edition1-CV. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-01-01; Stop_Date=2005-10-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=2.5 degree; Longitude_Resolution=2.5 degree; Horizontal_Resolution_Range=250 km - < 500 km or approximately 2.5 degrees - < 5.0 degrees; Temporal_Resolution=hourly, daily, monthly; Temporal_Resolution_Range=Hourly - < Daily, Daily - < Weekly, Monthly - < Annual].

  19. A to I editing in disease is not fake news.

    PubMed

    Bajad, Prajakta; Jantsch, Michael F; Keegan, Liam; O'Connell, Mary

    2017-09-02

    Adenosine deaminases acting on RNA (ADARs) are zinc-containing enzymes that deaminate adenosine bases to inosines within dsRNA regions in transcripts. In short, structured dsRNA hairpins, individual adenosine bases may be targeted specifically and edited with up to one hundred percent efficiency, leading to the production of alternative protein variants. However, the majority of editing events occur within longer stretches of dsRNA formed by pairing of repetitive sequences. Here, many different adenosine bases are potential targets, but editing efficiency is usually much lower. Recent work shows that ADAR-mediated RNA editing is also required to prevent aberrant activation of antiviral innate immune sensors that detect viral dsRNA in the cytoplasm. Missense mutations in the ADAR1 RNA editing enzyme cause a fatal auto-inflammatory disease, Aicardi-Goutières syndrome (AGS), in affected children. In addition, RNA editing by ADARs has been observed to increase in many cancers and can also contribute to vascular disease. Thus, the role of RNA editing in the progression of various diseases can no longer be ignored. The ability of ADARs to alter the sequence of RNAs has also been used to artificially target model RNAs in vitro and in cells for RNA editing. Potentially this approach may be used to repair genetic defects and to alter genetic information at the RNA level. In this review we focus on the role of ADARs in disease development and progression and on their potential use to artificially modify RNAs in a targeted manner.
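
    As a toy illustration of the A-to-I substitutions described in this record (inosine is read as guanosine by downstream machinery, so an edited site appears as an A-to-G change), the sketch below applies editing at chosen positions of a transcript; the sequence and sites are invented.

    ```python
    def apply_a_to_i_editing(transcript: str, sites):
        """Replace A with G (inosine reads as G) at the given 0-based sites."""
        bases = list(transcript.upper())
        for pos in sites:
            if bases[pos] != "A":
                raise ValueError(f"position {pos} is {bases[pos]}, not A")
            bases[pos] = "G"
        return "".join(bases)

    # Invented transcript and editing sites, for illustration only.
    print(apply_a_to_i_editing("AUGGCAUACGAUUAG", [5, 7]))  # -> AUGGCGUGCGAUUAG
    ```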

  20. A Survey of Serious Aircraft Accidents Involving Fatigue Fracture. Volume 2. Rotary-Wing Aircraft (Etude sur des Accidents Importants d’Avions du aux Effets des Fractures de Fatigue. Volume 2. Effets sur des Helicopteres).

    DTIC Science & Technology

    1983-04-01

    Convention on International Civil Aviation, Second Edition, March 1966. 5. WORLD AIRLINE ACCIDENT SUMMARY. Civil Aviation Authority (Great Britain)... people who either provided information, or who suggested other sources of information, for the current edition of this survey. E.M.R. Alexander Civil... Waverley, New Zealand. F-28C tail rotor drive shaft. Fatigue strength reduced by softened condition & surface decarburisation. AISI 4130 steel. Ref: NZ

  1. Inventory America: Residential Buildings. Student Workbook, Pilot Edition; Teacher & Project Supervisor Manual, Pilot Edition.

    ERIC Educational Resources Information Center

    Haupt, Richard; Pomeroy, Robert W., Ed.

    Designed as a special studies unit for secondary school students or for adult volunteers, this student workbook and teacher's guide explains how to produce an accurate record of memorable neighborhood houses. The activities in the workbook include observation and information gathering. The workbook sections describe various steps in the process…

  2. 78 FR 56715 - Agency Information Collection Activities; Submission for Office of Management and Budget Review...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-13

    ... minutes, automatically generate the SPL document (a few formatting edits may have to be made). Based on... render it as intended in SPL. The comment said that most users need to apply applicable formatting to..., including MS Word (both editable and hard-formatted), faxes, texts, in emails, or other scanned documents...

  3. Life beyond the Classroom: Transition Strategies for Young People with Disabilities, Fourth Edition

    ERIC Educational Resources Information Center

    Wehman, Paul

    2006-01-01

    Just in time for the implementation of new IDEA regulations, this fourth edition of a landmark text brings together the most up-to-date, comprehensive information on facilitating transitions for young people with mild, moderate, or severe disabilities. Teaming with the best-known researchers in the fields of employment, transition, postsecondary…

  4. Algebra Word Problem Solving Approaches in a Chemistry Context: Equation Worked Examples versus Text Editing

    ERIC Educational Resources Information Center

    Ngu, Bing Hiong; Yeung, Alexander Seeshing

    2013-01-01

    Text editing directs students' attention to the problem structure as they classify whether the texts of word problems contain sufficient, missing or irrelevant information for working out a solution. Equation worked examples emphasize the formation of a coherent problem structure to generate a solution. Its focus is on the construction of three…

  5. Staff Development: A Practical Guide. Third Edition.

    ERIC Educational Resources Information Center

    Avery, Elizabeth Fuseler, Ed.; Dahlin, Terry, Ed.; Carver, Deborah A., Ed.

    In this new, expanded edition step-by-step guidelines are provided for customizing a staff development program that is both proactive and goal-oriented. Drawing on the advice of 37 top experts with a variety of skill sets, this book presents information on how to assess a library's needs and set training goals, budget appropriately, develop a set…

  6. Digest of Education Statistics, 2010. NCES 2011-015

    ERIC Educational Resources Information Center

    Snyder, Thomas D.; Dillow, Sally A.

    2011-01-01

    The 2010 edition of the "Digest of Education Statistics" is the 46th in a series of publications initiated in 1962. The "Digest" has been issued annually except for combined editions for the years 1977-78, 1983-84, and 1985-86. Its primary purpose is to provide a compilation of statistical information covering the broad field…

  7. Unkindest Cuts? Some Effects of Picture Editing on Recall of Television News Information.

    ERIC Educational Resources Information Center

    Davies, Maire Messenger; And Others

    1985-01-01

    This study examined the effects of pictorial changes on recall of spoken text in a television news broadcast viewed by adolescents and adults. Adult recall of spoken text was impaired by mid-sentence picture editing while adolescents' recall was enhanced so long as accompanying pictures were relevant to verbal text. (MBR)

  8. Insider's Guide to Graduate Programs in Clinical and Counseling Psychology. 2006/2007 Edition

    ERIC Educational Resources Information Center

    Mayne, Tracy J.; Norcross, John C.; Sayette, Michael A.

    2006-01-01

    Now in its 2006-2007 edition, this perennial bestseller is the resource students count on for the most current information on applying to doctoral programs in clinical or counseling psychology. The Insider's Guide presents up-to-date facts on 300 accredited programs in the United States and Canada. Each program's profile includes admissions…

  9. Journal of Teaching in Marriage and Family: Innovations in Family Science Education. Archival Print Edition, 2001.

    ERIC Educational Resources Information Center

    Gentry, Deborah, Ed.

    2002-01-01

    This document comprises the archival print edition of the first volume of a quarterly electronic journal providing a forum for information on leading-edge teaching innovations and research findings in the field of family science. The foci of the journal are the family science curriculum, instruction issues, evaluation and assessment, and…

  10. 1001 Best Internet Sites for Educators. 2nd Edition.

    ERIC Educational Resources Information Center

    Treadwell, Mark

    This second edition of a resource designed to help teachers find relevant information on the Internet for both themselves and their students provides concise reviews of more than 1,000 Web sites sorted by subject area. Each site is evaluated with one to five stars for content, presentation, and grade level. Easy-to-follow explanations are provided…

  11. Guided Research in Middle School: Mystery in the Media Center. Second Edition

    ERIC Educational Resources Information Center

    Harrington, LaDawna

    2011-01-01

    A little imagination, a little drama, a little mystery. Using the guided inquiry model in this updated, second edition, students become detectives at Information Headquarters. They solve a mystery and enhance their problem-solving and literacy skills. Middle school is a crucial time in the development of problem-solving, critical-thinking, and…

  12. Combined Edition of Family Planning Library Manual and Family Planning Classification.

    ERIC Educational Resources Information Center

    Planned Parenthood--World Population, New York, NY. Katharine Dexter McCormick Library.

    This edition combines two previous publications of the Katharine Dexter McCormick Library into one volume: the Family Planning Library Manual, a guide for starting a family planning and population library or information center, and the Family Planning Classification, a coding system for organizing book and non-book materials so that they can be…

  13. Developing Cross-Cultural Competence: A Guide for Working with Children and Their Families, Third Edition

    ERIC Educational Resources Information Center

    Lynch, Eleanor W.; Hanson, Marci J.

    2004-01-01

    The third edition of this bestselling text brings together detailed, accurate information on working with families and children with disabilities from specific cultural, ethnic, and language groups. Filled with timely new additions such as a chapter on South Asian roots, open-ended case studies on ethical dilemmas, and an expanded discussion on…

  14. Digest of Education Statistics, 2009. NCES 2010-013

    ERIC Educational Resources Information Center

    Snyder, Thomas D.; Dillow, Sally A.

    2010-01-01

    The 2009 edition of the "Digest of Education Statistics" is the 45th in a series of publications initiated in 1962. The "Digest" has been issued annually except for combined editions for the years 1977-78, 1983-84, and 1985-86. Its primary purpose is to provide a compilation of statistical information covering the broad field…

  15. Digest of Education Statistics 2013. NCES 2015-011

    ERIC Educational Resources Information Center

    Snyder, Thomas D.; Dillow, Sally A.

    2015-01-01

    The 2013 edition of the "Digest of Education Statistics" is the 49th in a series of publications initiated in 1962. The "Digest" has been issued annually except for combined editions for the years 1977-78, 1983-84, and 1985-86. Its primary purpose is to provide a compilation of statistical information covering the broad field of American…

  16. 36 CFR 1235.4 - What publications are incorporated by reference in this part?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...-track magnetic tape cartridges—DLT 5 format, First Edition, December 15, 1999, IBR approved for § 1235...-track magnetic tape cartridges—DLT 6 format, First Edition, May 15, 2000, IBR approved for § 1235.46. (d... Magnetic Tape for Information Interchange (1600 CPI, PE), 1986, IBR approved for § 1235.46. (2) [Reserved...

  17. Requirements for Certification [of] Teachers, Counselors, Librarians, Administrators for Elementary Schools, Secondary Schools, Junior Colleges. Forty-eighth Edition, 1983-84.

    ERIC Educational Resources Information Center

    Woellner, Elizabeth H.

    This edition of "Requirements for Certification" updates pertinent information on certification requirements for teachers, administrators, librarians, counselors, and other school personnel in each state in the United States. Outlines are provided of recommendations on certification by regional and national associations, and sources of information…

  18. Water Films, 2nd Edition, 1965-1974.

    ERIC Educational Resources Information Center

    Canadian National Committee, Ottawa (Ontario).

    This is an annotated listing of 455 films on hydrology and many allied fields. This second edition, much more comprehensive than the first, is not intended to serve as a critical evaluation but solely as a source of information on which films are available. All films are listed alphabetically according to their titles…

  19. Critical Thinking about Research: Psychology and Related Fields. Second Edition

    ERIC Educational Resources Information Center

    Meltzoff, Julian; Cooper, Harris

    2017-01-01

    Could the research you read be fundamentally flawed? Could critical defects in methodology slip by you undetected? To become informed consumers of research, students need to thoughtfully evaluate the research they read rather than accept it without question. This second edition of a classic text gives students the tools they need to apply critical…

  20. The Education Almanac, 1987-1988. Facts and Figures about Our Nation's System of Education. Third Edition.

    ERIC Educational Resources Information Center

    Goodman, Leroy V., Ed.

    This is the third edition of the Education Almanac, an assemblage of statistics, facts, commentary, and basic background information about the conduct of schools in the United States. Features of this variegated volume include an introductory section on "Education's Newsiest Developments," followed by some vital educational statistics, a set of…
