Food Composition Database Format and Structure: A User Focused Approach
Clancy, Annabel K.; Woods, Kaitlyn; McMahon, Anne; Probst, Yasmine
2015-01-01
This study aimed to investigate the needs of Australian food composition database users regarding database format and relate this to the format of databases available globally. Three semi-structured synchronous online focus groups (M = 3, F = 11) and n = 6 female key informant interviews were recorded. Beliefs surrounding the use, training, understanding, benefits and limitations of food composition data and databases were explored. Verbatim transcriptions underwent preliminary coding followed by thematic analysis with NVivo qualitative analysis software to extract the final themes. Schematic analysis was applied to the final themes related to database format. Desktop analysis also examined the format of six key globally available databases. Twenty-four dominant themes were established, of which five related to format: database use, food classification, framework, accessibility and availability, and data derivation. Desktop analysis revealed that food classification systems varied considerably between databases. Microsoft Excel was a common file format used in all databases, and available software varied between countries. Users also recognised that food composition database format should ideally be designed specifically for the intended use, have a user-friendly food classification system, incorporate accurate data with a clear explanation of data derivation, and feature user input. However, such databases are limited by data availability and resources. Further exploration of data sharing options should be considered. Furthermore, users' understanding of the limitations of food composition data and databases is inherent to the correct application of non-specific databases. Therefore, further exploration of user FCDB training should also be considered. PMID:26554836
NASA Technical Reports Server (NTRS)
Singh, M.
1999-01-01
Ceramic matrix composite (CMC) components are being designed, fabricated, and tested for a number of high temperature, high performance applications in aerospace and ground based systems. The critical need for and the role of reliable and robust databases for the design and manufacturing of ceramic matrix composites are presented. A number of issues related to engineering design, manufacturing technologies, joining, and attachment technologies are also discussed. Examples of various ongoing activities in the areas of composite databases, designing to codes and standards, and design for manufacturing are given.
The composite load spectra project
NASA Technical Reports Server (NTRS)
Newell, J. F.; Ho, H.; Kurth, R. E.
1990-01-01
Probabilistic methods and generic load models capable of simulating the load spectra that are induced in space propulsion system components are being developed. Four engine component types (the transfer ducts, the turbine blades, the liquid oxygen posts and the turbopump oxidizer discharge duct) were selected as representative hardware examples. The composite load spectra that simulate the probabilistic loads for these components are typically used as the input loads for a probabilistic structural analysis. The knowledge-based system approach used for the composite load spectra project provides an ideal environment for incremental development. The intelligent database paradigm employed in developing the expert system provides a smooth coupling between the numerical processing and the symbolic (information) processing. Large volumes of engine load information and engineering data are stored in database format and managed by a database management system. Numerical procedures for probabilistic load simulation and database management functions are controlled by rule modules. Rules were hard-wired as decision trees into rule modules to perform process control tasks. There are modules to retrieve load information and models. There are modules to select loads and models to carry out quick load calculations or make an input file for full duty-cycle time dependent load simulation. The composite load spectra load expert system implemented today is capable of performing intelligent rocket engine load spectra simulation. Further development of the expert system will provide tutorial capability for users to learn from it.
NASA Astrophysics Data System (ADS)
Seko, Atsuto; Hayashi, Hiroyuki; Kashima, Hisashi; Tanaka, Isao
2018-01-01
Chemically relevant compositions (CRCs) and atomic arrangements of inorganic compounds have been collected as inorganic crystal structure databases. Machine learning is a unique approach to search for currently unknown CRCs from vast candidates. Herein we propose matrix- and tensor-based recommender system approaches to predict currently unknown CRCs from database entries of CRCs. Firstly, the performance of the recommender system approaches to discover currently unknown CRCs is examined. A Tucker decomposition recommender system shows the best discovery rate of CRCs as the majority of the top 100 recommended ternary and quaternary compositions correspond to CRCs. Secondly, systematic density functional theory (DFT) calculations are performed to investigate the phase stability of the recommended compositions. The phase stability of the 27 compositions reveals that 23 currently unknown compounds are newly found to be stable. These results indicate that the recommender system has great potential to accelerate the discovery of new compounds.
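The tensor-factorisation idea behind this kind of recommender can be illustrated with a short, self-contained sketch. This is not the authors' code: it builds a toy binary tensor of known ternary "compositions", computes a truncated Tucker decomposition by plain HOSVD in NumPy, and ranks the unobserved entries by their reconstructed scores.

```python
# Minimal sketch (not the authors' code): a Tucker-decomposition "recommender"
# for ternary composition candidates. A binary tensor marks which (A, B, C)
# element combinations are known chemically relevant compositions (CRCs);
# a low-rank Tucker reconstruction scores the unknown entries.
import numpy as np

rng = np.random.default_rng(0)
n_a, n_b, n_c = 8, 8, 8                                      # toy "element" index sizes
known = (rng.random((n_a, n_b, n_c)) > 0.9).astype(float)    # observed CRCs

def unfold(t, mode):
    """Matricize a 3-way tensor along the given mode."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def tucker_hosvd(t, ranks):
    """Truncated HOSVD: factor matrices from leading singular vectors, then the core."""
    factors = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(t, mode), full_matrices=False)
        factors.append(u[:, :r])
    core = t
    for mode, u in enumerate(factors):
        core = np.moveaxis(np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def tucker_reconstruct(core, factors):
    t = core
    for mode, u in enumerate(factors):
        t = np.moveaxis(np.tensordot(u, np.moveaxis(t, mode, 0), axes=1), 0, mode)
    return t

core, factors = tucker_hosvd(known, ranks=(3, 3, 3))
scores = tucker_reconstruct(core, factors)

# Rank unknown combinations by reconstructed score: high scores are recommended
# as candidate new CRCs (to be checked, e.g., with DFT phase-stability calculations).
candidates = np.argwhere(known == 0)
top = candidates[np.argsort(scores[known == 0])[::-1][:5]]
print("top recommended (A, B, C) index triples:", top.tolist())
```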
A structured vocabulary for indexing dietary supplements in databases in the United States
USDA-ARS?s Scientific Manuscript database
Food composition databases are critical to assess and plan dietary intakes. Dietary supplement databases are also needed because dietary supplements make significant contributions to total nutrient intakes. However, no uniform system exists for classifying dietary supplement products and indexing ...
Readiness of food composition databases and food component analysis systems for nutrigenomics
USDA-ARS?s Scientific Manuscript database
The study objective was to discuss the international implications of using nutrigenomics as the basis for individualized health promotion and chronic disease prevention and the challenges it presents to existing nutrient databases and nutrient analysis systems. Definitions and research methods of nu...
GAS CHROMATOGRAPHIC RETENTION PARAMETERS DATABASE FOR REFRIGERANT MIXTURE COMPOSITION MANAGEMENT
Composition management of mixed refrigerant systems is a challenging problem in the laboratory, manufacturing facilities, and large refrigeration machinery. This issue of composition management is especially critical for the maintenance of machinery that utilizes zeotropic mixture...
Integral nuclear data validation using experimental spent nuclear fuel compositions
Gauld, Ian C.; Williams, Mark L.; Michel-Sendis, Franco; ...
2017-07-19
Measurements of the isotopic contents of spent nuclear fuel provide experimental data that are a prerequisite for validating computer codes and nuclear data for many spent fuel applications. Under the auspices of the Organisation for Economic Co-operation and Development (OECD) Nuclear Energy Agency (NEA) and guidance of the Expert Group on Assay Data of Spent Nuclear Fuel of the NEA Working Party on Nuclear Criticality Safety, a new database of expanded spent fuel isotopic compositions has been compiled. The database, Spent Fuel Compositions (SFCOMPO) 2.0, includes measured data for more than 750 fuel samples acquired from 44 different reactors and representing eight different reactor technologies. Measurements for more than 90 isotopes are included. This new database provides data essential for establishing the reliability of code systems for inventory predictions, but it also has broader potential application to nuclear data evaluation. Furthermore, the database is described together with adjoint-based sensitivity and uncertainty tools for transmutation systems, developed to quantify the importance of nuclear data on nuclide concentrations.
Sivakumaran, Subathira; Huffman, Lee; Sivakumaran, Sivalingam
2018-01-01
A country-specific food composition database is useful for assessing nutrient intake reliably in national nutrition surveys, research studies and clinical practice. The New Zealand Food Composition Database (NZFCDB) programme seeks to maintain relevant and up-to-date food records that reflect the composition of foods commonly consumed in New Zealand, following Food and Agriculture Organization of the United Nations/International Network of Food Data Systems (FAO/INFOODS) guidelines. Food composition data (FCD) of up to 87 core components for approximately 600 foods have been added to NZFCDB since 2010. These foods include those identified as providing key nutrients in the 2008/09 New Zealand Adult Nutrition Survey. Nutrient data are obtained by analysis of composite samples or are calculated from analytical data. Currently >2500 foods in 22 food groups are freely available in various NZFCDB output products on the website: www.foodcomposition.co.nz. NZFCDB is the main source of FCD for estimating nutrient intake in New Zealand nutrition surveys. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Cardellini, Carlo; Frigeri, Alessandro; Lehnert, Kerstin; Ash, Jason; McCormick, Brendan; Chiodini, Giovanni; Fischer, Tobias; Cottrell, Elizabeth
2015-04-01
The release of volatiles from the Earth's interior takes place in both volcanic and non-volcanic areas of the planet. Understanding such a complex process, and improving the current estimates of global carbon emissions, will greatly benefit from the integration of geochemical, petrological and volcanological data. At present, major online data repositories relevant to studies of degassing are not linked and interoperable. In the framework of the Deep Earth Carbon Degassing (DECADE) initiative of the Deep Carbon Observatory (DCO), we are developing interoperability between three data systems that will make their data accessible via the DECADE portal: (1) the Smithsonian Institution's Global Volcanism Program database (VOTW) of volcanic activity data, (2) EarthChem databases for geochemical and geochronological data of rocks and melt inclusions, and (3) the MaGa database (Mapping Gas emissions), which contains compositional and flux data of gases released at volcanic and non-volcanic degassing sites. The DECADE web portal will create a powerful search engine of these databases from a single entry point and will return comprehensive multi-component datasets. A user will be able, for example, to obtain data relating to the compositions of emitted gases, the compositions and age of the erupted products, and coincident activity of a specific volcano. This level of capability requires a complete synergy between the databases, including availability of standard-based web services (WMS, WFS) at all data systems. Data and metadata can thus be extracted from each system without interfering with each database's local schema or being replicated to achieve integration at the DECADE web portal. The DECADE portal will enable new synoptic perspectives on the Earth degassing process, allowing Earth degassing related datasets to be explored over previously unexplored spatial or temporal ranges.
NASA Astrophysics Data System (ADS)
Chapman, James B.; Kapp, Paul
2017-11-01
A database containing previously published geochronologic, geochemical, and isotopic data on Mesozoic to Quaternary igneous rocks in the Himalayan-Tibetan orogenic system is presented. The database is intended to serve as a repository for new and existing igneous rock data and is publicly accessible through a web-based platform that includes an interactive map and data table interface with search, filtering, and download options. To illustrate the utility of the database, the age, location, and εHf(t) composition of magmatism from the central Gangdese batholith in the southern Lhasa terrane are compared. The data identify three high-flux events, which peak at 93, 50, and 15 Ma. They are characterized by inboard arc migration and a temporal and spatial shift to more evolved isotopic compositions.
Plumb, Jenny; Pigat, Sandrine; Bompola, Foteini; Cushen, Maeve; Pinchen, Hannah; Nørby, Eric; Astley, Siân; Lyons, Jacqueline; Kiely, Mairead; Finglas, Paul
2017-03-23
eBASIS (Bioactive Substances in Food Information Systems), a web-based database that contains compositional and biological effects data for bioactive compounds of plant origin, has been updated with new data on fruits and vegetables, wheat and, due to some evidence of potential beneficial effects, extended to include meat bioactives. eBASIS remains one of only a handful of comprehensive and searchable databases, with up-to-date coherent and validated scientific information on the composition of food bioactives and their putative health benefits. The database has a user-friendly, efficient, and flexible interface facilitating use by both the scientific community and food industry. Overall, eBASIS contains data for 267 foods, covering the composition of 794 bioactive compounds, from 1147 quality-evaluated peer-reviewed publications, together with information from 567 publications describing beneficial bioeffect studies carried out in humans. This paper highlights recent updates and expansion of eBASIS and the newly-developed link to a probabilistic intake model, allowing exposure assessment of dietary bioactive compounds to be estimated and modelled in human populations when used in conjunction with national food consumption data. This new tool could assist small- and medium-sized enterprises (SMEs) in the development of food product health claim dossiers for submission to the European Food Safety Authority (EFSA).
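The probabilistic intake model mentioned above can be illustrated with a minimal Monte Carlo sketch. All distributions and numbers below are invented placeholders, not eBASIS or national consumption data; the point is only how consumption and concentration distributions combine into an intake distribution whose upper percentiles can be reported.

```python
# Minimal sketch (hypothetical numbers, not eBASIS data): probabilistic exposure
# assessment combining per-food consumption distributions with bioactive
# concentration distributions via Monte Carlo simulation.
import numpy as np

rng = np.random.default_rng(42)
n_sim = 100_000  # simulated "person-days"

# Assumed log-normal parameters: (mu, sigma) of ln(g/day) and ln(mg/100 g)
foods = {
    "apple": ((4.0, 0.6), (2.0, 0.4)),
    "onion": ((2.5, 0.8), (3.0, 0.5)),
}

total_intake_mg = np.zeros(n_sim)
for food, ((c_mu, c_sd), (k_mu, k_sd)) in foods.items():
    grams = rng.lognormal(c_mu, c_sd, n_sim)          # g/day consumed
    mg_per_100g = rng.lognormal(k_mu, k_sd, n_sim)    # bioactive concentration
    total_intake_mg += grams * mg_per_100g / 100.0

for p in (50, 95, 97.5):
    print(f"P{p} intake: {np.percentile(total_intake_mg, p):.1f} mg/day")
```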
Thermodynamic database for the Co-Pr system.
Zhou, S H; Kramer, M J; Meng, F Q; McCallum, R W; Ott, R T
2016-03-01
In this article, we describe (1) the compositions of both as-cast and heat-treated specimens, summarized in Table 1; (2) the determined enthalpy of mixing of the liquid phase, listed in Table 2; and (3) a thermodynamic database of the Co-Pr system in TDB format for the research article entitled "Chemical partitioning for the Co-Pr system: First-principles, experiments and energetic calculations to investigate the hard magnetic phase W."
An online model composition tool for system biology models
2013-01-01
Background: There are multiple representation formats for Systems Biology computational models, and the Systems Biology Markup Language (SBML) is one of the most widely used. SBML is used to capture, store, and distribute computational models by Systems Biology data sources (e.g., the BioModels Database) and researchers. Therefore, there is a need for all-in-one web-based solutions that support advanced SBML functionalities such as uploading, editing, composing, visualizing, simulating, querying, and browsing computational models. Results: We present the design and implementation of the Model Composition Tool (Interface) within the PathCase-SB (PathCase Systems Biology) web portal. The tool helps users compose systems biology models to facilitate the complex process of merging systems biology models. We also present three tools that support the model composition tool, namely, (1) the Model Simulation Interface, which generates a visual plot of the simulation according to the user's input, (2) the iModel Tool, a platform for users to upload their own models to compose, and (3) the SimCom Tool, which provides a side-by-side comparison of models being composed in the same pathway. Finally, we provide a web site that hosts BioModels Database models and a separate web site that hosts SBML Test Suite models. Conclusions: The model composition tool (and the other three tools) can be used with little or no knowledge of the SBML document structure. For this reason, students or anyone who wants to learn about systems biology will benefit from the described functionalities. SBML Test Suite models will be a nice starting point for beginners, and, for more advanced purposes, users will also be able to access and employ models of the BioModels Database. PMID:24006914
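A minimal sketch of a first step in model composition, assuming the python-libsbml package and two placeholder SBML files; this is not PathCase-SB code. It loads two models and lists the species identifiers they share, which are natural merge points.

```python
# Minimal sketch (assumes python-libsbml; "model_a.xml" and "model_b.xml" are
# placeholder file names): load two SBML models and list species IDs shared by
# both, a first step when composing/merging models.
import libsbml

def species_ids(path):
    doc = libsbml.readSBMLFromFile(path)
    if doc.getNumErrors() > 0:
        doc.printErrors()                       # report parse/validation issues
    model = doc.getModel()
    return {model.getSpecies(i).getId() for i in range(model.getNumSpecies())}

ids_a = species_ids("model_a.xml")
ids_b = species_ids("model_b.xml")
shared = ids_a & ids_b
print(f"{len(shared)} species appear in both models and are merge candidates:")
for sid in sorted(shared):
    print(" ", sid)
```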
Sakurai, Nozomu; Ara, Takeshi; Kanaya, Shigehiko; Nakamura, Yukiko; Iijima, Yoko; Enomoto, Mitsuo; Motegi, Takeshi; Aoki, Koh; Suzuki, Hideyuki; Shibata, Daisuke
2013-01-15
High-accuracy mass values detected by high-resolution mass spectrometry analysis enable prediction of elemental compositions, and thus are used for metabolite annotations in metabolomic studies. Here, we report an application of a relational database to significantly improve the rate of elemental composition predictions. By searching a database of pre-calculated elemental compositions with fixed kinds and numbers of atoms, the approach eliminates redundant evaluations of the same formula that occur in repeated calculations with other tools. When our approach is compared with HR2, which is one of the fastest tools available, our database search times were at least 109 times shorter than those of HR2. When a solid-state drive (SSD) was applied, the search time was 488 times shorter at 5 ppm mass tolerance and 1833 times shorter at 0.1 ppm. Even when the search by HR2 was performed with 8 threads on a high-spec Windows 7 PC, the database search times were at least 26 and 115 times shorter without and with the SSD, respectively. These improvements were enhanced on a low-spec Windows XP PC. We constructed a web service, 'MFSearcher', to query the database in a RESTful manner. Available for free at http://webs2.kazusa.or.jp/mfsearcher. The web service is implemented in Java, MySQL, Apache and Tomcat, with all major browsers supported. Contact: sakurai@kazusa.or.jp. Supplementary data are available at Bioinformatics online.
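The underlying search problem can be made concrete with a small brute-force sketch (not the MFSearcher implementation): enumerate CHNO formulas whose monoisotopic mass falls within a ppm window of a measured mass. Pre-computing and indexing exactly these formulas in a relational database is what removes the repeated evaluation cost discussed in the abstract.

```python
# Minimal sketch of the underlying task (not the MFSearcher implementation):
# enumerate CHNO elemental compositions whose monoisotopic mass matches a
# measured mass within a ppm tolerance.
from itertools import product

MONO_MASS = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915}

def candidate_formulas(target_mass, ppm=5.0, max_atoms=(30, 60, 10, 15)):
    tol = target_mass * ppm * 1e-6
    hits = []
    for c, h, n, o in product(*(range(m + 1) for m in max_atoms)):
        mass = (c * MONO_MASS["C"] + h * MONO_MASS["H"]
                + n * MONO_MASS["N"] + o * MONO_MASS["O"])
        if abs(mass - target_mass) <= tol:
            hits.append((f"C{c}H{h}N{n}O{o}", mass))
    return sorted(hits, key=lambda x: abs(x[1] - target_mass))

# Example: glucose, C6H12O6, monoisotopic mass ~180.06339
for formula, mass in candidate_formulas(180.06339, ppm=5.0):
    print(f"{formula:12s} {mass:.5f}")
```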
Towards G2G: Systems of Technology Database Systems
NASA Technical Reports Server (NTRS)
Maluf, David A.; Bell, David
2005-01-01
We present an approach and methodology for developing Government-to-Government (G2G) Systems of Technology Database Systems. G2G will deliver technologies for distributed and remote integration of technology data for internal use in analysis and planning as well as for external communications. G2G enables NASA managers, engineers, operational teams and information systems to "compose" technology roadmaps and plans by selecting, combining, extending, specializing and modifying components of technology database systems. G2G will interoperate information and knowledge that is distributed across the organizational entities involved, which is ideal for NASA's future Exploration Enterprise. Key contributions of the G2G system will include the creation of an integrated approach to sustain effective management of technology investments that supports the ability of various technology database systems to be independently managed. The integration technology will comply with emerging open standards. Applications can thus be customized for local needs while enabling an integrated management-of-technology approach that serves the global needs of NASA. The G2G capabilities will use NASA's breakthrough in database "composition" and integration technology, will use and advance emerging open standards, and will use commercial information technologies to enable effective Systems of Technology Database systems.
A novel processed food classification system applied to Australian food composition databases.
O'Halloran, S A; Lacy, K E; Grimes, C A; Woods, J; Campbell, K J; Nowson, C A
2017-08-01
The extent of food processing can affect the nutritional quality of foodstuffs. Categorising foods by the level of processing emphasises the differences in nutritional quality between foods within the same food group and is likely useful for determining dietary processed food consumption. The present study aimed to categorise foods within Australian food composition databases according to the level of food processing using a processed food classification system, as well as assess the variation in the levels of processing within food groups. A processed food classification system was applied to food and beverage items contained within Australian Food and Nutrient (AUSNUT) 2007 (n = 3874) and AUSNUT 2011-13 (n = 5740). The proportions of Minimally Processed (MP), Processed Culinary Ingredients (PCI), Processed (P) and Ultra Processed (ULP) foods by AUSNUT food group, and the overall proportions of the four processed food categories across AUSNUT 2007 and AUSNUT 2011-13, were calculated. Across the food composition databases, the overall proportions of foods classified as MP, PCI, P and ULP were 27%, 3%, 26% and 44% for AUSNUT 2007 and 38%, 2%, 24% and 36% for AUSNUT 2011-13. Although there was wide variation in the classifications of food processing within the food groups, approximately one-third of foodstuffs were classified as ULP food items across both the 2007 and 2011-13 AUSNUT databases. This Australian processed food classification system will allow researchers to easily quantify the contribution of processed foods within the Australian food supply to assist in assessing the nutritional quality of the dietary intake of population groups. © 2017 The British Dietetic Association Ltd.
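Once each food record carries one of the four processing labels, the reported proportions reduce to simple tallies. A minimal pandas sketch with toy rows (not AUSNUT data) is shown below.

```python
# Minimal sketch (toy rows, not AUSNUT data): tally the share of Minimally
# Processed (MP), Processed Culinary Ingredients (PCI), Processed (P) and
# Ultra Processed (ULP) items once each food record carries a processing label.
import pandas as pd

foods = pd.DataFrame({
    "food_group": ["Cereals", "Cereals", "Vegetables", "Confectionery", "Oils"],
    "processing": ["MP", "ULP", "MP", "ULP", "PCI"],
})

overall = foods["processing"].value_counts(normalize=True).mul(100).round(1)
by_group = (foods.groupby("food_group")["processing"]
                 .value_counts(normalize=True).mul(100).round(1))
print("Overall % by processing level:\n", overall, sep="")
print("\n% by food group and processing level:\n", by_group, sep="")
```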
COINS: A composites information database system
NASA Technical Reports Server (NTRS)
Siddiqi, Shahid; Vosteen, Louis F.; Edlow, Ralph; Kwa, Teck-Seng
1992-01-01
An automated data abstraction form (ADAF) was developed to collect information on advanced fabrication processes and their related costs. The information will be collected for all components being fabricated as part of the ACT program and included in a COmposites INformation System (COINS) database. The aim of the COINS development effort is to provide future airframe preliminary design and fabrication teams with a tool through which production cost can become a deterministic variable in the design optimization process. The effort was initiated by the Structures Technology Program Office (STPO) of NASA LaRC to implement the recommendations of a working group comprised of representatives from the commercial airframe companies. The principal working group recommendation was to re-institute collection of composite part fabrication data in a format similar to the DOD/NASA Structural Composites Fabrication Guide. The fabrication information collection form was automated with current user-friendly computer technology. This work-in-progress paper describes the new automated form and the features that make the form easy to use by an aircraft structural design-manufacturing team.
A structured vocabulary for indexing dietary supplements in databases in the United States
Saldanha, Leila G; Dwyer, Johanna T; Holden, Joanne M; Ireland, Jayne D.; Andrews, Karen W; Bailey, Regan L; Gahche, Jaime J.; Hardy, Constance J; Møller, Anders; Pilch, Susan M.; Roseland, Janet M
2011-01-01
Food composition databases are critical to assess and plan dietary intakes. Dietary supplement databases are also needed because dietary supplements make significant contributions to total nutrient intakes. However, no uniform system exists for classifying dietary supplement products and indexing their ingredients in such databases. Differing approaches to classifying these products make it difficult to retrieve or link information effectively. A consistent approach to classifying information within food composition databases led to the development of LanguaL™, a structured vocabulary. LanguaL™ is being adapted as an interface tool for classifying and retrieving product information in dietary supplement databases. This paper outlines proposed changes to the LanguaL™ thesaurus for indexing dietary supplement products and ingredients in databases. The choice of 12 of the original 14 LanguaL™ facets pertinent to dietary supplements, modifications to their scopes, and applications are described. The 12 chosen facets are: Product Type; Source; Part of Source; Physical State, Shape or Form; Ingredients; Preservation Method, Packing Medium, Container or Wrapping; Contact Surface; Consumer Group/Dietary Use/Label Claim; Geographic Places and Regions; and Adjunct Characteristics of food. PMID:22611303
Disbiome database: linking the microbiome to disease.
Janssens, Yorick; Nielandt, Joachim; Bronselaer, Antoon; Debunne, Nathan; Verbeke, Frederick; Wynendaele, Evelien; Van Immerseel, Filip; Vandewynckel, Yves-Paul; De Tré, Guy; De Spiegeleer, Bart
2018-06-04
Recent research has provided fascinating indications and evidence that host health is linked to its microbial inhabitants. Due to the development of high-throughput sequencing technologies, more and more data covering microbial composition changes in different disease types are emerging. However, this information is dispersed over a wide variety of medical and biomedical disciplines. Disbiome is a database which collects and presents published microbiota-disease information in a standardized way. The diseases are classified using the MedDRA classification system and the micro-organisms are linked to their NCBI and SILVA taxonomy. Finally, each study included in the Disbiome database is assessed for its reporting quality using a standardized questionnaire. Disbiome is the first database giving a clear, concise and up-to-date overview of microbial composition differences in diseases, together with the relevant information of the studies published. The strength of this database lies in the combination of references to other databases, which enables both specific and diverse search strategies within the Disbiome database, and human annotation, which ensures a simple and structured presentation of the available data.
Database of Mechanical Properties of Textile Composites
NASA Technical Reports Server (NTRS)
Delbrey, Jerry
1996-01-01
This report describes the approach followed to develop a database for mechanical properties of textile composites. The data in this database are assembled from NASA Advanced Composites Technology (ACT) programs and from data in the public domain. This database meets the data documentation requirements of MIL-HDBK-17, Section 8.1.2, which describes in detail the type and amount of information needed to completely document composite material properties. The database focuses on mechanical properties of textile composites. Properties are available for a range of parameters such as direction, fiber architecture, materials, environmental condition, and failure mode. The composite materials in the database contain innovative textile architectures such as the braided, woven, and knitted materials evaluated under the NASA ACT programs. In summary, the database contains results for approximately 3500 coupon-level tests, for ten different fiber/resin combinations, and seven different textile architectures. It also includes a limited amount of prepreg tape composite data from ACT programs where side-by-side comparisons were made.
NASA Astrophysics Data System (ADS)
Barette, Florian; Poppe, Sam; Smets, Benoît; Benbakkar, Mhammed; Kervyn, Matthieu
2017-10-01
We present an integrated, spatially-explicit database of existing geochemical major-element analyses available from (post-) colonial scientific reports, PhD Theses and international publications for the Virunga Volcanic Province, located in the western branch of the East African Rift System. This volcanic province is characterised by alkaline volcanism, including silica-undersaturated, alkaline and potassic lavas. The database contains a total of 908 geochemical analyses of eruptive rocks for the entire volcanic province with a localisation for most samples. A preliminary analysis of the overall consistency of the database, using statistical techniques on sets of geochemical analyses with contrasted analytical methods or dates, demonstrates that the database is consistent. We applied a principal component analysis and cluster analysis on whole-rock major element compositions included in the database to study the spatial variation of the chemical composition of eruptive products in the Virunga Volcanic Province. These statistical analyses identify spatially distributed clusters of eruptive products. The known geochemical contrasts are highlighted by the spatial analysis, such as the unique geochemical signature of Nyiragongo lavas compared to other Virunga lavas, the geochemical heterogeneity of the Bulengo area, and the trachyte flows of Karisimbi volcano. Most importantly, we identified separate clusters of eruptive products which originate from primitive magmatic sources. These lavas of primitive composition are preferentially located along NE-SW inherited rift structures, often at distance from the central Virunga volcanoes. Our results illustrate the relevance of a spatial analysis on integrated geochemical data for a volcanic province, as a complement to classical petrological investigations. This approach indeed helps to characterise geochemical variations within a complex of magmatic systems and to identify specific petrologic and geochemical investigations that should be tackled within a study area.
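A minimal sketch of the statistical workflow described above, using synthetic major-element values rather than the Virunga database: standardise the oxide analyses, project them onto principal components and group them with k-means.

```python
# Minimal sketch (synthetic values, not the Virunga database): standardize
# whole-rock major-element analyses, reduce them with PCA and group them with
# k-means, mirroring the spatial-geochemical clustering described above.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
oxides = ["SiO2", "TiO2", "Al2O3", "FeO", "MgO", "CaO", "Na2O", "K2O"]
X = rng.normal(loc=[45, 3, 14, 11, 8, 10, 3, 3], scale=2.0, size=(120, len(oxides)))

X_std = StandardScaler().fit_transform(X)
scores = PCA(n_components=2).fit_transform(X_std)       # 2 principal components
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)

for k in range(3):
    print(f"cluster {k}: {np.sum(labels == k)} samples, "
          f"mean PC1 = {scores[labels == k, 0].mean():.2f}")
```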
The development of a composition database of gluten-free products.
Mazzeo, Teresa; Cauzzi, Silvia; Brighenti, Furio; Pellegrini, Nicoletta
2015-06-01
The aim was to develop a composition database of a number of foods representative of different categories of gluten-free products in the Italian diet. The database was built using the nutritional composition of the products, taking into consideration both the composition of the ingredients and the nutritional information reported on the product label. The nutrient composition of each ingredient was obtained from two Italian databases (European Institute of Oncology and the National Institute for Food and Nutrition). The study developed a food composition database including a total of sixty foods representative of different categories of gluten-free products sold on the Italian market. The composition of the products included in the database is given in terms of quantity of macro- and micronutrients per 100 g of product as sold, and includes the full range of nutrient data present in traditional databases of gluten-containing foods. As expected, most of the products had a high content of carbohydrates and some of them can be labelled as a source of fibre (>3 g/100 g). Regarding micronutrients, among the products considered, breads, pizzas and snacks were especially high in Na content (>400-500 mg/100 g). This database provides an initial useful tool for future nutritional surveys on the dietary habits of coeliac people.
Teachers in Schools with Low Socioeconomic Composition: Are They Really That Different?
ERIC Educational Resources Information Center
Danhier, Julien
2016-01-01
This article aims to assess whether differences in teacher characteristics vary with differences in the socioeconomic composition of schools. We conducted correlation analyses on administrative data from the French-speaking education system in Belgium. This database covers more than 20,000 teachers in 1,630 elementary schools. We selected…
USDA-ARS?s Scientific Manuscript database
Food composition data play an essential role in many sectors, including nutrition, health, agriculture, environment, food labeling and trade. Over the last 25 years, International Network of Food Data Systems (INFOODS) has developed many international standards, guidelines and tools to obtain harmo...
The new on-line Czech Food Composition Database.
Machackova, Marie; Holasova, Marie; Maskova, Eva
2013-10-01
The new on-line Czech Food Composition Database (FCDB) was launched at http://www.czfcdb.cz in December 2010 as the main freely available channel for dissemination of Czech food composition data. The application is based on a compiled FCDB documented according to the EuroFIR standardised procedure for full value documentation and indexing of foods by the LanguaL™ Thesaurus. A content management system was implemented for administration of the website and for performing data export (comma-separated values or EuroFIR XML transport package formats) by a compiler. A reference (or references) is provided for each published value, with links to freely accessible on-line sources of data (e.g. full texts, the EuroFIR Document Repository, on-line national FCDBs). LanguaL™ codes are displayed within each food record as searchable keywords of the database. A photo (or a photo gallery) is used as a visual descriptor of a food item. The application is searchable by food, component, food group and alphabet, and through a multi-field advanced search. Copyright © 2013 Elsevier Ltd. All rights reserved.
Unwin, Ian; Jansen-van der Vliet, Martine; Westenbrink, Susanne; Presser, Karl; Infanger, Esther; Porubska, Janka; Roe, Mark; Finglas, Paul
2016-02-15
The EuroFIR Document and Data Repositories are being developed as accessible collections of source documents, including grey literature, and the food composition data reported in them. These Repositories will contain source information available to food composition database compilers when selecting their nutritional data. The Document Repository was implemented as searchable bibliographic records in the Europe PubMed Central database, which links to the documents online. The Data Repository will contain original data from source documents in the Document Repository. Testing confirmed the FoodCASE food database management system as a suitable tool for the input, documentation and quality assessment of Data Repository information. Data management requirements for the input and documentation of reported analytical results were established, including record identification and method documentation specifications. Document access and data preparation using the Repositories will provide information resources for compilers, eliminating duplicated work and supporting unambiguous referencing of data contributing to their compiled data. Copyright © 2014 Elsevier Ltd. All rights reserved.
Application of materials database (MAT.DB.) to materials education
NASA Technical Reports Server (NTRS)
Liu, Ping; Waskom, Tommy L.
1994-01-01
Finding the right material for the job is an important aspect of engineering. Sometimes the choice is as fundamental as selecting between steel and aluminum. Other times, the choice may be between different compositions in an alloy. Discovering and compiling materials data is a demanding task, but it leads to accurate models for analysis and successful materials application. Mat.DB is a database management system designed for maintaining information on the properties and processing of engineered materials, including metals, plastics, composites, and ceramics. It was developed by the Center for Materials Data of ASM (American Society for Metals) International. The ASM Center for Materials Data collects and reviews material property data for publication in books, reports, and electronic databases. Mat.DB was developed to aid data management and materials applications.
NASA Technical Reports Server (NTRS)
Ho, C. Y.; Li, H. H.
1989-01-01
A computerized comprehensive numerical database system on the mechanical, thermophysical, electronic, electrical, magnetic, optical, and other properties of various types of technologically important materials, such as metals, alloys, composites, dielectrics, polymers, and ceramics, has been established and is operational at the Center for Information and Numerical Data Analysis and Synthesis (CINDAS) of Purdue University. This is an on-line, interactive, menu-driven, user-friendly database system. Users can easily search, retrieve, and manipulate the data from the database system without learning a special query language, special commands, or standardized names of materials, properties, and variables. It enables both the direct mode of search/retrieval of data for specified materials, properties, and independent variables, and the inverted mode of search/retrieval of candidate materials that meet a set of specified requirements (i.e., computer-aided materials selection). It also enables tabular and graphical displays and on-line data manipulations, such as units conversion, variables transformation, and statistical analysis, of the retrieved data. The development, content, and accessibility of the database system are presented and discussed.
Dennis M. May
1998-01-01
Discusses a regional composite approach to managing timber product output data in a relational database. Describes the development and structure of the regional composite database and demonstrates its use in addressing everyday timber product output information needs.
An artificial system for selecting the optimal surgical team.
Saberi, Nahid; Mahvash, Mohsen; Zenati, Marco
2015-01-01
We introduce an intelligent system to optimize a team composition based on the team's historical outcomes and apply this system to compose a surgical team. The system relies on a record of the procedures performed in the past. The optimal team composition is the one with the lowest probability of an unfavorable outcome. We use probability theory and the inclusion-exclusion principle to model the probability of the team outcome for a given composition. A probability value is assigned to each person in the database, and the probability of a team composition is calculated from these values. The model makes it possible to determine the probability of all possible team compositions even if there is no recorded procedure for some team compositions. From an analytical perspective, assembling an optimal team is equivalent to minimizing the overlap of team members who have a recurring tendency to be involved with procedures of unfavorable results. A conceptual example shows the accuracy of the proposed system in obtaining the optimal team.
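A minimal sketch of the idea, not the authors' model: if each clinician is assigned a probability of being involved in an unfavorable outcome and members are treated as independent, inclusion-exclusion collapses to 1 - prod(1 - p_i), and the optimal team of a given size is the one minimising that quantity. Names and probabilities below are invented.

```python
# Minimal sketch (not the authors' model): assign each clinician a probability of
# being involved in an unfavorable outcome, assume independence so that
# inclusion-exclusion collapses to 1 - prod(1 - p_i), and pick the team of a
# given size with the lowest combined probability. Names and values are made up.
from itertools import combinations
from math import prod

p_unfavorable = {"surgeon_A": 0.04, "surgeon_B": 0.07,
                 "nurse_C": 0.02, "nurse_D": 0.05,
                 "anesth_E": 0.03, "anesth_F": 0.06}

def team_risk(team):
    # P(at least one member linked to an unfavorable outcome), independence assumed
    return 1.0 - prod(1.0 - p_unfavorable[m] for m in team)

best = min(combinations(p_unfavorable, 3), key=team_risk)
print("optimal 3-person team:", best, f"risk = {team_risk(best):.3f}")
```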
Chen, Huan-Sheng; Cheng, Chun-Ting; Hou, Chun-Cheng; Liou, Hung-Hsiang; Chang, Cheng-Tsung; Lin, Chun-Ju; Wu, Tsai-Kun; Chen, Chang-Hsu; Lim, Paik-Seong
2017-07-01
Rapid screening and monitoring of nutritional status is mandatory in the hemodialysis population because of the increasingly encountered nutritional problems. Considering the limitations of previous composite nutrition scores applied in this population, we tried to develop a standardized composite nutrition score (SCNS) using low lean tissue index as a marker of protein wasting to facilitate clinical screening and monitoring and to predict outcome. This retrospective cohort study used two databases of dialysis populations from Taiwan between 2011 and 2014. The first database, consisting of data from 629 maintenance hemodialysis patients, was used to develop the SCNS, and the second database, containing data from 297 maintenance hemodialysis patients, was used to validate the developed score. An SCNS containing albumin, creatinine, potassium, and body mass index was developed from the first database using low lean tissue index as a marker of protein wasting. When this score was applied in the original database, a significantly higher risk of developing protein wasting was found for patients with lower SCNS (odds ratio 1.38 [middle tertile vs highest tertile, P < .0001] and 2.40 [lowest tertile vs middle tertile, P < .0001]). The risk of death was also shown to be higher for patients with lower SCNS (hazard ratio 4.45 [below median level vs above median level, P < .0001]). These results were validated in the second database. We developed an SCNS consisting of 4 easily available biochemical parameters. This kind of scoring system can be easily applied in different dialysis facilities for screening and monitoring of protein wasting. The wide application of body composition monitors in the dialysis population will also facilitate the development of specific nutrition scoring models for individual facilities. Copyright © 2017 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
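The abstract does not give the SCNS weighting, so the sketch below shows one plausible construction as an assumption: z-standardise the four markers and sum them, so that a lower composite value flags poorer nutritional status. Values are invented.

```python
# Minimal sketch (the actual SCNS weighting is not given in the abstract): one
# plausible way to combine albumin, creatinine, potassium and body mass index
# into a single composite score via z-scores; lower score suggests higher risk
# of protein wasting. All values here are invented.
import pandas as pd

cohort = pd.DataFrame({
    "albumin":    [3.9, 3.2, 4.1, 3.5],   # g/dL
    "creatinine": [9.5, 6.8, 11.0, 7.9],  # mg/dL
    "potassium":  [4.6, 3.8, 5.0, 4.2],   # mEq/L
    "bmi":        [23.1, 19.4, 25.0, 20.8],
})

z = (cohort - cohort.mean()) / cohort.std(ddof=0)
cohort["scns"] = z.sum(axis=1)            # lower composite score = worse nutrition
print(cohort.sort_values("scns"))
```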
NASA Astrophysics Data System (ADS)
da Silva, Nuno Pinho; Marques, Manuel; Carneiro, Gustavo; Costeira, João P.
2011-03-01
Painted tile panels (Azulejos) are one of the most representative Portuguese forms of art. Most of these panels are inspired by, and sometimes are literal copies of, famous paintings, or prints of those paintings. In order to study the Azulejos, art historians need to trace these roots. To do that, they manually search art image databases, looking for images similar to the representation on the tile panel. This is an overwhelming task that should be automated as much as possible. Among several cues, the pose of humans and the general composition of people in a scene are quite discriminative. We build an image descriptor combining the kinematic chain of each character with contextual information about their composition in the scene. Given a query image, our system computes its similarity profile over the database. Using nearest neighbors in the space of the descriptors, the proposed system retrieves the prints that most likely inspired the tiles' work.
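A minimal sketch of the retrieval step, with random vectors standing in for the real pose and composition descriptors: concatenate the two descriptor parts and query a nearest-neighbour index over the database.

```python
# Minimal sketch (random features stand in for the real descriptors): concatenate
# a per-figure kinematic-chain descriptor with a scene-composition descriptor and
# retrieve the most similar database images by nearest-neighbour search.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(7)
n_paintings = 500
pose_dim, composition_dim = 26, 10                     # illustrative sizes
database = np.hstack([rng.normal(size=(n_paintings, pose_dim)),
                      rng.normal(size=(n_paintings, composition_dim))])

index = NearestNeighbors(n_neighbors=5, metric="euclidean").fit(database)
query = rng.normal(size=(1, pose_dim + composition_dim))  # descriptor of the tile panel
distances, neighbours = index.kneighbors(query)
print("closest print/painting indices:", neighbours[0], "distances:", distances[0].round(2))
```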
Experimental study and thermodynamic modeling of the Al–Co–Cr–Ni system
Gheno, Thomas; Liu, Xuan L.; Lindwall, Greta; ...
2015-09-21
In this study, a thermodynamic database for the Al–Co–Cr–Ni system is built via the Calphad method by extrapolating re-assessed ternary subsystems. A minimum number of quaternary parameters are included, which are optimized using experimental phase equilibrium data obtained by electron probe micro-analysis and x-ray diffraction analysis of NiCoCrAlY alloys spanning a wide compositional range, after annealing at 900 °C, 1100 °C and 1200 °C, and water quenching. These temperatures are relevant to oxidation and corrosion resistant MCrAlY coatings, where M corresponds to some combination of nickel and cobalt. Comparisons of calculated and measured phase compositions show excellent agreement for the β–γ equilibrium, and good agreement for three-phase β–γ–σ and β–γ–α equilibria. An extensive comparison with existing Ni-base databases (TCNI6, TTNI8, NIST) is presented in terms of phase compositions.
Structural composite panel performance under long-term load
Theodore L. Laufenberg
1988-01-01
Information on the performance of wood-based structural composite panels under long-term load is currently needed to permit their use in engineered assemblies and systems. A broad assessment of the time-dependent properties of panels is critical for creating databases and models of the creep-rupture phenomenon that lead to reliability-based design procedures. This...
Gattiker, Alexandre; Niederhauser-Wiederkehr, Christa; Moore, James; Hermida, Leandro; Primig, Michael
2007-01-01
We report a novel release of the GermOnline knowledgebase covering genes relevant for the cell cycle, gametogenesis and fertility. GermOnline was extended into a cross-species systems browser including information on DNA sequence annotation, gene expression and the function of gene products. The database covers eight model organisms and Homo sapiens, for which complete genome annotation data are available. The database is now built around a sophisticated genome browser (Ensembl), our own microarray information management and annotation system (MIMAS) used to extensively describe experimental data obtained with high-density oligonucleotide microarrays (GeneChips) and a comprehensive system for online editing of database entries (MediaWiki). The RNA data include results from classical microarrays as well as tiling arrays that yield information on RNA expression levels, transcript start sites and lengths as well as exon composition. Members of the research community are solicited to help GermOnline curators keep database entries on genes and gene products complete and accurate. The database is accessible at http://www.germonline.org/.
NASA Astrophysics Data System (ADS)
Boulanger, Damien; Gautron, Benoit; Schultz, Martin; Brötz, Björn; Rauthe-Schöch, Armin; Thouret, Valérie
2015-04-01
IAGOS (In-service Aircraft for a Global Observing System) aims at the provision of long-term, frequent, regular, accurate, and spatially resolved in situ observations of the atmospheric composition. IAGOS observation systems are deployed on a fleet of commercial aircraft. The IAGOS database is an essential part of the global atmospheric monitoring network. Data access is handled by an open access policy based on the submission of research requests, which are reviewed by the PIs. The IAGOS database (http://www.iagos.fr, damien.boulanger@obs-mip.fr) is part of the French atmospheric chemistry data centre Ether (CNES and CNRS). In the framework of the IGAS project (IAGOS for Copernicus Atmospheric Service), interoperability with international portals and other databases is implemented in order to improve IAGOS data discovery. The IGAS data network is composed of three data centres: the IAGOS database in Toulouse, including IAGOS-core data and IAGOS-CARIBIC (Civil Aircraft for the Regular Investigation of the Atmosphere Based on an Instrument Container) data since January 2015; the HALO research aircraft database at DLR (https://halo-db.pa.op.dlr.de); and the MACC data centre in Jülich (http://join.iek.fz-juelich.de). The MACC (Monitoring Atmospheric Composition and Climate) project is a prominent user of the IGAS data network. In June 2015 a new version of the IAGOS database will be released, providing improved services such as download in NetCDF or NASA Ames formats, graphical tools (maps, scatter plots, etc.), standardized metadata (ISO 19115) and better user management. The link with the MACC data centre, through JOIN (Jülich OWS Interface), will allow model outputs to be combined with IAGOS data for intercomparison. The interoperability within the IGAS data network, implemented through many web services, will improve the functionalities of the web interfaces of each data centre.
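A minimal sketch of working with a downloaded flight file, assuming the netCDF4 Python package; the file and variable names are placeholders, not the actual IAGOS NetCDF layout.

```python
# Minimal sketch (file and variable names are placeholders, not the actual IAGOS
# NetCDF layout): open a downloaded flight file with the netCDF4 package and pull
# out an ozone time series for comparison with model output.
from netCDF4 import Dataset

with Dataset("iagos_flight_example.nc") as nc:        # hypothetical file name
    print(nc.variables.keys())                        # inspect what the file provides
    time = nc.variables["UTC_time"][:]                # assumed variable names
    ozone = nc.variables["O3_PM"][:]

print(f"{len(ozone)} samples over {time[-1] - time[0]:.0f} s, "
      f"mean O3 = {ozone.mean():.1f} (file units)")
```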
Automated Database Mediation Using Ontological Metadata Mappings
Marenco, Luis; Wang, Rixin; Nadkarni, Prakash
2009-01-01
Objective: To devise an automated approach for integrating federated database information using database ontologies constructed from their extended metadata. Background: One challenge of database federation is that the granularity of representation of equivalent data varies across systems. Dealing effectively with this problem is analogous to dealing with precoordinated vs. postcoordinated concepts in biomedical ontologies. Model Description: The authors describe an approach based on ontological metadata mapping rules defined with elements of a global vocabulary, which allows a query specified at one granularity level to fetch data, where possible, from databases within the federation that use different granularities. This is implemented in OntoMediator, a newly developed production component of our previously described Query Integrator System. OntoMediator's operation is illustrated with a query that accesses three geographically separate, interoperating databases. An example based on SNOMED also illustrates the applicability of high-level rules to support the enforcement of constraints that can prevent inappropriate curator or power-user actions. Summary: A rule-based framework simplifies the design and maintenance of systems where categories of data must be mapped to each other, for the purpose of either cross-database query or curation of the contents of compositional controlled vocabularies. PMID:19567801
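A minimal sketch of the mapping-rule idea, with a hypothetical vocabulary and made-up database names rather than the OntoMediator schema: a table keyed by global-vocabulary concepts tells the mediator which local terms to query in each federated source, whatever its granularity.

```python
# Minimal sketch (hypothetical vocabulary and rules, not the OntoMediator schema):
# a mapping table keyed by global-vocabulary concepts translates one query concept
# into whatever granularity each federated database actually stores, so a single
# query can be rewritten per source.
GLOBAL_TO_LOCAL = {
    "neuron.dendrite.spine": {
        "db_fine":   ["dendritic_spine"],                 # same granularity
        "db_coarse": ["dendrite"],                        # precoordinated, coarser term
        "db_split":  ["spine_head", "spine_neck"],        # postcoordinated parts
    },
}

def rewrite_query(concept, database):
    """Return the local terms to query in `database` for a global concept."""
    try:
        return GLOBAL_TO_LOCAL[concept][database]
    except KeyError:
        return []   # concept not mappable for this source; skip it

for db in ("db_fine", "db_coarse", "db_split", "db_missing"):
    print(db, "->", rewrite_query("neuron.dendrite.spine", db))
```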
A Partnership for Public Health: USDA Branded Food Products Database
USDA-ARS?s Scientific Manuscript database
The importance of comprehensive food composition databases is more critical than ever in helping to address global food security. The USDA National Nutrient Database for Standard Reference is the “gold standard” for food composition databases. The presentation will include new developments in stren...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kruger, Albert A.; Muller, I.; Gilbo, K.
2013-11-13
This work is aimed at the development of enhanced LAW property-composition models that expand the composition region covered by the models. The models of interest include PCT, VHT, viscosity and electrical conductivity. This is planned as a multi-year effort that will be performed in phases, with the following objectives for the current phase: incorporate property-composition data from the new glasses into the database; assess the database and identify composition spaces in the database that need augmentation; develop statistically-designed composition matrices to cover the composition regions identified in that analysis; prepare crucible melts of glass compositions from the statistically-designed composition matrix and measure the properties of interest; incorporate the resulting property-composition data into the database; and assess existing models against the complete dataset and, as necessary, start development of new models.
Thermodynamic Modeling of Hydrogen Storage Capacity in Mg-Na Alloys
Abdessameud, S.; Mezbahul-Islam, M.; Medraj, M.
2014-01-01
Thermodynamic modeling of the H-Mg-Na system is performed for the first time in this work in order to understand the phase relationships in this system. A new thermodynamic description of the stable NaMgH3 hydride is performed and the thermodynamic models for the H-Mg, Mg-Na, and H-Na systems are reassessed using the modified quasichemical model for the liquid phase. The thermodynamic properties of the ternary system are estimated from the models of the binary systems and the ternary compound using CALPHAD technique. The constructed database is successfully used to reproduce the pressure-composition isotherms for MgH2 + 10 wt.% NaH mixtures. Also, the pressure-temperature equilibrium diagram and reaction paths for the same composition are predicted at different temperatures and pressures. Even though it is proved that H-Mg-Na does not meet the DOE hydrogen storage requirements for onboard applications, the best working temperatures and pressures to benefit from its full catalytic role are given. Also, the present database can be used for thermodynamic assessments of higher order systems. PMID:25383361
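The plateau of a pressure-composition isotherm can be sketched from the van't Hoff relation; the sketch below uses literature-typical MgH2 desorption values as assumptions, not the CALPHAD database constructed in the paper.

```python
# Minimal sketch (literature-typical values, not the paper's CALPHAD database):
# the van't Hoff relation ln(P_eq/P0) = -dH/(R*T) + dS/R gives the hydrogen
# plateau pressure of MgH2 as a function of temperature, the quantity a
# pressure-composition isotherm plateaus at.
import math

R = 8.314                 # J/(mol K)
dH = 75_000.0             # J/mol H2, approximate MgH2 desorption enthalpy
dS = 135.0                # J/(mol K), approximate desorption entropy
P0 = 1.0                  # bar, reference pressure

def plateau_pressure_bar(T_kelvin):
    return P0 * math.exp(-dH / (R * T_kelvin) + dS / R)

for T in (300, 500, 600, 700):
    print(f"T = {T} K  ->  P_eq ~ {plateau_pressure_bar(T):.3g} bar")
```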
DECADE Web Portal: Integrating MaGa, EarthChem and GVP Will Further Our Knowledge on Earth Degassing
NASA Astrophysics Data System (ADS)
Cardellini, C.; Frigeri, A.; Lehnert, K. A.; Ash, J.; McCormick, B.; Chiodini, G.; Fischer, T. P.; Cottrell, E.
2014-12-01
The release of gases from the Earth's interior to the exosphere takes place in both volcanic and non-volcanic areas of the planet. Fully understanding this complex process requires the integration of geochemical, petrological and volcanological data. At present, major online data repositories relevant to studies of degassing are not linked and interoperable. We are developing interoperability between three of those, which will support more powerful synoptic studies of degassing. The three data systems that will make their data accessible via the DECADE portal are: (1) the Smithsonian Institution's Global Volcanism Program database (GVP) of volcanic activity data, (2) EarthChem databases for geochemical and geochronological data of rocks and melt inclusions, and (3) the MaGa database (Mapping Gas emissions) which contains compositional and flux data of gases released at volcanic and non-volcanic degassing sites. These databases are developed and maintained by institutions or groups of experts in a specific field, and data are archived in formats specific to these databases. In the framework of the Deep Earth Carbon Degassing (DECADE) initiative of the Deep Carbon Observatory (DCO), we are developing a web portal that will create a powerful search engine of these databases from a single entry point. The portal will return comprehensive multi-component datasets, based on the search criteria selected by the user. For example, a single geographic or temporal search will return data relating to compositions of emitted gases and erupted products, the age of the erupted products, and coincident activity at the volcano. The development of this level of capability for the DECADE Portal requires complete synergy between these databases, including availability of standard-based web services (WMS, WFS) at all data systems. Data and metadata can thus be extracted from each system without interfering with each database's local schema or being replicated to achieve integration at the DECADE web portal. The DECADE portal will enable new synoptic perspectives on the Earth degassing process. Other data systems can be easily plugged in using the existing framework. Our vision is to explore Earth degassing related datasets over previously unexplored spatial or temporal ranges.
Compositional descriptor-based recommender system for the materials discovery
NASA Astrophysics Data System (ADS)
Seko, Atsuto; Hayashi, Hiroyuki; Tanaka, Isao
2018-06-01
Structures and properties of many inorganic compounds have been collected historically. However, these collections cover only a small portion of possible inorganic crystals, implying the existence of numerous currently unknown compounds. A powerful machine-learning strategy is mandatory for discovering new inorganic compounds from all chemical combinations. Herein we propose a descriptor-based recommender-system approach to estimate the relevance of chemical compositions where crystals can be formed [i.e., chemically relevant compositions (CRCs)]. In addition to the data-driven compositional similarity used in the literature, the use of compositional descriptors as prior knowledge is helpful for the discovery of new compounds. We validate our recommender systems in two ways. First, one database is used to construct a model, while another is used for validation. Second, we estimate the phase stability for compounds at expected CRCs using density functional theory calculations.
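To illustrate the general idea of a descriptor-based recommender, the sketch below scores candidate compositions by the predicted probability that a compound forms, using entirely synthetic descriptor vectors and labels. The descriptor set, classifier, and data are assumptions for illustration and do not reproduce the paper's model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy descriptor-based recommender: score candidate compositions by the
# estimated probability that a crystal forms there. All data are synthetic.
rng = np.random.default_rng(0)

n_known, n_virtual, n_desc = 500, 500, 16
X_known = rng.normal(0.5, 0.15, size=(n_known, n_desc))      # "known compounds"
X_virtual = rng.uniform(0.0, 1.0, size=(n_virtual, n_desc))  # random compositions

X = np.vstack([X_known, X_virtual])
y = np.concatenate([np.ones(n_known), np.zeros(n_virtual)])

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Rank unseen candidate compositions by estimated relevance.
candidates = rng.uniform(0.0, 1.0, size=(10, n_desc))
scores = model.predict_proba(candidates)[:, 1]
for i in np.argsort(scores)[::-1]:
    print(f"candidate {i}: relevance score {scores[i]:.2f}")
```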
Siegel, J; Kirkland, D
1991-01-01
The Composite Health Care System (CHCS), a MUMPS-based hospital information system (HIS), has evolved from the Decentralized Hospital Computer Program (DHCP) installed within VA Hospitals. The authors explore the evolution of an ancillary-based system toward an integrated model with a look at its current state and possible future. The history and relationships between orders of different types tie specific patient-related data into a logical and temporal model. Diagrams demonstrate how the database structure has evolved to support clinical needs for integration. It is suggested that a fully integrated model is capable of meeting traditional HIS needs.
Zhao, Yan-qing; Teng, Jing
2015-03-01
To analyze the composition and medication regularities of prescriptions treating hypochondriac pain in the Chinese journal full-text database (CNKI) based on the traditional Chinese medicine inheritance support system, in order to provide a reference for further research and development of new traditional Chinese medicines treating hypochondriac pain. The traditional Chinese medicine inheritance support platform software V2.0 was used to build a prescription database of Chinese medicines treating hypochondriac pain. The software's integrated data mining methods were used to distribute prescriptions according to "four odors", "five flavors" and "meridians" in the database and to perform frequency statistics, syndrome distribution, prescription regularity and new prescription analysis. An analysis was made of 192 prescriptions treating hypochondriac pain to determine the frequencies of medicines in prescriptions and commonly used medicine pairs and combinations, and to summarize 15 new prescriptions. This study indicated that the prescriptions treating hypochondriac pain in the Chinese journal full-text database are mostly those for soothing liver-qi stagnation, promoting qi and activating blood, clearing heat and promoting dampness, and invigorating the spleen and removing phlegm, with a cold property and bitter taste, and that they reflect the principles of "distinguishing deficiency and excess and relieving pain by smoothing the meridians" in treating hypochondriac pain.
Indexing of Patents of Pharmaceutical Composition in Online Databases
NASA Astrophysics Data System (ADS)
Online searching of patents of pharmaceutical composition is generally considered to be very difficult. This is because patent databases include extensive technical information as well as legal information, so they are unlikely to have an index specific to pharmaceutical compositions, or, even if they have such an index, the scope and coverage of the indexing is ambiguous. This paper discusses how patents of pharmaceutical composition are indexed in online databases such as WPI, CA, CLAIMS, USP and PATOLIS. Online searching of patents of pharmaceutical composition is also discussed in some detail.
Xu, Bin; Yang, Daipeng; Shi, Zhongke; Pan, Yongping; Chen, Badong; Sun, Fuchun
2017-09-25
This paper investigates the online recorded data-based composite neural control of uncertain strict-feedback systems using the backstepping framework. In each step of the virtual control design, a neural network (NN) is employed for uncertainty approximation. In previous works, most designs aim directly at system stability and ignore how well the NN actually works as an approximator. In this paper, to enhance the learning ability, a novel prediction error signal is constructed to provide additional correction information for the NN weight update using online recorded data. In this way, the neural approximation precision is greatly improved, and the convergence speed can be faster. Furthermore, a sliding mode differentiator is employed to approximate the derivative of the virtual control signal, and thus the complicated analytical computation of derivatives in the backstepping design can be avoided. The closed-loop stability is rigorously established, and the boundedness of the tracking error is guaranteed. Through simulation of hypersonic flight dynamics, the proposed approach exhibits better tracking performance.
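The following toy example illustrates only the general composite-update idea, namely driving an approximator's weights with both an instantaneous error and a prediction error built from recorded data. The basis functions, gains, and target function are invented; this is not the paper's backstepping controller or its stability analysis.

```python
import numpy as np

# Toy composite NN weight update: weights are driven both by the instantaneous
# approximation error and by a prediction error on recorded data.
rng = np.random.default_rng(1)

def features(x):
    """Gaussian RBF features on [-1, 1] (hypothetical basis)."""
    centers = np.linspace(-1.0, 1.0, 15)
    return np.exp(-((x - centers) ** 2) / 0.05)

f_true = lambda x: np.sin(3 * x) + 0.3 * x          # unknown nonlinearity
W = np.zeros(15)                                     # NN weights
memory = []                                          # recorded (phi, f(x)) data
gamma, k_pred = 0.05, 0.5                            # learning gains

for step in range(2000):
    x = rng.uniform(-1.0, 1.0)
    phi = features(x)
    e_inst = f_true(x) - W @ phi                     # instantaneous error
    memory.append((phi, f_true(x)))
    batch = memory[-50:]                             # recent recorded data
    e_pred = sum((y - W @ p) * p for p, y in batch) / len(batch)
    W += gamma * (e_inst * phi + k_pred * e_pred)    # composite update

xs = np.linspace(-1, 1, 5)
print(np.round([f_true(x) - W @ features(x) for x in xs], 3))
```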
Martin, J N; Brooks, J C; Thompson, L D; Savell, J W; Harris, K B; May, L L; Haneklaus, A N; Schutz, J L; Belk, K E; Engle, T; Woerner, D R; Legako, J F; Luna, A M; Douglass, L W; Douglass, S E; Howe, J; Duvall, M; Patterson, K Y; Leheska, J L
2013-11-01
Beef nutrition is important to the worldwide beef industry. The objective of this study was to analyze proximate composition of eight beef rib and plate cuts to update the USDA National Nutrient Database for Standard Reference (SR). Furthermore, this study aimed to determine the influence of USDA Quality Grade on the separable components and proximate composition of the examined retail cuts. Carcasses (n=72) representing a composite of Yield Grade, Quality Grade, gender and genetic type were identified from six regions across the U.S. Beef plates and ribs (IMPS #109 and 121C and D) were collected from the selected carcasses and shipped to three university meat laboratories for storage, retail fabrication, cooking, and dissection and analysis of proximate composition. These data provide updated information regarding the nutrient content of beef and emphasize the influence of common classification systems (Yield Grade and Quality Grade) on the separable components, cooking yield, and proximate composition of retail beef cuts. Copyright © 2013 Elsevier Ltd. All rights reserved.
Julia, Chantal; Kesse-Guyot, Emmanuelle; Touvier, Mathilde; Méjean, Caroline; Fezeu, Léopold; Hercberg, Serge
2014-11-28
Nutrient profiling systems are powerful tools for public health initiatives, as they aim at categorising foods according to their nutritional quality. The British Food Standards Agency (FSA) nutrient profiling system (FSA score) has been validated in a British food database, but the application of the model in other contexts has not yet been evaluated. The objective of the present study was to assess the application of the British FSA score in a French food composition database. Foods from the French NutriNet-Santé study food composition table were categorised according to their FSA score using the Office of Communications (Ofcom) cut-off value ('healthier' ≤ 4 for foods and ≤ 1 for beverages; 'less healthy' >4 for foods and >1 for beverages) and distribution cut-offs (quintiles for foods, quartiles for beverages). Foods were also categorised according to the food groups used for the French Programme National Nutrition Santé (PNNS) recommendations. Foods were weighted according to their relative consumption in a sample drawn from the NutriNet-Santé study (n 4225), representative of the French population. Classification of foods according to the Ofcom cut-offs was consistent with the food groups described in the PNNS: 97·8 % of fruit and vegetables, 90·4 % of cereals and potatoes and only 3·8 % of sugary snacks were considered 'healthier'. Moreover, variability in the FSA score allowed for discrimination between subcategories in the same food group, confirming the possibility of using the FSA score as a multiple-category system, for example as a basis for front-of-pack nutrition labelling. Application of the FSA score in the French context would adequately complement current public health recommendations.
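As a small worked illustration of the binary classification step, the sketch below applies the cut-offs quoted in the abstract (score ≤ 4 for foods, ≤ 1 for beverages) to a few invented items with invented FSA scores; the real study of course computes the scores from the nutrient composition of each food.

```python
import numpy as np
import pandas as pd

# Minimal sketch of applying the Ofcom cut-offs cited in the abstract.
# The example items and scores are invented for illustration.
foods = pd.DataFrame({
    "item": ["apple", "soft drink", "chocolate bar", "plain yogurt"],
    "is_beverage": [False, True, False, False],
    "fsa_score": [-2, 2, 18, 1],
})

cutoff = np.where(foods["is_beverage"], 1, 4)
foods["category"] = np.where(foods["fsa_score"] <= cutoff,
                             "healthier", "less healthy")
print(foods)
```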
NASA Astrophysics Data System (ADS)
Gentry, Jeffery D.
2000-05-01
A relational database is a powerful tool for collecting and analyzing the vast amounts of inter-related data associated with the manufacture of composite materials. A relational database contains many individual database tables that store data that are related in some fashion. Manufacturing process variables as well as quality assurance measurements can be collected and stored in database tables indexed according to lot numbers, part type or individual serial numbers. Relationships between manufacturing process and product quality can then be correlated over a wide range of product types and process variations. This paper presents details on how relational databases are used to collect, store, and analyze process variables and quality assurance data associated with the manufacture of advanced composite materials. Important considerations are covered, including how the various types of data are organized and how relationships between the data are defined. Employing relational database techniques to establish correlative relationships between process variables and quality assurance measurements is then explored. Finally, the benefits of database techniques such as data warehousing, data mining and web-based client/server architectures are discussed in the context of composite material manufacturing.
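A minimal sketch of the kind of lot-indexed schema described above is shown below; the table and column names are hypothetical and the example uses SQLite only so the snippet is self-contained.

```python
import sqlite3

# Illustrative schema only: process variables joined to quality-assurance
# measurements by lot number. Names are hypothetical.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE process_run (
    lot_number    TEXT PRIMARY KEY,
    part_type     TEXT,
    cure_temp_c   REAL,
    cure_time_min REAL
);
CREATE TABLE qa_measurement (
    lot_number TEXT REFERENCES process_run(lot_number),
    test_name  TEXT,
    value      REAL
);
""")
con.execute("INSERT INTO process_run VALUES ('LOT-001', 'skin panel', 177.0, 120.0)")
con.execute("INSERT INTO qa_measurement VALUES ('LOT-001', 'void_content_pct', 0.8)")

# Correlate process variables with QA results across lots.
rows = con.execute("""
    SELECT p.lot_number, p.cure_temp_c, q.test_name, q.value
    FROM process_run p JOIN qa_measurement q USING (lot_number)
""").fetchall()
print(rows)
```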
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chun, K.C.; Chiu, S.Y.; Ditmars, J.D.
1994-05-01
The MIDAS (Munition Items Disposition Action System) database system is an electronic data management system capable of storage and retrieval of information on the detailed structures and material compositions of munitions items designated for demilitarization. The types of such munitions range from bulk propellants and small arms to projectiles and cluster bombs. The database system is also capable of processing data on the quantities of inert, PEP (propellant, explosives and pyrotechnics) and packaging materials associated with munitions, components, or parts, and the quantities of chemical compounds associated with parts made of PEP materials. Development of the MIDAS database system has been undertaken by the US Army to support disposition of unwanted ammunition stockpiles. The inventory of such stockpiles currently includes several thousand items, which total tens of thousands of tons, and is still growing. Providing systematic procedures for disposing of all unwanted conventional munitions is the mission of the MIDAS Demilitarization Program. To carry out this mission, all munitions listed in the Single Manager for Conventional Ammunition inventory must be characterized, and alternatives for resource recovery and recycling and/or disposal of munitions in the demilitarization inventory must be identified.
Molecular Biogeochemistry of Modern and Ancient Marine Microbes
2010-02-01
Carbon number distributions in the late Archean bitumen extracts fall within the range of compositions observed in Phanerozoic petroleum systems, as compiled in the GeoMark Reservoir Fluid Database (Fig. 7, gray line).
Combet, Emilie; Vlassopoulos, Antonis; Mölenberg, Famke; Gressier, Mathilde; Privet, Lisa; Wratten, Craig; Sharif, Sahar; Vieux, Florent; Lehmann, Undine; Masset, Gabriel
2017-04-21
Nutrient profiling ranks foods based on their nutrient composition, with applications in multiple aspects of food policy. We tested the capacity of a category-specific model developed for product reformulation to improve the average nutrient content of foods, using five national food composition datasets (UK, US, China, Brazil, France). Products ( n = 7183) were split into 35 categories based on the Nestlé Nutritional Profiling Systems (NNPS) and were then classified as NNPS 'Pass' if all nutrient targets were met (energy (E), total fat (TF), saturated fat (SFA), sodium (Na), added sugars (AS), protein, calcium). In a modelling scenario, all NNPS Fail products were 'reformulated' to meet NNPS standards. Overall, a third (36%) of all products achieved the NNPS standard/pass (inter-country and inter-category range: 32%-40%; 5%-72%, respectively), with most products requiring reformulation in two or more nutrients. The most common nutrients to require reformulation were SFA (22%-44%) and TF (23%-42%). Modelled compliance with NNPS standards could reduce the average content of SFA, Na and AS (10%, 8% and 6%, respectively) at the food supply level. Despite the good potential to stimulate reformulation across the five countries, the study highlights the need for better data quality and granularity of food composition databases.
A computational study of diffusion in a glass-forming metallic liquid
Wang, T.; Zhang, F.; Yang, L.; ...
2015-06-09
In this study, liquid phase diffusion plays a critical role in phase transformations (e.g. glass transformation and devitrification) observed in marginal glass forming systems such as Al-Sm. Controlling transformation pathways in such cases requires a comprehensive description of diffusivity, including the associated composition and temperature dependencies. In our computational study, we examine atomic diffusion in Al-Sm liquids using ab initio molecular dynamics (AIMD) and determine the diffusivities of Al and Sm for selected alloy compositions. Non-Arrhenius diffusion behavior is observed in the undercooled liquids with an enhanced local structural ordering. Through assessment of our AIMD results, we construct a general formulation for the Al-Sm liquid, involving a diffusion mobility database that includes composition and temperature dependence. A Vogel-Fulcher-Tammann (VFT) equation is adopted for describing the non-Arrhenius behavior observed in the undercooled liquid. Furthermore, the composition dependence of diffusivity is found to be quite strong, even for the Al-rich region, contrary to the sole previous report on this binary system. The model is used in combination with the available thermodynamic database to predict specific diffusivities and compares well with reported experimental data for 0.6 at.% and 5.6 at.% Sm in Al-Sm alloys.
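For readers unfamiliar with the VFT form, D(T) = D0 exp(-B / (T - T0)), the sketch below fits it to synthetic diffusivity values in log space; the "data", initial guesses, and fitted parameters are illustrative only and are not the AIMD diffusivities reported in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of fitting a Vogel-Fulcher-Tammann (VFT) form, fitted in log space
# for numerical stability. The data below are synthetic.
def ln_vft(T, lnD0, B, T0):
    return lnD0 - B / (T - T0)

T = np.array([800.0, 900.0, 1000.0, 1100.0, 1200.0, 1300.0])    # K
lnD_obs = ln_vft(T, np.log(1.2e-7), 1500.0, 450.0) \
          + 0.05 * np.random.default_rng(0).normal(size=T.size)

popt, _ = curve_fit(ln_vft, T, lnD_obs, p0=(np.log(1e-7), 1000.0, 400.0))
print("fitted D0 (m^2/s), B (K), T0 (K):", np.exp(popt[0]), popt[1], popt[2])
```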
Kovalskys, Irina; Fisberg, Mauro; Gómez, Georgina; Rigotti, Attilio; Cortés, Lilia Yadira; Yépez, Martha Cecilia; Pareja, Rossina G; Herrera-Cuenca, Marianella; Zimberg, Ioná Z; Tucker, Katherine L; Koletzko, Berthold; Pratt, Michael
2015-09-16
Between-country comparisons of estimated dietary intake are particularly prone to error when different food composition tables are used. The objective of this study was to describe our procedures and rationale for the selection and adaptation of available food composition to a single database to enable cross-country nutritional intake comparisons. Latin American Study of Nutrition and Health (ELANS) is a multicenter cross-sectional study of representative samples from eight Latin American countries. A standard study protocol was designed to investigate dietary intake of 9000 participants enrolled. Two 24-h recalls using the Multiple Pass Method were applied among the individuals of all countries. Data from 24-h dietary recalls were entered into the Nutrition Data System for Research (NDS-R) program after a harmonization process between countries to include local foods and appropriately adapt the NDS-R database. A food matching standardized procedure involving nutritional equivalency of local food reported by the study participants with foods available in the NDS-R database was strictly conducted by each country. Standardization of food and nutrient assessments has the potential to minimize systematic and random errors in nutrient intake estimations in the ELANS project. This study is expected to result in a unique dataset for Latin America, enabling cross-country comparisons of energy, macro- and micro-nutrient intake within this region.
All-automatic swimmer tracking system based on an optimized scaled composite JTC technique
NASA Astrophysics Data System (ADS)
Benarab, D.; Napoléon, T.; Alfalou, A.; Verney, A.; Hellard, P.
2016-04-01
In this paper, an all-automatic optimized JTC-based swimmer tracking system is proposed and evaluated on a real video database drawn from national and international swimming competitions (French National Championship, Limoges 2015; FINA World Championships, Barcelona 2013 and Kazan 2015). First, we propose to calibrate the swimming pool using the DLT algorithm (Direct Linear Transformation). DLT calculates the homography matrix given a sufficient set of correspondence points between pixel and metric coordinates, i.e. it takes into account the dimensions of the swimming pool and the type of the swim. Once the swimming pool is calibrated, we extract the lane. Then we apply a motion detection approach to detect the swimmer globally in this lane. Next, we apply our optimized Scaled Composite JTC, which consists of creating an adapted input plane that contains the predicted region and the head reference image. The latter is generated using a composite filter of fin images chosen from the database. The dimension of this reference is scaled according to the ratio between the head's dimension and the width of the swimming lane. Finally, the proposed approach improves the performance of our previous tracking method by adding a detection module in order to achieve an all-automatic swimmer tracking system.
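The calibration step can be illustrated with OpenCV, whose homography estimation implements the DLT-style solution from point correspondences. The pixel and metric coordinates below are made up; the snippet only shows how a detected image position could be mapped into pool coordinates, not the paper's full pipeline.

```python
import numpy as np
import cv2

# Sketch of pool calibration: estimate a homography from >= 4 pixel <-> metric
# correspondences and map a detected head position to pool coordinates.
pixel_pts = np.array([[102, 540], [1810, 552], [1725, 118], [187, 110]],
                     dtype=np.float32)
metric_pts = np.array([[0.0, 0.0], [50.0, 0.0], [50.0, 25.0], [0.0, 25.0]],
                      dtype=np.float32)          # 50 m x 25 m pool, in metres

H, _ = cv2.findHomography(pixel_pts, metric_pts)

head_px = np.array([[[960.0, 330.0]]], dtype=np.float32)  # detected swimmer head
head_m = cv2.perspectiveTransform(head_px, H)
print("swimmer position (m):", head_m.ravel())
```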
The Ins and Outs of USDA Nutrient Composition
USDA-ARS?s Scientific Manuscript database
The USDA National Nutrient Database for Standard Reference (SR) is the major source of food composition data in the United States, providing the foundation for most food composition databases in the public and private sectors. Sources of data used in SR include analytical studies, food manufacturer...
RAACFDb: Rheumatoid arthritis ayurvedic classical formulations database.
Mohamed Thoufic Ali, A M; Agrawal, Aakash; Sajitha Lulu, S; Mohana Priya, A; Vino, S
2017-02-02
In the past years, the treatment of rheumatoid arthritis (RA) has undergone remarkable changes in all therapeutic modes. The current focus in clinical research is to identify and pursue new directions for better treatment options for RA. Recent ethnopharmacological investigations revealed that traditional herbal remedies are the most preferred modality of complementary and alternative medicine (CAM). However, several ayurvedic modes of treatment and formulations for RA are not much studied or documented in the Indian traditional system of medicine. Therefore, this directed us to develop an integrated database, RAACFDb (acronym: Rheumatoid Arthritis Ayurvedic Classical Formulations Database), by consolidating data from the repository of Vedic Samhita - The Ayurveda, to retrieve the available formulation information easily. Literature data were gathered using several search engines and from ayurvedic practitioners for loading information into the database. In order to represent the collected information about classical ayurvedic formulations, an integrated database was constructed and implemented on a MySQL and PHP back-end. The database describes all the ayurvedic classical formulations for the treatment of rheumatoid arthritis. It includes composition, usage, plant parts used, active ingredients present in the composition and their structures. The prime objective is to locate ayurvedic formulations proven to be quite successful and highly effective among patients, with reduced side effects. The database (freely available at www.beta.vit.ac.in/raacfdb/index.html) hopefully enables easy access for clinical researchers and students to discover novel leads with reduced side effects. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Argonne Geothermal Geochemical Database v2.0
Harto, Christopher
2013-05-22
A database of geochemical data from potential geothermal sources aggregated from multiple sources as of March 2010. The database contains fields for the location, depth, temperature, pH, total dissolved solids concentration, chemical composition, and date of sampling. A separate tab contains data on non-condensible gas compositions. The database contains records for over 50,000 wells, although many entries are incomplete. Current versions of source documentation are listed in the dataset.
Composite Materials Design Database and Data Retrieval System Requirements
1991-08-01
At the present time, the majority of expert systems are stand-alone systems, and environments for effectively coupling heuristic data management with nonheuristic data management remain to be developed. The only available recourse is to resort to traditional DBMS development and use.
Tang, Shi-Huan; Shen, Dan; Yang, Hong-Jun
2017-08-24
To analyze the composition rules of oral prescriptions for the treatment of headache, stomachache and dysmenorrhea recorded in the National Standard for Chinese Patent Drugs (NSCPD) enacted by the Ministry of Public Health of China, and to compare them in order to better understand pain treatment in different regions of the human body. The NSCPD database was constructed in 2014. Prescriptions treating the three pain-related diseases were searched and screened from the database. Data mining methods such as association rules analysis and the complex system entropy method, integrated in the data mining software Traditional Chinese Medicine Inheritance Support System (TCMISS), were then applied to process the data. The top 25 most frequently used drugs in the treatment of each disease were selected, and 51, 33 and 22 core combinations treating headache, stomachache and dysmenorrhea, respectively, were mined out as well. The composition rules of the oral prescriptions for treating headache, stomachache and dysmenorrhea recorded in the NSCPD have been summarized. Although there were similarities between them, the formulas varied according to the location of pain. This work can serve as evidence and a reference for clinical treatment and new drug development.
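The association-rule step can be illustrated with a hand-rolled support/confidence calculation over a handful of invented prescriptions; the paper itself uses the TCMISS software and real NSCPD data, so the herbs, thresholds, and counts below are purely illustrative.

```python
from itertools import combinations
from collections import Counter

# Toy sketch of the association-rule step: pairwise support and confidence of
# herb co-occurrence in prescriptions. Prescriptions are invented.
prescriptions = [
    {"chuanxiong", "baizhi", "gancao"},
    {"chuanxiong", "baizhi", "bohe"},
    {"yanhusuo", "gancao", "chuanxiong"},
    {"yanhusuo", "baishao", "gancao"},
]
n = len(prescriptions)
single = Counter(h for p in prescriptions for h in p)
pair = Counter(frozenset(c) for p in prescriptions
               for c in combinations(sorted(p), 2))

for pr, cnt in pair.items():
    a, b = sorted(pr)
    support = cnt / n
    confidence = cnt / single[a]   # rule a -> b, antecedent chosen alphabetically
    if support >= 0.5:
        print(f"{a} -> {b}: support={support:.2f}, confidence={confidence:.2f}")
```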
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jantzen, Carol M.; Trivelpiece, Cory L.; Crawford, Charles L.
2017-02-18
Glass corrosion data from the ALTGLASS™ database were used to determine if gel compositions, which evolve as glass systems corrode, are correlated with the generation of zeolites and the subsequent increase in the glass dissolution rate at long times. The gel compositions were estimated based on the difference between the elemental glass starting compositions and the measured elemental leachate concentrations from the long-term product consistency tests (ASTM C1285) at various stages of dissolution, i.e., reaction progress. A well-characterized subset of high-level waste glasses from the database was selected: these glasses had been leached for 15-20 years at reaction progresses up to ~80%. The gel composition data, at various reaction progresses, were subjected to a step-wise regression, which demonstrated that hydrogel compositions with Si*/Al* ratios of <1.0 did not generate zeolites and maintained low dissolution rates for the duration of the experiments. Glasses that formed hydrogel compositions with Si*/Al* ratios ≥1 generated zeolites accompanied by a resumption in the glass dissolution rate. Finally, the role of the gel Si/Al ratio, and the interactions with the leachate, provides the fundamental understanding needed to predict if and when the glass dissolution rate will increase due to zeolitization.
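The gel-composition estimate described above is essentially a mass balance (starting glass composition minus what the leachate shows has been released) followed by a ratio check. The sketch below works that arithmetic through with invented numbers; the compositions, released fractions, and the simple mole-ratio bookkeeping are assumptions for illustration, not ALTGLASS data.

```python
# Illustrative gel Si*/Al* calculation: gel composition estimated as the
# starting glass composition minus the fraction released to the leachate,
# then compared with the ~1.0 threshold reported for zeolite formation.
# All numbers are invented.
glass_mol = {"Si": 0.50, "Al": 0.08, "B": 0.12, "Na": 0.15}       # mol per mol glass
released_fraction = {"Si": 0.30, "Al": 0.05, "B": 0.90, "Na": 0.85}

gel = {el: glass_mol[el] * (1.0 - released_fraction[el]) for el in glass_mol}
si_al = gel["Si"] / gel["Al"]
print(f"estimated gel Si*/Al* = {si_al:.2f}")
print("zeolite formation / rate resumption expected" if si_al >= 1.0
      else "low dissolution rate expected")
```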
Food composition database development for between country comparisons.
Merchant, Anwar T; Dehghan, Mahshid
2006-01-19
Nutritional assessment by diet analysis is a two-step process consisting of evaluation of food consumption, and conversion of food into nutrient intake by using a food composition database, which lists the mean nutritional values for a given food portion. Most reports in the literature focus on minimizing errors in estimation of food consumption, but the selection of a specific food composition table used in nutrient estimation is also a source of errors. We are conducting a large prospective study internationally and need to compare diet, assessed by food frequency questionnaires, in a comparable manner between different countries. We have prepared a multi-country food composition database for nutrient estimation in all the countries participating in our study. The nutrient database is primarily based on the USDA food composition database, modified appropriately with reference to local food composition tables, and supplemented with recipes of locally eaten mixed dishes. By doing so we have ensured that the units of measurement, the method of selection of foods for testing, and the assays used for nutrient estimation are consistent and as current as possible, and yet take into account some local variations. Using this common metric for nutrient assessment will reduce differential errors in nutrient estimation and improve the validity of between-country comparisons.
Building a medical image processing algorithm verification database
NASA Astrophysics Data System (ADS)
Brown, C. Wayne
2000-06-01
The design of a database containing head Computed Tomography (CT) studies is presented, along with a justification for the database's composition. The database will be used to validate software algorithms that screen normal head CT studies from studies that contain pathology. The database is designed to have the following major properties: (1) a size sufficient for statistical viability, (2) inclusion of both normal (no pathology) and abnormal scans, (3) inclusion of scans due to equipment malfunction, technologist error, and uncooperative patients, (4) inclusion of data sets from multiple scanner manufacturers, (5) inclusion of data sets from different gender and age groups, and (6) three independent diagnoses of each data set. Designed correctly, the database will provide a partial basis for FDA (United States Food and Drug Administration) approval of image processing algorithms for clinical use. Our goal for the database is proof of the viability of screening head CTs for normal anatomy using computer algorithms. To put this work into context, a classification scheme for 'computer aided diagnosis' systems is proposed.
LEPER: Library of Experimental PhasE Relations
NASA Astrophysics Data System (ADS)
Davis, F.; Gordon, S.; Mukherjee, S.; Hirschmann, M.; Ghiorso, M.
2006-12-01
The Library of Experimental PhasE Relations (LEPER) seeks to compile published experimental determinations of magmatic phase equilibria and provide those data on the web with a searchable and downloadable interface. Compiled experimental data include the conditions and durations of experiments, the bulk compositions of experimental charges, the identity, compositions and proportions of phases observed, and, where available, estimates of experimental and analytical uncertainties. Also included are metadata such as the type of experimental device, capsule material, and method(s) of quantitative analysis. The database may be of use to practicing experimentalists as well as the wider Earth science community. Experimentalists may find the data useful for planning new experiments and will easily be able to compare their results to the full body of previous experimental data. Geologists may use LEPER to compare rocks sampled in the field with experiments performed on similar bulk compositions or with experiments that produced similar-composition product phases. Modelers may use LEPER to parameterize partial melting of various lithologies. One motivation for compiling LEPER is calibration of updated and revised versions of MELTS; however, it is hoped that the availability of LEPER will facilitate formulation and calibration of additional thermodynamic or empirical models of magmatic phase relations and phase equilibria, geothermometers and more. Data entry for LEPER is occurring presently: as of August 2006, >6200 experiments have been entered, chiefly from work published between 1997 and 2005. A prototype web interface has been written and beta release on the web is anticipated in Fall 2006. Eventually, experimentalists will be able to submit their new experimental data to the database via the web. At present, the database contains only data pertaining to the phase equilibria of silicate melts, but extension to other experimental data involving other fluids or sub-solidus phase equilibria may be contemplated. Also, the data are at present limited to natural or near-natural systems, but in the future, extension to synthetic (i.e., CMAS, etc.) systems is also possible. Each would depend in part on whether there is community demand for such databases. A trace element adjunct to LEPER is presently in the planning stages.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
This document contains reports which were presented at the 41st International Society For The Advancement of Material and Process Engineering Symposium and Exhibition. Topics include: structural integrity of aging aircraft; composite materials development; affordable composites and processes; corrosion characterization of aging aircraft; adhesive advances; composite design; dual use materials and processing; repair of aircraft structures; adhesive inspection; materials systems for infrastructure; fire safety; composite impact/energy absorption; advanced materials for space; seismic retrofit; high temperature resins; preform technology; thermoplastics; alternative energy and transportation; manufacturing; and durability. Individual reports have been processed separately for the United States Department of Energy databases.
Glass Property Data and Models for Estimating High-Level Waste Glass Volume
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vienna, John D.; Fluegel, Alexander; Kim, Dong-Sang
2009-10-05
This report describes recent efforts to develop glass property models that can be used to help estimate the volume of high-level waste (HLW) glass that will result from vitrification of Hanford tank waste. The compositions of acceptable and processable HLW glasses need to be optimized to minimize the waste-form volume and, hence, to save cost. A database of properties and associated compositions for simulated waste glasses was collected for developing property-composition models. This database, although not comprehensive, represents a large fraction of the data on waste-glass compositions and properties that were available at the time of this report. Glass property-composition models were fit to subsets of the database for several key glass properties. These models apply to a significantly broader composition space than those previously published. These models should be considered for interim use in calculating properties of Hanford waste glasses.
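A common starting point for such property-composition models is a first-order mixture model, property = sum of coefficient times component fraction, fitted by least squares. The sketch below illustrates only that generic form on synthetic compositions and responses; the component list, coefficients, and property are assumptions, not the report's fitted models.

```python
import numpy as np

# Sketch of a first-order property-composition model fitted by least squares.
# Compositions and "measured" responses are synthetic.
rng = np.random.default_rng(2)
components = ["SiO2", "B2O3", "Na2O", "Al2O3", "other"]

X = rng.dirichlet(alpha=[5, 2, 2, 1, 1], size=40)    # 40 glass compositions
b_true = np.array([3.2, -1.0, -2.5, 1.5, 0.0])       # hypothetical coefficients
y = X @ b_true + 0.05 * rng.normal(size=40)          # e.g. log10(viscosity, Pa s)

b_fit, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, coef in zip(components, b_fit):
    print(f"{name:6s} coefficient: {coef:+.2f}")
```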
Román Colón, Yomayra A.; Ruppert, Leslie F.
2015-01-01
The U.S. Geological Survey (USGS) has compiled a database consisting of three worksheets of central Appalachian basin natural gas analyses and isotopic compositions from published and unpublished sources of 1,282 gas samples from Kentucky, Maryland, New York, Ohio, Pennsylvania, Tennessee, Virginia, and West Virginia. The database includes field and reservoir names, well and State identification number, selected geologic reservoir properties, and the composition of natural gases (methane; ethane; propane; butane, iso-butane [i-butane]; normal butane [n-butane]; iso-pentane [i-pentane]; normal pentane [n-pentane]; cyclohexane, and hexanes). In the first worksheet, location and American Petroleum Institute (API) numbers from public or published sources are provided for 1,231 of the 1,282 gas samples. A second worksheet of 186 gas samples was compiled from published sources and augmented with public location information and contains carbon, hydrogen, and nitrogen isotopic measurements of natural gas. The third worksheet is a key for all abbreviations in the database. The database can be used to better constrain the stratigraphic distribution, composition, and origin of natural gas in the central Appalachian basin.
Ultrasonic Fluid Quality Sensor System
Gomm, Tyler J.; Kraft, Nancy C.; Phelps, Larry D.; Taylor, Steven C.
2003-10-21
A system for determining the composition of a multiple-component fluid and for determining linear flow comprising at least one sing-around circuit that determines the velocity of a signal in the multiple-component fluid and that is correlatable to a database for the multiple-component fluid. A system for determining flow uses two of the inventive circuits, one of which is set at an angle that is not perpendicular to the direction of flow.
Ultrasonic fluid quality sensor system
Gomm, Tyler J.; Kraft, Nancy C.; Phelps, Larry D.; Taylor, Steven C.
2002-10-08
A system for determining the composition of a multiple-component fluid and for determining linear flow comprising at least one sing-around circuit that determines the velocity of a signal in the multiple-component fluid and that is correlatable to a database for the multiple-component fluid. A system for determining flow uses two of the inventive circuits, one of which is set at an angle that is not perpendicular to the direction of flow.
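One plausible way to combine the two measurements described in these patent abstracts (this is a sketch of the underlying physics, not text taken from the patents) is to use the perpendicular sing-around path to obtain the fluid sound speed for the composition lookup, and the inclined path, which reads the sound speed plus the flow component along the path, to infer the flow velocity. The path lengths, timings, and angle below are invented.

```python
import math

# Sketch: perpendicular path gives the fluid sound speed c; an inclined path
# at angle theta to the flow reads approximately c + v*cos(theta).
def sound_speed(path_length_m, transit_count, elapsed_s):
    """Sing-around principle: speed from path length and repetition timing."""
    return transit_count * path_length_m / elapsed_s

c_perp = sound_speed(0.050, 1000, 0.0337)      # perpendicular path -> c
c_angled = sound_speed(0.050, 1000, 0.03355)   # inclined path -> c + v*cos(theta)
theta = math.radians(45.0)

flow_velocity = (c_angled - c_perp) / math.cos(theta)
print(f"fluid sound speed ~ {c_perp:.1f} m/s, axial flow ~ {flow_velocity:.2f} m/s")
```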
Thermodynamic assessment of the LiF-NaF-BeF2-ThF4-UF4 system
NASA Astrophysics Data System (ADS)
Capelli, E.; Beneš, O.; Konings, R. J. M.
2014-06-01
The present study describes the full thermodynamic assessment of the LiF-NaF-BeF2-ThF4-UF4 system which is one of the key systems considered for a molten salt reactor fuel. The work is an extension of the previously assessed LiF-NaF-ThF4-UF4 system with addition of BeF2 which is characterized by very low neutron capture cross section and a relatively low melting point. To extend the database the binary BeF2-ThF4 and BeF2-UF4 systems were optimized and the novel data were used for the thermodynamic assessment of BeF2 containing ternary systems for which experimental data exist in the literature. The obtained database is used to optimize the molten salt reactor fuel composition and to assess its properties with the emphasis on the melting behaviour.
Lupiañez-Barbero, Ascension; González Blanco, Cintia; de Leiva Hidalgo, Alberto
2018-05-23
Food composition tables and databases (FCTs or FCDBs) provide the necessary information to estimate intake of nutrients and other food components. In Spain, the lack of a reference database has resulted in the use of different FCTs/FCDBs in nutritional surveys and research studies, as well as in the development of dietetic software for diet analysis. As a result, biased, non-comparable results are obtained, and healthcare professionals are rarely aware of these limitations. AECOSAN and the BEDCA association developed an FCDB following European standards, the Spanish Food Composition Database Network (RedBEDCA). The current database has a limited number of foods and food components and barely contains processed foods, which limits its use in epidemiological studies and in the daily practice of healthcare professionals. Copyright © 2018 SEEN y SED. Published by Elsevier España, S.L.U. All rights reserved.
USDA-ARS?s Scientific Manuscript database
Food composition is the determination of what is in the foods we eat and is the critical bridge between nutrition, health promotion and disease prevention and food production. Compilation of data into useable databases is essential to the development of dietary guidance for individuals and populat...
Shyam, Sangeetha; Wai, Tony Ng Kock; Arshad, Fatimah
2012-01-01
This paper outlines the methodology used to add glycaemic index (GI) and glycaemic load (GL) functionality to DietPLUS, a Microsoft Excel-based Malaysian food composition database and diet intake calculator. Locally determined GI values and published international GI databases were used as the source of GI values. Previously published methodology for GI value assignment was modified to add GI and GL calculators to the database. Two popular local low-GI foods were added to the DietPLUS database, bringing the total number of foods in the database to 838. Overall, in relation to the 539 major carbohydrate foods in the Malaysian Food Composition Database, 243 (45%) food items had local Malaysian values or were directly matched to the international GI database, and another 180 (33%) of the foods were linked to closely related foods in the GI databases used. The mean ± SD dietary GI and GL of the dietary intake of 63 women with previous gestational diabetes mellitus, calculated using DietPLUS version 3, were 62 ± 6 and 142 ± 45, respectively. These values were comparable to those reported from other local studies. DietPLUS version 3, a simple Microsoft Excel-based programme, aids calculation of diet GI and GL for Malaysian diets based on food records.
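The underlying arithmetic that any such GI/GL calculator applies is standard: GL of a food equals GI times available carbohydrate (g) divided by 100, and the dietary GI is the carbohydrate-weighted mean of the food GIs. The worked example below uses invented foods, GI values, and portions and is not taken from the DietPLUS database.

```python
# Worked GI/GL arithmetic on invented foods and portions:
#   GL of a food = GI * available carbohydrate (g) / 100
#   dietary GI   = sum(GI_i * carb_i) / sum(carb_i)
foods = [
    # (name, GI, available carbohydrate eaten, g)
    ("white rice, 1 cup", 73, 45.0),
    ("dhal, 1 serving",   29, 18.0),
    ("apple, 1 medium",   36, 15.0),
]

total_carb = sum(carb for _, _, carb in foods)
daily_gl = sum(gi * carb / 100.0 for _, gi, carb in foods)
dietary_gi = sum(gi * carb for _, gi, carb in foods) / total_carb

print(f"daily GL = {daily_gl:.0f}, weighted dietary GI = {dietary_gi:.0f}")
```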
Matsuda, Fumio; Shinbo, Yoko; Oikawa, Akira; Hirai, Masami Yokota; Fiehn, Oliver; Kanaya, Shigehiko; Saito, Kazuki
2009-01-01
Background: In metabolomics research using mass spectrometry (MS), systematic searching of high-resolution mass data against compound databases is often the first step of metabolite annotation to determine elemental compositions possessing similar theoretical mass numbers. However, incorrect hits derived from errors in mass analyses will be included in the results of elemental composition searches. To assess the quality of peak annotation information, a novel methodology for false discovery rate (FDR) evaluation is presented in this study. Based on the FDR analyses, several aspects of an elemental composition search, including setting a threshold, estimating the FDR, and the types of elemental composition databases most reliable for searching, are discussed. Methodology/Principal Findings: The FDR can be determined from one measured value (i.e., the hit rate for search queries) and four parameters determined by Monte Carlo simulation. The results indicate that relatively high FDR values (30–50%) were obtained when searching time-of-flight (TOF)/MS data using the KNApSAcK and KEGG databases. In addition, searches against large all-in-one databases (e.g., PubChem) always produced unacceptable results (FDR >70%). The estimated FDRs suggest that the quality of search results can be improved not only by performing more accurate mass analysis but also by modifying the properties of the compound database. A theoretical analysis indicates that the FDR could be improved by using a smaller compound database with higher completeness. Conclusions/Significance: High accuracy mass analysis, such as Fourier transform (FT)-MS, is needed for reliable annotation (FDR <10%). In addition, a small, customized compound database is preferable for high-quality annotation of metabolome data. PMID:19847304
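The qualitative point, that larger databases and looser mass tolerances inflate false hits, can be illustrated with a simple Monte Carlo estimate of the chance that a random query mass matches at least one database entry within tolerance. The database masses, sizes, and tolerances below are synthetic, and this is not the paper's FDR estimator.

```python
import numpy as np

# Illustrative Monte Carlo: probability that a random query mass hits at
# least one entry of a formula-mass database within a ppm tolerance.
rng = np.random.default_rng(3)

def random_hit_probability(n_db_entries, tol_ppm, n_trials=50_000):
    db = np.sort(rng.uniform(100.0, 1000.0, size=n_db_entries))
    q = rng.uniform(100.0, 1000.0, size=n_trials)
    idx = np.clip(np.searchsorted(db, q), 1, len(db) - 1)
    nearest = np.minimum(np.abs(db[idx - 1] - q), np.abs(db[idx] - q))
    return float(np.mean(nearest <= q * tol_ppm * 1e-6))

for n_db in (5_000, 50_000, 500_000):
    for tol in (1.0, 5.0):
        p = random_hit_probability(n_db, tol)
        print(f"DB size {n_db:>7}, tol {tol} ppm: random-hit probability {p:.3f}")
```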
Dhara, Ashis Kumar; Mukhopadhyay, Sudipta; Dutta, Anirvan; Garg, Mandeep; Khandelwal, Niranjan
2017-02-01
Visual information about similar nodules could assist budding radiologists in self-learning. This paper presents a content-based image retrieval (CBIR) system for pulmonary nodules observed in lung CT images. The reported CBIR systems for pulmonary nodules cannot be put into practice, as radiologists need to draw the boundary of nodules during query formation and feature database creation. In the proposed retrieval system, the pulmonary nodules are segmented using a semi-automated technique, which requires a seed point on the nodule from the end user. The involvement of radiologists in feature database creation is also reduced, as only a seed point is expected from radiologists instead of manual delineation of the boundary of the nodules. The performance of the retrieval system depends on the accuracy of the segmentation technique. Several 3D features are explored to improve the performance of the proposed retrieval system. A set of relevant shape and texture features is considered for efficient representation of the nodules in the feature space. The proposed CBIR system is evaluated for three configurations: configuration-1 (composite rank of malignancy "1","2" as benign and "4","5" as malignant), configuration-2 (composite rank of malignancy "1","2","3" as benign and "4","5" as malignant), and configuration-3 (composite rank of malignancy "1","2" as benign and "3","4","5" as malignant). Considering the top 5 retrieved nodules and the Euclidean distance metric, the precision achieved by the proposed method for configuration-1, configuration-2, and configuration-3 is 82.14, 75.91, and 74.27%, respectively. The performance of the proposed CBIR system is close to that of the most recent technique, which depends on radiologists for manual segmentation of nodules. A computer-aided diagnosis (CAD) system is also developed based on the CBIR paradigm. Performance of the proposed CBIR-based CAD system is close to that of a CAD system using a support vector machine.
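The retrieval and evaluation step, ranking by Euclidean distance in feature space and scoring precision over the top 5 retrieved nodules, can be sketched as below. The feature vectors and benign/malignant labels are synthetic; the paper's actual 3D shape and texture features are not reproduced here.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Sketch of Euclidean-distance retrieval and precision@5 evaluation on
# synthetic nodule features and labels.
rng = np.random.default_rng(4)
features = rng.normal(size=(200, 24))     # 200 nodules, 24-d feature vectors
labels = rng.integers(0, 2, size=200)     # 0 = benign, 1 = malignant

nn = NearestNeighbors(n_neighbors=6, metric="euclidean").fit(features)

def precision_at_5(query_idx):
    _, idx = nn.kneighbors(features[query_idx:query_idx + 1])
    retrieved = idx[0][1:]                # drop the query itself
    return np.mean(labels[retrieved] == labels[query_idx])

print("mean precision@5:", np.mean([precision_at_5(i) for i in range(200)]))
```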
Crystallization kinetics of Mg–Cu–Yb–Ca–Ag metallic glasses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsarkov, Andrey A.; Zanaeva, Erzhena N.
The paper presents research into Mg–Cu–Yb system-based metallic glassy alloys. Metallic glasses were prepared using induction melting and subsequent injection onto a spinning copper wheel. The effect of alloying with Ag and Ca on the glass forming ability and the kinetics of crystallization of Mg–Cu–Yb system-based alloys was studied. A differential scanning calorimeter and an X-ray diffractometer were used to investigate the kinetics of crystallization and the phase composition of the samples. An indicator of glass forming ability, the effective activation energy of crystallization, and the enthalpy of mixing were calculated. An increase of the Ca and Ag content has a positive effect on the glass forming ability, the effective activation energy of crystallization, and the enthalpy of mixing. The highest indicators of glass forming ability and thermal stability were found for alloys that contain both alloying elements. The Ag addition suppresses precipitation of the Mg2Cu phase during crystallization. A dual-phase glassy-nanocrystalline Mg structure was obtained in Mg65Cu25Yb10 and Mg59.5Cu22.9Yb11Ag6.6 alloys after annealing. Bulk samples with a composite glassy-crystalline structure were obtained in Mg59.5Cu22.9Yb11Ag6.6 and Mg64Cu21Yb9.5Ag5.5 alloys. A thermodynamic database for the Mg–Cu–Yb–Ca–Ag system was created to compare the crystallization of the alloys with polythermal sections of the Mg–Cu–Yb–Ca–Ag phase diagram. Highlights: new alloy compositions based on the Mg–Cu–Yb system were developed and investigated; increasing the Ag and Ca content improves the glass forming ability; bulk samples with a composite glassy-crystalline structure were obtained; a thermodynamic database for the Mg–Cu–Yb–Ca–Ag system was created.
Review of availability of food composition data for fish and shellfish.
Rittenschober, Doris; Nowak, Verena; Charrondiere, U Ruth
2013-12-15
The FAO/INFOODS database on fish and shellfish (aFiSh) is a collection of analytical data from primary sources and holds values for 2,277 entries on raw and processed food with sufficient quality. Most data were entered on fatty acids (60%), followed by macronutrients and their fractions (16%), minerals (10%), amino acids (7%), (pro)vitamins (2%), heavy metals (2%) and other components (3%). Information on several factors that contribute to the variation of compositional data (e.g., biodiversity, catch season, habitat, size and part of fish/shellfish analysed), as well as the bibliographic references, is presented alongside each food entry. The data were published in the FAO/INFOODS Food Composition Database for Biodiversity (BioFoodComp2.0) and in the FAO/INFOODS Analytical Food Composition Database (AnFooD1.0), freely available at the INFOODS webpage http://www.fao.org/infoods/biodiversity/index_en.stm. The provision of easily accessible analytical compositional data should be seen as a stimulus for researchers and compilers to incorporate more analytical and detailed data on fish and shellfish into future food composition tables and databases and to improve dietary assessment tools. Copyright © 2013 Food and Agriculture Organization of the United Nations. Published by Elsevier Ltd. All rights reserved.
Clauson, Kevin A; Polen, Hyla H; Peak, Amy S; Marsh, Wallace A; DiScala, Sandra L
2008-11-01
Clinical decision support tools (CDSTs) on personal digital assistants (PDAs) and online databases assist healthcare practitioners who make decisions about dietary supplements. To assess and compare the content of PDA dietary supplement databases and their online counterparts used as CDSTs. A total of 102 question-and-answer pairs were developed within 10 weighted categories of the most clinically relevant aspects of dietary supplement therapy. PDA versions of AltMedDex, Lexi-Natural, Natural Medicines Comprehensive Database, and Natural Standard and their online counterparts were assessed by scope (percent of correct answers present), completeness (3-point scale), ease of use, and a composite score integrating all 3 criteria. Descriptive statistics and inferential statistics, including a chi(2) test, Scheffé's multiple comparison test, McNemar's test, and the Wilcoxon signed rank test were used to analyze data. The scope scores for PDA databases were: Natural Medicines Comprehensive Database 84.3%, Natural Standard 58.8%, Lexi-Natural 50.0%, and AltMedDex 36.3%, with Natural Medicines Comprehensive Database statistically superior (p < 0.01). Completeness scores were: Natural Medicines Comprehensive Database 78.4%, Natural Standard 51.0%, Lexi-Natural 43.5%, and AltMedDex 29.7%. Lexi-Natural was superior in ease of use (p < 0.01). Composite scores for PDA databases were: Natural Medicines Comprehensive Database 79.3, Natural Standard 53.0, Lexi-Natural 48.0, and AltMedDex 32.5, with Natural Medicines Comprehensive Database superior (p < 0.01). There was no difference between the scope for PDA and online database pairs with Lexi-Natural (50.0% and 53.9%, respectively) or Natural Medicines Comprehensive Database (84.3% and 84.3%, respectively) (p > 0.05), whereas differences existed for AltMedDex (36.3% vs 74.5%, respectively) and Natural Standard (58.8% vs 80.4%, respectively) (p < 0.01). For composite scores, AltMedDex and Natural Standard online were better than their PDA counterparts (p < 0.01). Natural Medicines Comprehensive Database achieved significantly higher scope, completeness, and composite scores compared with other dietary supplement PDA CDSTs in this study. There was no difference between the PDA and online databases for Lexi-Natural and Natural Medicines Comprehensive Database, whereas online versions of AltMedDex and Natural Standard were significantly better than their PDA counterparts.
Scheil-Gulliver Constituent Diagrams
NASA Astrophysics Data System (ADS)
Pelton, Arthur D.; Eriksson, Gunnar; Bale, Christopher W.
2017-06-01
During solidification of alloys, conditions often approach those of Scheil-Gulliver cooling, in which it is assumed that solid phases, once precipitated, remain unchanged. That is, they no longer react with the liquid or with each other. In the case of equilibrium solidification, equilibrium phase diagrams provide a valuable means of visualizing the effects of composition changes upon the final microstructure. In the present study, we propose for the first time the concept of Scheil-Gulliver constituent diagrams, which play the same role for Scheil-Gulliver cooling that equilibrium phase diagrams play for equilibrium solidification. It is shown how these diagrams can be calculated and plotted by the currently available thermodynamic database computing systems that combine Gibbs energy minimization software with large databases of optimized thermodynamic properties of solutions and compounds. Examples calculated using the FactSage system are presented for the Al-Li and Al-Mg-Zn systems, and for the Au-Bi-Sb-Pb system and its binary and ternary subsystems.
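For context, the classical Scheil-Gulliver relation for a binary alloy with a constant partition coefficient k gives the liquid composition as C_l = C_0 (1 - f_s)^(k-1) and the instantaneous solid composition as C_s = k C_l. The sketch below evaluates that textbook relation with a hypothetical k and starting composition; the paper itself uses full Gibbs-energy minimization with FactSage databases rather than a constant k.

```python
import numpy as np

# Worked illustration of the constant-k Scheil-Gulliver relation:
#   C_l = C_0 * (1 - f_s)**(k - 1),   C_s = k * C_l
k, C0 = 0.35, 5.0                       # hypothetical partition coefficient, wt.% solute

f_s = np.linspace(0.0, 0.99, 100)       # solid fraction precipitated so far
C_liquid = C0 * (1.0 - f_s) ** (k - 1.0)
C_solid = k * C_liquid                  # composition of solid forming at each step

i90 = np.searchsorted(f_s, 0.90)
print(f"liquid enrichment at f_s = 0.90: {C_liquid[i90]:.1f} wt.% solute")
```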
MOSAIC: An organic geochemical and sedimentological database for marine surface sediments
NASA Astrophysics Data System (ADS)
Tavagna, Maria Luisa; Usman, Muhammed; De Avelar, Silvania; Eglinton, Timothy
2015-04-01
Modern ocean sediments serve as the interface between the biosphere and the geosphere, play a key role in biogeochemical cycles and provide a window on how contemporary processes are written into the sedimentary record. Research over past decades has resulted in a wealth of information on the content and composition of organic matter in marine sediments, with ever-more sophisticated techniques continuing to yield information of greater detail and at an accelerating pace. However, there has been no attempt to synthesize this wealth of information. We are establishing a new database that incorporates information relevant to local, regional and global-scale assessment of the content, source and fate of organic materials accumulating in contemporary marine sediments. In the MOSAIC (Modern Ocean Sediment Archive and Inventory of Carbon) database, particular emphasis is placed on molecular and isotopic information, coupled with contextual information (e.g., sedimentological properties) relevant to elucidating factors that influence the efficiency and nature of organic matter burial. The main features of MOSAIC include: (i) emphasis on continental margin sediments as major loci of carbon burial, and as the interface between terrestrial and oceanic realms; (ii) bulk to molecular-level organic geochemical properties and parameters, including concentrations and isotopic compositions; (iii) inclusion of extensive contextual data regarding the depositional setting, in particular with respect to sedimentological and redox characteristics. The ultimate goal is to create an open-access instrument, available on the web, to be utilized for research and education by the international community, who can both contribute to and interrogate the database. Submission will be accomplished by means of a pre-configured table available on the MOSAIC webpage. The information in the filled tables will be checked and eventually imported, via Structured Query Language (SQL), into MOSAIC. MOSAIC is programmed with PostgreSQL, an open-source database management system. In order to locate the data geographically, each element/datum is associated with a latitude, longitude and depth, facilitating creation of a geospatial database which can be easily interfaced to a Geographic Information System (GIS). In order to make the database broadly accessible, an HTML/PHP-based website will ultimately be created and linked to the database. Consulting the website will allow both data visualization and export of data in txt format for utilization with common software solutions (e.g. ODV, Excel, Matlab, Python, Word, PPT, Illustrator…). At this very early stage, MOSAIC contains approximately 10,000 analyses conducted on more than 1,800 samples collected from over 1,600 different geographical locations around the world. Through participation of the international research community, MOSAIC will rapidly develop into a rich archive and versatile tool for investigation of the distribution and composition of organic matter accumulating in seafloor sediments. The present contribution will outline the structure of MOSAIC, provide examples of data output, and solicit feedback on desirable features to be included in the database and associated software tools.
DOT National Transportation Integrated Search
2006-01-01
The Transportation-Markings Database project (within the T-M Monograph Series) began in 1997 with the publishing of the initial component, Transportation-Markings Database: Marine. That study was joined by T-M Database: Traffic Control Devices (1998)...
Generation of large scale urban environments to support advanced sensor and seeker simulation
NASA Astrophysics Data System (ADS)
Giuliani, Joseph; Hershey, Daniel; McKeown, David, Jr.; Willis, Carla; Van, Tan
2009-05-01
One of the key aspects for the design of a next generation weapon system is the need to operate in cluttered and complex urban environments. Simulation systems rely on accurate representation of these environments and require automated software tools to construct the underlying 3D geometry and associated spectral and material properties that are then formatted for various objective seeker simulation systems. Under an Air Force Small Business Innovative Research (SBIR) contract, we have developed an automated process to generate 3D urban environments with user defined properties. These environments can be composed from a wide variety of source materials, including vector source data, pre-existing 3D models, and digital elevation models, and rapidly organized into a geo-specific visual simulation database. This intermediate representation can be easily inspected in the visible spectrum for content and organization and interactively queried for accuracy. Once the database contains the required contents, it can then be exported into specific synthetic scene generation runtime formats, preserving the relationship between geometry and material properties. To date, an exporter for the Irma simulation system developed and maintained by AFRL/Eglin has been created, and a second exporter to the Real Time Composite Hardbody and Missile Plume (CHAMP) simulation system for real-time use is currently being developed. This process supports significantly more complex target environments than previous approaches to database generation. In this paper we describe the capabilities for content creation for advanced seeker processing algorithm simulation and sensor stimulation, including the overall database compilation process and sample databases produced and exported for the Irma runtime system. We also discuss the addition of object dynamics and viewer dynamics within the visual simulation into the Irma runtime environment.
NASA Astrophysics Data System (ADS)
Mangosing, D. C.; Chen, G.; Kusterer, J.; Rinsland, P.; Perez, J.; Sorlie, S.; Parker, L.
2011-12-01
One of the objectives of the NASA Langley Research Center's MEaSURES project, "Creating a Unified Airborne Database for Model Assessment", is the development of airborne Earth System Data Records (ESDR) for the regional and global model assessment and validation activities performed by the tropospheric chemistry and climate modeling communities. The ongoing development of ADAM, a web site designed to access a unified, standardized and relational ESDR database, meets this objective. The ESDR database is derived from publicly available data sets, from NASA airborne field studies to airborne and in-situ studies sponsored by NOAA, NSF, and numerous international partners. The ADAM web development activities provide an opportunity to highlight a growing synergy between the Airborne Science Data for Atmospheric Composition (ASD-AC) group at NASA Langley and NASA Langley's Atmospheric Sciences Data Center (ASDC). These teams will collaborate on the ADAM web application by leveraging the state-of-the-art service and message-oriented data distribution architecture developed and implemented by ASDC and using a web-based tool provided by the ASD-AC group whose user interface accommodates the nuanced perspective of science users in the atmospheric chemistry and composition and climate modeling communities.
Integrating In Silico Resources to Map a Signaling Network
Liu, Hanqing; Beck, Tim N.; Golemis, Erica A.; Serebriiskii, Ilya G.
2013-01-01
The abundance of publicly available life science databases offers a wealth of information that can support interpretation of experimentally derived data and greatly enhance hypothesis generation. Protein interaction and functional networks are not simply new renditions of existing data: they provide the opportunity to gain insights into the specific physical and functional role a protein plays as part of the biological system. In this chapter, we describe different in silico tools that can quickly and conveniently retrieve data from existing data repositories and discuss how the available tools are best utilized for different purposes. While emphasizing protein-protein interaction databases (e.g., BioGrid and IntAct), we also introduce metasearch platforms such as STRING and GeneMANIA, pathway databases (e.g., BioCarta and Pathway Commons), text mining approaches (e.g., PubMed and Chilibot), and resources for drug-protein interactions, genetic information for model organisms and gene expression information based on microarray data mining. Furthermore, we provide a simple step-by-step protocol for building customized protein-protein interaction networks in Cytoscape, a powerful network assembly and visualization program, integrating data retrieved from these various databases. As we illustrate, generation of composite interaction networks enables investigators to extract significantly more information about a given biological system than utilization of a single database or sole reliance on primary literature. PMID:24233784
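As an illustration of the composite-network idea, the following sketch merges interaction records from two sources into one graph with the networkx library; the edge lists are hypothetical placeholders for records one might retrieve from, e.g., BioGrid and IntAct, and the chapter's own protocol uses Cytoscape rather than this code.

```python
# Sketch of merging interactions from two sources into one composite network.
# The edges below are hypothetical placeholders for retrieved interaction records.
import networkx as nx

biogrid = nx.Graph()
biogrid.add_edges_from([("EGFR", "GRB2"), ("GRB2", "SOS1")], source="BioGrid")

intact = nx.Graph()
intact.add_edges_from([("EGFR", "SHC1"), ("SHC1", "GRB2")], source="IntAct")

# nx.compose keeps the union of nodes and edges from both graphs,
# which is the composite network the chapter builds interactively in Cytoscape.
composite = nx.compose(biogrid, intact)
print(sorted(composite.edges(data=True)))
```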
Updated folate data in the Dutch Food Composition Database and implications for intake estimates
Westenbrink, Susanne; Jansen-van der Vliet, Martine; van Rossum, Caroline
2012-01-01
Background and objective Nutrient values are influenced by the analytical method used. Food folate measured by high performance liquid chromatography (HPLC) or by microbiological assay (MA) yields different results, with, in general, higher results from MA than from HPLC. This raises the question of how to deal with different analytical methods in compiling standardised and internationally comparable food composition databases. A recent inventory on folate in European food composition databases indicated that currently MA is more widely used than HPLC. Since older Dutch values were produced by HPLC and newer values by MA, analytical methods and procedures for compiling folate data in the Dutch Food Composition Database (NEVO) were reconsidered and folate values were updated. This article describes the impact of this revision of folate values in the NEVO database as well as the expected impact on the folate intake assessment in the Dutch National Food Consumption Survey (DNFCS). Design The folate values were revised by replacing HPLC with MA values from recent Dutch analyses. Previously, MA folate values taken from foreign food composition tables had been recalculated to the HPLC level, assuming a 27% lower value from HPLC analyses. These recalculated values were replaced by the original MA values. Dutch HPLC and MA values were compared to each other. Folate intake was assessed for a subgroup within the DNFCS to estimate the impact of the update. Results In the updated NEVO database nearly all folate values were produced by MA or derived from MA values, which resulted in an average increase of 24%. The median habitual folate intake in young children increased by 11–15% using the updated folate values. Conclusion The current approach for folate in NEVO resulted in more transparency in data production and documentation and higher comparability among European databases. Results of food consumption surveys are expected to show higher folate intakes when using the updated values. PMID:22481900
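The 27% recalculation mentioned in the Design section amounts to a simple scaling; a worked example, with an illustrative folate value, is sketched below.

```python
# The abstract states that MA folate values had previously been recalculated
# to the HPLC level assuming HPLC gives values 27% lower than MA.
# A worked example of that adjustment and its reversal (value is illustrative):
ma_value = 100.0                       # folate by microbiological assay, µg/100 g
hplc_level = ma_value * (1 - 0.27)     # previous practice: scale MA down to HPLC level
restored_ma = hplc_level / (1 - 0.27)  # the update reverts to the original MA value
print(round(hplc_level, 1), round(restored_ma, 1))   # 73.0 100.0
```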
Dietary fibre: challenges in production and use of food composition data.
Westenbrink, Susanne; Brunt, Kommer; van der Kamp, Jan-Willem
2013-10-01
Dietary fibre is a heterogeneous group of components for which several definitions and analytical methods were developed over the past decades, causing confusion among users and producers of dietary fibre data in food composition databases. An overview is given of current definitions and analytical methods. Some of the issues related to maintaining dietary fibre values in food composition databases are discussed. Newly developed AOAC methods (2009.01 or modifications) yield higher dietary fibre values, due to the inclusion of low molecular weight dietary fibre and resistant starch. For food composition databases procedures need to be developed to combine 'classic' and 'new' dietary fibre values since re-analysing all foods on short notice is impossible due to financial restrictions. Standardised value documentation procedures are important to evaluate dietary fibre values from several sources before exchanging and using the data, e.g. for dietary intake research. Copyright © 2012 Elsevier Ltd. All rights reserved.
2016-03-14
microbiology data from MHS facilities were used to identify all Klebsiella spp. isolates. The isolates were matched to three databases: (1) HL7...Klebsiella species infections among DON and DOD beneficiaries. HL7 formatted microbiology data that originated from the Composite Health Care System...and inpatient isolates as determined by the Medical Expense and Performance Reporting System (MEPRS) codes in microbiology data. A MEPRS code
NASA Astrophysics Data System (ADS)
Guenther, A. B.; Duhl, T.
2011-12-01
Increasing computational resources have enabled a steady improvement in the spatial resolution used for earth system models. Land surface models and landcover distributions have kept ahead by providing higher spatial resolution than typically used in these models. Satellite observations have played a major role in providing high resolution landcover distributions over large regions or the entire earth surface, but ground observations are needed to calibrate these data and provide accurate inputs for models. As our ability to resolve individual landscape components improves, it is important to consider what scale is sufficient for providing inputs to earth system models. The required spatial scale depends on the processes being represented and the scientific questions being addressed. This presentation will describe the development of a contiguous U.S. landcover database using high resolution imagery (1 to 1000 meters) and surface observations of species composition and other landcover characteristics. The database includes plant functional types and species composition and is suitable for driving land surface models (CLM and MEGAN) that predict land surface exchange of carbon, water, energy and biogenic reactive gases (e.g., isoprene, sesquiterpenes, and NO). We investigate the sensitivity of model results to landcover distributions with spatial scales ranging over six orders of magnitude (1 meter to 1,000,000 meters). The implications for predictions of regional climate and air quality will be discussed along with recommendations for regional and global earth system modeling.
USDA-ARS?s Scientific Manuscript database
For nearly 20 years, the National Food and Nutrient Analysis Program (NFNAP) has expanded and improved the quantity and quality of data in US Department of Agriculture’s (USDA) food composition databases through the collection and analysis of nationally representative food samples. This manuscript d...
USDA-ARS?s Scientific Manuscript database
Beef nutrition research has become increasingly important domestically and internationally for the beef industry and its consumers. The objective of this study was to analyze the nutrient composition of ten beef loin and round cuts to update the nutrient data in the USDA National Nutrient Database f...
Intelligent community management system based on the devicenet fieldbus
NASA Astrophysics Data System (ADS)
Wang, Yulan; Wang, Jianxiong; Liu, Jiwen
2013-03-01
With the rapid development of the national economy and the improvement of people's living standards, people are making higher demands on their living environment, and higher requirements are being placed on the content, efficiency and service quality of estate management. This paper provides an in-depth analysis of the structure and composition of an intelligent community. According to users' requirements and the related specifications, it implements a district management system that includes basic information management (housing management levels, household information management, administrator-level management, password management, etc.); service management (standard property fees, collection of property charges, arrears history and other property expenses); security management (household gas, water, electricity and other household security, as well as security of public places in the district); and systems management (database backup, database restore and log management). The article also analyses the intelligent community system and proposes an architecture based on B/S (browser/server) technology, achieving global network device management with a friendly, easy-to-use, unified human-machine interface.
3D THz hyperspectrum applied in security check-in
NASA Astrophysics Data System (ADS)
Damian, V.; Logofǎtu, P. C.; Vasile, T.
2016-12-01
We developed a measuring technology using a TDS-THz system to construct hyperspectral images of objects, including hazardous materials. "T-rays" (the THz spectral domain of light) are of growing importance in the security and imaging domains. Because they penetrate dielectric objects and use non-ionizing radiation, THz systems have become a standard for "hot places" (airports, train stations, etc.). Hyperspectral images are 3D images with two spatial dimensions and one spectral dimension, so we simultaneously obtain information about the form of an object and its molecular composition. To discriminate between substances, we must first build a database of spectra for hazardous and dangerous substances. We tested our system on several items (among them a firecracker, a cigarette and a metal collar) and tried to discriminate between them using the database of spectra.
Spatial and symbolic queries for 3D image data
NASA Astrophysics Data System (ADS)
Benson, Daniel C.; Zick, Gregory L.
1992-04-01
We present a query system for an object-oriented biomedical imaging database containing 3-D anatomical structures and their corresponding 2-D images. The graphical interface facilitates the formation of spatial queries, nonspatial or symbolic queries, and combined spatial/symbolic queries. A query editor is used for the creation and manipulation of 3-D query objects as volumes, surfaces, lines, and points. Symbolic predicates are formulated through a combination of text fields and multiple choice selections. Query results, which may include images, image contents, composite objects, graphics, and alphanumeric data, are displayed in multiple views. Objects returned by the query may be selected directly within the views for further inspection or modification, or for use as query objects in subsequent queries. Our image database query system provides visual feedback and manipulation of spatial query objects, multiple views of volume data, and the ability to combine spatial and symbolic queries. The system allows for incremental enhancement of existing objects and the addition of new objects and spatial relationships. The query system is designed for databases containing symbolic and spatial data. This paper discusses its application to data acquired in biomedical 3-D image reconstruction, but it is applicable to other areas such as CAD/CAM, geographical information systems, and computer vision.
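A minimal sketch of what a combined spatial/symbolic query does conceptually is given below; the object records, attribute names and query volume are illustrative and do not reflect the paper's actual schema or interface.

```python
# Sketch of a combined spatial/symbolic query over hypothetical 3-D anatomical
# objects: a box-shaped query volume plus a symbolic attribute filter.
from dataclasses import dataclass

@dataclass
class AnatomicalObject:
    name: str
    tissue: str            # symbolic attribute
    centroid: tuple        # (x, y, z) in volume coordinates

def in_box(point, lo, hi):
    """Spatial predicate: point lies inside the axis-aligned query box."""
    return all(l <= p <= h for p, l, h in zip(point, lo, hi))

objects = [
    AnatomicalObject("structure_a", "bone", (10.0, 22.0, 5.0)),
    AnatomicalObject("structure_b", "vessel", (11.5, 20.0, 6.0)),
]

# Combined query: inside the query volume AND matching the symbolic predicate.
hits = [o.name for o in objects
        if in_box(o.centroid, lo=(5, 15, 0), hi=(15, 25, 10)) and o.tissue == "vessel"]
print(hits)   # ['structure_b']
```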
Clauson, Kevin A; Polen, Hyla H; Marsh, Wallace A
2007-12-01
To evaluate personal digital assistant (PDA) drug information databases used to support clinical decision-making, and to compare the performance of PDA databases with their online versions. Prospective evaluation with descriptive analysis. Five drug information databases available for PDAs and online were evaluated according to their scope (inclusion of correct answers), completeness (on a 3-point scale), and ease of use; 158 question-answer pairs across 15 weighted categories of drug information essential to health care professionals were used to evaluate these databases. An overall composite score integrating these three measures was then calculated. Scores for the PDA databases and for each PDA-online pair were compared. Among the PDA databases, composite rankings, from highest to lowest, were as follows: Lexi-Drugs, Clinical Pharmacology OnHand, Epocrates Rx Pro, mobileMicromedex (now called Thomson Clinical Xpert), and Epocrates Rx free version. When we compared database pairs, online databases that had greater scope than their PDA counterparts were Clinical Pharmacology (137 vs 100 answers, p<0.001), Micromedex (132 vs 96 answers, p<0.001), Lexi-Comp Online (131 vs 119 answers, p<0.001), and Epocrates Online Premium (103 vs 98 answers, p=0.001). Only Micromedex online was more complete than its PDA version (p=0.008). Regarding ease of use, the Lexi-Drugs PDA database was superior to Lexi-Comp Online (p<0.001); however, Epocrates Online Premium, Epocrates Online Free, and Micromedex online were easier to use than their PDA counterparts (p<0.001). In terms of composite scores, only the online versions of Clinical Pharmacology and Micromedex demonstrated superiority over their PDA versions (p>0.01). Online and PDA drug information databases assist practitioners in improving their clinical decision-making. Lexi-Drugs performed significantly better than all of the other PDA databases evaluated. No PDA database demonstrated superiority to its online counterpart; however, the online versions of Clinical Pharmacology and Micromedex were superior to their PDA versions in answering questions.
Madrid Troconis, Cristhian Camilo; Santos-Silva, Alan Roger; Brandão, Thaís Bianca; Lopes, Marcio Ajudarte; de Goes, Mario Fernando
2017-11-01
To analyze the evidence regarding the impact of head and neck radiotherapy (HNRT) on the mechanical behavior of composite resins and adhesive systems. Searches were conducted on the PubMed, Embase, Scopus and ISI Web of Science databases using "Radiotherapy", "Composite resins" and "Adhesive systems" as keywords. Selected studies were written in English and assessed the mechanical behavior of composite resins and/or adhesive systems when the bonding procedure was conducted before and/or after a maximum radiation dose ≥50 Gy, applied under in vitro or in vivo conditions. In total, 115 studies were found but only 16 were included, of which five evaluated the effect of in vitro HNRT on microhardness, wear resistance, diametral tensile and flexural strength of composite resins, showing no significant negative effect in most reports. Regarding bond strength of adhesive systems, 11 studies were included, of which five reported no meaningful negative effect when the bonding procedure was conducted before simulated HNRT. Conversely, five studies showed that bond strength diminished when the adhesive procedure was done after in vitro radiation therapy. Only two studies about dental adhesion were conducted after in vivo radiotherapy, but the results were not conclusive. The mechanical behavior of composite resins and adhesive systems seems not to be affected when in vitro HNRT is applied after the bonding procedure. However, bond strength of adhesive systems tends to decrease when simulated radiotherapy is used immediately before the bonding procedure. Studies assessing dentin bond strength after in vivo HNRT were limited and controversial. Copyright © 2017 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
USDA-ARS?s Scientific Manuscript database
Beef nutrition is very important to the worldwide beef industry and its consumers. The objective of this study was to analyze nutrient composition of eight beef rib and plate cuts to update the nutrient data in the USDA National Nutrient Database for Standard Reference (SR). Seventy-two carcasses ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hrma, P.R.; Vienna, J.D.; Pelton, A.D.
An earlier report [92 Pel] described the development of software and thermodynamic databases for the calculation of liquidus temperatures of glasses of HWVP products containing the components SiO2-B2O3-Na2O-Li2O-CaO-MgO-Fe2O3-Al2O3-ZrO2-"others". The software package developed at that time consisted of the EQUILIB program of the F*A*C*T computer system with special input/output routines. Since then, Battelle has purchased the entire F*A*C*T computer system, and this fully replaces the earlier package. Furthermore, with the entire F*A*C*T system, additional calculations can be performed, such as calculations at fixed O2, SO2, etc. pressures, or graphing of output. In addition, the public F*A*C*T database of over 5000 gaseous species and condensed phases is now accessible. The private databases for the glass and crystalline phases were developed for Battelle by optimization of thermodynamic and phase diagram data. That is, all available data for 2- and 3-component sub-systems of the 9-component oxide system were collected, and parameters of model equations for the thermodynamic properties were found which best reproduce all the data. For representing the thermodynamic properties of the glass as a function of composition and temperature, the modified quasichemical model was used. This model was described in the earlier report [92 Pel] along with all the optimizations. With the model, it was possible to predict the thermodynamic properties of the 9-component glass, and thereby to calculate liquidus temperatures. Liquidus temperatures measured by Battelle for 123 CVS glass compositions were used to test the model and to refine the model by the addition of further parameters.
A Tephra Database With an Intelligent Correlation System, Mono-Inyo Volcanic Chain, CA
NASA Astrophysics Data System (ADS)
Bursik, M.; Rogova, G.
2004-12-01
We are assembling a web-accessible, relational database of information on past eruptions of the Mono-Inyo volcanic chain, eastern California. The PostgreSQL database structure follows the North American Data Model and CordLink. The database allows us to extract the features diagnostic of particular pyroclastic layers, as well as lava domes and flows. The features include depth in the section, layer thickness and internal stratigraphy, mineral assemblage, major and trace element composition, tephra componentry and granulometry, and radiocarbon age. Our working hypotheses are that (1) the database will prove useful for unraveling the complex recent volcanic history of the Mono-Inyo chain, and (2) that this will be aided by the use of an intelligent correlation system integrated into the database system. The Mono-Inyo chain consists of domes, craters and flows that stretch for 50 km north-south, subparallel to the Sierran range front fault system. Almost all eruptions within the chain probably occurred less than 50,000 years ago. Because of the variety of magma and eruption types, and the migration of source regions in time and space, it is nontrivial to discern patterns of behaviour. We have explored the use of multiple artificial neural networks combined within the framework of the Dempster-Shafer theory of evidence to construct a hybrid information processing system as an aid in the correlation of Mono-Inyo pyroclastic layers. It is hoped that such a system could provide information useful for discerning eruptive patterns that would otherwise be difficult to sort and categorize. In a test case on tephra layers at known sites, the intelligent correlation system was able to categorize observations correctly 96% of the time. In a test case with layers at one unknown site, using a pairwise comparison of the unknown site with the known sites, the one-to-one correlation between the unknown site and the known sites was sometimes found to be poor. Such a result could be used to aid a stratigrapher in rethinking or questioning a proposed correlation. This rethinking might not happen without the input from the intelligent system.
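The evidence-fusion step underlying such a hybrid system can be illustrated with Dempster's rule of combination; in the sketch below the candidate layer labels and the mass values assigned by two hypothetical classifiers are purely illustrative.

```python
# Minimal sketch of Dempster's rule of combination, the evidence-fusion step
# such a hybrid system relies on. Focal elements are frozensets of candidate
# tephra-layer labels; the masses below are illustrative only.
from itertools import product

def combine(m1, m2):
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2            # mass assigned to incompatible hypotheses
    norm = 1.0 - conflict                  # renormalize by the non-conflicting mass
    return {k: v / norm for k, v in combined.items()}

net_a = {frozenset({"layer1"}): 0.6, frozenset({"layer1", "layer2"}): 0.4}
net_b = {frozenset({"layer1"}): 0.5, frozenset({"layer2"}): 0.3,
         frozenset({"layer1", "layer2"}): 0.2}
print(combine(net_a, net_b))
```

A high residual conflict between the combined networks is exactly the kind of signal that could prompt a stratigrapher to question a proposed correlation.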
Rengarajan, A; Drapekin, J; Patel, A; Gyawali, C P
2016-12-01
High-resolution manometry (HRM) utilizes software tools to diagnose esophageal motor disorders. Performance of these software metrics could be affected by averaging and by software characteristics of different manufacturers. High-resolution manometry studies on 86 patients referred for antireflux surgery (61.6 ± 1.4 years, 70% F) and 20 healthy controls (27.9 ± 0.7 years, 45% F) were first subject to standard analysis (Medtronic, Duluth, GA, USA). Coordinates for each of 10 test swallows were exported and averaged to generate a composite swallow. The swallows and averaged composites were imported in ASCII file format into Manoview (Medtronic) and the Medical Measurement Systems database reporter (MMS, Dover, NH, USA), and the analyses repeated. Comparisons were made between standard and composite swallow interpretations. Correlation between the two systems was high for mean distal contractile integral (DCI, r² ≥ 0.9) but lower for integrated relaxation pressure (IRP, r² = 0.7). Excluding achalasia, six patients with outflow obstruction (mean IRP 23.2 ± 2.1 with 10-swallow average) were identified by both systems. An additional nine patients (10.5%) were identified as outflow obstruction (15 mmHg threshold) with MMS 10-swallow evaluation and four with MMS composite swallow evaluation; only one was confirmed. Ineffective esophageal motility was diagnosed by 10-swallow evaluation in 19 (22.1%) with Manoview and 20 (23.3%) with MMS. On the Manoview composite, 17 had DCI <450 mmHg/cm/s, and on the MMS composite, 21 (p ≥ 0.85 for each comparison), but these did not impact diagnostic conclusions. Comparison of 10-swallow and composite-swallow analyses demonstrates variability in software metrics between manometry systems. Our data support use of manufacturer-specific software metrics on 10-swallow sequences. © 2016 John Wiley & Sons Ltd.
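The averaging step that produces the composite swallow can be sketched as a simple element-wise mean over the ten exported pressure grids; the grid dimensions and random data below are illustrative, not the study's.

```python
# Sketch of the averaging step described in the study: ten swallows, each a
# pressure grid of (sensors x time samples), reduced to one composite swallow.
import numpy as np

rng = np.random.default_rng(0)
swallows = rng.normal(loc=30.0, scale=10.0, size=(10, 36, 500))   # mmHg, illustrative

composite = swallows.mean(axis=0)      # element-wise average over the 10 swallows
print(composite.shape)                 # (36, 500)
```

Because metrics such as DCI and IRP are nonlinear functions of the pressure field, a metric computed on the averaged grid need not equal the average of the per-swallow metrics, which is one plausible source of the discrepancies the study reports.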
NASA Astrophysics Data System (ADS)
Shukla, Adarsh
In a thermodynamic system which contains several elements, the phase relationships among the components are usually very complex. Especially, systems containing oxides are generally very difficult to investigate owing to the very high experimental temperatures and corrosive action of slags. Due to such difficulties, large inconsistencies are often observed among the available experimental data. In order to investigate and understand the complex phase relationships effectively, it is very useful to develop thermodynamic databases containing optimized model parameters giving the thermodynamic properties of all phases as functions of temperature and composition. In a thermodynamic optimization, adjustable model parameters are calculated using, simultaneously, all available thermodynamic and phase-equilibrium data in order to obtain one set of model equations as functions of temperature and composition. Thermodynamic data, such as activities, can aid in the evaluation of the phase diagrams, and information on phase equilibria can be used to deduce thermodynamic properties. Thus, it is frequently possible to resolve discrepancies in the available data. From the model equations, all the thermodynamic properties and phase diagrams can be back-calculated, and interpolations and extrapolations can be made in a thermodynamically correct manner. The data are thereby rendered self-consistent and consistent with thermodynamic principles, and the available data are distilled into a small set of model parameters, ideal for computer storage. As part of a broader research project at the Centre de Recherche en Calcul Thermochimique (CRCT), Ecole Polytechnique to develop a thermodynamic database for multicomponent oxide systems, this thesis deals with the addition of components SrO and BaO to the existing multicomponent database of the SiO2-B2O3-Al2O3-CaO-MgO system. Over the years, in collaboration with many industrial companies, a thermodynamic database for the SiO2-B2O3-Al2O3-CaO-MgO system has been built quite satisfactorily. The aim of the present work was to improve the applicability of this five component database by adding SrO and BaO to it. The databases prepared in this work will be of special importance to the glass and steel industries. In the SiO2-B2O3-Al2O3-CaO-MgO-BaO-SrO system there are 11 binary systems and 25 ternary systems which contain either BaO or SrO or both. For most of these binary systems, and for none of these ternary systems, is there a previous thermodynamic optimization available in the literature. In this thesis, thermodynamic evaluation and optimization for the 11 binary, 17 ternary and 5 quaternary BaO- and SrO-containing systems in the SiO2-B2O3-Al2O3-CaO-MgO-BaO-SrO system is presented. All these thermodynamic optimizations were performed based on the experimental data available in the literature, except for the SrO-B2O3-SiO2 system. This latter system was optimized on the basis of a few experimental data points generated in the present work together with the data from the literature. In the present work, all the calculations were performed using the FactSage™ thermochemical software. The Modified Quasichemical Model (MQM), which is capable of taking short-range ordering into account, was used for the liquid phase. All the binary systems were critically evaluated and optimized using available phase equilibrium and thermodynamic data.
The model parameters obtained as a result of this simultaneous optimization were used to represent the Gibbs energies of all phases as functions of temperature and composition. Optimized binary model parameters were used to estimate the thermodynamic properties of phases in the ternary systems. Proper “geometric” models were used for these estimations. Ternary phase diagrams were calculated and compared with available experimental data. Wherever required, ternary interaction parameters were also added. The first part of this thesis comprises a general literature review on the subject of thermodynamic modeling and experimental techniques for phase diagram determination. The next chapters include the literature review and the thermodynamic optimizations of the various systems. The last part of the thesis is the presentation of experiments performed in the present work, by quenching and EPMA, in the SrO-B2O3-SiO2 system. The experiments were designed to generate the maximum amount of information with the minimum number of experiments, using the thermodynamic optimization, based only on the data available in the literature, as a guide. These newly obtained data improved the preceding thermodynamic optimization of this ternary system, which had been based on the experimental data in the literature.
NASA Astrophysics Data System (ADS)
Fontaine, Alain; Sauvage, Bastien; Pétetin, Hervé; Auby, Antoine; Boulanger, Damien; Thouret, Valerie
2016-04-01
Since 1994, the IAGOS program (In-Service Aircraft for a Global Observing System, http://www.iagos.org) and its predecessor MOZAIC have produced in-situ measurements of the atmospheric composition during more than 46000 commercial aircraft flights. In order to help analyze these observations and further understand the processes driving their evolution, we developed a modelling tool, SOFT-IO, that quantifies their source/receptor link. We improved the methodology used by Stohl et al. (2003), based on the FLEXPART plume dispersion model, to simulate the contributions of anthropogenic and biomass burning emissions from the ECCAD database (http://eccad.aeris-data.fr) to the measured carbon monoxide mixing ratio along each IAGOS flight. Thanks to automated processes, contributions are simulated for the last 20 days before observation, separating individual contributions from the different source regions. The main goal is to supply value-added products to the IAGOS database showing the geographical origin and emission type of pollutants. Using this information, it may be possible to link trends in the atmospheric composition to changes in the transport pathways and to the evolution of emissions. This tool could be used for statistical validation as well as for inter-comparisons of emission inventories using large amounts of data, as Lagrangian models are able to bring the global scale emissions down to a smaller scale, where they can be directly compared to the in-situ observations from the IAGOS database.
NASA Astrophysics Data System (ADS)
Xu, Huixia; Zhang, Lijun; Cheng, Kaiming; Chen, Weimin; Du, Yong
2017-04-01
To establish an accurate atomic mobility database for solder alloys, a reassessment of atomic mobilities in the fcc (face centered cubic) Cu-Ag-Sn system was performed in the present work. The work entailed initial preparation of three fcc Cu-Sn diffusion couples, which were used to determine the composition-dependent interdiffusivities at 873 K, 923 K, and 973 K, to validate the literature data and provide new experimental data at low temperatures. Then, atomic mobilities in the three boundary binaries, fcc Cu-Sn, fcc Ag-Sn, and fcc Cu-Ag, were updated based on the various experimental diffusivities obtained from the literature and the present work, together with the available thermodynamic database for solder alloys. Finally, based on the large number of interdiffusivities recently measured by the present authors, atomic mobilities in the fcc Cu-Ag-Sn ternary system were carefully evaluated. A comprehensive comparison between various calculated/model-predicted diffusion properties and the experimental data was used to validate the reliability of the obtained atomic mobilities in ternary fcc Cu-Ag-Sn alloys.
Searching Across the International Space Station Databases
NASA Technical Reports Server (NTRS)
Maluf, David A.; McDermott, William J.; Smith, Ernest E.; Bell, David G.; Gurram, Mohana
2007-01-01
Data access in the enterprise generally requires us to combine data from different sources and different formats. It is thus advantageous to focus on the intersection of the knowledge across sources and domains; keeping irrelevant knowledge around only serves to make the integration more unwieldy and more complicated than necessary. A context search over multiple domains is proposed in this paper, using context-sensitive queries to support disciplined manipulation of domain knowledge resources. The objective of a context search is to provide the capability for interrogating many domain knowledge resources, which are largely semantically disjoint. The search formally supports the tasks of selecting, combining, extending, specializing, and modifying components from a diverse set of domains. This paper demonstrates a new paradigm in composition of information for enterprise applications. In particular, it discusses an approach to achieving data integration across multiple sources, in a manner that does not require heavy investment in database and middleware maintenance. This lean approach to integration leads to cost-effectiveness and scalability of data integration with an underlying schemaless object-relational database management system. This highly scalable, information-on-demand framework, called NX-Search, is an implementation of an information system built on NETMARK. NETMARK is a flexible, high-throughput open database integration framework for managing, storing, and searching unstructured or semi-structured arbitrary XML and HTML used widely at the National Aeronautics and Space Administration (NASA) and in industry.
The Protein Information Resource: an integrated public resource of functional annotation of proteins
Wu, Cathy H.; Huang, Hongzhan; Arminski, Leslie; Castro-Alvear, Jorge; Chen, Yongxing; Hu, Zhang-Zhi; Ledley, Robert S.; Lewis, Kali C.; Mewes, Hans-Werner; Orcutt, Bruce C.; Suzek, Baris E.; Tsugita, Akira; Vinayaka, C. R.; Yeh, Lai-Su L.; Zhang, Jian; Barker, Winona C.
2002-01-01
The Protein Information Resource (PIR) serves as an integrated public resource of functional annotation of protein data to support genomic/proteomic research and scientific discovery. The PIR, in collaboration with the Munich Information Center for Protein Sequences (MIPS) and the Japan International Protein Information Database (JIPID), produces the PIR-International Protein Sequence Database (PSD), the major annotated protein sequence database in the public domain, containing about 250 000 proteins. To improve protein annotation and the coverage of experimentally validated data, a bibliography submission system is developed for scientists to submit, categorize and retrieve literature information. Comprehensive protein information is available from iProClass, which includes family classification at the superfamily, domain and motif levels, structural and functional features of proteins, as well as cross-references to over 40 biological databases. To provide timely and comprehensive protein data with source attribution, we have introduced a non-redundant reference protein database, PIR-NREF. The database consists of about 800 000 proteins collected from PIR-PSD, SWISS-PROT, TrEMBL, GenPept, RefSeq and PDB, with composite protein names and literature data. To promote database interoperability, we provide XML data distribution and open database schema, and adopt common ontologies. The PIR web site (http://pir.georgetown.edu/) features data mining and sequence analysis tools for information retrieval and functional identification of proteins based on both sequence and annotation information. The PIR databases and other files are also available by FTP (ftp://nbrfa.georgetown.edu/pir_databases). PMID:11752247
Nutrient estimation from an FFQ developed for a black Zimbabwean population
Merchant, Anwar T; Dehghan, Mahshid; Chifamba, Jephat; Terera, Getrude; Yusuf, Salim
2005-01-01
Background There is little information in the literature on methods of food composition database development to calculate nutrient intake from food frequency questionnaire (FFQ) data. The aim of this study is to describe the development of an FFQ and a food composition table to calculate nutrient intake in a Black Zimbabwean population. Methods Trained interviewers collected 24-hour dietary recalls (24 hr DR) from high and low income families in urban and rural Zimbabwe. Based on these data and input from local experts we developed an FFQ, containing a list of frequently consumed foods, standard portion sizes, and categories of consumption frequency. We created a food composition table of the foods found in the FFQ so that we could compute nutrient intake. We used the USDA nutrient database as the main resource because it is relatively complete, updated, and easily accessible. To choose the food item in the USDA nutrient database that most closely matched the nutrient content of the local food we referred to a local food composition table. Results Almost all the participants ate sadza (maize porridge) at least 5 times a week, and about half had matemba (fish) and caterpillar more than once a month. Nutrient estimates obtained from the FFQ data by using the USDA and Zimbabwean food composition tables were similar for total energy intake (intraclass correlation (ICC) = 0.99) and carbohydrate (ICC = 0.99), but different for vitamin A (ICC = 0.53) and total folate (ICC = 0.68). Conclusion We have described a standardized process of FFQ and food composition database development for a Black Zimbabwean population. PMID:16351722
The U.S. Geological Survey coal quality (COALQUAL) database version 3.0
Palmer, Curtis A.; Oman, Charles L.; Park, Andy J.; Luppens, James A.
2015-12-21
Because of database size limits during the development of COALQUAL Version 1.3, many analyses of individual bench samples were merged into whole coal bed averages. The methodology for making these composite intervals was not consistent. Size limits also restricted the amount of georeferencing information and forced removal of qualifier notations such as "less than detection limit" (<) information, which can cause problems when using the data. A review of the original data sheets revealed that COALQUAL Version 2.0 was missing information that was needed for a complete understanding of a coal section. Another important database issue to resolve was the USGS "remnant moisture" problem. Prior to 1998, tests for remnant moisture (as-determined moisture in the sample at the time of analysis) were not performed on any USGS major, minor, or trace element coal analyses. Without the remnant moisture, it is impossible to convert the analyses to a usable basis (as-received, dry, etc.). Based on remnant moisture analyses of hundreds of samples of different ranks (and known residual moisture) reported after 1998, it was possible to develop a method to provide reasonable estimates of remnant moisture for older data to make it more useful in COALQUAL Version 3.0. In addition, COALQUAL Version 3.0 is improved by (1) adding qualifiers, including statistical programming to deal with the qualifiers; (2) clarifying the sample compositing problems; and (3) adding associated samples. Version 3.0 of COALQUAL also represents the first attempt to incorporate data verification by mathematically crosschecking certain analytical parameters. Finally, a new database system was designed and implemented to replace the outdated DOS program used in earlier versions of the database.
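The remnant-moisture issue matters because basis conversion requires it; a sketch of the standard conversions from an as-determined concentration to dry and as-received bases is given below, with illustrative values (the formulas are the usual coal-analysis basis conversions, not code from COALQUAL itself).

```python
# Why remnant moisture matters: standard coal-analysis basis conversions.
# Values below are illustrative only.
def to_dry_basis(as_determined, remnant_moisture_pct):
    """Convert an as-determined concentration to the dry basis."""
    return as_determined / (1.0 - remnant_moisture_pct / 100.0)

def dry_to_as_received(dry_value, as_received_moisture_pct):
    """Convert a dry-basis concentration to the as-received basis."""
    return dry_value * (1.0 - as_received_moisture_pct / 100.0)

se_ppm_ad = 2.5                                        # a trace element, as-determined
se_dry = to_dry_basis(se_ppm_ad, remnant_moisture_pct=3.0)
se_ar = dry_to_as_received(se_dry, as_received_moisture_pct=12.0)
print(round(se_dry, 2), round(se_ar, 2))
```

Without a measured or estimated remnant moisture, the first conversion cannot be performed, which is exactly the gap the Version 3.0 moisture estimates are intended to close for the pre-1998 analyses.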
Longitudinal predictors of high school completion.
Barry, Melissa; Reschly, Amy L
2012-06-01
This longitudinal study examined predictors of dropout assessed in elementary school. Student demographic data, achievement, attendance, and ratings of behavior from the Behavior Assessment System for Children were used to predict dropout and completion. Two models, which varied on student sex and race, predicted dropout at rates ranging from 75% to 88%. Model A, which included the Behavioral Symptoms Index, School Problems composite, Iowa Tests of Basic Skills battery, and teacher ratings of student work habits, best predicted female and African American dropouts. Model B, which comprised the Adaptive Skills composite, the Externalizing composite, the School Problems composite, referral for a student support team meeting, and sex, was more accurate for predicting Caucasian dropouts. Both models demonstrated the same hit rates for predicting male dropouts. Recommendations for early warning indicators and linking predictors with interventions are discussed. (PsycINFO Database Record (c) 2012 APA, all rights reserved).
The Halophile protein database.
Sharma, Naveen; Farooqi, Mohammad Samir; Chaturvedi, Krishna Kumar; Lal, Shashi Bhushan; Grover, Monendra; Rai, Anil; Pandey, Pankaj
2014-01-01
Halophilic archaea/bacteria adapt to different salt concentrations, namely extreme, moderate and low. These types of adaptation may occur as a result of modification of protein structure and other changes in different cell organelles. Thus proteins may play an important role in the adaptation of halophilic archaea/bacteria to saline conditions. The Halophile protein database (HProtDB) is a systematic attempt to document the biochemical and biophysical properties of proteins from halophilic archaea/bacteria which may be involved in the adaptation of these organisms to saline conditions. In this database, various physicochemical properties such as molecular weight, theoretical pI, amino acid composition, atomic composition, estimated half-life, instability index, aliphatic index and grand average of hydropathicity (Gravy) have been listed. These physicochemical properties play an important role in identifying the protein structure, bonding pattern and function of the specific proteins. This database is a comprehensive, manually curated, non-redundant catalogue of proteins. The database currently contains properties of 59 897 proteins extracted from 21 different strains of halophilic archaea/bacteria. The database can be accessed through the link below. Database URL: http://webapp.cabgrid.res.in/protein/ © The Author(s) 2014. Published by Oxford University Press.
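Several of the listed properties can be reproduced for an arbitrary sequence with Biopython's ProtParam module, as sketched below; the sequence is a made-up example, and the values follow Biopython's implementations rather than HProtDB's own pipeline.

```python
# Computing several of the properties HProtDB catalogues, for an arbitrary
# (made-up) sequence, using Biopython's ProtParam module.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

seq = "MDKEALEVLVKDDDGEEAKAAGLLK"          # illustrative sequence only
pa = ProteinAnalysis(seq)

print("molecular weight :", round(pa.molecular_weight(), 1))
print("theoretical pI   :", round(pa.isoelectric_point(), 2))
print("instability index:", round(pa.instability_index(), 2))
print("GRAVY            :", round(pa.gravy(), 3))
print("aa composition   :", pa.get_amino_acids_percent())
```

Acidic, low-hydropathy compositions of the kind such a calculation reveals are what make halophilic proteomes distinctive, which is why the database catalogues these particular descriptors.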
A CMC database for use in the next generation launch vehicles (rockets)
NASA Astrophysics Data System (ADS)
Mahanta, Kamala
1994-10-01
Ceramic matrix composites (CMC's) are being envisioned as the state-of-the-art material capable of handling the tough structural and thermal demands of advanced high temperature structures for programs such as the SSTO (Single Stage to Orbit), HSCT (High Speed Civil Transport), etc., as well as for the evolution of industrial heating systems. Particulate, whisker and continuous fiber ceramic matrix (CFCC) composites have been designed to provide fracture toughness to advanced ceramic materials, which have a high degree of wear resistance, hardness, stiffness, and heat and corrosion resistance but are notorious for their brittleness and sensitivity to microscopic flaws such as cracks, voids and impurities.
USDA Branded Food Products Database, Release 2
USDA-ARS?s Scientific Manuscript database
The USDA Branded Food Products Database is the ongoing result of a Public-Private Partnership (PPP), whose goal is to enhance public health and the sharing of open data by complementing the USDA National Nutrient Database for Standard Reference (SR) with nutrient composition of branded foods and pri...
Cheng, Xue Jun; McCarthy, Callum J; Wang, Tony S L; Palmeri, Thomas J; Little, Daniel R
2018-06-01
Upright faces are thought to be processed more holistically than inverted faces. In the widely used composite face paradigm, holistic processing is inferred from interference in recognition performance from a to-be-ignored face half for upright and aligned faces compared with inverted or misaligned faces. We sought to characterize the nature of holistic processing in composite faces in computational terms. We use logical-rule models (Fifić, Little, & Nosofsky, 2010) and Systems Factorial Technology (Townsend & Nozawa, 1995) to examine whether composite faces are processed through pooling top and bottom face halves into a single processing channel (coactive processing), which is one common mechanistic definition of holistic processing. By specifically operationalizing holistic processing as the pooling of features into a single decision process in our task, we are able to distinguish it from other processing models that may underlie composite face processing. For instance, a failure of selective attention might result even when top and bottom components of composite faces are processed in serial or in parallel without processing the entire face coactively. Our results show that performance is best explained by a mixture of serial and parallel processing architectures across all 4 upright and inverted, aligned and misaligned face conditions. The results indicate multichannel, featural processing of composite faces in a manner inconsistent with the notion of coactivity. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Haneda, Kiyofumi; Umeda, Tokuo; Koyama, Tadashi; Harauchi, Hajime; Inamura, Kiyonari
2002-01-01
The target of our study is to establish a methodology for analyzing the level of security requirements, for identifying suitable security measures, and for optimizing the distribution of security across every portion of medical practice. Quantitative expression is introduced wherever possible, so that security procedures can be easily followed up and security outcomes or results easily evaluated. System analysis by fault tree analysis (FTA) clarified that subdividing system elements in detail contributes to a much more accurate analysis. Such subdivided composition factors depend strongly on the behavior of staff, the interactive terminal devices, the kinds of service, and the network routes. In conclusion, we established methods for analyzing the level of security requirements of each medical information system using FTA, identifying the basic events for each composition factor and the combinations of basic events. Methods for identifying suitable security measures were also found, namely the risk factors for each basic event, the number of elements for each composition factor, and candidate security measure elements. A method to optimize the security measures for each medical information system was proposed: the optimum distribution of risk factors in terms of basic events was worked out, and comparison between medical information systems became possible.
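The quantitative step that FTA supports can be sketched as propagating basic-event probabilities through AND/OR gates to a top event; the events and probabilities below are illustrative and are not taken from the study.

```python
# Sketch of the quantitative step fault tree analysis supports: combining
# basic-event probabilities through AND/OR gates (independence assumed).
# The events and probabilities below are illustrative, not from the study.
from functools import reduce

def gate_or(probs):          # top/intermediate event occurs if any input occurs
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

def gate_and(probs):         # event occurs only if all inputs occur together
    return reduce(lambda acc, p: acc * p, probs, 1.0)

unauthorised_access = gate_and([0.05,     # weak password in use
                                0.10])    # terminal left logged in
data_leak = gate_or([unauthorised_access,
                     0.01,                # mis-routed network traffic
                     0.02])               # staff handling error
print(round(data_leak, 4))
```

Comparing how the top-event probability shifts as individual basic-event probabilities are reduced is one simple way to express the "optimum distribution" of security measures the study aims for.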
Development of an underwater weighing system for determining body composition.
Patterson, P E; Distel, M
1997-01-01
A system was developed to reduce some of the difficulties associated with hydrostatic (underwater) weighing, specifically the need for complete exhalation and the subjective approximation of weighing scale measurements. The exhalation portion of the weighing protocol is particularly difficult for many disabled individuals and has contributed to the lack of available body composition information for this population. The components of our system include a computer system, load cell, spirometer, breathing tube, logic and signal conditioning circuitry specially constructed for this system, and a software package developed for this project. In a preliminary test, the body fat percentages of fourteen subjects (six males and eight females, ages 21-32 years) were determined both with the standard method and with our system. A correlation of r = 0.967 was found between the two methods, with our system's precision ranging from 1.0 to 1.3 body fat percentage points. The system could be used, for example, in developing a database for monitoring an individual's fitness or for making comparisons between groups (such as athlete to non-athlete).
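The computation such a system automates can be sketched with the standard hydrostatic-weighing relations and the Siri equation; the water-density value, residual-volume handling and gas allowance below are common textbook assumptions rather than the authors' exact protocol, and all inputs are illustrative.

```python
# Sketch of the standard hydrostatic-weighing computation such a system
# automates: body density, then percent body fat via the Siri equation.
# Water density, residual volume and the 0.1 L gastrointestinal-gas allowance
# are common assumptions, not the paper's exact protocol; inputs are illustrative.
def body_density(mass_air_kg, mass_water_kg, water_density=0.9959,
                 residual_volume_l=1.2, gi_gas_l=0.1):
    body_volume_l = (mass_air_kg - mass_water_kg) / water_density \
                    - (residual_volume_l + gi_gas_l)
    return mass_air_kg / body_volume_l          # kg per litre

def siri_percent_fat(density):
    return 495.0 / density - 450.0

d = body_density(mass_air_kg=70.0, mass_water_kg=3.2)
print(round(d, 4), round(siri_percent_fat(d), 1))
```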
NASA Technical Reports Server (NTRS)
Stanley, D. C.; Huff, T. L.
2003-01-01
The purpose of this research effort was to: (1) provide a concise and well-defined property profile of current and developing composite materials using thermal and chemical characterization techniques and (2) optimize analytical testing requirements of materials. This effort applied a diverse array of methodologies to ascertain composite material properties. Often, a single method or technique will provide useful, but nonetheless incomplete, information on material composition and/or behavior. To more completely understand and predict material properties, a broad-based analytical approach is required. By developing a database of information comprised of both thermal and chemical properties, material behavior under varying conditions may be better understood. This is even more important in the aerospace community, where new composite materials and those in the development stage have little reference data. For example, Fourier transform infrared (FTIR) spectroscopy spectral databases available for identification of vapor phase spectra, such as those generated during experiments, generally refer to well-defined chemical compounds. Because this method renders a unique thermal decomposition spectral pattern, even larger, more diverse databases, such as those found in solid and liquid phase FTIR spectroscopy libraries, cannot be used. By combining this and other available methodologies, a database specifically for new materials and materials being developed at Marshall Space Flight Center can be generated. In addition, characterizing materials using this approach will be extremely useful in the verification of materials and identification of anomalies in NASA-wide investigations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gregory Corman; Krishan Luthra
This report covers work performed under the Continuous Fiber Ceramic Composites (CFCC) program by GE Global Research and its partners from 1994 through 2005. The processing of prepreg-derived, melt infiltrated (MI) composite systems based on monofilament and multifilament tow SiC fibers is described. Extensive mechanical and environmental exposure characterizations were performed on these systems, as well as on competing Ceramic Matrix Composite (CMC) systems. Although current monofilament SiC fibers have inherent oxidative stability limitations due to their carbon surface coatings, the MI CMC system based on multifilament tow (Hi-Nicalon) proved to have excellent mechanical, thermal and time-dependent properties. The materials database generated from the material testing was used to design turbine hot gas path components, namely the shroud and combustor liner, utilizing the CMC materials. The feasibility of using such MI CMC materials in gas turbine engines was demonstrated via combustion rig testing of turbine shrouds and combustor liners, and through field engine tests of shrouds in a 2MW engine for >1000 hours. A unique combustion test facility was also developed that allowed coupons of the CMC materials to be exposed to high-pressure, high-velocity combustion gas environments for times up to approximately 4000 hours.
ERIC Educational Resources Information Center
Codding, Robin S.; Petscher, Yaacov; Truckenmiller, Adrea
2015-01-01
A paucity of research has examined the utility of curriculum-based measurement (CBM) for data-based decision making at the secondary level. As schools move to multitiered systems of service delivery, it is conceivable that multiple screening measures will be used that address various academic subject areas. The value of including different CBM…
PrionHome: A Database of Prions and Other Sequences Relevant to Prion Phenomena
Harbi, Djamel; Parthiban, Marimuthu; Gendoo, Deena M. A.; Ehsani, Sepehr; Kumar, Manish; Schmitt-Ulms, Gerold; Sowdhamini, Ramanathan; Harrison, Paul M.
2012-01-01
Prions are units of propagation of an altered state of a protein or proteins; prions can propagate from organism to organism, through cooption of other protein copies. Prions contain no necessary nucleic acids, and are important both as pathogenic agents and as a potential force in epigenetic phenomena. The original prions were derived from a misfolded form of the mammalian Prion Protein PrP. Infection by these prions causes neurodegenerative diseases. Other prions cause non-Mendelian inheritance in budding yeast, and sometimes act as diseases of yeast. We report the bioinformatic construction of the PrionHome, a database of >2000 prion-related sequences. The data was collated from various public and private resources and filtered for redundancy. The data was then processed according to a transparent classification system of prionogenic sequences (i.e., sequences that can make prions), prionoids (i.e., proteins that propagate like prions between individual cells), and other prion-related phenomena. There are eight PrionHome classifications for sequences. The first four classifications are derived from experimental observations: prionogenic sequences, prionoids, other prion-related phenomena, and prion interactors. The second four classifications are derived from sequence analysis: orthologs, paralogs, pseudogenes, and candidate-prionogenic sequences. Database entries list: supporting information for PrionHome classifications, prion-determinant areas (where relevant), and disordered and compositionally-biased regions. Also included are literature references for the PrionHome classifications, transcripts and genomic coordinates, and structural data (including comparative models made for the PrionHome from manually curated alignments). We provide database usage examples for both vertebrate and fungal prion contexts. Using the database data, we have performed a detailed analysis of the compositional biases in known budding-yeast prionogenic sequences, showing that the only abundant bias pattern is for asparagine bias with subsidiary serine bias. We anticipate that this database will be a useful experimental aid and reference resource. It is freely available at: http://libaio.biol.mcgill.ca/prion. PMID:22363733
NASA Astrophysics Data System (ADS)
Boulanger, Damien; Gautron, Benoit; Thouret, Valérie; Fontaine, Alain
2016-04-01
IAGOS (In-service Aircraft for a Global Observing System) is a European Research Infrastructure which aims at the provision of long-term, regular and spatially resolved in situ observations of the atmospheric composition. IAGOS observation systems are deployed on a fleet of commercial aircraft. The IAGOS database is an essential part of the global atmospheric monitoring network. It contains IAGOS-core data and IAGOS-CARIBIC (Civil Aircraft for the Regular Investigation of the Atmosphere Based on an Instrument Container) data. The IAGOS Database Portal (http://www.iagos.fr, damien.boulanger@obs-mip.fr) is part of the French atmospheric chemistry data center AERIS (http://www.aeris-data.fr). The new IAGOS Database Portal was released in December 2015. The main improvement is the implementation of interoperability with international portals and other databases in order to improve IAGOS data discovery. In the frame of the IGAS project (IAGOS for the Copernicus Atmospheric Service), a data network has been set up. It is composed of three data centers: the IAGOS database in Toulouse; the HALO research aircraft database at DLR (https://halo-db.pa.op.dlr.de); and the CAMS data center in Jülich (http://join.iek.fz-juelich.de). The CAMS (Copernicus Atmospheric Monitoring Service) project is a prominent user of the IGAS data network. The new portal provides improved and new services such as data download in NetCDF or NASA Ames formats, plotting tools (maps, time series, vertical profiles, etc.) and user management. Added-value products are available on the portal: back trajectories, origin of air masses, co-location with satellite data, etc. The link with the CAMS data center, through JOIN (Jülich OWS Interface), makes it possible to combine model outputs with IAGOS data for inter-comparison. Finally, IAGOS metadata have been standardized (ISO 19115) and now provide complete information about data traceability and quality.
NASA Astrophysics Data System (ADS)
Cardellini, C.; Chiodini, G.; Frigeri, A.; Bagnato, E.; Aiuppa, A.; McCormick, B.
2013-12-01
The data on volcanic and non-volcanic gas emissions available online are, as of today, incomplete and, most importantly, fragmentary. Hence, there is a need for common frameworks to aggregate available data, in order to characterize and quantify the phenomena at various spatial and temporal scales. Building on the Googas experience we are now extending its capability, particularly on the user side, by developing a new web environment for collecting and publishing data. We have started to create a new and detailed web database (MAGA: MApping GAs emissions) for deep carbon degassing in the Mediterranean area. This project is part of the Deep Earth Carbon Degassing (DECADE) research initiative, launched in 2012 by the Deep Carbon Observatory (DCO) to improve the global budget of endogenous carbon from volcanoes. The MAGA database is planned to complement and integrate the work in progress within DECADE in developing the CARD (Carbon Degassing) database. The MAGA database will allow researchers to insert data interactively and dynamically into a spatially referenced relational database management system, as well as to extract data. MAGA kicked off with the database set up and a complete literature survey of publications on volcanic gas fluxes, including data on active crater degassing, diffuse soil degassing and fumaroles both from dormant closed-conduit volcanoes (e.g., Vulcano, Phlegrean Fields, Santorini, Nysiros, Teide, etc.) and open-vent volcanoes (e.g., Etna, Stromboli, etc.) in the Mediterranean area and Azores. For each geo-located gas emission site, the database holds images and descriptions of the site and of the emission type (e.g., diffuse emission, plume, fumarole, etc.), gas chemical-isotopic composition (when available), gas temperature and gas flux magnitude. Gas sampling, analysis and flux measurement methods are also reported, together with references and contacts to researchers expert in the site. Data can be accessed on the network from a web interface or as a data-driven web service, where software clients can request data directly from the database. This way Geographical Information Systems (GIS) and Virtual Globes (e.g., Google Earth) can easily access the database, and data can be exchanged with other databases. In detail, the database now includes: i) more than 1000 flux data points on volcanic plume degassing from Etna (4 summit craters and bulk degassing) and Stromboli volcanoes, with time-averaged CO2 fluxes of ~ 18000 and 766 t/d, respectively; ii) data from ~ 30 sites of diffuse soil degassing from Neapolitan volcanoes, Azores, Canary, Etna, Stromboli, and Vulcano Island, with a wide range of CO2 fluxes (from less than 1 to 1500 t/d); and iii) several data on fumarolic emissions (~ 7 sites) with CO2 fluxes up to 1340 t/day (i.e., Stromboli). When available, time series of compositional data have been archived in the database (e.g., for Campi Flegrei fumaroles). We believe the MAGA database is an important starting point for developing a large-scale, expandable database aimed to excite, inspire, and encourage participation among researchers. In addition, the possibility to archive location and qualitative information for gas emission sites not yet investigated could stimulate the scientific community toward future research and will provide an indication of the current uncertainty in global estimates of deep carbon fluxes.
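To illustrate the kind of spatially referenced relational storage and querying described above, a minimal Python/SQLite sketch follows; the table layout and column names are illustrative assumptions, not the actual MAGA schema, and only the Etna and Stromboli plume fluxes quoted in the abstract are reused.

```python
# Minimal sketch of a spatially referenced gas-emission table, assuming
# illustrative column names (not the actual MAGA schema).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE emission_site (
        site_id       INTEGER PRIMARY KEY,
        name          TEXT,
        latitude      REAL,     -- decimal degrees, WGS84 assumed
        longitude     REAL,
        emission_type TEXT,     -- e.g. 'plume', 'diffuse soil', 'fumarole'
        co2_flux_t_d  REAL,     -- time-averaged CO2 flux, t/day
        method        TEXT,
        reference     TEXT
    )
""")
conn.executemany(
    "INSERT INTO emission_site VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
    [
        (1, "Etna summit craters", 37.751, 14.994, "plume", 18000.0,
         "plume traverse", "literature survey"),
        (2, "Stromboli", 38.793, 15.211, "plume", 766.0,
         "plume traverse", "literature survey"),
    ],
)
# Example query: all sites with CO2 flux above 1000 t/day
for row in conn.execute(
        "SELECT name, co2_flux_t_d FROM emission_site "
        "WHERE co2_flux_t_d > 1000 ORDER BY co2_flux_t_d DESC"):
    print(row)
```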
Visibiome: an efficient microbiome search engine based on a scalable, distributed architecture.
Azman, Syafiq Kamarul; Anwar, Muhammad Zohaib; Henschel, Andreas
2017-07-24
Given the current influx of 16S rRNA profiles of microbiota samples, it is conceivable that large amounts of them will eventually be available for search, comparison and contextualization with respect to novel samples. This process facilitates the identification of similar compositional features in microbiota elsewhere and therefore can help to understand driving factors for microbial community assembly. We present Visibiome, a microbiome search engine that can perform exhaustive, phylogeny based similarity search and contextualization of user-provided samples against a comprehensive dataset of 16S rRNA profiles from diverse environments, while tackling several computational challenges. In order to scale to high demands, we developed a distributed system that combines web framework technology, task queueing and scheduling, cloud computing and a dedicated database server. To further ensure speed and efficiency, we have deployed Nearest Neighbor search algorithms, capable of sublinear searches in high-dimensional metric spaces, in combination with an optimized Earth Mover's Distance based implementation of weighted UniFrac. The search also incorporates pairwise (adaptive) rarefaction and, optionally, 16S rRNA copy number correction. The result of a query microbiome sample is the contextualization against a comprehensive database of microbiome samples from a diverse range of environments, visualized through a rich set of interactive figures and diagrams, including barchart-based compositional comparisons and ranking of the closest matches in the database. Visibiome is a convenient, scalable and efficient framework to search microbiomes against a comprehensive database of environmental samples. The search engine leverages a popular but computationally expensive, phylogeny based distance metric, while providing numerous advantages over the current state of the art tool.
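A minimal sketch of the sublinear nearest-neighbour search idea described above is given below; it substitutes a plain Manhattan metric and random composition vectors for the Earth Mover's Distance based weighted UniFrac used by Visibiome, so it illustrates the search strategy rather than the actual engine.

```python
# Minimal sketch of sublinear nearest-neighbour search over microbiome
# composition vectors. A plain Manhattan metric stands in for the
# phylogeny-aware distance used by the real engine.
import numpy as np
from sklearn.neighbors import BallTree

rng = np.random.default_rng(0)
# Reference "database": 5000 samples x 200 taxa, rows normalised to relative abundance
reference = rng.random((5000, 200))
reference /= reference.sum(axis=1, keepdims=True)

tree = BallTree(reference, metric="manhattan")

# Query sample, normalised the same way
query = rng.random((1, 200))
query /= query.sum()

dist, idx = tree.query(query, k=5)          # five closest reference samples
print("closest sample indices:", idx[0])
print("distances:", np.round(dist[0], 3))
```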
WikiPEATia - a web based platform for assembling peatland data through ‘crowd sourcing’
NASA Astrophysics Data System (ADS)
Wisser, D.; Glidden, S.; Fieseher, C.; Treat, C. C.; Routhier, M.; Frolking, S. E.
2009-12-01
The Earth System Science community is realizing that peatlands are an important and unique terrestrial ecosystem that has not yet been well-integrated into large-scale earth system analyses. A major hurdle is the lack of accessible, geospatial data on peatland distribution, coupled with data on peatland properties (e.g., vegetation composition, peat depth, basal dates, soil chemistry, peatland class) at the global scale. These data, however, are available at the local scale. Although a comprehensive global database on peatlands probably lags similar data on more economically important ecosystems such as forests, grasslands, and croplands, a large amount of field data has been collected over the past several decades. A few efforts have been made to map peatlands at large scales, but existing data have not been assembled into a single geospatial database that is publicly accessible or do not depict data with the level of detail that is needed in the Earth System Science community. A global peatland database would contribute to advances in a number of research fields such as hydrology, vegetation and ecosystem modeling, permafrost modeling, and earth system modeling. We present a Web 2.0 approach that uses state-of-the-art webserver and innovative online mapping technologies and is designed to create such a global database through ‘crowd-sourcing’. Primary functions of the online system include form-driven textual user input of peatland research metadata, spatial data input of peatland areas via a mapping interface, database editing and querying capabilities, as well as advanced visualization and data analysis tools. WikiPEATia provides an integrated information technology platform for assembling, integrating, and posting peatland-related geospatial datasets, and it facilitates and encourages research community involvement. A successful effort will make existing peatland data much more useful to the research community, and will help to identify significant data gaps.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matuszak, M; Anderson, C; Lee, C
Purpose: With electronic medical records, patient information for the treatment planning process has become disseminated across multiple applications with limited quality control and many associated failure modes. We present the development of a single application with a centralized database to manage the planning process. Methods: The system was designed to replace current functionalities of (i) static directives representing the physician intent for the prescription and planning goals, localization information for delivery, and other information, (ii) planning objective reports, (iii) localization and image guidance documents and (iv) the official radiation therapy prescription in the medical record. Using the Eclipse Scripting Application Programming Interface, a plug-in script with an associated domain-specific SQL Server database was created to manage the information in (i)–(iv). The system’s user interface and database were designed by a team of physicians, clinical physicists, database experts, and software engineers to ensure usability and robustness for clinical use. Results: The resulting system has been fully integrated within the TPS via a custom script and database. Planning scenario templates, version control, approvals, and logic-based quality control allow this system to fully track and document the planning process as well as physician approval of tradeoffs while improving the consistency of the data. Multiple plans and prescriptions are supported along with non-traditional dose objectives and evaluation such as biologically corrected models, composite dose limits, and management of localization goals. User-specific custom views were developed for the attending physician review, physicist plan checks, treating therapists, and peer review in chart rounds. Conclusion: A method was developed to maintain cohesive information throughout the planning process within one integrated system by using a custom treatment planning management application that interfaces directly with the TPS. Future work includes quantifying the improvements in quality, safety and efficiency that are possible with the routine clinical use of this system. Supported in part by NIH-P01-CA-059827.
DOE Office of Scientific and Technical Information (OSTI.GOV)
MANDELL, JOHN F.; SAMBORSKY, DANIEL D.; CAIRNS, DOUGLAS
This report presents the major findings of the Montana State University Composite Materials Fatigue Program from 1997 to 2001, and is intended to be used in conjunction with the DOE/MSU Composite Materials Fatigue Database. Additions of greatest interest to the database in this time period include environmental and time under load effects for various resin systems; large tow carbon fiber laminates and glass/carbon hybrids; new reinforcement architectures varying from large strands to prepreg with well-dispersed fibers; spectrum loading and cumulative damage laws; giga-cycle testing of strands; tough resins for improved structural integrity; static and fatigue data for interply delamination; and design knockdown factors due to flaws and structural details as well as time under load and environmental conditions. The origins of a transition to increased tensile fatigue sensitivity with increasing fiber content are explored in detail for typical stranded reinforcing fabrics. The second focus of the report is on structural details which are prone to delamination failure, including ply terminations, skin-stiffener intersections, and sandwich panel terminations. Finite element based methodologies for predicting delamination initiation and growth in structural details are developed and validated, and simplified design recommendations are presented.
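As an illustration of the cumulative damage laws mentioned above, the following minimal sketch applies the classical linear (Palmgren-Miner) damage sum to a block-loading spectrum; the S-N constants and the spectrum are assumed placeholder values, not results from the DOE/MSU database, whose own damage laws may be nonlinear.

```python
# Minimal sketch of a linear (Palmgren-Miner) cumulative damage sum for a
# load spectrum, assuming a power-law S-N curve N = A * S**(-m).
# The constants A, m and the spectrum below are illustrative only.

def cycles_to_failure(stress_mpa, A=1.0e27, m=10.0):
    """Allowable cycles at a given stress amplitude from an assumed S-N law."""
    return A * stress_mpa ** (-m)

def miner_damage(spectrum):
    """spectrum: list of (stress amplitude [MPa], applied cycles) blocks."""
    return sum(n / cycles_to_failure(s) for s, n in spectrum)

# Example block-loading spectrum
spectrum = [(100.0, 2.0e5), (150.0, 1.0e4), (200.0, 5.0e2)]
D = miner_damage(spectrum)
print(f"Miner damage sum D = {D:.3f}  (failure predicted when D >= 1)")
```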
Creation of Norms for the Purpose of Global Talent Management
ERIC Educational Resources Information Center
Hedricks, Cynthia A.; Robie, Chet; Harnisher, John V.
2008-01-01
Personality scores were used to construct three databases of global norms. The composition of the three databases varied according to percentage of cases by global region, occupational group, applicant status, and gender of the job candidate. Comparison of personality scores across the three norms databases revealed that the magnitude of the…
Padliya, Neerav D; Garrett, Wesley M; Campbell, Kimberly B; Tabb, David L; Cooper, Bret
2007-11-01
LC-MS/MS has demonstrated potential for detecting plant pathogens. Unlike PCR or ELISA, LC-MS/MS does not require pathogen-specific reagents for the detection of pathogen-specific proteins and peptides. However, the MS/MS approach we and others have explored does require a protein sequence reference database and database-search software to interpret tandem mass spectra. To evaluate the limitations of database composition on pathogen identification, we analyzed proteins from cultured Ustilago maydis, Phytophthora sojae, Fusarium graminearum, and Rhizoctonia solani by LC-MS/MS. When the search database did not contain sequences for a target pathogen, or contained sequences to related pathogens, target pathogen spectra were reliably matched to protein sequences from nontarget organisms, giving an illusion that proteins from nontarget organisms were identified. Our analysis demonstrates that when database-search software is used as part of the identification process, a paradox exists whereby additional sequences needed to detect a wide variety of possible organisms may lead to more cross-species protein matches and misidentification of pathogens.
NASA Astrophysics Data System (ADS)
Hidayat, Taufiq; Shishin, Denis; Decterov, Sergei A.; Hayes, Peter C.; Jak, Evgueni
2017-01-01
Uncertainty in the metal price and competition between producers mean that the daily operation of a smelter needs to target high recovery of valuable elements at low operating cost. Options for the improvement of the plant operation can be examined and decision making can be informed based on accurate information from laboratory experimentation coupled with predictions using advanced thermodynamic models. Integrated high-temperature experimental and thermodynamic modelling research on phase equilibria and thermodynamics of copper-containing systems has been undertaken at the Pyrometallurgy Innovation Centre (PYROSEARCH). The experimental phase equilibria studies involve high-temperature equilibration, rapid quenching and direct measurement of phase compositions using electron probe X-ray microanalysis (EPMA). The thermodynamic modelling deals with the development of an accurate thermodynamic database built through critical evaluation of experimental data, selection of solution models, and optimization of model parameters. The database covers the Al-Ca-Cu-Fe-Mg-O-S-Si chemical system. The gas, slag, matte, liquid and solid metal phases, spinel solid solution as well as numerous solid oxide and sulphide phases are included. The database works within the FactSage software environment. Examples of phase equilibria data and thermodynamic models of selected systems, as well as possible implementation of the research outcomes in selected copper-making processes, are presented.
NASA Technical Reports Server (NTRS)
Parsons, David S.; Ordway, David; Johnson, Kenneth
2013-01-01
This experimental study seeks to quantify the impact various composite parameters have on the structural response of a composite structure in a pyroshock environment. The prediction of an aerospace structure's response to pyroshock induced loading is largely dependent on empirical databases created from collections of development and flight test data. While there is significant structural response data due to pyroshock induced loading for metallic structures, there is much less data available for composite structures. One challenge of developing a composite pyroshock response database as well as empirical prediction methods for composite structures is the large number of parameters associated with composite materials. This experimental study uses data from a test series planned using design of experiments (DOE) methods. Statistical analysis methods are then used to identify which composite material parameters most greatly influence a flat composite panel's structural response to pyroshock induced loading. The parameters considered are panel thickness, type of ply, ply orientation, and pyroshock level induced into the panel. The results of this test will aid in future large scale testing by eliminating insignificant parameters as well as aid in the development of empirical scaling methods for composite structures' response to pyroshock induced loading.
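A minimal sketch of a factorial test matrix over the four parameters named in the abstract is shown below; the factor levels are assumed for illustration, and the actual test series may have used a fractional or otherwise reduced design.

```python
# Minimal sketch of a full-factorial test matrix over the four composite
# parameters named in the study. The specific levels are illustrative
# assumptions, not the levels used in the actual test series.
from itertools import product

panel_thickness = ["thin", "thick"]
ply_type = ["unidirectional tape", "woven fabric"]
ply_orientation = ["[0/90]s", "[+45/-45]s"]
pyroshock_level = ["low", "high"]

runs = list(product(panel_thickness, ply_type, ply_orientation, pyroshock_level))
for i, (t, ply, orient, level) in enumerate(runs, start=1):
    print(f"run {i:2d}: thickness={t}, ply={ply}, orientation={orient}, shock={level}")
print(f"{len(runs)} factorial runs")  # 2**4 = 16
```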
NASA Astrophysics Data System (ADS)
Powell, C. J.; Jablonski, A.; Werner, W. S. M.; Smekal, W.
2005-01-01
We describe two NIST databases that can be used to characterize thin films from Auger electron spectroscopy (AES) and X-ray photoelectron spectroscopy (XPS) measurements. First, the NIST Electron Effective-Attenuation-Length Database provides values of effective attenuation lengths (EALs) for user-specified materials and measurement conditions. The EALs differ from the corresponding inelastic mean free paths on account of elastic-scattering of the signal electrons. The database supplies "practical" EALs that can be used to determine overlayer-film thicknesses. Practical EALs are plotted as a function of film thickness, and an average value is shown for a user-selected thickness. The average practical EAL can be utilized as the "lambda parameter" to obtain film thicknesses from simple equations in which the effects of elastic-scattering are neglected. A single average practical EAL can generally be employed for a useful range of film thicknesses and for electron emission angles of up to about 60°. For larger emission angles, the practical EAL should be found for the particular conditions. Second, we describe a new NIST database for the Simulation of Electron Spectra for Surface Analysis (SESSA) to be released in 2004. This database provides data for many parameters needed in quantitative AES and XPS (e.g., excitation cross-sections, electron-scattering cross-sections, lineshapes, fluorescence yields, and backscattering factors). Relevant data for a user-specified experiment are automatically retrieved by a small expert system. In addition, Auger electron and photoelectron spectra can be simulated for layered samples. The simulated spectra, for layer compositions and thicknesses specified by the user, can be compared with measured spectra. The layer compositions and thicknesses can then be adjusted to find maximum consistency between simulated and measured spectra, and thus, provide more detailed characterizations of multilayer thin-film materials. SESSA can also provide practical EALs, and we compare values provided by the NIST EAL database and SESSA for hafnium dioxide. Differences of up to 10% were found for film thicknesses less than 20 Å due to the use of different physical models in each database.
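The simple overlayer-thickness relation alluded to above, with the average practical EAL as the "lambda parameter" and elastic scattering neglected, can be sketched as follows; the intensities, EAL value and emission angle are assumed example values, not database output.

```python
# Minimal sketch of the simple overlayer-thickness equation in which the
# average practical EAL plays the role of the "lambda parameter" and
# elastic scattering is neglected: I_s = I_s0 * exp(-t / (L * cos(theta))).
# The numbers below are illustrative assumptions, not database values.
import math

def film_thickness(I_substrate, I_substrate_clean, eal_nm, emission_angle_deg):
    """Overlayer thickness (nm) from attenuation of the substrate signal."""
    return eal_nm * math.cos(math.radians(emission_angle_deg)) * \
        math.log(I_substrate_clean / I_substrate)

# Example: substrate peak attenuated to 30 % of its clean-surface intensity,
# average practical EAL of 2.0 nm, electrons collected at 45 degrees.
t = film_thickness(I_substrate=0.30, I_substrate_clean=1.00,
                   eal_nm=2.0, emission_angle_deg=45.0)
print(f"estimated film thickness: {t:.2f} nm")
```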
Dionysopoulos, Dimitrios
2016-01-01
This study aimed to systematically review the literature for the effect of digluconate chlorhexidine (CHX) on bond strength between dental adhesive systems and dentin of composite restorations. The electronic databases that were searched to identify manuscripts for inclusion were Medline via PubMed and the Google search engine. The search strategies were computer search of the database and review of reference lists of the related articles. Search words/terms were as follows: (digluconate chlorhexidine*) AND (dentin* OR adhesive system* OR bond strength*). Bond strength reduction after CHX treatments varied among the studies, ranging from 0 to 84.9%. In most of the studies, pretreatment with CHX exhibited lower bond strength reduction than the control experimental groups. Researchers who previously investigated the effect of CHX on the bond strength of dental adhesive systems on dentin have reported contradictory results, which may be attributed to different experimental methods, different designs of the experiments, and different materials investigated. Further investigations, in particular clinical studies, would be necessary to clarify the effect of CHX on the longevity of dentin bonds.
A review on mode-I interlaminar fracture toughness of fibre reinforced composites
NASA Astrophysics Data System (ADS)
Nasuha, N.; Azmi, A. I.; Tan, C. L.
2017-10-01
Composite materials have grown rapidly in use over the years owing to their unique properties in comparison with metals. Recently, there has been growing interest in studying ways to reduce delamination failure, which is the primary challenge in laminated fibre composites. This failure can degrade the strength of composite materials and hence their function. In this review, a database search was performed using the keywords “interlaminar fracture toughness”, “double cantilever beam”, “delamination resistance” and “Mode-I GIC”. The searches were performed on Google Scholar, Scopus and Web of Science with further cross-referencing with other databases. The most relevant studies were selected for review and referencing by the authors. This review paper gives a brief explanation of the Mode-I interlaminar fracture toughness of composite materials. This fracture mode is the most commonly studied mode of delamination failure.
NASA Technical Reports Server (NTRS)
Bao, Han P.
1995-01-01
Fabricating primary aircraft and spacecraft structures using advanced composite materials entails both benefits and risks. The benefits come from much improved strength-to-weight ratios and stiffness-to-weight ratios, potential for lower part count, ability to tailor properties, chemical and solvent resistance, and superior thermal properties. On the other hand, the risks involved include high material costs, lack of processing experience, expensive labor, poor reproducibility, high toxicity for some composites, and a variety of space-induced risks. The purpose of this project is to generate a manufacturing database for a selected number of materials with potential for space applications, and to rely on this database to develop quantitative approaches to screen candidate materials and processes for space applications on the basis of their manufacturing risks, including costs. So far, the following materials have been included in the database: epoxies, polycyanates, bismaleimides, PMR-15, polyphenylene sulfides, polyetherimides, polyetheretherketone, and aluminum lithium. The first four materials are thermoset composites; the next three are thermoplastic composites; and the last one is a metal. The emphasis of this database is on factors affecting manufacturing such as cost of raw material; handling aspects, which include working life and shelf life of resins; process temperature; chemical/solvent resistance; moisture resistance; damage tolerance; toxicity; outgassing; thermal cycling; void content; nature or type of process; associated tooling; and in-process quality assurance. Based on industry experience and published literature, a relative ranking was established for each of the factors affecting manufacturing as listed above. Potential applications of this database include the determination of a delta cost factor for specific structures with a given process plan, and a general methodology to screen materials and processes for incorporation into the current conceptual design optimization of future spacecraft as coordinated by the Vehicle Analysis Branch, where this research is being conducted.
A Non-Arrhenian Viscosity Model for Natural Silicate Melts with Applications to Volcanology
NASA Astrophysics Data System (ADS)
Russell, J. K.; Giordano, D.; Dingwell, D. B.
2005-12-01
Silicate melt viscosity is the most important physical property in volcanic systems. It governs styles and rates of flow, velocity distributions in flowing magma, rates of vesiculation, and, ultimately, sets limits on coherent (vs. fragmented or disrupted) flow. The prediction of melt viscosity over the range of conditions found on terrestrial planets remains a challenge. However, the extraordinary increase in number and quality of published measurements of melt viscosity suggests the possibility of new models. Here we review the attributes of previous models for silicate melt viscosity and then present a new predictive model for natural silicate melts. The importance of silicate melt viscosity was recognized early [1] and culminated in 2 models for predicting silicate melt viscosity [2,3]. These models used an Arrhenian T-dependence; they were constrained by a limited experimental database dominated by high-T measurements. Subsequent models have aimed to: i) extend the compositional range of Arrhenian T-dependent models [4,5]; ii) develop non-Arrhenian models for limited ranges of composition [6,7,8]; iii) develop new strategies for modelling the composition and T-dependence of viscosity [9,10,11]; and, finally, to create chemical models for the non-Arrhenian T-dependence of natural melts [12]. We present a multicomponent model for the compositional and T dependence of silicate melt viscosity based on data spanning a wide range of anhydrous melt compositions. The experimental data include micropenetration and concentric cylinder viscometry measurements covering a viscosity range of 10^-1 to 10^12 Pa s and a T-range from 700 to 1650°C. These published data provide a high-quality database comprising ~ 800 experimental data on 44 well-characterized melt compositions. Our model uses the Adam-Gibbs equation to capture T-dependence: log η = A + B/[T · log (T/C)], where A, B, and C are adjustable parameters that vary for different melt compositions. We assume that all silicate melts converge to a common, but unknown, high-T limit (i.e., A) and that all compositional dependence is accommodated by B and C. We adopt a linear compositional dependence for B and C: B = Σi=1..n [xi βi] and C = Σi=1..n [xi γi], where the xi are the mole fractions of oxide components (n=8) and βi and γi are adjustable parameters. The model, therefore, comprises 2·n+1 adjustable parameters, which are optimized against the experimental database, including a common value of A and compositional coefficients for B and C. The new model reproduces the original database to within experimental uncertainty and can predict the viscosity of silicate melts across the full range of conditions found in Nature. References Cited: [1] Friedman et al., 1963. J Geophys Res 68, 6523-6535. [2] Bottinga Y & Weill D 1972. Am J Sci 272, 438-475. [3] Shaw HR 1972. Am J Sci 272, 438-475. [4] Persikov ES 1991. Adv Phys Geochem 9, 1-40. [5] Prusevich AA 1988. Geol Geofiz 29, 67-69. [6] Baker DR 1996. Am Min 81, 126-134. [7] Hess KU & Dingwell DB 1996. Am Min 81, 1297-1300. [8] Zhang et al. 2003. Am Min 88, 1741-1752. [9] Russell et al. 2002. Eur J Min 14, 417-428. [10] Russell et al. 2003. Am Min 88, 1390-1394. [11] Russell JK & Giordano D, In Press. Geochim Cosmochim Acta. [12] Giordano D & Dingwell DB 2003. Earth Planet. Sci. Lett. 208, 337-349.
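A minimal sketch of the model form quoted in the abstract is given below; the mole fractions and the β/γ coefficients are placeholders chosen only to produce physically plausible numbers, not the fitted values of the published model.

```python
# Minimal sketch of the model form given in the abstract:
#   log10(eta) = A + B / (T * log10(T / C)),
#   B = sum_i x_i * beta_i,  C = sum_i x_i * gamma_i
# The beta/gamma coefficients and mole fractions below are placeholders
# for illustration only, not the fitted values of the published model.
import math

A = -4.5  # assumed common high-T limit of log10 viscosity

def melt_log_viscosity(mole_fractions, beta, gamma, T_kelvin):
    B = sum(x * beta[ox] for ox, x in mole_fractions.items())
    C = sum(x * gamma[ox] for ox, x in mole_fractions.items())
    return A + B / (T_kelvin * math.log10(T_kelvin / C))

x = {"SiO2": 0.60, "other_oxides": 0.40}            # toy 2-component "melt"
beta = {"SiO2": 7000.0, "other_oxides": 2000.0}     # placeholder coefficients
gamma = {"SiO2": 500.0, "other_oxides": 375.0}

for T in (1100.0, 1300.0, 1500.0):                  # temperatures in K
    print(f"T = {T:.0f} K  ->  log10(viscosity/Pa s) = "
          f"{melt_log_viscosity(x, beta, gamma, T):.2f}")
```

As expected for a non-Arrhenian model of this form, the predicted log viscosity decreases smoothly with increasing temperature.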
Identification of "Known Unknowns" Utilizing Accurate Mass Data and ChemSpider
NASA Astrophysics Data System (ADS)
Little, James L.; Williams, Antony J.; Pshenichnov, Alexey; Tkachenko, Valery
2012-01-01
In many cases, an unknown to an investigator is actually known in the chemical literature, a reference database, or an internet resource. We refer to these types of compounds as "known unknowns." ChemSpider is a very valuable internet database of known compounds useful in the identification of these types of compounds in commercial, environmental, forensic, and natural product samples. The database contains over 26 million entries from hundreds of data sources and is provided as a free resource to the community. Accurate mass mass spectrometry data is used to query the database by either elemental composition or a monoisotopic mass. Searching by elemental composition is the preferred approach. However, it is often difficult to determine a unique elemental composition for compounds with molecular weights greater than 600 Da. In these cases, searching by the monoisotopic mass is advantageous. In either case, the search results are refined by sorting the number of references associated with each compound in descending order. This raises the most useful candidates to the top of the list for further evaluation. These approaches were shown to be successful in identifying "known unknowns" noted in our laboratory and for compounds of interest to others.
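A minimal sketch of the monoisotopic-mass search-and-rank strategy described above follows; it runs against a small local candidate list with invented masses and reference counts rather than against the ChemSpider service itself.

```python
# Minimal sketch of the monoisotopic-mass search strategy described above,
# run against a small local candidate list rather than the ChemSpider API.
# The candidate entries and reference counts are made up for illustration.

candidates = [
    {"name": "candidate A", "monoisotopic_mass": 302.2246, "references": 412},
    {"name": "candidate B", "monoisotopic_mass": 302.2249, "references": 3},
    {"name": "candidate C", "monoisotopic_mass": 302.1154, "references": 57},
]

def search_by_mass(measured_mass, tolerance_ppm=5.0):
    """Return candidates within the mass tolerance, most-referenced first."""
    window = measured_mass * tolerance_ppm / 1e6
    hits = [c for c in candidates
            if abs(c["monoisotopic_mass"] - measured_mass) <= window]
    # Sorting by descending reference count raises the most plausible
    # "known unknowns" to the top of the list.
    return sorted(hits, key=lambda c: c["references"], reverse=True)

for hit in search_by_mass(302.2247):
    print(hit["name"], hit["references"])
```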
Development of a global land cover characteristics database and IGBP DISCover from 1 km AVHRR data
Loveland, Thomas R.; Reed, B.C.; Brown, Jesslyn F.; Ohlen, D.O.; Zhu, Z.; Yang, L.; Merchant, J.W.
2000-01-01
Researchers from the U.S. Geological Survey, University of Nebraska-Lincoln and the European Commission's Joint Research Centre, Ispra, Italy produced a 1 km resolution global land cover characteristics database for use in a wide range of continental-to global-scale environmental studies. This database provides a unique view of the broad patterns of the biogeographical and ecoclimatic diversity of the global land surface, and presents a detailed interpretation of the extent of human development. The project was carried out as an International Geosphere-Biosphere Programme, Data and Information Systems (IGBP-DIS) initiative. The IGBP DISCover global land cover product is an integral component of the global land cover database. DISCover includes 17 general land cover classes defined to meet the needs of IGBP core science projects. A formal accuracy assessment of the DISCover data layer will be completed in 1998. The 1 km global land cover database was developed through a continent-by-continent unsupervised classification of 1 km monthly Advanced Very High Resolution Radiometer (AVHRR) Normalized Difference Vegetation Index (NDVI) composites covering 1992-1993. Extensive post-classification stratification was necessary to resolve spectral/temporal confusion between disparate land cover types. The complete global database consists of 961 seasonal land cover regions that capture patterns of land cover, seasonality and relative primary productivity. The seasonal land cover regions were aggregated to produce seven separate land cover data sets used for global environmental modelling and assessment. The data sets include IGBP DISCover, U.S. Geological Survey Anderson System, Simple Biosphere Model, Simple Biosphere Model 2, Biosphere-Atmosphere Transfer Scheme, Olson Ecosystems and Running Global Remote Sensing Land Cover. The database also includes all digital sources that were used in the classification. The complete database can be sourced from the website: http://edcwww.cr.usgs.gov/landdaac/glcc/glcc.html.
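A minimal sketch of an unsupervised classification of monthly NDVI composites, in the spirit of the approach described above, is shown below; random numbers stand in for the 1 km AVHRR composites and the number of clusters is an arbitrary assumption.

```python
# Minimal sketch of an unsupervised classification of monthly NDVI composites.
# Random data stand in for the 12 monthly 1 km AVHRR NDVI composites.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
n_pixels, n_months = 10000, 12
ndvi = rng.random((n_pixels, n_months))   # each row: one pixel's annual NDVI profile

kmeans = KMeans(n_clusters=20, n_init=10, random_state=0).fit(ndvi)
labels = kmeans.labels_                   # preliminary seasonal land cover clusters

# Post-classification stratification (resolving spectrally/temporally confused
# classes with ancillary data) would follow; here we only report cluster sizes.
print(np.bincount(labels))
```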
Global Play Evaluation TOol (GPETO) assists Mobil explorationists with play evaluation and ranking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Withers, K.D.; Brown, P.J.; Clary, R.C.
1996-01-01
GPETO is a relational database and application containing information about over 2500 plays around the world. It also has information about approximately 30,000 fields and the related provinces. The GPETO application has been developed to assist Mobil geoscientists, planners and managers with global play evaluations and portfolio management. The main features of GPETO allow users to: (1) view or modify play and province information, (2) composite user-specified plays in a statistically valid way, (3) view threshold information for plays and provinces, including curves, (4) examine field size data, including discovered, future and ultimate field sizes for provinces and plays, (5) use a database browser to look up and validate data by geographic, volumetric, technical and business criteria, (6) display ranged values and graphical displays of future and ultimate potential for plays, provinces, countries, and continents, (7) run, view and print a number of informative reports containing input and output data from the system. The GPETO application is written in C and Fortran, runs on a Unix-based system, utilizes an Ingres database, and was implemented using a 3-tiered client/server architecture.
Surgical research using national databases
Alluri, Ram K.; Leland, Hyuma; Heckmann, Nathanael
2016-01-01
Recent changes in healthcare and advances in technology have increased the use of large-volume national databases in surgical research. These databases have been used to develop perioperative risk stratification tools, assess postoperative complications, calculate costs, and investigate numerous other topics across multiple surgical specialties. The results of these studies contain variable information but are subject to unique limitations. The use of large-volume national databases is increasing in popularity, and thorough understanding of these databases will allow for a more sophisticated and better educated interpretation of studies that utilize such databases. This review will highlight the composition, strengths, and weaknesses of commonly used national databases in surgical research. PMID:27867945
Review of food composition data for edible insects.
Nowak, Verena; Persijn, Diedelinde; Rittenschober, Doris; Charrondiere, U Ruth
2016-02-15
Edible insects are considered rich in protein and a variety of micronutrients, and are therefore seen as potential contributors to food security. However, the estimation of the insects' contribution to the nutrient intake is limited since data are absent in food composition tables and databases. Therefore, FAO/INFOODS collected and published analytical data from primary sources with sufficient quality in the Food Composition Database for Biodiversity (BioFoodComp). Data were compiled for 456 food entries on insects in different developmental stages. A total of 5734 data points were entered, most on minerals and trace elements (34.8%), proximates (24.5%), amino acids (15.3%) and (pro)vitamins (9.1%). Data analysis of Tenebrio molitor confirms its nutritive quality that can help to combat malnutrition. The collection of data will assist compilers to incorporate more insects into tables and databases, and to further improve nutrient intake estimations. Copyright © 2015 Food and Agriculture Organization of the United Nations. Published by Elsevier Ltd. All rights reserved.
Menezes, Elizabete Wenzel de; Grande, Fernanda; Giuntini, Eliana Bistriche; Lopes, Tássia do Vale Cardoso; Dan, Milana Cara Tanasov; Prado, Samira Bernardino Ramos do; Franco, Bernadette Dora Gombossy de Melo; Charrondière, U Ruth; Lajolo, Franco Maria
2016-02-15
Dietary fiber (DF) contributes to the energy value of foods and including it in the calculation of total food energy has been recommended for food composition databases. The present study aimed to investigate the impact of including energy provided by the DF fermentation in the calculation of food energy. Total energy values of 1753 foods from the Brazilian Food Composition Database were calculated with or without the inclusion of DF energy. The energy values were compared, through the use of percentage difference (D%), in individual foods and in daily menus. Appreciable energy D% (⩾10) was observed in 321 foods, mainly in the group of vegetables, legumes and fruits. However, in the Brazilian typical menus containing foods from all groups, only D%<3 was observed. In mixed diets, the DF energy may cause slight variations in total energy; on the other hand, there is appreciable energy D% for certain foods, when individually considered. Copyright © 2015 Elsevier Ltd. All rights reserved.
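A minimal sketch of the with/without-fibre energy comparison described above follows, assuming the conventional 2 kcal/g (8 kJ/g) conversion factor for dietary fibre and an illustrative food rather than an entry from the Brazilian database; the D% here is taken relative to the fibre-inclusive value, which is an assumption about the paper's definition.

```python
# Minimal sketch of the percentage-difference (D%) comparison between total
# energy computed with and without dietary fibre energy, assuming the
# conventional 2 kcal/g factor for fibre; food values are illustrative.

def total_energy_kcal(protein_g, fat_g, carb_g, fiber_g, include_fiber=True):
    energy = 4 * protein_g + 9 * fat_g + 4 * carb_g
    if include_fiber:
        energy += 2 * fiber_g
    return energy

def d_percent(with_fiber, without_fiber):
    return 100 * abs(with_fiber - without_fiber) / with_fiber

# Example: a fibre-rich legume portion
e_with = total_energy_kcal(protein_g=9, fat_g=0.5, carb_g=20, fiber_g=8)
e_without = total_energy_kcal(protein_g=9, fat_g=0.5, carb_g=20, fiber_g=8,
                              include_fiber=False)
print(f"D% = {d_percent(e_with, e_without):.1f}")   # "appreciable" when >= 10
```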
Haytowitz, David B; Pehrsson, Pamela R
2018-01-01
For nearly 20 years, the National Food and Nutrient Analysis Program (NFNAP) has expanded and improved the quantity and quality of data in the US Department of Agriculture's (USDA) food composition databases (FCDB) through the collection and analysis of nationally representative food samples. NFNAP employs statistically valid sampling plans, the Key Foods approach to identify and prioritize foods and nutrients, comprehensive quality control protocols, and analytical oversight to generate new and updated analytical data for food components. NFNAP has allowed the Nutrient Data Laboratory to keep up with the dynamic US food supply and emerging scientific research. Recently generated results for nationally representative food samples show marked changes compared to previous database values for selected nutrients. Monitoring changes in the composition of foods is critical in keeping FCDB up-to-date, so that they remain a vital tool in assessing the nutrient intake of national populations, as well as for providing dietary advice. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Poppe, Sam; Barette, Florian; Smets, Benoît; Benbakkar, Mhammed; Kervyn, Matthieu
2016-04-01
The Virunga Volcanic Province (VVP) is situated within the western branch of the East-African Rift. The geochemistry and petrology of its volcanic products have been studied extensively, but in a fragmented manner. They represent a unique collection of silica-undersaturated, ultra-alkaline and ultra-potassic compositions, displaying marked geochemical variations over the area occupied by the VVP. We present a novel spatially-explicit database of existing whole-rock geochemical analyses of the VVP volcanics, compiled from international publications, (post-)colonial scientific reports and PhD theses. In the database, a total of 703 geochemical analyses of whole-rock samples collected from the 1950s until recently have been characterised with a geographical location, eruption source location, analytical results and uncertainty estimates for each of these categories. Comparative box plots and Kruskal-Wallis H tests on subsets of analyses with contrasting ages or analytical methods suggest that the overall database accuracy is consistent. We demonstrate how statistical techniques such as Principal Component Analysis (PCA) and subsequent cluster analysis allow the identification of clusters of samples with similar major-element compositions. The spatial patterns represented by the contrasting clusters show that both historically active volcanoes represent compositional clusters which can be identified based on their contrasting silica and alkali contents. Furthermore, two sample clusters are interpreted to represent the most primitive, deep magma source within the VVP, distinct from the shallow magma reservoirs that feed the eight dominant large volcanoes. The samples from these two clusters systematically originate from locations which 1. are distal compared to the eight large volcanoes and 2. mostly coincide with the surface expressions of rift faults or NE-SW-oriented inherited Precambrian structures which were reactivated during rifting. The lava from the Mugogo eruption of 1957 belongs to these primitive clusters and is the only lava known to have erupted outside the current rift valley in historical times. We thus infer that there is a distributed hazard of vent opening in addition to the susceptibility associated with the main Virunga edifices. This study suggests that the statistical analysis of such a geochemical database may help to understand complex volcanic plumbing systems and the spatial distribution of volcanic hazards in active and poorly known volcanic areas such as the Virunga Volcanic Province.
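A minimal sketch of the PCA-plus-cluster-analysis workflow described above is given below; random numbers stand in for the 703 whole-rock major-element analyses, and the oxide list and cluster count are assumptions for illustration.

```python
# Minimal sketch of the PCA + cluster analysis workflow on major-element data.
# Random numbers stand in for the 703 whole-rock analyses in the database.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
oxides = ["SiO2", "TiO2", "Al2O3", "FeOt", "MgO", "CaO", "Na2O", "K2O"]
X = rng.random((703, len(oxides)))                 # placeholder major-element data (wt%)

X_std = StandardScaler().fit_transform(X)          # centre and scale each oxide
scores = PCA(n_components=3).fit_transform(X_std)  # principal component scores

clusters = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(scores)
# Plotting the cluster labels at each sample's eruption-source location would
# reveal the spatial compositional patterns discussed in the abstract.
print(np.bincount(clusters))
```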
Sherrod, David R.; Keith, Mackenzie K.
2018-03-30
A substantial part of the U.S. Pacific Northwest is underlain by Cenozoic volcanic and continental sedimentary rocks and, where widespread, these strata form important aquifers. The legacy geologic mapping presented with this report contains new thematic categorization added to state digital compilations published by the U.S. Geological Survey for Oregon, California, Idaho, Nevada, Utah, and Washington (Ludington and others, 2005). Our additional coding is designed to allow rapid characterization, mainly for hydrogeologic purposes, of similar rocks and deposits within a boundary expanded slightly beyond that of the Pacific Northwest Volcanic Aquifer System study area. To be useful for hydrogeologic analysis and to be more statistically manageable, statewide compilations from Ludington and others (2005) were mosaicked into a regional map and then reinterpreted into four main categories on the basis of (1) age, (2) composition, (3) hydrogeologic grouping, and (4) lithologic pattern. The coding scheme emphasizes Cenozoic volcanic or volcanic-related rocks and deposits, and of primary interest are the codings for composition and age.
A network pharmacology study of Sendeng-4, a Mongolian medicine.
Zi, Tian; Yu, Dong
2015-02-01
We collected data on the targets corresponding to the chemical constituents of Sendeng-4 from the literature and from DrugBank, SuperTarget, TTD (Therapeutic Targets Database) and other databases, and on the relevant signaling pathways from the KEGG (Kyoto Encyclopedia of Genes and Genomes) database, and established models of the chemical composition-target network and the chemical composition-target-disease network using Cytoscape software. The analysis indicated that the chemical components had at least nine different types of targets that acted together to exert effects on the diseases, suggesting a "multi-component, multi-target" feature of this traditional Mongolian medicine. We also employed a rat model of rheumatoid arthritis induced by Collagen Type II to validate the key targets of the chemical components of Sendeng-4, and three of the key targets were validated through laboratory experiments, further confirming the anti-inflammatory effects of Sendeng-4. In all, this study predicted the active ingredients and targets of Sendeng-4 and explored its mechanism of action, which provides new strategies and methods for further research and development of Sendeng-4 and other traditional Mongolian medicines. Copyright © 2015 China Pharmaceutical University. Published by Elsevier B.V. All rights reserved.
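A minimal sketch of a component-target network of the kind built in Cytoscape above follows, using the networkx library; the component names and gene symbols are hypothetical placeholders, not targets reported for Sendeng-4.

```python
# Minimal sketch of a chemical component-target network; the component names
# and gene symbols are hypothetical placeholders, not Sendeng-4 results.
import networkx as nx

edges = [
    ("component_1", "PTGS2"),
    ("component_1", "TNF"),
    ("component_2", "TNF"),
    ("component_2", "IL6"),
    ("component_3", "PTGS2"),
]

G = nx.Graph()
for compound, target in edges:
    G.add_node(compound, kind="component")
    G.add_node(target, kind="target")
    G.add_edge(compound, target)

# Targets hit by more than one component hint at "multi-component, multi-target" action.
shared = [n for n, d in G.degree() if G.nodes[n]["kind"] == "target" and d > 1]
print("targets shared by several components:", shared)
```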
The IAGOS Information System: From the aircraft measurements to the users.
NASA Astrophysics Data System (ADS)
Boulanger, Damien; Thouret, Valérie; Cammas, Jean-Pierre; Petzold, Andreas; Volz-Thomas, Andreas; Gerbig, Christoph; Brenninkmeijer, Carl A. M.
2013-04-01
IAGOS (In-service Aircraft for a Global Observing System, http://www.iagos.org) aims at the provision of long-term, frequent, regular, accurate, and spatially resolved in-situ observations of atmospheric chemical composition throughout the troposphere and in the UTLS. It builds on almost 20 years of scientific and technological expertise gained in the research projects MOZAIC (Measurement of Ozone and Water Vapour on Airbus In-service Aircraft) and CARIBIC (Civil Aircraft for the Regular Investigation of the Atmosphere Based on an Instrument Container). The European consortium includes research centres, universities, national weather services, airline operators and aviation industry. IAGOS consists of two complementary building blocks providing a unique global observation system: IAGOS-CORE deploys newly developed instrumentation for regular in-situ measurements of atmospheric chemical species, both reactive and greenhouse gases (O3, CO, NOx, NOy, H2O, CO2, CH4), as well as aerosols and cloud particles. In IAGOS-CARIBIC a cargo container is deployed monthly as a flying laboratory aboard one aircraft. Involved airlines ensure global operation of the network. Today, 5 aircraft are flying with the MOZAIC (3) or IAGOS-CORE (2) instrumentation, namely 3 aircraft from Lufthansa, 1 from Air Namibia, and 1 from China Airlines Taiwan. A main improvement and new aspect of the IAGOS-CORE instrumentation compared to MOZAIC is the delivery of raw data in near real time (i.e., data are transmitted as soon as the aircraft lands). After a first and quick validation of the O3 and CO measurements, preliminary data are made available in the central database for both the MACC project (Monitoring Atmospheric Composition and Climate) and scientific research groups. In addition to recorded measurements, the database also contains added-value products such as meteorological information (tropopause height, air mass backtrajectories) and lagrangian model outputs (FLEXPART). Data access is handled by an open access policy based on the submission of research requests which are reviewed by the PIs. Users can access the data through the following web sites: http://www.iagos.fr or http://www.pole-ether.fr, as the IAGOS database is part of the French atmospheric chemistry data centre ETHER (CNES and CNRS). The MOZAIC-IAGOS database today contains more than 35000 flights, covering mostly the northern hemisphere mid-latitudes but with reduced representation of the Pacific region. The recently equipped China Airlines Taiwan aircraft started in July 2012, filling this gap. Future equipped aircraft scheduled in 2013 from Air France, Cathay Pacific and Iberia will cover the Asia-Oceania sector and Europe-South America transects. The database, as well as the research infrastructure itself, is in continuous development and improvement. In the framework of the newly started IGAS project (IAGOS for GMES Atmospheric Service), major achievements will be reached, such as metadata and format standardisation in order to interoperate with international portals and other databases, QA/QC procedures and traceability, CARIBIC data integration within the central database, and real-time data transmission.
New DMSP database of precipitating auroral electrons and ions
NASA Astrophysics Data System (ADS)
Redmon, Robert J.; Denig, William F.; Kilcommons, Liam M.; Knipp, Delores J.
2017-08-01
Since the mid-1970s, the Defense Meteorological Satellite Program (DMSP) spacecraft have operated instruments for monitoring the space environment from low Earth orbit. As the program evolved, so have the measurement capabilities such that modern DMSP spacecraft include a comprehensive suite of instruments providing estimates of precipitating electron and ion fluxes, cold/bulk plasma composition and moments, the geomagnetic field, and optical emissions in the far and extreme ultraviolet. We describe the creation of a new public database of precipitating electrons and ions from the Special Sensor J (SSJ) instrument, complete with original counts, calibrated differential fluxes adjusted for penetrating radiation, estimates of the total kinetic energy flux and characteristic energy, uncertainty estimates, and accurate ephemerides. These are provided in a common and self-describing format that covers 30+ years of DMSP spacecraft from F06 (launched in 1982) to F18 (launched in 2009). This new database is accessible at the National Centers for Environmental Information and the Coordinated Data Analysis Web. We describe how the new database is being applied to high-latitude studies of the colocation of kinetic and electromagnetic energy inputs, ionospheric conductivity variability, field-aligned currents, and auroral boundary identification. We anticipate that this new database will support a broad range of space science endeavors from single observatory studies to coordinated system science investigations.
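A minimal sketch of how a total energy flux and a characteristic (flux-weighted mean) energy can be derived from channel differential number fluxes is shown below; the three channels and their values are illustrative placeholders, not real SSJ data, and the exact definition of characteristic energy used in the database may differ.

```python
# Minimal sketch of deriving total energy flux and a characteristic (mean)
# energy from channel differential number fluxes; channel values are placeholders.
import numpy as np

energy_ev = np.array([1.0e3, 3.0e3, 1.0e4])         # channel centre energies (eV)
delta_e_ev = np.array([0.5e3, 1.5e3, 5.0e3])        # channel widths (eV)
diff_number_flux = np.array([1.0e6, 5.0e5, 1.0e5])  # particles / (cm^2 s sr eV)

number_flux = np.sum(diff_number_flux * delta_e_ev)              # particles / (cm^2 s sr)
energy_flux = np.sum(diff_number_flux * energy_ev * delta_e_ev)  # eV / (cm^2 s sr)
characteristic_energy = energy_flux / number_flux                # eV

print(f"energy flux ~ {energy_flux:.3e} eV cm^-2 s^-1 sr^-1")
print(f"characteristic energy ~ {characteristic_energy:.0f} eV")
```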
Lavine, Barry K; White, Collin G; Allen, Matthew D; Weakley, Andrew
2017-03-01
Multilayered automotive paint fragments, which are one of the most complex materials encountered in the forensic science laboratory, provide crucial links in criminal investigations and prosecutions. To determine the origin of these paint fragments, forensic automotive paint examiners have turned to the paint data query (PDQ) database, which allows the forensic examiner to compare the layer sequence and color, texture, and composition of the sample to paint systems of the original equipment manufacturer (OEM). However, modern automotive paints have a thin color coat and this layer on a microscopic fragment is often too thin to obtain accurate chemical and topcoat color information. A search engine has been developed for the infrared (IR) spectral libraries of the PDQ database in an effort to improve discrimination capability and permit quantification of discrimination power for OEM automotive paint comparisons. The similarity of IR spectra of the corresponding layers of various records for original finishes in the PDQ database often results in poor discrimination using commercial library search algorithms. A pattern recognition approach employing pre-filters and a cross-correlation library search algorithm that performs both a forward and backward search has been used to significantly improve the discrimination of IR spectra in the PDQ database and thus improve the accuracy of the search. This improvement permits inter-comparison of OEM automotive paint layer systems using the IR spectra alone. Such information can serve to quantify the discrimination power of the original automotive paint encountered in casework and further efforts to succinctly communicate trace evidence to the courts.
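A minimal sketch of ranking library IR spectra by a normalised zero-lag cross-correlation with a query spectrum follows; the spectra are synthetic, and the actual PDQ search engine adds pre-filters and a combined forward/backward search on top of this idea.

```python
# Minimal sketch of ranking library IR spectra by a normalised zero-lag
# cross-correlation with a query spectrum; synthetic spectra are used here.
import numpy as np

rng = np.random.default_rng(7)
wavenumbers = np.linspace(400, 4000, 1800)
library = {f"record_{i}": rng.random(wavenumbers.size) for i in range(50)}
query = library["record_17"] + 0.05 * rng.standard_normal(wavenumbers.size)  # noisy copy

def correlation(a, b):
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.dot(a, b) / a.size)   # zero-lag normalised cross-correlation

ranked = sorted(library.items(), key=lambda kv: correlation(query, kv[1]), reverse=True)
for name, _ in ranked[:3]:
    print(name)   # the true match (record_17) should rank first
```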
Total choline and choline-containing moieties of commercially available pulses.
Lewis, Erin D; Kosik, Sarah J; Zhao, Yuan-Yuan; Jacobs, René L; Curtis, Jonathan M; Field, Catherine J
2014-06-01
Estimating dietary choline intake can be challenging due to missing foods in the current United States Department of Agriculture (USDA) database. The objectives of the study were to quantify the choline-containing moieties and the total choline content of a variety of pulses available in North America and to use the expanded compositional database to determine the potential contribution of pulses to dietary choline intake. Commonly consumed pulses (n = 32) were analyzed by hydrophilic interaction liquid chromatography-tandem mass spectrometry (HILIC LC-MS/MS) and compared to the current USDA database. Cooking was found to reduce the relative contribution of free choline and to increase the contribution of phosphatidylcholine to total choline for most pulses (P < 0.05). Using the expanded database to estimate the choline content of recipes with pulses as meat alternatives resulted in a different estimation of choline content per serving (±30%) compared to the USDA database. These results suggest that when pulses are a large part of a meal or diet, accurate food composition data should be used.
Development of a material property database on selected ceramic matrix composite materials
NASA Technical Reports Server (NTRS)
Mahanta, Kamala
1996-01-01
Ceramic matrix composites, with fiber/whisker/particulate reinforcement, possess the attractive properties of ceramics, such as high melting temperature, high strength and stiffness at high temperature, low density and excellent environmental resistance, combined with improved toughness and mechanical reliability. These unique properties have made these composites an enabling technology for thermomechanically demanding applications in high-temperature, high-stress and aggressive environments. On a broader scale, CMCs are anticipated to be applicable in aircraft propulsion, space propulsion, power and structures, in addition to ground-based applications. However, for any serious commitment of the material toward any of the intended critical thermomechanical applications to materialize, vigorous research has to be conducted for a thorough understanding of the mechanical and thermal behavior of CMCs. The technology of CMCs is far from mature. In view of this growing need for CMC data, researchers all over the world have been drawn into the characterization of CMCs such as C/SiC, SiC/SiC, SiC/Al2O3, SiC/Glass, SiC/C and SiC/Blackglas. A significant amount of data has been generated by industry, national laboratories and educational institutions in the United States of America. NASA/Marshall Space Flight Center intends to collect the 'pedigreed' CMC data and store it in a CMC database within MAPTIS (Materials and Processes Technical Information System). The task of compiling the CMC database is a monumental one and requires efforts in various directions. The project started as a summer faculty fellowship in 1994, continued through the months that followed and into the summer faculty fellowship of 1995, and has the prospect of continuing into the future, with its growth depending to a large extent on how fast CMC data are generated. The 10-week summer fellowship concentrated on establishing the procedure for a smooth transfer of data into a CMC database on MAPTIS, which is a vital part of the broader picture of the project.
Monsoon control on faunal composition of planktic foraminifera in the Arabian Sea
NASA Astrophysics Data System (ADS)
Munz, P.; Siccha, M.; Kucera, M.; Schulz, H.
2013-12-01
Being among the most productive open-ocean basins, the Arabian Sea has sea surface properties that are highly influenced by the seasonal reversal of the monsoonal wind system. During boreal summer, winds from the southwest induce strong upwelling along the coasts off Somalia and Oman. Vertical transport of cold and nutrient-rich deep-water masses by Ekman pumping reduces sea surface temperature and triggers primary productivity. Reversed, cold and dry winds during boreal winter lead to cooling of the surface and subsurface waters and thereby to deep convective mixing, bringing nutrients into the photic zone and enhancing primary productivity, especially in the northern part of the Arabian Sea. Here, we study the influence of the different seasonal monsoon systems on the faunal composition of planktic foraminifera, in order to improve our understanding of how the faunal community record is influenced by the respective monsoon systems and to provide baseline information for the reconstruction of ancient monsoon conditions. We used published core-top foraminiferal databases, significantly increased in spatial coverage by new contributions. The resulting combined database consists of 413 core-top samples spanning the Arabian Sea and the northern Indian Ocean to 10° S. The seasonal sea surface properties at these stations could be binned into categories of different monsoon influence, based on satellite-derived chlorophyll-a concentrations. Interpretation of species response to environmental control is based on multivariate statistical analyses of each of the categorical bins. First results show that samples influenced only by winter and summer monsoon conditions, respectively, feature distinguishable faunal compositions. Globigerina bulloides is mostly associated with summer upwelling conditions, whereas Globigerina falconensis and Pulleniatina obliquiloculata are typical species of winter conditions. Redundancy analysis reveals preferences of species populations with respect to particular environmental gradients and may help to disentangle winter from summer monsoon impact on modern and fossil faunas.
A new comprehensive database of global volcanic gas analyses
NASA Astrophysics Data System (ADS)
Clor, L. E.; Fischer, T. P.; Lehnert, K. A.; McCormick, B.; Hauri, E. H.
2013-12-01
Volcanic volatiles are the driving force behind eruptions, powerful indicators of magma provenance, present localized hazards, and have implications for climate. Studies of volcanic emissions are necessary for understanding volatile cycling from the mantle to the atmosphere. Gas compositions vary with volcanic activity, making it important to track their chemical variability over time. As studies become increasingly interdisciplinary, it is critical to have a mechanism to integrate decades of gas studies across disciplines. Despite the value of this research to a variety of fields, there is currently no integrated network to house all volcanic and hydrothermal gas data, making spatial, temporal, and interdisciplinary comparison studies time-consuming. To remedy this, we are working to establish a comprehensive database of volcanic gas emissions and compositions worldwide, as part of the Deep Carbon Observatory's DECADE (Deep Carbon Degassing) initiative. Volcanic gas data have been divided into two broad categories: 1) chemical analyses from samples collected directly at the volcanic source, and 2) measurements of gas concentrations and fluxes, such as remotely by mini-DOAS or satellite, or in-plume such as by multiGAS. The gas flux database effort is realized by the Global Volcanism Program of the Smithsonian Institution (abstract by Brendan McCormick, this meeting). The direct-sampling data is the subject of this presentation. Data from direct techniques include samples of gases collected at the volcanic source from fumaroles and springs, tephras analyzed for gas contents, filter pack samples of gases collected in a plume, and any other data types that involve collection of a sample. Data are incorporated into the existing framework of the Petrological Database, PetDB. Association with PetDB is advantageous as it will allow volcanic gas data to be linked to chemical data from lava or tephra samples, forming more complete ties between the eruptive products and the source magma. Eventually our goal is to have a seamless gas database that allows the user to easily access all gas data ever collected at volcanoes. This database will be useful in a variety of science applications: 1) correlating volcanic gas composition to volcanic activity; 2) establishing a characteristic gas composition or total volatile budget for a volcano or region in studies of global chemical cycles; 3) better quantifying the flux and source of volcanic carbon to the atmosphere. The World Organization of Volcano Observatories is populating a volcano monitoring database, WOVOdat, which centers on data collected during times of volcanic unrest for monitoring and hazard purposes. The focus of our database is to gain insight into volcanic degassing specifically, during both eruptive and quiescent times. Coordination of the new database with WOVOdat will allow comparison studies of gas compositions with seismic and other monitoring data during times of unrest, as well as promote comprehensive and cross-disciplinary questions about volcanic degassing.
National Institute of Standards and Technology Data Gateway
SRD 100 Database for Simulation of Electron Spectra for Surface Analysis (SESSA) (PC database for purchase) This database has been designed to facilitate quantitative interpretation of Auger-electron and X-ray photoelectron spectra and to improve the accuracy of quantitation in routine analysis. The database contains all physical data needed to perform quantitative interpretation of an electron spectrum for a thin-film specimen of given composition. A simulation module provides an estimate of peak intensities as well as the energy and angular distributions of the emitted electron flux.
Structural Validation of a French Food Frequency Questionnaire of 94 Items.
Gazan, Rozenn; Vieux, Florent; Darmon, Nicole; Maillot, Matthieu
2017-01-01
Food frequency questionnaires (FFQs) are used to estimate the usual food and nutrient intakes over a period of time. Such estimates can suffer from measurement errors, either due to bias induced by respondents' answers or to errors induced by the structure of the questionnaire (e.g., using a limited number of food items and an aggregated food database with average portion sizes). The "structural validation" presented in this study aims to isolate and quantify the impact of the inherent structure of an FFQ on the estimation of food and nutrient intakes, independently of respondents' perception of the questionnaire. A semi-quantitative FFQ (n = 94 items, including 50 items with questions on portion sizes) and an associated aggregated food composition database (named the item-composition database) were developed, based on the self-reported weekly dietary records of 1918 adults (18-79 years old) in the French Individual and National Dietary Survey 2 (INCA2), and the French CIQUAL 2013 food-composition database of all the foods (n = 1342 foods) declared as consumed in the population. Reference intakes of foods ("REF_FOOD") and nutrients ("REF_NUT") were calculated for each adult using the food-composition database and the amounts of foods self-reported in his/her dietary record. Then, answers to the FFQ were simulated for each adult based on his/her self-reported dietary record. "FFQ_FOOD" and "FFQ_NUT" intakes were estimated using the simulated answers and the item-composition database. Measurement errors (in %), Spearman correlations and cross-classification were used to compare "REF_FOOD" with "FFQ_FOOD" and "REF_NUT" with "FFQ_NUT". Compared to "REF_NUT," "FFQ_NUT" total quantity and total energy intake were underestimated on average by 198 g/day and 666 kJ/day, respectively. "FFQ_FOOD" intakes were well estimated for starches, underestimated for most of the subgroups, and overestimated for some subgroups, in particular vegetables. Underestimations were mainly due to the use of portion sizes, leading to an underestimation of most nutrients, except free sugars, which were overestimated. The "structural validation" by simulating answers to an FFQ based on a reference dietary survey is innovative and pragmatic and allows quantification of the error induced by the simplification of the method of collection.
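The comparison metrics described above (measurement error in %, Spearman correlation, cross-classification) lend themselves to a compact illustration. The sketch below is not the authors' code; it uses simulated toy data and hypothetical variable names to show how REF_NUT and FFQ_NUT estimates of a single nutrient could be compared.

```python
# Illustrative sketch (not the authors' code): comparing "reference" nutrient
# intakes from dietary records with intakes estimated from simulated FFQ answers.
# The toy data and variable names are hypothetical.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
ref_energy = rng.normal(9000, 1500, size=500)             # REF_NUT energy, kJ/day
ffq_energy = ref_energy - 666 + rng.normal(0, 800, 500)   # FFQ_NUT with bias + noise

mean_error_pct = 100 * np.mean((ffq_energy - ref_energy) / ref_energy)
rho, _ = spearmanr(ref_energy, ffq_energy)

# Cross-classification into tertiles, as commonly done in FFQ validation studies
ref_tertile = np.digitize(ref_energy, np.quantile(ref_energy, [1/3, 2/3]))
ffq_tertile = np.digitize(ffq_energy, np.quantile(ffq_energy, [1/3, 2/3]))
same_tertile_pct = 100 * np.mean(ref_tertile == ffq_tertile)

print(f"mean error: {mean_error_pct:.1f} %, Spearman rho: {rho:.2f}, "
      f"same tertile: {same_tertile_pct:.0f} %")
```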
Understanding of the Elemental Diffusion Behavior in Concentrated Solid Solution Alloys
Zhang, Chuan; Zhang, Fan; Jin, Ke; ...
2017-07-13
As one of the core effects on high-temperature structural stability, the so-called "sluggish diffusion effect" in high-entropy alloys (HEAs) has attracted much attention. Experimental investigations of the diffusion kinetics have been carried out in a few HEA systems, such as Al-Co-Cr-Fe-Ni and Co-Cr-Fe-Mn-Ni. However, the mechanisms behind this effect remain unclear. To better understand the diffusion kinetics of the HEAs, a combined computational/experimental approach is employed in the current study. In the present work, a self-consistent atomic mobility database is developed for the face-centered cubic (fcc) phase of the Co-Cr-Fe-Mn-Ni quinary system. The simulated diffusion coefficients and concentration profiles obtained using this database describe well the experimental data both from this work and from the literature. The validated mobility database is then used to calculate the tracer diffusion coefficients of Ni in the subsystems of the Co-Cr-Fe-Mn-Ni system with equiatomic ratios. Comparisons of these calculated diffusion coefficients reveal that the diffusion of Ni is not inevitably more sluggish with an increasing number of components in the subsystem, even at the same homologous temperature. Taking advantage of computational thermodynamics, the diffusivities of the alloying elements as functions of composition and/or temperature are also calculated. Furthermore, these calculations provide an overall picture of the diffusion kinetics within the Co-Cr-Fe-Mn-Ni system.
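The abstract does not spell out the mobility formalism; as background, CALPHAD-type atomic mobility databases such as the one described here commonly use the Andersson-Ågren relation between the atomic mobility and the tracer diffusion coefficient. The expression below is shown as an assumed, standard form, not a quotation from the paper.

```latex
% Common CALPHAD mobility formalism (Andersson-Agren), assumed as background;
% not quoted from the paper itself.
D_i^{*} = R\,T\,M_i, \qquad
M_i = \frac{1}{RT}\exp\!\left(\frac{-Q_i + RT\ln M_i^{0}}{RT}\right)
% M_i   : atomic mobility of element i
% Q_i   : activation energy;  M_i^0 : frequency factor; both are expanded in
%         composition (e.g. Redlich-Kister polynomials) during the optimization
```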
Intake of energy and nutrients; harmonization of Food Composition Databases.
Martinez-Victoria, Emilio; Martinez de Victoria, Ignacio; Martinez-Burgos, M Alba
2015-02-26
Food composition databases (FCDBs) provide detailed information about the nutritional composition of foods. The conversion of food consumption into nutrient intake needs an FCDB which lists the mean nutritional values for a given food portion. The limitations of FCDBs are sometimes little known by their users. Multicentre studies have raised several methodological challenges that must be addressed to standardize nutritional assessments of food composition and nutrient intake in different populations and geographical areas. Differences between FCDBs include those attributed to technical matters, such as the description of foods, the calculation of energy and definition of nutrients, analytical methods, and principles for recipe calculation. Such differences need to be identified and eliminated before comparing data from different studies, especially when dietary data are related to a health outcome. There have been ongoing efforts since 1984 to standardize FCDBs around the world (INFOODS, EPIC, EuroFIR, etc.). Food composition data can be gathered from different sources such as private company analyses, universities, government laboratories and the food industry. They can also be taken from the scientific literature or even from food labelling. There are different proposals for evaluating the quality of food composition data. For the development of an FCDB it is fundamental to document, in the most detailed way, each of the data values of the different components and nutrients of a food. The objective of the AECOSAN (Agencia Española de Consumo Seguridad Alimentaria y Nutrición) and BEDCA (Base de Datos Española de Composición de Alimentos) association was the development and support of a reference FCDB in Spain according to the standards to be defined in Europe. BEDCA is currently the only FCDB developed in Spain with compiled and documented data following EuroFIR standards. Copyright AULA MEDICA EDICIONES 2015. Published by AULA MEDICA. All rights reserved.
The National Food and Nutrient Analysis Program: A decade of progress
Haytowitz, David B.; Pehrsson, Pamela R.; Holden, Joanne M.
2009-01-01
The National Food and Nutrient Analysis Program (NFNAP) was designed to expand the quantity and improve the quality of data in the United States Department of Agriculture (USDA) food composition databases through the collection and analysis of nationally representative samples of foods and beverages. This paper describes some of the findings from the NFNAP and its impact on the food composition databases produced by USDA. The NFNAP employs statistically valid sampling plans, comprehensive quality control, and USDA analytical oversight as part of the program to generate new and updated analytical data for food components. USDA food consumption and composition data were used to target those foods that are major contributors of nutrients of public health significance to the U.S. diet (454 Key Foods). Foods were ranked using a scoring system, divided into quartiles, and reviewed to determine the impact of changes in their composition compared to historical values. Foods were purchased from several types of locations, such as retail outlets and fast food restaurants in different geographic areas as determined by the sampling plan, then composited and sent for analysis to commercial laboratories and cooperators, along with quality control materials. Comparisons were made to assess differences between new NFNAP means generated from original analytical data and historical means. Recently generated results for nationally representative food samples show marked changes compared to database values for selected nutrients from unknown or non-representative sampling. A number of changes were observed in many high consumption foods, e.g. the vitamin A value for cooked carrots decreased from 1,225 to 860 µg RAE/100 g; the fat value for fast food French fried potatoes increased by 21% (from 14.08 to 17.06 g/100 g). Trans fatty acids in margarine have decreased as companies reformulate their products in response to the required declaration of trans fatty acid content on the nutrition label. Values decreased from 19.7 g/100 g in 2002 to 14.8 g/100 g in 2006 for 80%-fat stick margarines and to 4.52 g/100 g for 80%-fat tub margarines. These changes reflect improved strategies for sampling and analysis of representative food samples, which enhance the reliability of nutrient estimates for Key Foods and subsequent assessments of nutrient intake. PMID:19578546
Thermodynamics of soluble fission products cesium and iodine in the Molten Salt Reactor
NASA Astrophysics Data System (ADS)
Capelli, E.; Beneš, O.; Konings, R. J. M.
2018-04-01
The present study describes the full thermodynamic assessment of the Li,Cs,Th//F,I system. The existing database for the relevant fluoride salts considered as fuel for the Molten Salt Reactor (MSR) has been extended with two key fission products, cesium and iodine. A complete evaluation of all the common-ion binary and ternary sub-systems of the LiF-ThF4-CsF-LiI-ThI4-CsI system has been performed and the optimized parameters are presented in this work. New equilibrium data have been measured using Differential Scanning Calorimetry and were used to assess the reciprocal ternary systems and confirm the extrapolated phase diagrams. The developed database significantly contributes to the understanding of the behaviour of cesium and iodine in the MSR, which strongly depends on their concentration and chemical form. Cesium bonded with fluorine is well retained in the fuel mixture while in the form of CsI the solubility of these elements is very limited. Finally, the influence of CsI and CsF on the physico-chemical properties of the fuel mixture was calculated as function of composition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peeler, D. K.; Taylor, A. S.; Edwards, T.B.
2005-06-26
The objective of this investigation was to use the available ComPro™ database of glass compositions and measured PCTs, generated in the study of High Level Waste (HLW)/Low Activity Waste (LAW) glasses, to define an Acceptable Glass Composition Region (AGCR). The term AGCR refers to a glass composition region in which the durability response (as defined by the Product Consistency Test (PCT)) is less than some pre-defined, acceptable value that satisfies the Waste Acceptance Product Specifications (WAPS); a value of 10 g/L was selected for this study. To assess the effectiveness of a specific classification or index system to differentiate between acceptable and unacceptable glasses, two types of errors (Type I and Type II errors) were monitored. A Type I error reflects that a glass with an acceptable durability response (i.e., a measured NL [B] < 10 g/L) is classified as unacceptable by the system of composition-based constraints. A Type II error occurs when a glass with an unacceptable durability response is classified as acceptable by the system of constraints. Over the course of the efforts to meet this objective, two approaches were assessed. The first (referred to as the "Index System") was based on the use of an evolving system of compositional constraints which were used to explore the possibility of defining an AGCR. This approach was primarily based on "glass science" insight to establish the compositional constraints. Assessments of the Brewer and Taylor Index Systems did not result in the definition of an AGCR. Although the Taylor Index System minimized Type I errors, which allowed access to composition regions of interest to improve melt rate or increase waste loadings for DWPF as compared to the current durability model, Type II errors were also committed. In the context of the application of a particular classification system in the process control system, Type II errors are much more serious than Type I errors. A Type I error only reflects that the particular constraint system being used is overly conservative (i.e., its application restricts access to glasses that have an acceptable measured durability response). A Type II error results in a more serious misclassification that could result in allowing the transfer of a Slurry Mix Evaporator (SME) batch to the melter which is predicted to produce a durable product based on the specific system applied but in reality does not meet the defined "acceptability" criteria. More specifically, a non-durable product could be produced in DWPF. Given the presence of Type II errors, the Index System approach was deemed inadequate for further implementation consideration at the DWPF. The second approach (the JMP partitioning process) was purely data-driven and empirically derived; glass science was not a factor. In this approach, the collection of composition-durability data in ComPro was sequentially partitioned or split based on the best available specific criteria and variables. More specifically, the JMP software chose the oxide (Al2O3 for this dataset) that most effectively partitions the PCT responses (NL [B] values), though perhaps not 100% effectively based on a single oxide. Based on this initial split, a second request was made to split a particular set of the "Y" values (good or bad PCTs based on the 10 g/L limit) based on the next most critical "X" variable.
This "splitting" or "partitioning" process was repeated until an AGCR was defined based on the use of only 3 oxides (Al2O3, CaO, and MgO) and critical values of > 3.75 wt% Al2O3, ≥ 0.616 wt% CaO, and < 3.521 wt% MgO. Using this set of criteria, the ComPro database was partitioned in a way in which no Type II errors were committed. The automated partitioning function screened or removed 978 of the 2406 ComPro glasses, which did cause some initial concerns regarding excessive conservatism regardless of its ability to identify an AGCR. However, a preliminary review shows that the 1428 "acceptable" glasses defining the AGCR include glass systems of interest to support the accelerated mission.
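As an illustration of how such a three-oxide screening rule and the Type I/Type II error bookkeeping described above could be applied, the following minimal sketch uses the cutoff values quoted in the abstract with a hypothetical, hand-made list of glasses; the real ComPro data are not reproduced here.

```python
# Illustrative sketch (hypothetical data, not the ComPro database itself) of the
# three-oxide partition rule quoted above and of Type I / Type II error counting.
ACCEPTABLE_NL_B = 10.0  # g/L PCT boundary used in the study

def predicted_acceptable(glass):
    """Composition-based rule from the JMP partitioning (oxide wt%)."""
    return (glass["Al2O3"] > 3.75
            and glass["CaO"] >= 0.616
            and glass["MgO"] < 3.521)

def classification_errors(glasses):
    """Count Type I (acceptable glass rejected) and Type II (unacceptable glass
    accepted) errors for dicts holding oxide wt% and measured NL[B] in g/L."""
    type1 = type2 = 0
    for g in glasses:
        durable = g["NL_B"] < ACCEPTABLE_NL_B
        accepted = predicted_acceptable(g)
        if durable and not accepted:
            type1 += 1
        elif not durable and accepted:
            type2 += 1
    return type1, type2

# toy example
glasses = [
    {"Al2O3": 4.2, "CaO": 1.0, "MgO": 1.5, "NL_B": 0.9},   # durable, accepted
    {"Al2O3": 3.0, "CaO": 1.2, "MgO": 0.5, "NL_B": 2.1},   # durable, rejected -> Type I
    {"Al2O3": 5.0, "CaO": 0.8, "MgO": 1.0, "NL_B": 14.0},  # non-durable, accepted -> Type II
]
print(classification_errors(glasses))  # (1, 1)
```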
National Nutrient Database for Standard Reference - Find Nutrient Value of Common Foods by Nutrient
Creep and creep-recovery of a thermoplastic resin and composite
NASA Technical Reports Server (NTRS)
Hiel, Clem
1988-01-01
The database on advanced thermoplastic composites currently available to industry contains little data on creep and viscoelastic behavior. This behavior is nevertheless considered important, particularly for extended-service reliability in structural applications. The creep deformation of a specific thermoplastic resin and composite is reviewed. The problem of relating the data obtained on the resin to the data obtained on the composite is discussed.
Korošec, Peter; Eftimov, Tome; Ocke, Marga; van der Laan, Jan; Roe, Mark; Berry, Rachel; Turrini, Aida; Krems, Carolin; Slimani, Nadia; Finglas, Paul
2018-01-01
This paper identifies the requirements for computer-supported food matching, in order to address not only national and European but also international current related needs and represents an integrated research contribution of the FP7 EuroDISH project. The available classification and coding systems and the specific problems of food matching are summarized and a new concept for food matching based on optimization methods and machine-based learning is proposed. To illustrate and test this concept, a study has been conducted in four European countries (i.e., Germany, The Netherlands, Italy and the UK) using different classification and coding systems. This real case study enabled us to evaluate the new food matching concept and provide further recommendations for future work. In the first stage of the study, we prepared subsets of food consumption data described and classified using different systems, that had already been manually matched with national food composition data. Once the food matching algorithm was trained using this data, testing was performed on another subset of food consumption data. Experts from different countries validated food matching between consumption and composition data by selecting best matches from the options given by the matching algorithm without seeing the result of the previously made manual match. The evaluation of study results stressed the importance of the role and quality of the food composition database as compared to the selected classification and/or coding systems and the need to continue compiling national food composition data as eating habits and national dishes still vary between countries. Although some countries managed to collect extensive sets of food consumption data, these cannot be easily matched with food composition data if either food consumption or food composition data are not properly classified and described using any classification and coding systems. The study also showed that the level of human expertise played an important role, at least in the training stage. Both sets of data require continuous development to improve their quality in dietary assessment. PMID:29601516
Gamma-Ray Signatures Improvement of the EURITRACK Tagged Neutron Inspection System Database
NASA Astrophysics Data System (ADS)
Kanawati, Wassila El; Carasco, Cedric; Perot, Bertrand; Mariani, Alain; Raoux, Anne-Cecile; Valkovic, Vladivoj; Sudac, Davorin; Obhodas, Jasmina; Baricevic, Martina
2010-10-01
The EURopean Illicit TRAfficking Countermeasures Kit (EURITRACK) inspection system uses 14 MeV neutrons produced by the D(T,nα) reaction to detect explosives in cargo containers. Reactions induced by fast neutrons inside the container produce gamma rays, which are detected in coincidence with the associated alpha particle, the detection of which allows the neutron direction to be determined. The neutron path length is obtained from a neutron time-of-flight measurement, thus allowing the origin of the gamma rays inside the container to be determined, while the chemical composition of the target material is correlated with their energy spectrum. Gamma-ray spectra have been collected with the inspection portal equipped with large-volume NaI(Tl) detectors, in order to build a database of signatures for various elements (C, O, N, Fe, Pb, Al, Na, Si, Cl, Cu, Zn) with a low energy threshold of 0.6 MeV. The spectra are compared with previous ones, which were acquired with a 1.35 MeV threshold. The new library is currently being tested to unfold the energy spectra of transported goods into elemental contributions. Results are compared with data processed with the old 1.35 MeV threshold database, thus illustrating the improvement for material identification.
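The localisation principle described above (the alpha particle tags the neutron direction, and the time of flight gives the depth of the gamma-ray-producing interaction) can be sketched in a few lines. This is an illustrative geometry only, not EURITRACK processing code; the 14 MeV neutron speed of roughly 5.1 cm/ns and the neglect of the gamma-ray flight time are simplifying assumptions.

```python
# Illustrative sketch of the tagged-neutron localisation principle described above
# (not EURITRACK code): the alpha direction tags the neutron direction, and the
# neutron time of flight gives the depth at which the gamma ray was produced.
import numpy as np

V_NEUTRON_CM_PER_NS = 5.1  # approximate speed of a 14 MeV neutron (assumed)

def interaction_point(alpha_direction, tof_ns, source=np.zeros(3)):
    """Estimate where in the container the gamma ray originated.

    alpha_direction : 3-vector of the detected alpha particle; the associated
                      neutron is emitted in (approximately) the opposite direction.
    tof_ns          : neutron time of flight in nanoseconds (gamma flight time
                      neglected in this simple sketch).
    """
    n_dir = -np.asarray(alpha_direction, dtype=float)
    n_dir /= np.linalg.norm(n_dir)
    return source + n_dir * V_NEUTRON_CM_PER_NS * tof_ns

# toy example: alpha detected along -z, so the neutron flies along +z for 20 ns
print(interaction_point([0.0, 0.0, -1.0], 20.0))  # ~[0, 0, 102] cm
```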
A comprehensive and scalable database search system for metaproteomics.
Chatterjee, Sandip; Stupp, Gregory S; Park, Sung Kyu Robin; Ducom, Jean-Christophe; Yates, John R; Su, Andrew I; Wolan, Dennis W
2016-08-16
Mass spectrometry-based shotgun proteomics experiments rely on accurate matching of experimental spectra against a database of protein sequences. Existing computational analysis methods are limited in the size of their sequence databases, which severely restricts the proteomic sequencing depth and functional analysis of highly complex samples. The growing amount of public high-throughput sequencing data will only exacerbate this problem. We designed a broadly applicable metaproteomic analysis method (ComPIL) that addresses protein database size limitations. Our approach to overcome this significant limitation in metaproteomics was to design a scalable set of sequence databases assembled for optimal library querying speeds. ComPIL was integrated with a modified version of the search engine ProLuCID (termed "Blazmass") to permit rapid matching of experimental spectra. Proof-of-principle analysis of human HEK293 lysate with a ComPIL database derived from high-quality genomic libraries was able to detect nearly all of the same peptides as a search with a human database (~500x fewer peptides in the database), with a small reduction in sensitivity. We were also able to detect proteins from the adenovirus used to immortalize these cells. We applied our method to a set of healthy human gut microbiome proteomic samples and showed a substantial increase in the number of identified peptides and proteins compared to previous metaproteomic analyses, while retaining a high degree of protein identification accuracy and allowing for a more in-depth characterization of the functional landscape of the samples. The combination of ComPIL with Blazmass allows proteomic searches to be performed with database sizes much larger than previously possible. These large database searches can be applied to complex meta-samples with unknown composition or proteomic samples where unexpected proteins may be identified. The protein database, proteomic search engine, and the proteomic data files for the 5 microbiome samples characterized and discussed herein are open source and available for use and additional analysis.
Szymaś, J; Gawroński, M
1993-01-01
The paper presents our experience in creating and using a multimedia database of examination questions and its management system. The system is implemented on IBM PC-compatible microcomputers and works under the NetWare 3.11 network system. The number of test questions now exceeds 2000. The package consists of two functionally separate programs: ASSISTANT, which is the administrator for the databases, and EXAMINATOR, which is the executive program. The system allows text files to be used and images to be added to each question, adjusted for display on standard graphics devices (VGA). The standard format of the result files makes it possible to process the results in order to estimate the distribution of answers and to find correlations between the results.
NASA Technical Reports Server (NTRS)
1996-01-01
The bibliography contains citations concerning techniques and results of testing metal matrix composites for fatigue and fracture. Methods include non-destructive testing techniques, and static and cyclic techniques for assessing compression, tensile, bending, and impact characteristics.
DEVELOPMENT OF A COMPOSITION DATABASE FOR SELECTED MULTICOMPONENT OILS
During any oil spill incident, the properties of the spilled oil, including its chemical composition, physical properties, and changes due to weathering, are immediately important. U.S. EPA is currently developing new models for application to environmental problems associated...
Gurinović, Mirjana; Milešević, Jelena; Kadvan, Agnes; Djekić-Ivanković, Marija; Debeljak-Martačić, Jasmina; Takić, Marija; Nikolić, Marina; Ranković, Slavica; Finglas, Paul; Glibetić, Maria
2016-02-15
Within the European Food Information Resource Network of Excellence (EuroFIR NoE; FP6) and the EuroFIR Nexus (FP7) project, a paucity of food composition databases (FCDB) in the Central Eastern Europe/Balkan (CEE/B) region was identified. As a member of EuroFIR NoE, the Centre of Research Excellence in Nutrition and Metabolism, Serbia, initiated the creation of the 1st online Serbian FCDB employing the EuroFIR quality framework and CEN Food Data Standard requirements, supporting capacity development and designing the web-based Food Composition Data Management (FCDM) software for FCDB building. The 1st online version of the Serbian FCDB was launched in 2007 and then extended with food composition data from other Balkan countries (Balkan Food Platform-Regional FCDB). All foods are indexed using the LanguaL Thesaurus and coded with the EFSA FoodEx2 coding system. To date, the upgraded Serbian FCDB, with 1046 foods and 129 traditional/common Serbian composite dishes, is a prerequisite for nutritional research in Serbia, the CEE/B region and wider Europe. Copyright © 2015 Elsevier Ltd. All rights reserved.
Upper Stage Engine Composite Nozzle Extensions
NASA Technical Reports Server (NTRS)
Valentine, Peter G.; Allen, Lee R.; Gradl, Paul R.; Greene, Sandra E.; Sullivan, Brian J.; Weller, Leslie J.; Koenig, John R.; Cuneo, Jacques C.; Thompson, James; Brown, Aaron;
2015-01-01
Carbon-carbon (C-C) composite nozzle extensions are of interest for use on a variety of launch vehicle upper stage engines and in-space propulsion systems. The C-C nozzle extension technology and test capabilities being developed are intended to support National Aeronautics and Space Administration (NASA) and United States Air Force (USAF) requirements, as well as broader industry needs. Recent and on-going efforts at the Marshall Space Flight Center (MSFC) are aimed at both (a) further developing the technology and databases for nozzle extensions fabricated from specific C-C materials, and (b) developing and demonstrating low-cost capabilities for testing composite nozzle extensions. At present, materials development work is concentrating on developing a database for lyocell-based C-C that can be used for upper stage engine nozzle extension design, modeling, and analysis efforts. Lyocell-based C-C behaves in a manner similar to rayon-based C-C, but does not have the environmental issues associated with the use of rayon. Future work will also further investigate technology and database gaps and needs for more-established polyacrylonitrile- (PAN-) based C-C's. As a low-cost means of being able to rapidly test and screen nozzle extension materials and structures, MSFC has recently established and demonstrated a test rig at MSFC's Test Stand (TS) 115 for testing subscale nozzle extensions with 3.5-inch inside diameters at the attachment plane. Test durations of up to 120 seconds have been demonstrated using oxygen/hydrogen propellants. Other propellant combinations, including the use of hydrocarbon fuels, can be used if desired. Another test capability being developed will allow the testing of larger nozzle extensions (13.5-inch inside diameters at the attachment plane) in environments more similar to those of actual oxygen/hydrogen upper stage engines. Two C-C nozzle extensions (one lyocell-based, one PAN-based) have been fabricated for testing with the larger-scale facility.
Java-based PACS and reporting system for nuclear medicine
NASA Astrophysics Data System (ADS)
Slomka, Piotr J.; Elliott, Edward; Driedger, Albert A.
2000-05-01
In medical imaging practice, images and reports often need to be reviewed and edited from many locations. We have designed and implemented a Java-based Remote Viewing and Reporting System (JaRRViS) for a nuclear medicine department, deployed as a web service at a fraction of the cost of dedicated PACS systems. The system can be extended to other imaging modalities. JaRRViS interfaces to the clinical patient databases of imaging workstations. Specialized nuclear medicine applets support interactive displays of data such as 3-D gated SPECT with all the necessary options, such as cine, filtering, dynamic lookup tables, and reorientation. The reporting module is implemented as a separate applet using the Java Foundation Classes (JFC) Swing Editor Kit and allows composition of multimedia reports after selection and annotation of appropriate images. The reports are stored on the server in HTML format. JaRRViS uses Java Servlets for the preparation and storage of final reports. The http links to the reports or to the patient's raw images with applets can be obtained from JaRRViS by any Hospital Information System (HIS) via standard queries. Such links can be sent via e-mail or included as text fields in any HIS database, providing direct access to the patient reports and images via standard web browsers.
Update of NDL’s list of key foods based on the 2007-2008 WWEIA-NHANES
USDA-ARS?s Scientific Manuscript database
The Nutrient Data Laboratory is responsible for developing authoritative nutrient databases that contain a wide range of food composition values of the nation's food supply. This requires updating and revising the USDA Nutrient Database for Standard Reference (SR) and developing various special int...
Development and Testing of Carbon-Carbon Nozzle Extensions for Upper Stage Liquid Rocket Engines
NASA Technical Reports Server (NTRS)
Valentine, Peter G.; Gradl, Paul R.; Greene, Sandra E.
2017-01-01
Carbon-carbon (C-C) composite nozzle extensions are of interest for use on a variety of launch vehicle upper stage engines and in-space propulsion systems. The C-C nozzle extension technology and test capabilities being developed are intended to support National Aeronautics and Space Administration (NASA) and Department of Defense (DOD) requirements, as well as those of the broader Commercial Space industry. For NASA, C-C nozzle extension technology development primarily supports the NASA Space Launch System (SLS) and NASA's Commercial Space partners. Marshall Space Flight Center (MSFC) efforts are aimed at both (a) further developing the technology and databases needed to enable the use of composite nozzle extensions on cryogenic upper stage engines, and (b) developing and demonstrating low-cost capabilities for testing and qualifying composite nozzle extensions. Recent, on-going, and potential future work supporting NASA, DOD, and Commercial Space needs will be discussed. Information to be presented will include (a) recent and on-going mechanical, thermal, and hot-fire testing, as well as (b) potential future efforts to further develop and qualify domestic C-C nozzle extension solutions for the various upper stage engines under development.
NASA Astrophysics Data System (ADS)
Stephen, N. R.
2016-08-01
IR spectroscopy is used to infer composition of extraterrestrial bodies, comparing bulk spectra to databases of separate mineral phases. We extract spatially resolved meteorite-specific spectra from achondrites with respect to zonation and orientation.
The MELISSA food data base: space food preparation and process optimization
NASA Astrophysics Data System (ADS)
Creuly, Catherine; Poughon, Laurent; Pons, A.; Farges, Berangere; Dussap, Claude-Gilles
Life support systems have to deal with the air, water and food requirements of a crew, with waste management, and also with the crew's habitability and safety constraints. Food can be provided from stocks (open loops) or produced during the space flight or on an extraterrestrial base (which usually implies a closed-loop system). Finally, it is admitted that only biological processes can fulfil the food requirement of a life support system. Today, only a strictly vegetarian source range is considered, and this is limited to a very small number of crops compared to the variety available on Earth. Despite these constraints, a successful diet should have enough variety in terms of ingredients and recipes, sufficiently high acceptability in terms of acceptance ratings for individual dishes to remain interesting and palatable over a period of several months, and an adequate level of nutrients commensurate with the space nutritional requirements. In addition to the nutritional aspects, other parameters have to be considered for the pertinent selection of the dishes, such as energy consumption (for food production and transformation), quantity of generated waste, preparation time, and food processes. This work concerns a global approach, called the MELISSA Food Database, to facilitate the creation and the management of these menus subject to the nutritional, mass, energy and time constraints. The MELISSA Food Database is composed of a database (MySQL based) containing multiple types of information, among others crew composition, menus, dishes, recipes, plant and nutritional data, and of a web interface (PHP based) to interactively access the database and manage its content. In its current version, a crew is defined and a 10-day menu scenario can be created using dishes that could be cooked from a limited set of fresh plants assumed to be produced in the life support system. The nutritional coverage, waste produced, and mass, time and energy requirements are calculated, allowing evaluation of the menu scenario and its interactions with the life support system, and the database is filled with information on food processes and equipment suitable for use in an Advanced Life Support System. The MELISSA database is available on the server of the University Blaise Pascal (Clermont Université), with authorized access, at the address http://marseating.univ-bpclermont.fr. In the future, the challenge is to complete this database with specific data related to the MELISSA project. Plant chambers in the pilot plant located at the Universitat Autònoma de Barcelona will provide nutritional and process data on crop cultivation.
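A minimal sketch of the kind of nutritional-coverage check described above is given below. The data model, dish names and requirement values are hypothetical and are not taken from the MELISSA Food Database schema.

```python
# Minimal sketch (hypothetical data model, not the MELISSA schema) of how a menu
# scenario's nutritional coverage could be checked against crew requirements.
daily_requirements = {"energy_kJ": 11000, "protein_g": 90}  # assumed values per crew member

dishes = {
    "lentil stew":  {"energy_kJ": 2500, "protein_g": 22},
    "wheat bread":  {"energy_kJ": 1100, "protein_g": 9},
    "soy porridge": {"energy_kJ": 1800, "protein_g": 25},
}

def coverage(menu, crew_size=1):
    """Return the fraction of each daily requirement covered by a list of dishes."""
    totals = {k: 0.0 for k in daily_requirements}
    for dish in menu:
        for nutrient, amount in dishes[dish].items():
            totals[nutrient] += amount
    return {k: totals[k] / (daily_requirements[k] * crew_size) for k in totals}

print(coverage(["lentil stew", "wheat bread", "soy porridge", "lentil stew"]))
```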
Acheson, R J; Woerner, D R; Martin, J N; Belk, K E; Engle, T E; Brown, T R; Brooks, J C; Luna, A M; Thompson, L D; Grimes, H L; Arnold, A N; Savell, J W; Gehring, K B; Douglass, L W; Howe, J C; Patterson, K Y; Roseland, J M; Williams, J R; Cifelli, A; Leheska, J M; McNeill, S H
2015-12-01
Beef nutrition research has become increasingly important domestically and internationally for the beef industry and its consumers. The objective of this study was to analyze the nutrient composition of ten beef loin and round cuts to update the nutrient data in the USDA National Nutrient Database for Standard Reference. Seventy-two carcasses representing a national composite of Yield Grade, Quality Grade, sex classification, and genetic type were identified from six regions across the U.S. Beef short loins, strip loins, tenderloins, inside rounds, and eye of rounds (NAMP # 173, 175, 190A, 169A, and 171C) were collected from the selected carcasses and shipped to three university meat laboratories for storage, retail fabrication, and raw/cooked analysis of nutrients. Sample homogenates from each animal were analyzed for proximate composition. These data provide updated information regarding the nutrient status of beef, in addition to determining the influence of Quality Grade, Yield Grade, and sex classification on nutrient composition. Copyright © 2015. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Fallah-Mehrjardi, Ata; Hidayat, Taufiq; Hayes, Peter C.; Jak, Evgueni
2017-12-01
Experimental studies were undertaken to determine the gas/slag/matte/tridymite equilibria in the Cu-Fe-O-S-Si system at 1473 K (1200 °C), P(SO2) = 0.25 atm, and a range of P(O2) values. The experimental methodology involved high-temperature equilibration using a substrate support technique in controlled gas atmospheres (CO/CO2/SO2/Ar), rapid quenching of the equilibrium phases, followed by direct measurement of the chemical compositions of the phases with Electron Probe X-ray Microanalysis (EPMA). The experimental data for slag and matte are presented as a function of the copper concentration in matte (matte grade). The data provided are essential for the evaluation of the effect of oxygen potential, under controlled atmosphere, on the matte grade, the liquidus composition of the slag and the chemically dissolved copper in slag. The new data provide an important, accurate and reliable quantitative foundation for the improvement of thermodynamic databases for copper-containing systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steffens, H.D.; Kern, H.; Janczak, J.
The potential benefits and the current state-of-the-art in MMCs will be presented through a discussion of their processing and related aspects. The advantages and limitations of the most common manufacturing techniques for fiber-reinforced metals, e.g. realized property potential and commercial possibilities, will be outlined. Emphasis will be given to novel powder metallurgy techniques such as rapid solidification (e.g. atomization techniques and plasma processes) and new materials systems (e.g. intermetallic matrix composites). The technical barriers which prevent the transition of MMCs from aerospace to a wider range of applications will be highlighted. Special attention will be drawn to the relation between processing parameters, the fiber-matrix interface and composite properties. The challenge of composite modeling and design, as well as interface control for successful processing and utilization of MMCs, will be mentioned. The benefits of using computer techniques (databases, simulations, knowledge-based systems) to aid composite design and process control (fuzzy logic) will be shown through several examples. The technical possibilities of adapting interface tailoring approaches from the PMC area, such as graded interphases or rubber-bumper interfaces, will be studied. In addition, on the basis of recent forecasts by different experts on composite materials, the question of the future of MMCs will be discussed. Do they have a chance in the next few years to meet the requirements of successful commercial applications, especially those of clients? The problems which have to be solved and options for solutions will be dealt with.
Corpus-based Statistical Screening for Phrase Identification
Kim, Won; Wilbur, W. John
2000-01-01
Purpose: The authors study the extraction of useful phrases from a natural language database by statistical methods. The aim is to leverage human effort by providing preprocessed phrase lists with a high percentage of useful material. Method: The approach is to develop six different scoring methods that are based on different aspects of phrase occurrence. The emphasis here is not on lexical information or syntactic structure but rather on the statistical properties of word pairs and triples that can be obtained from a large database. Measurements: The Unified Medical Language System (UMLS) incorporates a large list of humanly acceptable phrases in the medical field as a part of its structure. The authors use this list of phrases as a gold standard for validating their methods. A good method is one that ranks the UMLS phrases high among all phrases studied. Measurements are 11-point average precision values and precision-recall curves based on the rankings. Result: The authors find that each of the six different scoring methods proves effective in identifying UMLS-quality phrases in a large subset of MEDLINE. These methods are applicable both to word pairs and word triples. All six methods are optimally combined to produce composite scoring methods that are more effective than any single method. The quality of the composite methods appears sufficient to support the automatic placement of hyperlinks in text at the site of highly ranked phrases. Conclusion: Statistical scoring methods provide a promising approach to the extraction of useful phrases from a natural language database for the purpose of indexing or providing hyperlinks in text. PMID:10984469
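The six scores themselves are not reproduced in this abstract; as a simple illustration of a statistic computed from word-pair occurrences, the sketch below ranks candidate two-word phrases by a pointwise-mutual-information-style score over a toy corpus. It is meant only to convey the flavour of corpus-based phrase screening, not the authors' actual methods.

```python
# Illustrative sketch: one simple statistic over word-pair occurrences (a
# PMI-style score), in the spirit of the corpus statistics described above.
# The six scores used by the authors are not reproduced here.
import math
from collections import Counter

docs = [
    "blood pressure measurement in clinical practice",
    "high blood pressure and heart disease",
    "pressure vessel design",
]

unigrams, bigrams = Counter(), Counter()
for doc in docs:
    tokens = doc.split()
    unigrams.update(tokens)
    bigrams.update(zip(tokens, tokens[1:]))

n_uni = sum(unigrams.values())
n_bi = sum(bigrams.values())

def pmi(pair):
    p_xy = bigrams[pair] / n_bi
    p_x = unigrams[pair[0]] / n_uni
    p_y = unigrams[pair[1]] / n_uni
    return math.log(p_xy / (p_x * p_y))

ranked = sorted(bigrams, key=pmi, reverse=True)
print(ranked[:3])  # highest-scoring candidate two-word phrases
```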
Gaspari, Marco; Chiesa, Luca; Nicastri, Annalisa; Gabriele, Caterina; Harper, Valeria; Britti, Domenico; Cuda, Giovanni; Procopio, Antonio
2016-12-06
The ability of tandem mass spectrometry to determine the primary structure of proteolytic peptides can be exploited to trace back the organisms from which the corresponding proteins were extracted. This information can be important when food products, such as protein powders, can be supplemented with lower-quality starting materials. In order to dissect the origin of proteinaceous material composing a given unknown mixture, a two-step database search strategy for bottom-up nanoscale liquid chromatography-tandem mass spectrometry (nanoLC-MS/MS) data was implemented. A single nanoLC-MS/MS analysis was sufficient not only to determine the qualitative composition of the mixtures under examination, but also to assess the relative percent composition of the various proteomes, if dedicated calibration curves were previously generated. The approach of two-step database search for qualitative analysis and proteome total ion current (pTIC) calculation for quantitative analysis was applied to several binary and ternary mixtures which mimic the composition of milk replacers typically used in calf feeding.
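A minimal sketch of the relative-quantification idea described above follows: ion current from peptides assigned to each source proteome is summed and expressed as a relative percentage. The data and proteome labels are hypothetical, and in practice such percentages are mapped to actual mass fractions through the dedicated calibration curves mentioned in the abstract.

```python
# Minimal sketch (assumed data, not the authors' pipeline) of proteome total ion
# current (pTIC): summing the ion current of peptides assigned to each source
# proteome and expressing it as a relative percentage of the mixture.
peptide_hits = [
    # (assigned proteome, precursor ion current) -- hypothetical values
    ("Bos taurus (milk)", 4.2e8),
    ("Glycine max (soy)", 1.1e8),
    ("Bos taurus (milk)", 2.7e8),
    ("Triticum aestivum (wheat)", 0.4e8),
]

def ptic_percentages(hits):
    totals = {}
    for proteome, tic in hits:
        totals[proteome] = totals.get(proteome, 0.0) + tic
    grand_total = sum(totals.values())
    return {p: 100.0 * t / grand_total for p, t in totals.items()}

for proteome, pct in ptic_percentages(peptide_hits).items():
    print(f"{proteome}: {pct:.1f} %")
```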
NASA Astrophysics Data System (ADS)
Klotz, Bradley W.; Jiang, Haiyan
2016-10-01
A 12 year global database of rain-corrected satellite scatterometer surface winds for tropical cyclones (TCs) is used to produce composites of TC surface wind speed distributions relative to vertical wind shear and storm motion directions in each TC-prone basin and various TC intensity stages. These composites corroborate ideas presented in earlier studies, where maxima are located right of motion in the Earth-relative framework. The entire TC surface wind asymmetry is down motion left for all basins and for lower strength TCs after removing the motion vector. Relative to the shear direction, the motion-removed composites indicate that the surface wind asymmetry is located down shear left for the outer region of all TCs, but for the inner-core region it varies from left of shear to down shear right for different basin and TC intensity groups. Quantification of the surface wind asymmetric structure in further stratifications is a necessary next step for this scatterometer data set.
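A minimal sketch of how observations could be rotated into a shear-relative frame and composited by quadrant is shown below. It assumes compass azimuths and a known deep-layer shear heading; it is an illustration of the compositing idea, not the authors' processing chain.

```python
# Illustrative sketch (not the authors' processing) of building a shear-relative
# composite: observation azimuths are rotated so the shear vector defines zero,
# then surface winds are averaged by quadrant.
import numpy as np

def shear_relative_quadrants(obs_azimuth_deg, wind_speed, shear_heading_deg):
    """Average wind speed in the four shear-relative quadrants.

    obs_azimuth_deg   : compass azimuth of each observation relative to the storm centre
    wind_speed        : surface wind speed of each observation
    shear_heading_deg : compass direction the deep-layer shear vector points towards
    """
    rel = (np.asarray(obs_azimuth_deg) - shear_heading_deg) % 360.0
    quadrant = (rel // 90).astype(int)  # 0..3, starting clockwise from the shear vector
    labels = ["downshear-right", "upshear-right", "upshear-left", "downshear-left"]
    speeds = np.asarray(wind_speed)
    return {labels[q]: speeds[quadrant == q].mean()
            for q in range(4) if np.any(quadrant == q)}

# toy example with a shear vector pointing toward 45 degrees
print(shear_relative_quadrants([50, 140, 230, 320], [35, 28, 22, 30], 45.0))
```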
MAGA, a new database of gas natural emissions: a collaborative web environment for collecting data.
NASA Astrophysics Data System (ADS)
Cardellini, Carlo; Chiodini, Giovanni; Frigeri, Alessandro; Bagnato, Emanuela; Frondini, Francesco; Aiuppa, Alessandro
2014-05-01
The data on volcanic and non-volcanic gas emissions available online are, as of today, incomplete and, most importantly, fragmentary. Hence, there is a need for common frameworks to aggregate the available data, in order to characterize and quantify the phenomena at various scales. A new and detailed web database (MAGA: MApping GAs emissions) has been developed, and recently improved, to collect data on carbon degassing from volcanic and non-volcanic environments. The MAGA database allows researchers to insert data interactively and dynamically into a spatially referenced relational database management system, as well as to extract data. MAGA kicked off with the database set-up and with the ingestion into the database of data from: i) a literature survey of publications on volcanic gas fluxes, including data on active crater degassing, diffuse soil degassing and fumaroles both from dormant closed-conduit volcanoes (e.g., Vulcano, Phlegrean Fields, Santorini, Nisyros, Teide, etc.) and open-vent volcanoes (e.g., Etna, Stromboli, etc.) in the Mediterranean area and the Azores, and ii) the revision and update of the Googas database on non-volcanic emissions of the Italian territory (Chiodini et al., 2008), in the framework of the Deep Earth Carbon Degassing (DECADE) research initiative of the Deep Carbon Observatory (DCO). For each geo-located gas emission site, the database holds images and a description of the site and of the emission type (e.g., diffuse emission, plume, fumarole, etc.), gas chemical-isotopic composition (when available), gas temperature and gas flux magnitude. Gas sampling, analysis and flux measurement methods are also reported, together with references and contacts for researchers expert on each site. In this phase, data can be accessed over the network from a web interface; data-driven web services, through which software clients can request data directly from the database, are planned to be implemented shortly. In this way, Geographical Information Systems (GIS) and Virtual Globes (e.g., Google Earth) could easily access the database, and data could be exchanged with other databases. At the moment the database includes: i) more than 1000 flux data points on volcanic plume degassing from the Etna and Stromboli volcanoes; ii) data from ~30 sites of diffuse soil degassing from the Neapolitan volcanoes, the Azores, the Canary Islands, Etna, Stromboli, and Vulcano Island, and several data on fumarolic emissions (~7 sites) with CO2 fluxes; and iii) data from ~270 non-volcanic gas emission sites in Italy. We believe the MAGA database is an important starting point to develop a large-scale, expandable database aimed to excite, inspire, and encourage participation among researchers. In addition, the possibility of archiving location and qualitative information for gas emissions/sites not yet investigated could stimulate the scientific community towards future research and will provide an indication of the current uncertainty in global estimates of deep carbon fluxes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, A.E.; Tschanz, J.; Monarch, M.
1996-05-01
The Air Quality Utility Information System (AQUIS) is a database management system that operates under dBASE IV. It runs on an IBM-compatible personal computer (PC) with MS DOS 5.0 or later, 4 megabytes of memory, and 30 megabytes of disk space. AQUIS calculates emissions for both traditional and toxic pollutants and reports emissions in user-defined formats. The system was originally designed for use at 7 facilities of the Air Force Materiel Command, and now more than 50 facilities use it. Within the last two years, the system has been used in support of Title V permit applications at Department of Defense facilities. Growth in the user community, changes and additions to reference emission factor data, and changing regulatory requirements have demanded additions and enhancements to the system. These changes have ranged from adding or updating an emission factor to restructuring databases and adding new capabilities. Quality assurance (QA) procedures have been developed to ensure that emission calculations are correct even when databases are reconfigured and major changes in calculation procedures are implemented. This paper describes these QA and updating procedures. Some user facilities include light industrial operations associated with aircraft maintenance. These facilities have operations such as fiberglass and composite layup and plating operations for which standard emission factors are not available or are inadequate. In addition, generally applied procedures such as material balances may need special treatment to work in an automated environment, for example, in the use of oils and greases and when materials such as polyurethane paints react chemically during application. Some techniques used in these situations are highlighted here. To provide a framework for the main discussions, this paper begins with a description of AQUIS.
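As a generic illustration of the material-balance approach mentioned above (not the AQUIS implementation), the sketch below estimates VOC emissions from material usage, VOC content and a retained/captured fraction; the function and the numbers are hypothetical.

```python
# Generic illustration (not the AQUIS implementation) of the material-balance
# approach mentioned above: VOC emissions estimated from material usage, VOC
# content, and the fraction retained in the product or captured by controls.
def material_balance_emissions(usage_kg, voc_weight_fraction, retained_fraction=0.0):
    """Return estimated VOC emissions in kg.

    usage_kg            : mass of coating/solvent used in the period
    voc_weight_fraction : VOC content of the material (0-1)
    retained_fraction   : fraction of VOC retained in the cured film or captured
                          by a control device (0-1); materials that react during
                          application (e.g. polyurethane paints) would need a
                          different, product-specific factor.
    """
    return usage_kg * voc_weight_fraction * (1.0 - retained_fraction)

# toy example: 250 kg of a 40 wt% VOC coating, 20% retained/captured
print(material_balance_emissions(250.0, 0.40, 0.20))  # 80.0 kg VOC
```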
Mosecker, Linda; Saeed-Akbari, Alireza
2013-01-01
Nitrogen in austenitic stainless steels and its effect on the stacking fault energy (SFE) has been the subject of intense discussion in the literature. Until today, no generally accepted method for SFE calculation exists that can be applied to a wide range of chemical compositions in these systems. Besides the different types of models in use, from first-principles to thermodynamics-based approaches, one main reason is the general lack of experimentally measured SFE values for these steels. Moreover, the respective studies analyzed not only different alloying systems but also different domains of nitrogen content, resulting in contrary conclusions on the effect of nitrogen on the SFE. This work gives a review of the current state of SFE calculation by computational thermodynamics for the Fe–Cr–Mn–N system. An assessment of the thermodynamic effective Gibbs free energy model, ΔG^{γ→ε}, for the γ→ε phase transformation, considering existing data from different literature and commercial databases, is given. Furthermore, we introduce the application of a non-constant, composition-dependent interfacial energy, σ^{γ/ε}, required to consider the effect of nitrogen on the SFE in these systems. PMID:27877573
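For context, thermodynamic SFE models in this literature are commonly written in the Olson-Cohen form shown below; this is given as general orientation under the assumption that the assessed model follows this structure, and the exact formulation used by the authors may differ in detail.

```latex
% Olson-Cohen type expression for the stacking fault energy (SFE) of an
% austenitic steel: the intrinsic stacking fault is treated as two atomic
% layers of epsilon-martensite bounded by two gamma/epsilon interfaces.
\Gamma_{\mathrm{SFE}} \;=\; 2\,\rho\,\Delta G^{\gamma\rightarrow\varepsilon} \;+\; 2\,\sigma^{\gamma/\varepsilon},
\qquad
\rho \;=\; \frac{4}{\sqrt{3}}\,\frac{1}{a^{2} N_{\mathrm{A}}}
```

where ρ is the molar surface density of atoms in the {111} planes, a the lattice parameter, N_A Avogadro's number, ΔG^{γ→ε} the effective molar Gibbs energy of the γ→ε transformation, and σ^{γ/ε} the interfacial energy, treated as composition dependent in this work.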
Critical evaluation and thermodynamic optimization of the Iron-Rare-Earth systems
NASA Astrophysics Data System (ADS)
Konar, Bikram
Rare-earth (RE) elements, by virtue of their distinctive magnetic, electronic and chemical properties, are gaining importance in power, electronics, telecommunications and sustainable green technology industries. Magnets made from RE alloys are more powerful than conventional magnets and offer greater longevity and high-temperature workability. The imbalance between rare-earth element supply and demand has increased the importance of recycling and extracting REEs from used permanent magnets. However, the lack of thermodynamic data on RE alloys has made it difficult to design an effective extraction and recycling process. In this regard, computational thermodynamic calculations can serve as a cost-effective and time-saving tool for designing a waste magnet recycling process. The most common RE permanent magnet is the Nd magnet (Nd2Fe14B). Various elements such as Dy, Tb, Pr, Cu, Co and Ni are also added to improve its magnetic and mechanical properties. In order to perform reliable thermodynamic calculations for the RE recycling process, an accurate thermodynamic database for RE and related alloys is required. Such a thermodynamic database can be developed using the so-called CALPHAD method. Database development based on the CALPHAD method is essentially the critical evaluation and optimization of all available thermodynamic and phase diagram data. As a result, one set of self-consistent thermodynamic functions for all phases in the given system can be obtained, which can reproduce all reliable thermodynamic and phase diagram data. The database containing the optimized Gibbs energy functions can be used to calculate complex chemical reactions for any high-temperature process. Typically, a Gibbs energy minimization routine, such as that in the FactSage software, is used to obtain the accurate thermodynamic equilibrium in multicomponent systems. As part of a larger thermodynamic database development for permanent magnet recycling and Mg alloy design, all thermodynamic and phase diagram data in the literature for the fourteen Fe-RE binary systems (Fe-La, Fe-Ce, Fe-Pr, Fe-Nd, Fe-Sm, Fe-Gd, Fe-Tb, Fe-Dy, Fe-Ho, Fe-Er, Fe-Tm, Fe-Lu, Fe-Sc and Fe-Y) are critically evaluated and optimized to obtain thermodynamic model parameters. The model parameters can be used to calculate phase diagrams and Gibbs energies of all phases as functions of temperature and composition. This database can be incorporated with the present thermodynamic database in the FactSage software to perform complex chemical reaction and phase diagram calculations for the RE magnet recycling process.
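As a minimal illustration of the kind of Gibbs energy description that a CALPHAD optimization adjusts, the sketch below evaluates a binary substitutional solution phase with a single Redlich-Kister interaction parameter; the parameter values are generic placeholders, not the optimized Fe-RE coefficients.

```python
# Minimal sketch (generic regular-solution example, not the actual Fe-RE
# parameters) of the kind of Gibbs energy description a CALPHAD assessment
# optimizes: reference term + ideal mixing + a 0th-order Redlich-Kister excess.
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def gibbs_energy(x_b, temp_k, g_a=0.0, g_b=0.0, l0=-15000.0):
    """Molar Gibbs energy of an A-B solution phase (J/mol)."""
    x_a = 1.0 - x_b
    reference = x_a * g_a + x_b * g_b                          # end-member term
    ideal = R * temp_k * (x_a * np.log(x_a) + x_b * np.log(x_b))  # ideal mixing
    excess = l0 * x_a * x_b                                    # interaction term
    return reference + ideal + excess

x = np.linspace(0.01, 0.99, 5)
print(gibbs_energy(x, 1000.0))   # Gibbs energy along the composition axis at 1000 K
```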
Using Web-based Tutorials To Enhance Library Instruction.
ERIC Educational Resources Information Center
Kocour, Bruce G.
2000-01-01
Describes the development of a Web site for library instruction at Carson-Newman College (TN) and its integration into English composition courses. Describes the use of a virtual tour, a tutorial on database searching, tutorials on specific databases, and library guides to specific disciplines to create an effective mechanism for active learning.…
NASA Technical Reports Server (NTRS)
Beeson, Harold D.; Davis, Dennis D.; Ross, William L., Sr.; Tapphorn, Ralph M.
2002-01-01
This document represents efforts accomplished at the NASA Johnson Space Center White Sands Test Facility (WSTF) in support of the Enhanced Technology for Composite Overwrapped Pressure Vessels (COPV) Program, a joint research and technology effort among the U.S. Air Force, NASA, and the Aerospace Corporation. WSTF performed testing for several facets of the program. Testing that contributed to the Task 3.0 COPV database extension objective included baseline structural strength, failure mode and safe-life, impact damage tolerance, sustained load/impact effect, and materials compatibility. WSTF was also responsible for establishing impact protection and control requirements under Task 8.0 of the program. This included developing a methodology for establishing an impact control plan. Seven test reports detail the work done at WSTF. As such, this document contributes to the database of information regarding COPV behavior that will ensure performance benefits and safety are maintained throughout vessel service life.
Latest developments for the IAGOS database: Interoperability and metadata
NASA Astrophysics Data System (ADS)
Boulanger, Damien; Gautron, Benoit; Thouret, Valérie; Schultz, Martin; van Velthoven, Peter; Broetz, Bjoern; Rauthe-Schöch, Armin; Brissebrat, Guillaume
2014-05-01
In-service Aircraft for a Global Observing System (IAGOS, http://www.iagos.org) aims at providing long-term, frequent, regular, accurate, and spatially resolved in situ observations of atmospheric composition. IAGOS observation systems are deployed on a fleet of commercial aircraft. The IAGOS database is an essential part of the global atmospheric monitoring network. Data access is handled through an open access policy based on the submission of research requests, which are reviewed by the PIs. Users can access the data through the following web sites: http://www.iagos.fr or http://www.pole-ether.fr, as the IAGOS database is part of the French atmospheric chemistry data centre ETHER (CNES and CNRS). The database is in continuous development and improvement. In the framework of the IGAS project (IAGOS for GMES/COPERNICUS Atmospheric Service), major achievements will be reached, such as metadata and format standardisation in order to interoperate with international portals and other databases, QA/QC procedures and traceability, CARIBIC (Civil Aircraft for the Regular Investigation of the Atmosphere Based on an Instrument Container) data integration within the central database, and real-time data transmission. IGAS work package 2 aims at providing IAGOS data to users in a standardized format, including the necessary metadata and information on data processing, data quality and uncertainties. We are currently redefining and standardizing the IAGOS metadata for interoperable use within GMES/Copernicus. The metadata are compliant with the ISO 19115, INSPIRE and NetCDF-CF conventions. IAGOS data will be provided to users in NetCDF or NASA Ames format. We are also implementing interoperability between all the involved IAGOS data services, including the central IAGOS database, the former MOZAIC and CARIBIC databases, the Aircraft Research DLR database and the Jülich WCS web application JOIN (Jülich OWS Interface), which combines model outputs with in situ data for intercomparison. The optimal data transfer protocol is being investigated to ensure interoperability. To facilitate satellite and model validation, tools will be made available for co-location and comparison with IAGOS. We will enhance the JOIN application in order to properly display aircraft data as vertical profiles and along individual flight tracks and to allow for graphical comparison to model results that are accessible through interoperable web services, such as the daily products from the GMES/Copernicus atmospheric service.
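As a small illustration of CF-style metadata of the kind mentioned (standard_name, units, a global Conventions attribute), here is a hedged sketch using the Python netCDF4 library; the file name, variable choices, and values are hypothetical and do not represent the IGAS/IAGOS format specification.

```python
# Minimal sketch (not the IGAS implementation) of writing an aircraft ozone
# profile with CF-style metadata using the Python netCDF4 library.
# File name, variable names, and values are hypothetical.
from netCDF4 import Dataset
import numpy as np

with Dataset("iagos_profile_example.nc", "w", format="NETCDF4") as nc:
    nc.Conventions = "CF-1.6"                      # global convention tag
    nc.title = "Example IAGOS-like ozone profile"  # free-text metadata

    nc.createDimension("level", 50)
    pressure = nc.createVariable("air_pressure", "f4", ("level",))
    pressure.standard_name = "air_pressure"
    pressure.units = "Pa"

    ozone = nc.createVariable("mole_fraction_of_ozone_in_air", "f4", ("level",))
    ozone.standard_name = "mole_fraction_of_ozone_in_air"
    ozone.units = "1e-9"   # nmol/mol (ppbv)

    pressure[:] = np.linspace(100000.0, 20000.0, 50)
    ozone[:] = np.random.uniform(20.0, 80.0, 50)   # placeholder values
```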
Choosing an Optimal Database for Protein Identification from Tandem Mass Spectrometry Data.
Kumar, Dhirendra; Yadav, Amit Kumar; Dash, Debasis
2017-01-01
Database searching is the preferred method for protein identification from digital spectra of mass-to-charge ratios (m/z) recorded for protein samples by mass spectrometers. The search database is one of the major influencing factors in discovering proteins present in the sample and thus in deriving biological conclusions. In most cases the choice of search database is arbitrary. Here we describe common search databases used in proteomic studies and their impact on the final list of identified proteins. We also elaborate upon factors like composition and size of the search database that can influence the protein identification process. In conclusion, we suggest that the choice of the database depends on the type of inferences to be derived from proteomics data. However, making additional efforts to build a compact and concise database for a targeted question should generally be rewarding in achieving confident protein identifications.
NASA Technical Reports Server (NTRS)
1996-01-01
Preliminary design guidelines necessary to assure electromagnetic compatibility (EMC) of spacecraft using composite materials, are presented. A database of electrical properties of composite materials which may have an effect on EMC is established. The guidelines concentrate on the composites that are conductive but may require enhancement to be adequate for EMC purposes. These composites are represented by graphite reinforced polymers. Methods for determining adequate conductivity levels for various EMC purposes are defined, along with the methods of design which increase conductivity of composite materials and joints to adequate levels.
A statistical view of FMRFamide neuropeptide diversity.
Espinoza, E; Carrigan, M; Thomas, S G; Shaw, G; Edison, A S
2000-01-01
FMRFamide-like peptide (FLP) amino acid sequences have been collected and statistically analyzed. FLP amino acid composition as a function of position in the peptide is graphically presented for several major phyla. Results of total amino acid composition and frequencies of pairs of FLP amino acids have been computed and compared with corresponding values from the entire GenBank protein sequence database. The data for pairwise distributions of amino acids should help in future structure-function studies of FLPs. To aid in future peptide discovery, a computer program and search protocol were developed to identify FLPs from the GenBank protein database without the use of keywords.
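The statistics described (composition as a function of position and pairwise amino acid frequencies) can be illustrated with a short sketch; the peptide sequences below are hypothetical stand-ins, and this is not the authors' program.

```python
# Minimal sketch (an illustration, not the authors' program) of the kind of
# statistics described: per-position amino acid composition and amino acid
# pair frequencies for a set of FMRFamide-like peptide (FLP) sequences.
from collections import Counter

flps = ["GDPFLRF", "APGSLRF", "KHEYLRF", "APEASPFIRF"]  # hypothetical sequences

# Composition as a function of position counted from the C-terminus,
# since FLPs share a conserved C-terminal RFamide-like motif.
position_counts = {}
for seq in flps:
    for pos_from_cterm, residue in enumerate(reversed(seq)):
        position_counts.setdefault(pos_from_cterm, Counter())[residue] += 1

# Frequencies of adjacent amino acid pairs across all peptides.
pair_counts = Counter(seq[i:i + 2] for seq in flps for i in range(len(seq) - 1))

print(position_counts[0])        # residues at the C-terminal position
print(pair_counts.most_common(3))
```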
Design and Development of a Web-Based Self-Monitoring System to Support Wellness Coaching.
Zarei, Reza; Kuo, Alex
2017-01-01
We analyzed, designed and deployed a web-based self-monitoring system to support wellness coaching. A wellness coach can plan clients' exercise and diet through the system and is able to monitor the changes in body dimensions and body composition that the client reports. The system can also visualize the client's data in the form of graphs for both the client and the coach. Both parties can also communicate through the messaging feature embedded in the application. A reminder system is also incorporated and sends reminder messages to clients when their reporting is due. The web-based self-monitoring application uses Oracle 11g XE as the backend database and Application Express 4.2 as the user interface development tool. The system allowed users to access, update and modify data through a web browser anytime, anywhere, and on any device.
Variations in measured performance of CAD schemes due to database composition and scoring protocol
NASA Astrophysics Data System (ADS)
Nishikawa, Robert M.; Yarusso, Laura M.
1998-06-01
There is now a large effort towards developing computer-aided diagnosis (CAD) techniques. It is important to be able to compare the performance of different approaches in order to determine which are the most efficacious. There are currently a number of barriers preventing meaningful (statistical) comparisons, two of which are discussed in this paper: database composition and scoring protocol. We have examined how the choice of cases used to test a CAD scheme can affect its performance. We found that the sensitivity of our computer scheme varied between 100% and 77%, at a false-positive rate of 1.0 per image, with only a 100% change in the composition of the database. To evaluate the performance of a CAD scheme, the output of the computer must be graded. A number of different criteria are being used by different investigators. We have found that, for the same set of detection results, the measured sensitivity can be between 40% and 90% depending on the scoring methodology. Clearly, consensus must be reached on these two issues in order for the field to make rapid progress. As it stands now, it is not possible to make meaningful comparisons of different techniques.
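A toy example of how the scoring protocol alone can change measured sensitivity for identical detections might look like the following sketch; the coordinates and the two scoring criteria are hypothetical illustrations, not the protocols compared in the paper.

```python
# Minimal sketch (hypothetical, not the authors' protocol) showing how two
# different scoring criteria can yield different sensitivities for the same
# set of computer detections on the same images.
def sensitivity(detections, truths, criterion):
    """Fraction of true lesions credited as detected under a scoring criterion."""
    hits = 0
    for tx, ty, tr in truths:                      # truth: centre (tx, ty), radius tr
        for dx, dy in detections:                  # detection: marked point (dx, dy)
            dist = ((dx - tx) ** 2 + (dy - ty) ** 2) ** 0.5
            if criterion == "inside_lesion" and dist <= tr:
                hits += 1
                break
            if criterion == "within_2r" and dist <= 2 * tr:
                hits += 1
                break
    return hits / len(truths)

truths = [(10, 10, 3), (50, 50, 3), (80, 20, 3)]
detections = [(12, 14), (50, 51)]                  # same computer output scored two ways
print(sensitivity(detections, truths, "inside_lesion"))  # stricter rule -> lower value
print(sensitivity(detections, truths, "within_2r"))      # looser rule -> higher value
```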
NASA Astrophysics Data System (ADS)
Brennan, S. T.; East, J. A., II; Garrity, C. P.
2015-12-01
In 2013, Congress passed the Helium Stewardship Act requiring the U.S. Geological Survey (USGS) to undertake a national helium gas resource assessment to determine the nation's helium resources. An important initial component necessary to complete this assessment was the development of a comprehensive database of helium (He) concentrations from petroleum exploration wells. Because helium is often used as the carrier gas for compositional analyses of commercial and exploratory oil and gas wells, the available helium concentration data are limited. A literature search in peer-reviewed publications, state geologic survey databases, USGS energy geochemical databases, and Bureau of Land Management databases provided approximately 16,000 data points from wells that had measurable He concentrations in the gas composition analyses. The data from these wells include the date of sample collection, American Petroleum Institute well number, formation name, field name, depth of sample collection, and location. The gas compositional analyses, some performed as far back as 1934, do not all have the same level of precision and accuracy; therefore, the date of the analysis is critical to the assessment, as it indicates the relative amount of uncertainty in the analytical results. Non-proprietary data were used to create a GIS-based interactive web interface that allows users to visualize, inspect, interact with, and download our most current He data. The user can click on individual locations to see the available data at that location, as well as zoom in and out on a data density map. Concentrations on the map range from 0.04 mol% (the lowest concentration of economic value) to 12% (the highest naturally occurring values). This visual interface will allow users to develop a rapid appreciation of the areas with the highest potential for high helium concentrations within oil and gas fields.
BIND: the Biomolecular Interaction Network Database
Bader, Gary D.; Betel, Doron; Hogue, Christopher W. V.
2003-01-01
The Biomolecular Interaction Network Database (BIND: http://bind.ca) archives biomolecular interaction, complex and pathway information. A web-based system is available to query, view and submit records. BIND continues to grow with the addition of individual submissions as well as interaction data from the PDB and a number of large-scale interaction and complex mapping experiments using yeast two hybrid, mass spectrometry, genetic interactions and phage display. We have developed a new graphical analysis tool that provides users with a view of the domain composition of proteins in interaction and complex records to help relate functional domains to protein interactions. An interaction network clustering tool has also been developed to help focus on regions of interest. Continued input from users has helped further mature the BIND data specification, which now includes the ability to store detailed information about genetic interactions. The BIND data specification is available as ASN.1 and XML DTD. PMID:12519993
PlantDB – a versatile database for managing plant research
Exner, Vivien; Hirsch-Hoffmann, Matthias; Gruissem, Wilhelm; Hennig, Lars
2008-01-01
Background Research in plant science laboratories often involves usage of many different species, cultivars, ecotypes, mutants, alleles or transgenic lines. This creates a great challenge to keep track of the identity of experimental plants and stored samples or seeds. Results Here, we describe PlantDB – a Microsoft® Office Access database – with a user-friendly front-end for managing information relevant for experimental plants. PlantDB can hold information about plants of different species, cultivars or genetic composition. Introduction of a concise identifier system allows easy generation of pedigree trees. In addition, all information about any experimental plant – from growth conditions and dates over extracted samples such as RNA to files containing images of the plants – can be linked unequivocally. Conclusion We have been using PlantDB for several years in our laboratory and found that it greatly facilitates access to relevant information. PMID:18182106
Retroperitoneal composite pheochromocytoma-ganglioneuroma: a case report and review of literature
2013-01-01
Composite pheochromocytoma/paraganglioma is a rare tumor with elements of pheochromocytoma/paraganglioma and a neurogenic tumor. Most were located in the adrenal glands, and extra-adrenal composite pheochromocytoma is extremely rare. Only 4 cases in the retroperitoneum have been described in the online database PubMed. Here, we report a case of retroperitoneal extra-adrenal composite pheochromocytoma and review the related literature. Virtual slides: The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/1700539911908679 PMID:23587063
Comprehensive T-matrix Reference Database: A 2009-2011 Update
NASA Technical Reports Server (NTRS)
Zakharova, Nadezhda T.; Videen, G.; Khlebtsov, Nikolai G.
2012-01-01
The T-matrix method is one of the most versatile and efficient theoretical techniques widely used for the computation of electromagnetic scattering by single and composite particles, discrete random media, and particles in the vicinity of an interface separating two half-spaces with different refractive indices. This paper presents an update to the comprehensive database of peer-reviewed T-matrix publications compiled by us previously and includes the publications that appeared since 2009. It also lists several earlier publications not included in the original database.
An approach to monitor food and nutrition from "factory to fork".
Slining, Meghan M; Yoon, Emily Ford; Davis, Jessica; Hollingsworth, Bridget; Miles, Donna; Ng, Shu Wen
2015-01-01
Accurate, adequate, and timely food and nutrition information is necessary in order to monitor changes in the US food supply and assess their impact on individual dietary intake. Our aim was to develop an approach that links time-specific purchase and consumption data to provide updated, market representative nutrient information. We utilized household purchase data (Nielsen Homescan, 2007-2008), self-reported dietary intake data (What We Eat in America [WWEIA], 2007-2008), and two sources of nutrition composition data. This Factory to Fork Crosswalk approach connected each of the items reported to have been obtained from stores from the 2007-2008 cycle of the WWEIA dietary intake survey to corresponding food and beverage products that were purchased by US households during the equivalent time period. Using nutrition composition information and purchase data, an alternate Crosswalk-based nutrient profile for each WWEIA intake code was created weighted by purchase volume of all corresponding items. Mean intakes of daily calories, total sugars, sodium, and saturated fat were estimated. Differences were observed in the mean daily calories, sodium, and total sugars reported consumed from beverages, yogurts, and cheeses, depending on whether the Food and Nutrient Database for Dietary Studies 4.1 or the alternate nutrient profiles were used. The Crosswalk approach augments national nutrition surveys with commercial food and beverage purchases and nutrient databases to capture changes in the US food supply from factory to fork. The Crosswalk provides a comprehensive and representative measurement of the types, amounts, prices, locations and nutrient composition of consumer packaged goods foods and beverages consumed in the United States. This system has potential to be a major step forward in understanding the consumer packaged goods sector of the US food system and the impacts of the changing food environment on human health. Copyright © 2015 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
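A minimal sketch of the volume-weighting step described (building an alternate nutrient profile for one intake code, weighted by purchase volume of the linked products) is given below; the product data are hypothetical and not drawn from the Crosswalk.

```python
# Minimal sketch (hypothetical numbers, not the Crosswalk data) of building an
# alternate nutrient profile for one reported food code by weighting the
# nutrient composition of linked packaged products by their purchase volume.
def weighted_profile(products):
    """products: list of (purchase_volume, {nutrient: amount per 100 g})."""
    total_volume = sum(volume for volume, _ in products)
    profile = {}
    for volume, nutrients in products:
        for nutrient, amount in nutrients.items():
            profile[nutrient] = profile.get(nutrient, 0.0) + amount * volume / total_volume
    return profile

# Three hypothetical yogurt products linked to one WWEIA intake code.
yogurts = [
    (5000, {"kcal": 95, "sodium_mg": 46, "sugars_g": 13.0}),
    (2000, {"kcal": 60, "sodium_mg": 36, "sugars_g": 4.0}),
    (1000, {"kcal": 120, "sodium_mg": 50, "sugars_g": 16.0}),
]
print(weighted_profile(yogurts))  # purchase-volume-weighted nutrient profile
```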
Tracing Boundaries, Effacing Boundaries: Information Literacy as an Academic Discipline
ERIC Educational Resources Information Center
Veach, Grace
2012-01-01
Both librarianship and composition have been shaken by recent developments in higher education. In libraries ebooks and online databases threaten the traditional "library as warehouse model," while in composition, studies like The Citation Project show that students are not learning how to incorporate sources into their own writing…
Braided Composites for Aerospace Applications. (Latest citations from the Aerospace Database)
NASA Technical Reports Server (NTRS)
1996-01-01
The bibliography contains citations concerning the design, fabrication, and testing of structural composites formed by braiding machines. Topics include computer aided design and associated computer aided manufacture of braided tubular and flat forms. Applications include aircraft and spacecraft structures, where high shear strength and stiffness are required.
The USDA Table of Cooking Yields for Meat and Poultry
USDA-ARS?s Scientific Manuscript database
The Nutrient Data Laboratory (NDL) at the USDA conducts food composition research to develop accurate, unbiased, and representative food and nutrient composition data which are released as the USDA National Nutrient Database for Standard Reference (SR). SR is used as the foundation of most other foo...
MIDAS: a database-searching algorithm for metabolite identification in metabolomics.
Wang, Yingfeng; Kora, Guruprasad; Bowen, Benjamin P; Pan, Chongle
2014-10-07
A database searching approach can be used for metabolite identification in metabolomics by matching measured tandem mass spectra (MS/MS) against the predicted fragments of metabolites in a database. Here, we present the open-source MIDAS algorithm (Metabolite Identification via Database Searching). To evaluate a metabolite-spectrum match (MSM), MIDAS first enumerates possible fragments from a metabolite by systematic bond dissociation, then calculates the plausibility of the fragments based on their fragmentation pathways, and finally scores the MSM to assess how well the experimental MS/MS spectrum from collision-induced dissociation (CID) is explained by the metabolite's predicted CID MS/MS spectrum. MIDAS was designed to search high-resolution tandem mass spectra acquired on time-of-flight or Orbitrap mass spectrometer against a metabolite database in an automated and high-throughput manner. The accuracy of metabolite identification by MIDAS was benchmarked using four sets of standard tandem mass spectra from MassBank. On average, for 77% of original spectra and 84% of composite spectra, MIDAS correctly ranked the true compounds as the first MSMs out of all MetaCyc metabolites as decoys. MIDAS correctly identified 46% more original spectra and 59% more composite spectra at the first MSMs than an existing database-searching algorithm, MetFrag. MIDAS was showcased by searching a published real-world measurement of a metabolome from Synechococcus sp. PCC 7002 against the MetaCyc metabolite database. MIDAS identified many metabolites missed in the previous study. MIDAS identifications should be considered only as candidate metabolites, which need to be confirmed using standard compounds. To facilitate manual validation, MIDAS provides annotated spectra for MSMs and labels observed mass spectral peaks with predicted fragments. The database searching and manual validation can be performed online at http://midas.omicsbio.org.
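As a rough illustration of the matching step (comparing observed peaks with predicted fragment m/z values within a mass tolerance), here is a hedged sketch; it is not the MIDAS scoring function, and the peak lists are hypothetical.

```python
# Minimal sketch (an illustration, not the MIDAS scoring function) of matching
# an experimental MS/MS peak list against a metabolite's predicted fragment
# m/z values within a mass tolerance, and reporting the explained intensity.
def match_score(observed_peaks, predicted_mz, tol_da=0.01):
    """observed_peaks: list of (m/z, intensity); predicted_mz: list of fragment m/z."""
    explained = 0.0
    total = sum(intensity for _, intensity in observed_peaks)
    for mz, intensity in observed_peaks:
        if any(abs(mz - p) <= tol_da for p in predicted_mz):
            explained += intensity
    return explained / total if total else 0.0

spectrum = [(59.013, 120.0), (85.029, 340.0), (103.040, 55.0)]   # hypothetical CID peaks
citrate_fragments = [59.0133, 85.0290, 111.0082]                 # hypothetical predictions
print(match_score(spectrum, citrate_fragments))                  # fraction of intensity explained
```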
Crime scene investigations using portable, non-destructive space exploration technology
NASA Technical Reports Server (NTRS)
Trombka, Jacob I.; Schweitzer, Jeffrey; Selavka, Carl; Dale, Mark; Gahn, Norman; Floyd, Samuel; Marie, James; Hobson, Maritza; Zeosky, Jerry; Martin, Ken;
2002-01-01
The National Institute of Justice (NIJ) and the National Aeronautics and Space Administration's (NASA's) Goddard Space Flight Center (GSFC) have teamed up to explore the use of NASA-developed technologies to help criminal justice agencies and professionals solve crimes. The objective of the program is to produce instruments and communication networks that have application within both NASA's space program and NIJ programs with state and local forensic laboratories. A working group of NASA scientists and law enforcement professionals has been established to develop and implement a feasibility demonstration program. Specifically, the group has focused its efforts on identifying gunpowder and primer residue, blood, and semen at crime scenes. Non-destructive elemental composition identification methods are carried out using portable X-ray fluorescence (XRF) systems. These systems are similar to those being developed for planetary exploration programs. A breadboard model of a portable XRF system has been constructed for these tests using room-temperature silicon and cadmium-zinc telluride (CZT) detectors. Preliminary tests have been completed with gunshot residue (GSR), blood spatter and semen samples. Many of the elemental composition lines have been identified. Studies to determine the minimum detectable limits needed for the analyses of GSR, blood and semen in the crime scene environment have been initiated and preliminary results obtained. Furthermore, a database made up of the inorganic composition of GSR is being developed. Using data obtained from the open literature on the elemental composition of barium (Ba) and antimony (Sb) in hand swipes of GSR, we believe that there may be a unique GSR signature based on the Sb to Ba ratio.
Ceramic Matrix Composites: High Temperature Effects. (Latest Citations from the Aerospace Database)
NASA Technical Reports Server (NTRS)
1997-01-01
The bibliography contains citations concerning the development and testing of ceramic matrix composites for high temperature use. Tests examining effects of the high temperatures on bond strength, thermal degradation, oxidation, thermal stress, thermal fatigue, and thermal expansion properties are referenced. Applications of the composites include space structures, gas turbine and engine components, control surfaces for spacecraft and transatmospheric vehicles, heat shields, and heat exchangers.
Alternative method to validate the seasonal land cover regions of the conterminous United States
Zhiliang Zhu; Donald O. Ohlen; Raymond L. Czaplewski; Robert E. Burgan
1996-01-01
An accuracy assessment method involving double sampling and the multivariate composite estimator has been used to validate the prototype seasonal land cover characteristics database of the conterminous United States. The database consists of 159 land cover classes, classified using time series of 1990 1-km satellite data and augmented with ancillary data including...
Azeem, Rubeena Abdul; Sureshbabu, Nivedhitha Malli
2018-01-01
Composite resin serves as an esthetic alternative to amalgam and cast restorations. Posterior teeth can be restored using direct or indirect composite restorations. The selection between the direct and indirect technique is a clinically challenging decision-making process. The most important influencing factor is the amount of remaining tooth substance. The aim of this systematic review was to compare the clinical performance of direct versus indirect composite restorations in posterior teeth. The databases searched included PubMed CENTRAL (until July 2015), Medline, and the Cochrane Database of Systematic Reviews. The bibliographies of clinical studies and reviews identified in the electronic search were analyzed to identify studies published outside the electronically searched journals. The primary outcome measure was the survival of direct and indirect composite restorations in posterior teeth. This review included thirteen studies in which the clinical performance of various types of direct and indirect composite restorations in posterior teeth was compared. Of the thirteen included studies, seven had a high risk of bias and five had a moderate risk of bias. One study, which had a low risk of bias, concluded that there was no significant difference between the direct and indirect techniques. However, the available evidence revealed inconclusive results. Further research should focus on randomized controlled trials with long-term follow-up to give concrete evidence on the clinical performance of direct and indirect composite restorations.
High-Temperature Cast Aluminum for Efficient Engines
NASA Astrophysics Data System (ADS)
Bobel, Andrew C.
Accurate thermodynamic databases are the foundation of predictive microstructure and property models. An initial assessment of the commercially available Thermo-Calc TCAL2 database and the proprietary aluminum database of QuesTek demonstrated a large degree of deviation with respect to equilibrium precipitate phase prediction in the compositional region of interest when compared to 3-D atom probe tomography (3DAPT) and transmission electron microscopy (TEM) experimental results. New compositional measurements of the Q-phase (Al-Cu-Mg-Si phase) led to a remodeling of the Q-phase thermodynamic description in the CALPHAD databases which has produced significant improvements in the phase prediction capabilities of the thermodynamic model. Due to the unique morphologies of strengthening precipitate phases commonly utilized in high-strength cast aluminum alloys, the development of new microstructural evolution models to describe both rod and plate particle growth was critical for accurate mechanistic strength models which rely heavily on precipitate size and shape. Particle size measurements through both 3DAPT and TEM experiments were used in conjunction with literature results of many alloy compositions to develop a physical growth model for the independent prediction of rod radii and rod length evolution. In addition a machine learning (ML) model was developed for the independent prediction of plate thickness and plate diameter evolution as a function of alloy composition, aging temperature, and aging time. The developed models are then compared with physical growth laws developed for spheres and modified for ellipsoidal morphology effects. Analysis of the effect of particle morphology on strength enhancement has been undertaken by modification of the Orowan-Ashby equation for 〈110〉 alpha-Al oriented finite rods in addition to an appropriate version for similarly oriented plates. A mechanistic strengthening model was developed for cast aluminum alloys containing both rod and plate-like precipitates. The model accurately accounts for the temperature dependence of particle nucleation and growth, solid solution strengthening, Si eutectic strength, and base aluminum yield strength. Strengthening model predictions of tensile yield strength are in excellent agreement with experimental observations over a wide range of aluminum alloy systems, aging temperatures, and test conditions. The developed models enable the prediction of the required particle morphology and volume fraction necessary to achieve target property goals in the design of future aluminum alloys. The effect of partitioning elements to the Q-phase was also considered for the potential to control the nucleation rate, reduce coarsening, and control the evolution of particle morphology. Elements were selected based on density functional theory (DFT) calculations showing the prevalence of certain elements to partition to the Q-phase. 3DAPT experiments were performed on Q-phase containing wrought alloys with these additions and show segregation of certain elements to the Q-phase with relative agreement to DFT predictions.
NASA Astrophysics Data System (ADS)
Sauzède, R.; Lavigne, H.; Claustre, H.; Uitz, J.; Schmechtig, C.; D'Ortenzio, F.; Guinet, C.; Pesant, S.
2015-04-01
In vivo chlorophyll a fluorescence is a proxy of chlorophyll a concentration, and is one of the most frequently measured biogeochemical properties in the ocean. Thousands of profiles are available from historical databases, and the integration of fluorescence sensors into autonomous platforms has led to a significant increase in chlorophyll fluorescence profile acquisition. To our knowledge, this important source of environmental data has not yet been included in global analyses. A total of 268 127 chlorophyll fluorescence profiles from several databases as well as published and unpublished individual sources were compiled. Following a robust quality control procedure detailed in the present paper, about 49 000 chlorophyll fluorescence profiles were converted into phytoplankton biomass (i.e. chlorophyll a concentration) and size-based community composition (i.e. microphytoplankton, nanophytoplankton and picophytoplankton), using a method specifically developed to harmonize fluorescence profiles from diverse sources. The data span five decades, from 1958 to 2015, including observations from all major oceanic basins and all seasons, and depths ranging from the surface to a median maximum sampling depth of around 700 m. Global maps of chlorophyll a concentration and phytoplankton community composition are presented here for the first time. Monthly climatologies were computed for three of Longhurst's ecological provinces in order to exemplify the potential use of the data product. Original data sets (raw fluorescence profiles) as well as calibrated profiles of phytoplankton biomass and community composition are available in open access at PANGAEA, Data Publisher for Earth and Environmental Science. Raw fluorescence profiles: http://doi.pangaea.de/10.1594/PANGAEA.844212 and Phytoplankton biomass and community composition: http://doi.pangaea.de/10.1594/PANGAEA.844485.
Information systems in food safety management.
McMeekin, T A; Baranyi, J; Bowman, J; Dalgaard, P; Kirk, M; Ross, T; Schmid, S; Zwietering, M H
2006-12-01
Information systems are concerned with data capture, storage, analysis and retrieval. In the context of food safety management they are vital to assist decision making in a short time frame, potentially allowing decisions to be made and practices to be actioned in real time. Databases with information on microorganisms pertinent to the identification of foodborne pathogens, response of microbial populations to the environment and characteristics of foods and processing conditions are the cornerstone of food safety management systems. Such databases find application in: Identifying pathogens in food at the genus or species level using applied systematics in automated ways. Identifying pathogens below the species level by molecular subtyping, an approach successfully applied in epidemiological investigations of foodborne disease and the basis for national surveillance programs. Predictive modelling software, such as the Pathogen Modeling Program and Growth Predictor (that took over the main functions of Food Micromodel) the raw data of which were combined as the genesis of an international web based searchable database (ComBase). Expert systems combining databases on microbial characteristics, food composition and processing information with the resulting "pattern match" indicating problems that may arise from changes in product formulation or processing conditions. Computer software packages to aid the practical application of HACCP and risk assessment and decision trees to bring logical sequences to establishing and modifying food safety management practices. In addition there are many other uses of information systems that benefit food safety more globally, including: Rapid dissemination of information on foodborne disease outbreaks via websites or list servers carrying commentary from many sources, including the press and interest groups, on the reasons for and consequences of foodborne disease incidents. Active surveillance networks allowing rapid dissemination of molecular subtyping information between public health agencies to detect foodborne outbreaks and limit the spread of human disease. Traceability of individual animals or crops from (or before) conception or germination to the consumer as an integral part of food supply chain management. Provision of high quality, online educational packages to food industry personnel otherwise precluded from access to such courses.
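As one concrete example of the predictive-modelling component such systems rely on, the sketch below implements the classic Ratkowsky square-root relation between temperature and microbial growth rate; this is a textbook illustration with hypothetical parameters, not the model used by the Pathogen Modeling Program, Growth Predictor, or ComBase.

```python
# Minimal sketch of a classic predictive-microbiology relationship, the
# Ratkowsky square-root model, in which the square root of the specific growth
# rate is linear in temperature above a notional minimum growth temperature.
# Textbook illustration only; the parameter values below are hypothetical.
def growth_rate(temp_c, b=0.023, t_min_c=2.5):
    """Specific growth rate (1/h) predicted by sqrt(mu) = b * (T - Tmin)."""
    if temp_c <= t_min_c:
        return 0.0
    return (b * (temp_c - t_min_c)) ** 2

for temp in (4, 10, 20, 30):
    print(temp, round(growth_rate(temp), 4))  # growth accelerates with temperature
```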
Changes in the Proteome of Xylem Sap in Brassica oleracea in Response to Fusarium oxysporum Stress
Pu, Zijing; Ino, Yoko; Kimura, Yayoi; Tago, Asumi; Shimizu, Motoki; Natsume, Satoshi; Sano, Yoshitaka; Fujimoto, Ryo; Kaneko, Kentaro; Shea, Daniel J.; Fukai, Eigo; Fuji, Shin-Ichi; Hirano, Hisashi; Okazaki, Keiichi
2016-01-01
Fusarium oxysporum f.sp. conlutinans (Foc) is a serious root-invading and xylem-colonizing fungus that causes yellowing in Brassica oleracea. To comprehensively understand the interaction between F. oxysporum and B. oleracea, the composition of the xylem sap proteome of non-infected and Foc-infected plants was investigated in both resistant and susceptible cultivars using liquid chromatography-tandem mass spectrometry (LC-MS/MS) after in-solution digestion of xylem sap proteins. Whole genome sequencing of Foc was carried out and generated a predicted Foc protein database. The predicted Foc protein database was then combined with the public B. oleracea and B. rapa protein databases downloaded from Uniprot and used for protein identification. About 200 plant proteins were identified in the xylem sap of susceptible and resistant plants. Comparison between the non-infected and Foc-infected samples revealed that Foc infection causes changes to the protein composition in B. oleracea xylem sap, where repressed proteins accounted for a greater proportion than induced proteins in both the susceptible and resistant reactions. The analysis of proteins with a concentration change ≥2-fold indicated that a large portion of the up- and down-regulated proteins were those acting on carbohydrates. Proteins with leucine-rich repeats and legume lectin domains were mainly induced in both the resistant and susceptible systems, as was the case for thaumatins. Twenty-five Foc proteins were identified in the infected xylem sap, and 10 of them were cysteine-containing secreted small proteins that are good candidates for virulence and/or avirulence effectors. The findings of the differential response of protein content in the xylem sap between the non-infected and Foc-infected samples, as well as the Foc candidate effectors secreted in the xylem, provide valuable insights into B. oleracea-Foc interactions. PMID:26870056
Flavonoid intake and all-cause mortality.
Ivey, Kerry L; Hodgson, Jonathan M; Croft, Kevin D; Lewis, Joshua R; Prince, Richard L
2015-05-01
Flavonoids are bioactive compounds found in foods such as tea, chocolate, red wine, fruit, and vegetables. Higher intakes of specific flavonoids and flavonoid-rich foods have been linked to reduced mortality from specific vascular diseases and cancers. However, the importance of flavonoids in preventing all-cause mortality remains uncertain. The objective was to explore the association between flavonoid intake and risk of 5-y mortality from all causes by using 2 comprehensive food composition databases to assess flavonoid intake. The study population included 1063 randomly selected women aged >75 y. All-cause, cancer, and cardiovascular mortalities were assessed over 5 y of follow-up through the Western Australia Data Linkage System. Two estimates of flavonoid intake (total flavonoidUSDA and total flavonoidPE) were determined by using food composition data from the USDA and the Phenol-Explorer (PE) databases, respectively. During the 5-y follow-up period, 129 (12%) deaths were documented. Participants with high total flavonoid intake were at lower risk [multivariate-adjusted HR (95% CI)] of 5-y all-cause mortality than those with low total flavonoid consumption [total flavonoidUSDA: 0.37 (0.22, 0.58); total flavonoidPE: 0.36 (0.22, 0.60)]. Similar beneficial relations were observed for both cardiovascular disease mortality [total flavonoidUSDA: 0.34 (0.17, 0.69); flavonoidPE: 0.32 (0.16, 0.61)] and cancer mortality [total flavonoidUSDA: 0.25 (0.10, 0.62); flavonoidPE: 0.26 (0.11, 0.62)]. Using the most comprehensive flavonoid databases, we provide evidence that high consumption of flavonoids is associated with reduced risk of mortality in older women. The benefits of flavonoids may extend to the etiology of cancer and cardiovascular disease. © 2015 American Society for Nutrition.
Gluten-free food database: the nutritional quality and cost of packaged gluten-free foods
Schwingshackl, Lukas; Billmann, Alina; Mystek, Aleksandra; Hickelsberger, Melanie; Bauer, Gregor; König, Jürgen
2015-01-01
Notwithstanding a growth in popularity and consumption of gluten-free (GF) food products, there is a lack of substantiated analysis of their nutritional quality compared with their gluten-containing counterparts. To put GF foods into proper perspective both for those who need them (patients with celiac disease) and for those who do not, we provide contemporary data about the cost and nutritional quality of GF food products. The objective of this study is to develop a food composition database for seven discretionary food categories of packaged GF products. Nutrient composition, nutritional information and cost of foods from 63 GF and 126 gluten-containing counterparts were systematically obtained from 12 different Austrian supermarkets. The nutrition composition (macro and micronutrients) was analyzed by using two nutrient composition databases in a stepwise approximation process. A total of 63 packaged GF foods were included in the analysis, representing a broad spectrum of different GF categories (flour/bake mix, bread and bakery products, pasta and cereal-based food, cereals, cookies and cakes, snacks and convenience food). Our results show that the protein content of GF products is >2-fold lower across 57% of all food categories. In 65% of all GF foods, low sodium content was observed (defined as <120 mg/100 g). Across all GF products, 19% can be classified as high in fiber (defined as >6 g/100 g). On average, GF foods were substantially higher in cost, ranging from +205% (cereals) to +267% (bread and bakery products) compared to similar gluten-containing products. In conclusion, our results indicate no predominant health benefits for GF foods; in fact, some critical nutrients must be considered when being on a GF diet. For individuals with celiac disease, the GF database provides a helpful tool to identify the food composition of their medical diet. For healthy consumers, replacing gluten-containing products with GF foods comes with substantial cost differences, but GF foods do not provide additional health benefits from a nutritional perspective. PMID:26528408
United States Army Medical Materiel Development Activity: 1997 Annual Report.
1997-01-01
business planning and execution information management system (Project Management Division Database (PMDD) and Product Management Database System (PMDS...MANAGEMENT • Project Management Division Database (PMDD), Product Management Database System (PMDS), and Special Users Database System: The existing...System (FMS), were investigated. New Product Managers and Project Managers were added into PMDS and PMDD. A separate division, Support, was
Impact of Accurate 30-Day Status on Operative Mortality: Wanted Dead or Alive, Not Unknown.
Ring, W Steves; Edgerton, James R; Herbert, Morley; Prince, Syma; Knoff, Cathy; Jenkins, Kristin M; Jessen, Michael E; Hamman, Baron L
2017-12-01
Risk-adjusted operative mortality is the most important quality metric in cardiac surgery for determining The Society of Thoracic Surgeons (STS) Composite Score for star ratings. Accurate 30-day status is required to determine STS operative mortality. The goal of this study was to determine the effect of unknown or missing 30-day status on risk-adjusted operative mortality in a regional STS Adult Cardiac Surgery Database cooperative and demonstrate the ability to correct these deficiencies by matching with an administrative database. STS Adult Cardiac Surgery Database data were submitted by 27 hospitals from five hospital systems to the Texas Quality Initiative (TQI), a regional quality collaborative. TQI data were matched with a regional hospital claims database to resolve unknown 30-day status. The risk-adjusted operative mortality observed-to-expected (O/E) ratio was determined before and after matching to determine the effect of unknown status on the operative mortality O/E. TQI found an excessive (22%) unknown 30-day status for STS isolated coronary artery bypass grafting cases. Matching the TQI data to the administrative claims database reduced the unknowns to 7%. The STS process of imputing unknown 30-day status as alive underestimates the true operative mortality O/E (1.27 before vs 1.30 after match), while excluding unknowns overestimates the operative mortality O/E (1.57 before vs 1.37 after match) for isolated coronary artery bypass grafting. The current STS algorithm of imputing unknown 30-day status as alive and a strategy of excluding cases with unknown 30-day status both result in erroneous calculation of operative mortality and operative mortality O/E. However, external validation by matching with an administrative database can improve the accuracy of clinical databases such as the STS Adult Cardiac Surgery Database. Copyright © 2017 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
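The effect of the two handling strategies on the observed-to-expected (O/E) ratio can be illustrated with a small sketch; the case list and risks below are hypothetical and are not TQI data.

```python
# Minimal sketch (hypothetical numbers, not TQI data) contrasting two ways of
# handling cases with unknown 30-day status when computing the risk-adjusted
# operative mortality observed-to-expected (O/E) ratio.
def oe_ratio(cases, unknown_policy):
    """cases: list of (status, expected_risk); status is 'dead', 'alive' or 'unknown'."""
    observed = expected = 0.0
    for status, risk in cases:
        if status == "unknown":
            if unknown_policy == "exclude":
                continue                  # drop the case entirely
            status = "alive"              # impute unknowns as alive
        observed += 1.0 if status == "dead" else 0.0
        expected += risk
    return observed / expected

cases = [("dead", 0.05), ("alive", 0.02), ("unknown", 0.04), ("alive", 0.01),
         ("unknown", 0.10), ("dead", 0.03), ("alive", 0.02)]
print(round(oe_ratio(cases, "impute_alive"), 2))  # expected includes unknowns' risk
print(round(oe_ratio(cases, "exclude"), 2))       # smaller denominator -> larger O/E
```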
Thermodynamic assessment and binary nucleation modeling of Sn-seeded InGaAs nanowires
NASA Astrophysics Data System (ADS)
Ghasemi, Masoomeh; Selleby, Malin; Johansson, Jonas
2017-11-01
We have performed a thermodynamic assessment of the As-Ga-In-Sn system based on the CALculation of PHAse Diagram (CALPHAD) method. This system is part of a comprehensive thermodynamic database that we are developing for nanowire materials. Specifically, the As-Ga-In-Sn description can be used in modeling the growth of GaAs, InAs, and InxGa1-xAs nanowires assisted by Sn liquid seeds. In this work, the As-Sn binary and the As-Ga-Sn, As-In-Sn, and Ga-In-Sn ternary systems have been thermodynamically assessed using the CALPHAD method. We show the relevant phase diagrams and property diagrams. They all show good agreement with experimental data. Using our optimized description, we have modeled the nucleation of InxGa1-xAs in the zinc blende phase from a Sn-based quaternary liquid alloy using binary nucleation modeling. We have linked the composition of the solid nucleus to the composition of the liquid phase. Finally, we have predicted the critical size of the nucleus that forms from InAs and GaAs pairs under various conditions. We believe that our modeling can guide future experimental realization of Sn-seeded InxGa1-xAs nanowires.
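For orientation, classical nucleation theory for a spherical nucleus gives the familiar relations below; this generic textbook form is shown as an assumption for illustration and is not necessarily the exact binary nucleation formulation used in the paper.

```latex
% Classical nucleation theory for a spherical nucleus: the work of formation
% balances a (negative) volume term against a surface term, and the critical
% radius r* maximizes \Delta G(r).
\Delta G(r) \;=\; \tfrac{4}{3}\pi r^{3}\,\Delta G_{v} \;+\; 4\pi r^{2}\,\sigma,
\qquad
r^{*} \;=\; -\,\frac{2\sigma}{\Delta G_{v}},
\qquad
\Delta G^{*} \;=\; \frac{16\pi\,\sigma^{3}}{3\,\Delta G_{v}^{2}}
```

with ΔG_v < 0 the Gibbs energy change per unit volume of the nucleating solid (available, composition by composition, from the CALPHAD description) and σ the nucleus-liquid interfacial energy.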
Performance assessment of EMR systems based on post-relational database.
Yu, Hai-Yan; Li, Jing-Song; Zhang, Xiao-Guang; Tian, Yu; Suzuki, Muneou; Araki, Kenji
2012-08-01
Post-relational databases provide high performance and are currently widely used in American hospitals. As few hospital information systems (HIS) in either China or Japan are based on post-relational databases, here we introduce a new-generation electronic medical records (EMR) system called Hygeia, which was developed with the post-relational database Caché and the latest platform Ensemble. Utilizing the benefits of a post-relational database, Hygeia is equipped with an "integration" feature that allows all system users to access data, with a fast response time, anywhere and at any time. Performance tests of databases in EMR systems were implemented in both China and Japan. First, a comparison test was conducted between a post-relational database, Caché, and a relational database, Oracle, embedded in the EMR systems of a medium-sized first-class hospital in China. Second, a user terminal test was performed on the EMR system Izanami, which is based on the same Caché database and operates efficiently at the Miyazaki University Hospital in Japan. The results showed that the post-relational database Caché works faster than the relational database Oracle and demonstrated excellent performance in the real-time EMR system.
Fuel conditioning facility zone-to-zone transfer administrative controls.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pope, C. L.
2000-06-21
The administrative controls associated with transferring containers from one criticality hazard control zone to another in the Argonne National Laboratory (ANL) Fuel Conditioning Facility (FCF) are described. FCF, located at the ANL-West site near Idaho Falls, Idaho, is used to remotely process spent sodium-bonded metallic fuel for disposition. The process involves nearly forty widely varying material forms and types, over fifty specific-use container types, and over thirty distinct zones where work activities occur. During 1999, over five thousand transfers from one zone to another were conducted. Limits are placed on mass, material form and type, and container types for each zone. All material and containers are tracked using the Mass Tracking System (MTG). The MTG uses an Oracle database and numerous applications to manage the database. The database stores information specific to the process, including material composition and mass, container identification number and mass, transfer history, and the operators involved in each transfer. The process is controlled using written procedures which specify the zone, containers, and material involved in a task. Transferring a container from one zone to another is called a zone-to-zone transfer (ZZT). ZZTs consist of four distinct phases: select, request, identify, and completion.
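A minimal sketch of the kind of check a zone-to-zone transfer implies (verifying that the destination zone stays within its mass limit for a material type before the tracked inventory is updated) is given below; the zone names, limits, and masses are hypothetical and do not reflect FCF or MTG values.

```python
# Minimal sketch (hypothetical limits and names, not the FCF/MTG implementation)
# of the administrative check a zone-to-zone transfer (ZZT) implies: verify
# that moving a container will not exceed the destination zone's
# criticality-control mass limit for that material type.
zone_limits_g = {"zone_A": {"U-235": 350.0}, "zone_B": {"U-235": 500.0}}   # hypothetical
zone_inventory_g = {"zone_A": {"U-235": 300.0}, "zone_B": {"U-235": 410.0}}

def approve_zzt(container_mass_g, material, src, dst):
    """Return True if the transfer keeps the destination zone within its limit."""
    projected = zone_inventory_g[dst].get(material, 0.0) + container_mass_g
    if projected > zone_limits_g[dst].get(material, 0.0):
        return False                                       # would violate the zone limit
    zone_inventory_g[src][material] -= container_mass_g    # update tracked inventory
    zone_inventory_g[dst][material] = projected
    return True

print(approve_zzt(50.0, "U-235", "zone_A", "zone_B"))   # 460 <= 500 -> approved
print(approve_zzt(60.0, "U-235", "zone_A", "zone_B"))   # 520 > 500 -> rejected
```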
Comprehensive T-Matrix Reference Database: A 2007-2009 Update
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Zakharova, Nadia T.; Videen, Gorden; Khlebtsov, Nikolai G.; Wriedt, Thomas
2010-01-01
The T-matrix method is among the most versatile, efficient, and widely used theoretical techniques for the numerically exact computation of electromagnetic scattering by homogeneous and composite particles, clusters of particles, discrete random media, and particles in the vicinity of an interface separating two half-spaces with different refractive indices. This paper presents an update to the comprehensive database of T-matrix publications compiled by us previously and includes the publications that appeared since 2007. It also lists several earlier publications not included in the original database.
Introducing the GRACEnet/REAP Data Contribution, Discovery, and Retrieval System.
Del Grosso, S J; White, J W; Wilson, G; Vandenberg, B; Karlen, D L; Follett, R F; Johnson, J M F; Franzluebbers, A J; Archer, D W; Gollany, H T; Liebig, M A; Ascough, J; Reyes-Fox, M; Pellack, L; Starr, J; Barbour, N; Polumsky, R W; Gutwein, M; James, D
2013-07-01
Difficulties in accessing high-quality data on trace gas fluxes and performance of bioenergy/bioproduct feedstocks limit the ability of researchers and others to address environmental impacts of agriculture and the potential to produce feedstocks. To address those needs, the GRACEnet (Greenhouse gas Reduction through Agricultural Carbon Enhancement network) and REAP (Renewable Energy Assessment Project) research programs were initiated by the USDA Agricultural Research Service (ARS). A major product of these programs is the creation of a database with greenhouse gas fluxes, soil carbon stocks, biomass yield, nutrient, and energy characteristics, and input data for modeling cropped and grazed systems. The data include site descriptors (e.g., weather, soil class, spatial attributes), experimental design (e.g., factors manipulated, measurements performed, plot layouts), management information (e.g., planting and harvesting schedules, fertilizer types and amounts, biomass harvested, grazing intensity), and measurements (e.g., soil C and N stocks, plant biomass amount and chemical composition). To promote standardization of data and ensure that experiments were fully described, sampling protocols and a spreadsheet-based data-entry template were developed. Data were first uploaded to a temporary database for checking and then were uploaded to the central database. A Web-accessible application allows for registered users to query and download data including measurement protocols. Separate portals have been provided for each project (GRACEnet and REAP) at nrrc.ars.usda.gov/slgracenet/#/Home and nrrc.ars.usda.gov/slreap/#/Home. The database architecture and data entry template have proven flexible and robust for describing a wide range of field experiments and thus appear suitable for other natural resource research projects. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
NASA Technical Reports Server (NTRS)
Bohnhoff-Hlavacek, Gail
1993-01-01
The Long Duration Exposure Facility (LDEF) carried 57 experiments and 10,000 specimens for some 200 LDEF experiment investigators. The external surface of LDEF had a large variety of materials exposed to the space environment which were tested preflight, during flight, and post flight. Thermal blankets, optical materials, thermal control paints, aluminum, and composites are among the materials flown. The investigations have produced an abundance of analysis results. One of the responsibilities of the Boeing Support Contract, Materials and Systems Special Investigation Group, is to collate and compile that information in an organized fashion. The databases developed at Boeing to accomplish this task are described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muñoz-Jaramillo, Andrés; Windmueller, John C.; Amouzou, Ernest C.
2015-02-10
In this work, we take advantage of 11 different sunspot group, sunspot, and active region databases to characterize the area and flux distributions of photospheric magnetic structures. We find that, when taken separately, different databases are better fitted by different distributions (as has been reported previously in the literature). However, we find that all our databases can be reconciled by the simple application of a proportionality constant, and that, in reality, different databases are sampling different parts of a composite distribution. This composite distribution is made up of a linear combination of Weibull and log-normal distributions, where a pure Weibull (log-normal) characterizes the distribution of structures with fluxes below (above) 10²¹ Mx (10²² Mx). Additionally, we demonstrate that the Weibull distribution shows the expected linear behavior of a power-law distribution (when extended to smaller fluxes), making our results compatible with the results of Parnell et al. We propose that this is evidence of two separate mechanisms giving rise to visible structures on the photosphere: one directly connected to the global component of the dynamo (and the generation of bipolar active regions), and the other with the small-scale component of the dynamo (and the fragmentation of magnetic structures due to their interaction with turbulent convection).
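The sketch below evaluates a composite distribution of the type described above, a weighted mixture of a Weibull and a log-normal probability density over magnetic flux. The shape parameters, scales, and mixing weight are placeholders for illustration, not the fitted values reported in the paper.

```python
# Illustrative mixture of a Weibull and a log-normal PDF over magnetic flux (Mx).
# All parameter values and the mixing weight are placeholders, not fitted results.
import numpy as np
from scipy import stats

def composite_pdf(flux, w=0.7, weibull_k=0.5, weibull_lam=1e21,
                  lognorm_mu=np.log(5e21), lognorm_sigma=1.0):
    weibull = stats.weibull_min.pdf(flux, c=weibull_k, scale=weibull_lam)
    lognorm = stats.lognorm.pdf(flux, s=lognorm_sigma, scale=np.exp(lognorm_mu))
    return w * weibull + (1.0 - w) * lognorm

flux = np.logspace(19, 23, 5)          # magnetic flux values in Mx
print(composite_pdf(flux))
```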
Guardado Yordi, E; Matos, M J; Pérez Martínez, A; Tornes, A C; Santana, L; Molina, E; Uriarte, E
2017-08-01
Coumarins are a group of phytochemicals that may be beneficial or harmful to health depending on their type and dosage and the matrix that contains them. Some of these compounds have been proven to display pro-oxidant and clastogenic activities. Therefore, in the current work, we have studied the coumarins that are present in food sources extracted from the Phenol-Explorer database in order to predict their clastogenic activity and identify the structure-activity relationships and genotoxic structural alerts using alternative methods in the field of computational toxicology. It was necessary to compile information on the type and amount of coumarins in different food sources through the analysis of databases of food composition available online. A virtual screening using a clastogenic model and different software, such as MODESLAB, ChemDraw and STATISTIC, was performed. As a result, a table of food composition was prepared and qualitative information from this data was extracted. The virtual screening showed that the esterified substituents inactivate molecules, while the methoxyl and hydroxyl substituents contribute to their activity and constitute, together with the basic structures of the studied subclasses, clastogenic structural alerts. Chemical subclasses of simple coumarins and furocoumarins were classified as active (xanthotoxin, isopimpinellin, esculin, scopoletin, scopolin and bergapten). In silico genotoxicity was mainly predicted for coumarins found in beer, sherry, dried parsley, fresh parsley and raw celery stalks. The results obtained can be interesting for the future design of functional foods and dietary supplements. These studies constitute a reference for the genotoxic chemoinformatic analysis of bioactive compounds present in databases of food composition.
NASA Astrophysics Data System (ADS)
Ivanov, Stanislav; Kamzolkin, Vladimir; Konilov, Aleksandr; Aleshin, Igor
2014-05-01
There are many different methods of assessing the conditions of rock formation based on determining the composition of the constituent minerals. Our objective was to create a universal tool for processing mineral chemical analysis results and solving geothermobarometry problems by creating a database of existing sensors and providing a user-friendly standard interface. Similar computer-assisted tools based upon large collections of sensors (geothermometers and geobarometers) are known, for example the TPF project (Konilov A.N., 1999), a text-based sensor collection tool written in PASCAL. The application contained more than 350 different sensors and has been used widely in petrochemical studies (see A.N. Konilov, A.A. Grafchikov, V.I. Fonarev 2010 for a review). Our prototype uses the TPF project concept and is designed with modern application development techniques, which allows better flexibility. The main components of the designed system are three connected datasets: the sensors collection (geothermometers, geobarometers, oxygen geobarometers, etc.), petrochemical data and modeling results. All data are maintained by special management and visualization tools and reside in an SQL database. System utilities allow the user to import and export data in various file formats, edit records and plot graphs. The sensors database contains an up-to-date collection of known methods; new sensors may be added by the user. The measurements database is filled in by the researcher. A user-friendly interface provides access to all available data and sensors, automates routine work, reduces the risk of common user mistakes and simplifies information exchange between research groups. We used the prototype to evaluate peak pressure during the formation of garnet-amphibolite apoeclogites, gneisses and schists of the Blybsky metamorphic complex of the Front Range of the Northern Caucasus. In particular, our estimate of the formation pressure range (18 ± 4 kbar) agrees with independent research results. The reported study was partially supported by RFBR, research project No. 14-05-00615.
NASA Astrophysics Data System (ADS)
Shchepashchenko, D.; Chave, J.; Phillips, O. L.; Davies, S. J.; Lewis, S. L.; Perger, C.; Dresel, C.; Fritz, S.; Scipal, K.
2017-12-01
Forest monitoring is high on the scientific and political agenda. Global measurements of forest height, biomass and how they change with time are urgently needed as essential climate and ecosystem variables. The Forest Observation System - FOS (http://forest-observation-system.net/) is an international cooperation to establish a global in-situ forest biomass database to support earth observation and to encourage investment in relevant field-based observations and science. FOS aims to link the Remote Sensing (RS) community with ecologists who measure forest biomass and estimate biodiversity in the field, for a common benefit. The benefit of FOS for the RS community is the partnering of the most established teams and networks that manage permanent forest plots globally, overcoming data sharing issues and introducing a standard biomass data flow from tree-level measurements to plot-level aggregation served in the most suitable form for the RS community. Ecologists benefit from FOS through improved access to global biomass information, data standards, gap identification and potentially improved funding opportunities to address the known gaps and deficiencies in the data. FOS closely collaborates with the Center for Tropical Forest Science - CTFS-ForestGEO, ForestPlots.net (incl. RAINFOR, AfriTRON and T-FORCES), AusCover, the Tropical managed Forests Observatory and the IIASA network. FOS is an open initiative, and other networks and teams are most welcome to join. The online database provides open access to both metadata (e.g. who conducted the measurements, where, and which parameters) and actual data for a subset of plots where the authors have granted access. A minimum set of database values includes: principal investigator and institution, plot coordinates, number of trees, forest type and tree species composition, wood density, canopy height and above-ground biomass of trees. Plot sizes are 0.25 ha or larger. The database will be essential for validating and calibrating satellite observations and various models.
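To make the minimum set of database values listed above concrete, the sketch below shows one possible plot-level record. The field names, units, and example values are assumptions for illustration, not the actual FOS schema.

```python
# Minimal sketch of a plot-level record covering the minimum set of values
# listed above; field names, units and values are assumptions, not FOS data.
from dataclasses import dataclass

@dataclass
class ForestPlotRecord:
    principal_investigator: str
    institution: str
    latitude_deg: float
    longitude_deg: float
    plot_size_ha: float          # 0.25 ha or larger
    n_trees: int
    forest_type: str
    dominant_species: list
    mean_wood_density_g_cm3: float
    canopy_height_m: float
    agb_mg_ha: float             # above-ground biomass of trees

plot = ForestPlotRecord("J. Doe", "Example Institute", -3.1, -60.0, 1.0,
                        612, "lowland tropical moist", ["Eschweilera coriacea"],
                        0.63, 28.5, 310.0)
print(plot.agb_mg_ha)
```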
Development of a Knowledge Base of Ti-Alloys From First-Principles and Thermodynamic Modeling
NASA Astrophysics Data System (ADS)
Marker, Cassie
An aging population with an active lifestyle requires the development of better load-bearing implants, which have high levels of biocompatibility and a low elastic modulus. Titanium alloys, in the body centered cubic phase, are great implant candidates, due to their mechanical properties and biocompatibility. The present work aims at investigating the thermodynamic and elastic properties of bcc Ti-alloys, using integrated first-principles calculations based on Density Functional Theory (DFT) and the CALculation of PHAse Diagrams (CALPHAD) method. The use of integrated first-principles calculations based on DFT and CALPHAD modeling has greatly reduced the need for trial and error metallurgy, which is ineffective and costly. The phase stability of Ti-alloys has been shown to greatly affect their elastic properties. Traditionally, CALPHAD modeling has been used to predict the equilibrium phase formation, but in the case of Ti-alloys, predicting the formation of two metastable phases, ω and α″, is of great importance as these phases also drastically affect the elastic properties. To build a knowledge base of Ti-alloys for biomedical load-bearing implants, the Ti-Mo-Nb-Sn-Ta-Zr system was studied because of the biocompatibility and the bcc stabilizing effects of some of the elements. With the focus on bcc Ti-rich alloys, a database of thermodynamic descriptions of each phase for the pure elements, binary and Ti-rich ternary alloys was developed in the present work. Previous thermodynamic descriptions for the pure elements were adopted from the widely used SGTE database for global compatibility. The previous binary and ternary models from the literature were evaluated for accuracy and new thermodynamic descriptions were developed when necessary. The models were evaluated using available experimental data, as well as the enthalpy of formation of the bcc phase obtained from first-principles calculations based on DFT. The thermodynamic descriptions were combined into a database ensuring that the sublattice models are compatible with each other. For subsystems, such as the Sn-Ta system, where no thermodynamic description had been evaluated and minimal experimental data were available, first-principles calculations based on DFT were used. The Sn-Ta system has two intermetallic phases, TaSn2 and Ta3Sn, with three solution phases: bcc, body centered tetragonal (bct) and diamond. First-principles calculations were completed on the intermetallic and solution phases. Special quasirandom structures (SQS) were used to obtain information about the solution phases across the entire composition range. The Debye-Grüneisen approach, as well as the quasiharmonic phonon method, were used to obtain the finite-temperature data. Results from the first-principles calculations and experiments were used to complete the thermodynamic description. The resulting phase diagram reproduced the first-principles calculations and experimental data accurately. In order to determine the effect of alloying on the elastic properties, first-principles calculations based on DFT were systematically done on the pure elements, five Ti-X binary systems and Ti-X-Y ternary systems (X ≠ Y = Mo, Nb, Sn, Ta, Zr) in the bcc phase. The first-principles calculations predicted the single crystal elastic stiffness constants cij's. Correspondingly, the polycrystalline aggregate properties were also estimated from the cij's, including bulk modulus B, shear modulus G and Young's modulus E.
The calculated results showed good agreement with experimental results. The CALPHAD method was then adapted to assist in the database development of the elastic properties as a function of composition. On average, the database predicted the elastic properties of higher order Ti-alloys within 5 GPa of the experimental results. Finally, the formation of the metastable phases ω and α″ was studied in the Ti-Ta and Ti-Nb systems. The formation energies of these phases, calculated from first-principles at 0 K, showed that the phases have formation energies similar to the bcc and hcp phases. Inelastic neutron scattering was completed on four different Ti-Nb compositions to study the entropy of the phases as well as the transformations occurring when the phases form and the phase fractions. Ongoing work is being done to use the experimental information to introduce thermodynamic descriptions for these two phases in the Ti-Nb system in order to be able to predict their formation and phase fractions. DFT-based first-principles calculations were used to predict the effect these phases have on the elastic properties, and a rule of mixtures was used to determine the elastic properties of multi-phase alloys. The results were compared with experiments and showed that, if the ongoing modeling can predict the phase fractions, the elastic database can accurately predict the elastic properties of the ω and α″ phases. This thesis provides a knowledge base of the thermodynamic and elastic properties of Ti-alloys from computational thermodynamics. The databases created will impact research activities on Ti-alloys and specifically efforts focused on Ti-alloys for biomedical applications.
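The sketch below illustrates the two elastic calculations mentioned above: estimating polycrystalline aggregate moduli (B, G, E) from cubic single-crystal cij's via the standard Voigt-Reuss-Hill averages, and a simple linear rule of mixtures for a multi-phase alloy. The numerical inputs are placeholders, not values from the thesis database.

```python
# Hedged sketch: Voigt-Reuss-Hill aggregates from cubic cij's, plus a linear
# rule of mixtures for a two-phase alloy. Inputs are placeholder values.

def voigt_reuss_hill_cubic(c11, c12, c44):
    """Return (B, G, E) in the same units as the cij inputs (cubic symmetry)."""
    B = (c11 + 2.0 * c12) / 3.0                                       # bulk modulus
    G_V = (c11 - c12 + 3.0 * c44) / 5.0                               # Voigt shear bound
    G_R = 5.0 * (c11 - c12) * c44 / (4.0 * c44 + 3.0 * (c11 - c12))   # Reuss shear bound
    G = 0.5 * (G_V + G_R)                                             # Hill average
    E = 9.0 * B * G / (3.0 * B + G)                                   # Young's modulus
    return B, G, E

def rule_of_mixtures(phase_fractions, phase_moduli):
    """Linear rule of mixtures for a multi-phase property, e.g. Young's modulus."""
    return sum(f * m for f, m in zip(phase_fractions, phase_moduli))

B, G, E = voigt_reuss_hill_cubic(c11=160.0, c12=110.0, c44=40.0)   # GPa, placeholder bcc alloy
E_two_phase = rule_of_mixtures([0.8, 0.2], [E, 100.0])             # 20 % second phase, placeholder
print(round(B, 1), round(G, 1), round(E, 1), round(E_two_phase, 1))
```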
Development of expert systems for analyzing electronic documents
NASA Astrophysics Data System (ADS)
Abeer Yassin, Al-Azzawi; Shidlovskiy, S.; Jamal, A. A.
2018-05-01
The paper analyses a database management system (DBMS). Expert systems, databases, and database technology have become an essential component of everyday life in modern society. As databases are widely used in every organization with a computer system, data resource control and data management are very important [1]. A DBMS is the most significant tool developed to serve multiple users in a database environment, consisting of programs that enable users to create and maintain a database. This paper focuses on the development of a database management system for the General Directorate for Education of Diyala in Iraq (GDED) using CLIPS, Java NetBeans and Alfresco, together with system components previously developed at Tomsk State University at the Faculty of Innovative Technology.
NASA Technical Reports Server (NTRS)
Singh, M.
2011-01-01
During the last decades, a number of fiber reinforced ceramic composites have been developed and tested for various aerospace and ground based applications. However, a number of challenges still remain, slowing the wide-scale implementation of these materials. In addition to continuous fiber reinforced composites, other innovative materials have been developed, including fibrous monoliths and sintered fiber bonded ceramics. The sintered silicon carbide fiber bonded ceramics have been fabricated by the hot pressing and sintering of silicon carbide fibers. However, for this system a reliable property database, as well as various issues related to thermomechanical performance, integration, and fabrication of large and complex-shaped components, has yet to be addressed. In this presentation, thermomechanical properties of sintered silicon carbide fiber bonded ceramics (as fabricated and joined) will be presented. In addition, the critical need for manufacturing and integration technologies in the successful implementation of these materials will be discussed.
NASA Astrophysics Data System (ADS)
De Rosa, Benedetto; Di Girolamo, Paolo; Summa, Donato; Stelitano, Dario; Mancini, Ignazio
2016-06-01
In November 2012, the University of BASILicata Raman Lidar system (BASIL) was approved to enter the International Network for the Detection of Atmospheric Composition Change (NDACC). This network includes more than 70 high-quality remote-sensing research stations for observing and understanding the physical and chemical state of the upper troposphere and stratosphere and for assessing the impact of stratospheric changes on the underlying troposphere and on global climate. As part of this network, more than thirty ground-based lidars deployed worldwide are routinely operated to monitor atmospheric ozone, temperature, aerosols, water vapour, and polar stratospheric clouds. In the framework of NDACC, BASIL performs measurements on a routine basis each Thursday, typically from local noon to midnight, covering a large portion of the daily cycle. Measurements from BASIL are included in the NDACC database in terms of both water vapour mixing ratio and temperature. This paper illustrates some measurement examples from BASIL, with a specific focus on water vapour measurements, with the goal of characterizing the system performance.
Preliminary study of a potential CO2 reservoir area in Hungary
NASA Astrophysics Data System (ADS)
Sendula, Eszter; Király, Csilla; Szabó, Zsuzsanna; Falus, György; Szabó, Csaba; Kovács, István; Füri, Judit; Kónya, Péter; Páles, Mariann; Forray, Viktória
2014-05-01
Since the first international agreement in 1997 (the Kyoto Protocol), the reduction of greenhouse gas emissions has had a key role in the European Union's energy and climate change policy. Following Directive 2009/31/EC, we are experiencing a significant change in the Hungarian national activity. Since the harmonization procedure, which was completed in May 2012, the national regulation obligates the competent authority to collect and regularly update all geological complexes that are potentially suitable for CO2 geological storage. In Hungary the most abundant potential storage formations are mostly saline aquifers of the Great Hungarian Plain (SE Hungary), with sandstone reservoirs and clayey caprock. The Neogene basin of the Great Hungarian Plain subsided and was then filled by a prograding delta system from NW and NE during the Late Miocene, mostly in the Pannonian time. The most promising storage rock was formed as a fine-grained sandy turbidite interlayered by thin argillaceous beds in the deepest part of the basin. It has relatively high porosity and depth and more than 1000 m thickness. Providing a regional coverage for the sandy turbidite, a 400-500 m thick argillaceous succession was formed in the slope environment. Its composition, thickness and low permeability are expected to make it a suitable, leakage-safe caprock of the storage system. This succession is underlain by argillaceous rocks that were formed in the basin, far from sediment input, and overlain by an interfingering siltstone, sandstone and claystone succession formed in delta and shoreline environments and in the alluvial plain. Core samples have been collected from the potential reservoir rock and its caprock in the Great Hungarian Plain's succession. The water compositions at the studied depths were known from a well-log database. Using the information acquired from these archive documents, we constructed input data for geochemical modeling in order to study the effect of CO2 injection in the potential CO2 storage environment. From the potential reservoir rock samples (sandstone), thin sections were prepared to determine the mineral composition, pore distribution, pore geometry and grain size. The volume ratio of the minerals was calculated using a pixel counter. To obtain a more accurate mineral composition, petrographic observations and SEM analyses were carried out. The caprock samples involved in the study can be divided into mudstone and aleurolite samples. To determine the mineral composition of these samples, XRD, DTA, FTIR and SEM analyses were carried out. To obtain a picture of the geochemical behavior of the potential CO2 storage system, geochemical models were made for the reservoir rocks. For the equilibrium geochemical model, PHREEQC 3.0 was used applying the LLNL database. The data used in the model are real pore water compositions from the studied area and an average mineral composition based on petrographic microscope and SEM images. In the model we considered the cation-anion ratio (<10%) and the partial pressure of CO2. First of all, we were interested in the direction of the geochemical reactions during an injection process. The present work is focused on the mineralogy of the most promising storage rock and its caprock, and their expected geochemical reactions to the effect of scCO2.
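The abstract above mentions a cation-anion ratio criterion (<10%) applied to the pore-water analyses before modeling. The sketch below shows one common way such a charge-balance check is computed; the species list and concentrations are placeholders, not the measured Great Hungarian Plain compositions.

```python
# Hedged sketch of a charge-balance (cation-anion) check on a water analysis.
# Concentrations are placeholders in mmol/L, not the study's measured values.

CHARGES = {"Na+": 1, "K+": 1, "Ca2+": 2, "Mg2+": 2,
           "Cl-": -1, "HCO3-": -1, "SO42-": -2}

def charge_balance_error(conc_mmol_per_l):
    """Percent charge-balance error; |error| < 10 % matches the criterion above."""
    cations = sum(c * CHARGES[s] for s, c in conc_mmol_per_l.items() if CHARGES[s] > 0)
    anions = sum(-c * CHARGES[s] for s, c in conc_mmol_per_l.items() if CHARGES[s] < 0)
    return 100.0 * (cations - anions) / (cations + anions)

water = {"Na+": 30.0, "K+": 0.5, "Ca2+": 1.0, "Mg2+": 0.8,
         "Cl-": 5.0, "HCO3-": 27.0, "SO42-": 0.4}
print(round(charge_balance_error(water), 2))   # ~1.9 % for this placeholder analysis
```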
ORBDA: An openEHR benchmark dataset for performance assessment of electronic health record servers.
Teodoro, Douglas; Sundvall, Erik; João Junior, Mario; Ruch, Patrick; Miranda Freire, Sergio
2018-01-01
The openEHR specifications are designed to support implementation of flexible and interoperable Electronic Health Record (EHR) systems. Despite the increasing number of solutions based on the openEHR specifications, it is difficult to find publicly available healthcare datasets in the openEHR format that can be used to test, compare and validate different data persistence mechanisms for openEHR. To foster research on openEHR servers, we present the openEHR Benchmark Dataset, ORBDA, a very large healthcare benchmark dataset encoded using the openEHR formalism. To construct ORBDA, we extracted and cleaned a de-identified dataset from the Brazilian National Healthcare System (SUS) containing hospitalisation and high complexity procedures information and formalised it using a set of openEHR archetypes and templates. Then, we implemented a tool to enrich the raw relational data and convert it into the openEHR model using the openEHR Java reference model library. The ORBDA dataset is available in composition, versioned composition and EHR openEHR representations in XML and JSON formats. In total, the dataset contains more than 150 million composition records. We describe the dataset and provide means to access it. Additionally, we demonstrate the usage of ORBDA for evaluating the insert throughput and query latency of some NoSQL database management systems. We believe that ORBDA is a valuable asset for assessing storage models for openEHR-based information systems during the software engineering process. It may also be a suitable component in future standardised benchmarking of available openEHR storage platforms.
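The sketch below shows a generic version of the benchmark described above: measuring insert throughput and query latency for a document store. The in-memory dict is only a stand-in for a real NoSQL client, and the ORBDA-like field names are assumptions.

```python
# Generic benchmark harness: insert throughput and query latency for a store.
# The dict is a stand-in for a NoSQL client; field names are illustrative.
import time, statistics

def benchmark(store_insert, store_query, documents, query_keys):
    t0 = time.perf_counter()
    for doc in documents:
        store_insert(doc)
    insert_throughput = len(documents) / (time.perf_counter() - t0)

    latencies = []
    for key in query_keys:
        t0 = time.perf_counter()
        store_query(key)
        latencies.append(time.perf_counter() - t0)
    return insert_throughput, statistics.median(latencies)

store = {}   # replace with a real document-database client in practice
docs = [{"ehr_id": i, "composition": {"procedure": "example"}} for i in range(10000)]
throughput, median_latency = benchmark(
    lambda d: store.__setitem__(d["ehr_id"], d),
    lambda k: store.get(k),
    docs, query_keys=range(0, 10000, 100))
print(f"{throughput:.0f} inserts/s, median query latency {median_latency*1e6:.1f} µs")
```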
A Relational Database System for Student Use.
ERIC Educational Resources Information Center
Fertuck, Len
1982-01-01
Describes an APL implementation of a relational database system suitable for use in a teaching environment in which database development and database administration are studied, and discusses the functions of the user and the database administrator. An appendix illustrating system operation and an eight-item reference list are attached. (Author/JL)
Labay, Ben; Cohen, Adam E; Sissel, Blake; Hendrickson, Dean A; Martin, F Douglas; Sarkar, Sahotra
2011-01-01
Accurate establishment of baseline conditions is critical to successful management and habitat restoration. We demonstrate the ability to robustly estimate historical fish community composition and assess the current status of the urbanized Barton Creek watershed in central Texas, U.S.A. Fish species were surveyed in 2008 and the resulting data compared to three sources of fish occurrence information: (i) historical records from a museum specimen database and literature searches; (ii) a nearly identical survey conducted 15 years earlier; and (iii) a modeled historical community constructed with species distribution models (SDMs). This holistic approach, and especially the application of SDMs, allowed us to discover that the fish community in Barton Creek was more diverse than the historical data and survey methods alone indicated. Sixteen native species with high modeled probability of occurrence within the watershed were not found in the 2008 survey, seven of these were not found in either survey or in any of the historical collection records. Our approach allowed us to more rigorously establish the true baseline for the pre-development fish fauna and then to more accurately assess trends and develop hypotheses regarding factors driving current fish community composition to better inform management decisions and future restoration efforts. Smaller, urbanized freshwater systems, like Barton Creek, typically have a relatively poor historical biodiversity inventory coupled with long histories of alteration, and thus there is a propensity for land managers and researchers to apply inaccurate baseline standards. Our methods provide a way around that limitation by using SDMs derived from larger and richer biodiversity databases of a broader geographic scope. Broadly applied, we propose that this technique has potential to overcome limitations of popular bioassessment metrics (e.g., IBI) to become a versatile and robust management tool for determining status of freshwater biotic communities.
Morphology and microstructure of composite materials
NASA Technical Reports Server (NTRS)
Tiwari, S. N.; Srinivansan, K.
1991-01-01
Lightweight continuous carbon fiber based polymeric composites are currently enjoying increasing acceptance as structural materials capable of replacing metals and alloys in load bearing applications. As with most new materials, these composites are undergoing trials with several competing processing techniques aimed at cost-effectively producing void-free consolidations with good mechanical properties. As metallic materials have been in use for several centuries, a considerable database exists on their morphology and microstructure, and the interrelationships between structure and properties have been well documented. Numerous studies on composites have established the crucial relationship between microstructure, morphology and properties. The various microstructural and morphological features of composite materials, particularly those accompanying different processing routes, are documented.
James Webb Space Telescope XML Database: From the Beginning to Today
NASA Technical Reports Server (NTRS)
Gal-Edd, Jonathan; Fatig, Curtis C.
2005-01-01
The James Webb Space Telescope (JWST) Project has been defining, developing, and exercising the use of a common eXtensible Markup Language (XML) for the command and telemetry (C&T) database structure. JWST is the first large NASA space mission to use XML for databases. The JWST project started developing the concepts for the C&T database in 2002. The database will need to last at least 20 years since it will be used beginning with flight software development, continuing through Observatory integration and test (I&T) and through operations. Also, a database tool kit has been provided to the 18 various flight software development laboratories located in the United States, Europe, and Canada that allows the local users to create their own databases. Recently the JWST Project has been working with the Jet Propulsion Laboratory (JPL) and Object Management Group (OMG) XML Telemetry and Command Exchange (XTCE) personnel to provide all the information needed by JWST and JPL for exchanging database information using an XML standard structure. The lack of standardization requires custom ingest scripts for each ground system segment, increasing the cost of the total system. Providing a non-proprietary standard of the telemetry and command database definition format will allow dissimilar systems to communicate without the need for expensive mission specific database tools and testing of the systems after the database translation. The various ground system components that would benefit from a standardized database are the telemetry and command systems, archives, simulators, and trending tools. JWST has exchanged the XML database with the Eclipse, EPOCH, ASIST ground systems, Portable Spacecraft Simulator (PSS), a front-end system, and Integrated Trending and Plotting System (ITPS) successfully. This paper will discuss how JWST decided to use XML, the barriers to a new concept, experiences utilizing the XML structure, exchanging databases with other users, and issues that have been experienced in creating databases for the C&T system.
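As a purely illustrative aid, the sketch below reads a simplified, XTCE-like XML telemetry definition into a dictionary. The XML layout, parameter names, and attributes are invented for the example and are not the actual JWST C&T database or XTCE schema.

```python
# Hedged sketch of parsing a simplified, XTCE-like XML telemetry definition.
# The XML layout below is illustrative only, not the JWST or XTCE schema.
import xml.etree.ElementTree as ET

XML = """
<TelemetryDatabase>
  <Parameter name="BATT_VOLT" type="float" units="V"/>
  <Parameter name="HTR_STATE" type="enum" units=""/>
</TelemetryDatabase>
"""

def load_parameters(xml_text):
    root = ET.fromstring(xml_text)
    return {p.get("name"): {"type": p.get("type"), "units": p.get("units")}
            for p in root.findall("Parameter")}

params = load_parameters(XML)
print(params["BATT_VOLT"])   # {'type': 'float', 'units': 'V'}
```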
Concordance of Commercial Data Sources for Neighborhood-Effects Studies
Schootman, Mario
2010-01-01
Growing evidence supports a relationship between neighborhood-level characteristics and important health outcomes. One source of neighborhood data includes commercial databases integrated with geographic information systems to measure availability of certain types of businesses or destinations that may have either favorable or adverse effects on health outcomes; however, the quality of these data sources is generally unknown. This study assessed the concordance of two commercial databases for ascertaining the presence, locations, and characteristics of businesses. Businesses in the St. Louis, Missouri area were selected based on their four-digit Standard Industrial Classification (SIC) codes and classified into 14 business categories. Business listings in the two commercial databases were matched by standardized business name within specified distances. Concordance and coverage measures were calculated using capture–recapture methods for all businesses and by business type, with further stratification by census-tract-level population density, percent below poverty, and racial composition. For matched listings, distance between listings and agreement in four-digit SIC code, sales volume, and employee size were calculated. Overall, the percent agreement was 32% between the databases. Concordance and coverage estimates were lowest for health-care facilities and leisure/entertainment businesses; highest for popular walking destinations, eating places, and alcohol/tobacco establishments; and varied somewhat by population density. The mean distance (SD) between matched listings was 108.2 (179.0) m with varying levels of agreement in four-digit SIC (percent agreement = 84.6%), employee size (weighted kappa = 0.63), and sales volume (weighted kappa = 0.04). Researchers should cautiously interpret findings when using these commercial databases to yield measures of the neighborhood environment. PMID:20480397
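The study above uses capture-recapture methods to estimate coverage of two overlapping business listings. The sketch below shows one simple variant of that idea, the Lincoln-Petersen estimator; the counts are placeholders and this is not necessarily the exact estimator used in the study.

```python
# Hedged sketch of a capture-recapture coverage estimate for two business lists.
# Counts are placeholders; Lincoln-Petersen is one simple variant of the method.

def lincoln_petersen(n1, n2, matched):
    """Estimate the true number of businesses from two overlapping lists."""
    n_hat = n1 * n2 / matched
    return n_hat, n1 / n_hat, n2 / n_hat   # estimated total, coverage of list 1, coverage of list 2

n_hat, cov1, cov2 = lincoln_petersen(n1=500, n2=450, matched=160)
print(round(n_hat), round(cov1, 2), round(cov2, 2))   # ~1406 businesses, ~0.36 and ~0.32 coverage
```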
ERIC Educational Resources Information Center
Dalrymple, Prudence W.; Roderer, Nancy K.
1994-01-01
Highlights the changes that have occurred from 1987-93 in database access systems. Topics addressed include types of databases, including CD-ROMs; enduser interface; database selection; database access management, including library instruction and use of primary literature; economic issues; database users; the search process; and improving…
An Introduction to Database Structure and Database Machines.
ERIC Educational Resources Information Center
Detweiler, Karen
1984-01-01
Enumerates principal management objectives of database management systems (data independence, quality, security, multiuser access, central control) and criteria for comparison (response time, size, flexibility, other features). Conventional database management systems, relational databases, and database machines used for backend processing are…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-16
... Excluded Parties Listing System (EPLS) databases into the System for Award Management (SAM) database. DATES... combined the functional capabilities of the CCR, ORCA, and EPLS procurement systems into the SAM database... identification number and the type of organization from the System for Award Management database. 0 3. Revise the...
Heterogeneous distributed databases: A case study
NASA Technical Reports Server (NTRS)
Stewart, Tracy R.; Mukkamala, Ravi
1991-01-01
Alternatives are reviewed for accessing distributed heterogeneous databases and a recommended solution is proposed. The current study is limited to the Automated Information Systems Center at the Naval Sea Combat Systems Engineering Station at Norfolk, VA. This center maintains two databases located on Digital Equipment Corporation's VAX computers running under the VMS operating system. The first database, ICMS, resides on a VAX 11/780 and has been implemented using VAX DBMS, a CODASYL based system. The second database, CSA, resides on a VAX 6460 and has been implemented using the ORACLE relational database management system (RDBMS). Both databases are used for configuration management within the U.S. Navy. Different customer bases are supported by each database. ICMS tracks U.S. Navy ships and major systems (anti-sub, sonar, etc.). Even though the major systems on ships and submarines have totally different functions, some of the equipment within the major systems is common to both ships and submarines.
Analysis of a Uranium Oxide Sample Interdicted in Slovakia (FSC 12-3-1)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borg, Lars E.; Dai, Zurong; Eppich, Gary R.
2014-01-17
We provide a concise summary of analyses of a natural uranium sample seized in Slovakia in November 2007. Results are presented for compound identification, water content, U assay, trace element abundances, trace organic compounds, isotope compositions for U, Pb, Sr and O, and age determination using the ²³⁴U-²³⁰Th and ²³⁵U-²³¹Pa chronometers. The sample is a mixture of two common uranium compounds - schoepite and uraninite. The uranium isotope composition is indistinguishable from natural; ²³⁶U was not detected. The O, Sr and Pb isotope compositions and trace element abundances are unremarkable. The ²³⁴U-²³⁰Th chronometer gives an age of 15.5 years relative to the date of analysis, indicating the sample was produced in January 1997. A comparison of the data for this sample with data in the Uranium Sourcing database failed to find a match, indicating the sample was not produced at a facility represented in the database.
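The sketch below shows the simplified form of the ²³⁴U-²³⁰Th "age since purification" calculation implied above, under the common assumptions of complete thorium removal at production and negligible ²³⁴U decay over decades. The half-life value and the activity ratio are placeholders, not the measured data.

```python
# Hedged sketch of a 234U-230Th model age, assuming no initial 230Th and
# negligible 234U decay over the timescale of interest. Inputs are placeholders.
import math

LAMBDA_230 = math.log(2) / 75_600.0     # 230Th decay constant, 1/yr (half-life ~75.6 kyr)

def model_age_years(activity_ratio_230_234):
    """Years elapsed since the uranium was last purified of thorium."""
    return -math.log(1.0 - activity_ratio_230_234) / LAMBDA_230

print(round(model_age_years(1.42e-4), 1))   # ~15.5 yr for this placeholder activity ratio
```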
Lee, Howard; Chapiro, Julius; Schernthaner, Rüdiger; Duran, Rafael; Wang, Zhijun; Gorodetski, Boris; Geschwind, Jean-François; Lin, MingDe
2015-04-01
The objective of this study was to demonstrate that an intra-arterial liver therapy clinical research database system is a more workflow-efficient and robust tool for clinical research than a spreadsheet storage system. The database system could be used to generate clinical research study populations easily with custom search and retrieval criteria. A questionnaire was designed and distributed to 21 board-certified radiologists to assess current data storage problems and clinician reception to a database management system. Based on the questionnaire findings, a customized database and user interface system were created to perform automatic calculations of clinical scores, including staging systems such as the Child-Pugh and Barcelona Clinic Liver Cancer classifications, and to facilitate data input and output. Questionnaire participants were favorable to a database system. The interface retrieved study-relevant data accurately and effectively. The database effectively produced easy-to-read study-specific patient populations with custom-defined inclusion/exclusion criteria. The database management system is workflow efficient and robust in retrieving, storing, and analyzing data. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
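To illustrate the kind of automatic clinical-score calculation mentioned above, the sketch below computes a Child-Pugh score using the commonly published thresholds. This is not the study's own implementation, and the example inputs are placeholders.

```python
# Hedged sketch of a Child-Pugh score calculation using commonly published
# thresholds; not the study's implementation. Example inputs are placeholders.

def _points(value, low, high):
    """1 point below `low`, 2 points up to `high` inclusive, 3 points above."""
    return 1 if value < low else (2 if value <= high else 3)

def child_pugh(bilirubin_mg_dl, albumin_g_dl, inr, ascites_grade, enceph_grade):
    """ascites_grade / enceph_grade: 1 = none, 2 = mild / grade I-II, 3 = moderate-severe / grade III-IV."""
    score = _points(bilirubin_mg_dl, 2.0, 3.0)
    score += 1 if albumin_g_dl > 3.5 else (2 if albumin_g_dl >= 2.8 else 3)
    score += _points(inr, 1.7, 2.3)
    score += ascites_grade + enceph_grade
    return score, "A" if score <= 6 else ("B" if score <= 9 else "C")

print(child_pugh(bilirubin_mg_dl=2.4, albumin_g_dl=3.0, inr=1.5,
                 ascites_grade=2, enceph_grade=1))   # (8, 'B')
```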
Implementation of a data management software system for SSME test history data
NASA Technical Reports Server (NTRS)
Abernethy, Kenneth
1986-01-01
The implementation of a software system for managing Space Shuttle Main Engine (SSME) test/flight historical data is presented. The software system uses the database management system RIM7 for primary data storage and routine data management, but includes several FORTRAN programs, described here, which provide customized access to the RIM7 database. The consolidation, modification, and transfer of data from the database THIST, to the RIM7 database THISRM is discussed. The RIM7 utility modules for generating some standard reports from THISRM and performing some routine updating and maintenance are briefly described. The FORTRAN accessing programs described include programs for initial loading of large data sets into the database, capturing data from files for database inclusion, and producing specialized statistical reports which cannot be provided by the RIM7 report generator utility. An expert system tutorial, constructed using the expert system shell product INSIGHT2, is described. Finally, a potential expert system, which would analyze data in the database, is outlined. This system could use INSIGHT2 as well and would take advantage of RIM7's compatibility with the microcomputer database system RBase 5000.
Development and Operation of a Database Machine for Online Access and Update of a Large Database.
ERIC Educational Resources Information Center
Rush, James E.
1980-01-01
Reviews the development of a fault tolerant database processor system which replaced OCLC's conventional file system. A general introduction to database management systems and the operating environment is followed by a description of the hardware selection, software processes, and system characteristics. (SW)
75 FR 18255 - Passenger Facility Charge Database System for Air Carrier Reporting
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-09
... Facility Charge Database System for Air Carrier Reporting AGENCY: Federal Aviation Administration (FAA... the Passenger Facility Charge (PFC) database system to report PFC quarterly report information. In... developed a national PFC database system in order to more easily track the PFC program on a nationwide basis...
An Improved Database System for Program Assessment
ERIC Educational Resources Information Center
Haga, Wayne; Morris, Gerard; Morrell, Joseph S.
2011-01-01
This research paper presents a database management system for tracking course assessment data and reporting related outcomes for program assessment. It improves on a database system previously presented by the authors and in use for two years. The database system presented is specific to assessment for ABET (Accreditation Board for Engineering and…
76 FR 11465 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-02
... separate systems of records: ``FHFA-OIG Audit Files Database,'' ``FHFA-OIG Investigative & Evaluative Files Database,'' ``FHFA-OIG Investigative & Evaluative MIS Database,'' and ``FHFA-OIG Hotline Database.'' These... Audit Files Database. FHFA-OIG-2: FHFA-OIG Investigative & Evaluative Files Database. FHFA-OIG-3: FHFA...
A management information system to study space diets
NASA Technical Reports Server (NTRS)
Kang, Sukwon; Both, A. J.; Janes, H. W. (Principal Investigator)
2002-01-01
A management information system (MIS), including a database management system (DBMS) and a decision support system (DSS), was developed to dynamically analyze the variable nutritional content of foods grown and prepared in an Advanced Life Support System (ALSS) such as required for long-duration space missions. The DBMS was designed around the known nutritional content of a list of candidate crops and their prepared foods. The DSS was designed to determine the composition of the daily crew diet based on crop and nutritional information stored in the DBMS. Each of the selected food items was assumed to be harvested from a yet-to-be designed ALSS biomass production subsystem and further prepared in accompanying food preparation subsystems. The developed DBMS allows for the analysis of the nutrient composition of a sample 20-day diet for future Advanced Life Support missions and is able to determine the required quantities of food needed to satisfy the crew's daily consumption. In addition, based on published crop growth rates, the DBMS was able to calculate the required size of the biomass production area needed to satisfy the daily food requirements for the crew. Results from this study can be used to help design future ALSS for which the integration of various subsystems (e.g., biomass production, food preparation and consumption, and waste processing) is paramount for the success of the mission.
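The sketch below illustrates the two calculations described above: summing the nutrient composition of a daily menu and sizing the biomass production area from crop growth rates. Every food item, nutrient value, yield, and conversion factor here is a placeholder, not ALSS data.

```python
# Hedged sketch of the DBMS/DSS calculations described above. All values are
# placeholders: nutrient data, crop yields and food-to-crop factors are invented.

FOOD_NUTRIENTS = {            # per 100 g edible portion (placeholder values)
    "wheat bread": {"energy_kcal": 265, "protein_g": 9.0},
    "potato mash": {"energy_kcal": 88,  "protein_g": 1.9},
}
CROP_YIELD_G_PER_M2_DAY = {"wheat": 22.0, "potato": 37.0}                       # placeholder
FOOD_TO_CROP = {"wheat bread": ("wheat", 0.8), "potato mash": ("potato", 1.0)}  # g crop per g food

def daily_totals(menu_g):
    totals = {"energy_kcal": 0.0, "protein_g": 0.0}
    for food, grams in menu_g.items():
        for nutrient, per100 in FOOD_NUTRIENTS[food].items():
            totals[nutrient] += per100 * grams / 100.0
    return totals

def required_area_m2(menu_g):
    area = 0.0
    for food, grams in menu_g.items():
        crop, crop_per_food_g = FOOD_TO_CROP[food]
        area += grams * crop_per_food_g / CROP_YIELD_G_PER_M2_DAY[crop]
    return area

menu = {"wheat bread": 250, "potato mash": 400}   # one crew member, one day
print(daily_totals(menu), round(required_area_m2(menu), 1), "m2")
```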
Antibacterial agents in composite restorations for the prevention of dental caries.
Pereira-Cenci, Tatiana; Cenci, Maximiliano S; Fedorowicz, Zbys; Azevedo, Marina
2013-12-17
Dental caries is a multifactorial disease in which the fermentation of food sugars by bacteria from the biofilm (dental plaque) leads to localised demineralisation of tooth surfaces, which may ultimately result in cavity formation. Resin composites are widely used in dentistry to restore teeth. These restorations can fail for a number of reasons, such as secondary caries, restorative material fracture and other minor causes. Of these, secondary caries, that is, caries lesions that develop adjacent to restorations, is the main cause of restoration replacement. The presence of antibacterials in both the filling material and the bonding systems would theoretically be able to affect the initiation and progression of caries adjacent to restorations. This is an update of the Cochrane review published in 2009. The objective was to assess the effects of antibacterial agents incorporated into composite restorations for the prevention of dental caries. We searched the following electronic databases: the Cochrane Oral Health Group's Trials Register (to 23 July 2013), the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2013, Issue 6), MEDLINE via OVID (1946 to 23 July 2013) and EMBASE via OVID (1980 to 23 July 2013). We searched the US National Institutes of Health Trials Register (http://clinicaltrials.gov), the metaRegister of Controlled Trials (www.controlled-trials.com) and the World Health Organization International Clinical Trials Registry platform (www.who.int/trialsearch) for ongoing trials. No restrictions were placed on the language or date of publication when searching the electronic databases. We included randomised controlled trials comparing resin composite restorations containing antibacterial agents with composite restorations not containing antibacterial agents. Two review authors conducted screening of studies in duplicate and independently, and although no eligible trials were identified, the two authors had planned to extract data independently and assess trial quality using standard Cochrane Collaboration methodologies. We retrieved 308 references to studies, none of which matched the inclusion criteria for this review, and all of which were excluded. We were unable to identify any randomised controlled trials on the effects of antibacterial agents incorporated into composite restorations for the prevention of dental caries. The absence of high-level evidence for the effectiveness of this intervention emphasises the need for well designed, adequately powered, randomised controlled clinical trials. Thus, conclusions remain the same as in the previously published review, with no included clinical trials.
Environmental impact of PV cell waste scenario.
Bogacka, M; Pikoń, K; Landrat, M
2017-12-01
Rapid growth of the volume of waste from PV cells is expected in the following years, and the problem of its utilization seems to be the most important issue for future waste management systems. The environmental impacts of the PV recycling scenario are presented in the manuscript. The analysis is based on the LCA approach and uses the average data available in specialized databases for a standard silicon PV cell. The functional unit includes parameters such as efficiency, composition and surface area. The manuscript also discusses how the environmental impact changes with the location of the PV production and waste processing plants, as well as the environmental effect of substituting different energy resources with PV cells. The analysis of the PV cell life cycle scenario presented in the article was performed using the SimaPro software and data from the Ecoinvent 3.0 database, together with additional data obtained from other sources. Copyright © 2017 Elsevier Ltd. All rights reserved.
Bedford, David R.
2003-01-01
This geologic map database describes geologic materials for the Kelso 7.5 Minute Quadrangle, San Bernardino County, California. The area lies in the eastern Mojave Desert of California, within the Mojave National Preserve (a unit of the National Park System). Geologic deposits in the area consist of Proterozoic metamorphic rocks, Cambrian-Neoproterozoic sedimentary rocks, Mesozoic plutonic and hypabyssal rocks, Tertiary basin fill, and Quaternary surficial deposits. Bedrock deposits are described by composition, texture, and stratigraphic relationships. Quaternary surficial deposits are classified into soil-geomorphic surfaces based on soil characteristics, inset relationships, and geomorphic expression. The surficial geology presented in this report is especially useful for understanding, and extrapolating, physical properties that influence surface conditions and surface- and soil-water dynamics. Physical characteristics such as pavement development, soil horizonation, and hydraulic characteristics have been shown to be some of the primary drivers of ecologic dynamics, including the recovery of ecosystems from anthropogenic disturbance, in the eastern Mojave Desert and other arid and semi-arid environments.
CBS Genome Atlas Database: a dynamic storage for bioinformatic results and sequence data.
Hallin, Peter F; Ussery, David W
2004-12-12
Currently, new bacterial genomes are being published on a monthly basis. With the growing amount of genome sequence data, there is a demand for a flexible and easy-to-maintain structure for storing sequence data and results from bioinformatic analysis. More than 150 sequenced bacterial genomes are now available, and comparisons of properties for taxonomically similar organisms are not readily available to many biologists. In addition to the most basic information, such as AT content, chromosome length, tRNA count and rRNA count, a large number of more complex calculations are needed to perform detailed comparative genomics. DNA structural calculations like curvature and stacking energy, and DNA compositions like base skews, oligo skews and repeats at the local and global level, are just a few of the analyses that are presented on the CBS Genome Atlas Web page. Complex analyses, changing methods and the frequent addition of new models are factors that require a dynamic database layout. Using basic tools like the GNU Make system, csh, Perl and MySQL, we have created a flexible database environment for storing and maintaining such results for a collection of complete microbial genomes. Currently, these results amount to more than 220 pieces of information. The backbone of this solution consists of a program package written in Perl, which enables administrators to synchronize and update the database content. The MySQL database has been connected to the CBS web server via PHP4 to present dynamic web content for users outside the center. This solution is tightly fitted to the existing server infrastructure, and the solutions proposed here can perhaps serve as a template for other research groups to solve database issues. A web-based user interface which is dynamically linked to the Genome Atlas Database can be accessed via www.cbs.dtu.dk/services/GenomeAtlas/. This paper has a supplemental information page which links to the examples presented: www.cbs.dtu.dk/services/GenomeAtlas/suppl/bioinfdatabase.
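The sketch below computes two of the simpler per-genome values mentioned above, AT content and a windowed GC skew. It is a generic illustration of these measures, not the Genome Atlas code, and the example sequence is synthetic.

```python
# Generic illustration of AT content and windowed GC skew for a DNA sequence.
# Not the Genome Atlas implementation; the sequence below is synthetic.

def at_content(seq):
    seq = seq.upper()
    return (seq.count("A") + seq.count("T")) / len(seq)

def gc_skew(seq):
    """(G - C) / (G + C) for one sequence window."""
    g, c = seq.upper().count("G"), seq.upper().count("C")
    return (g - c) / (g + c) if (g + c) else 0.0

genome = "ATGCGCGGTATATCCGGCATTAGC" * 100
window = 50
skews = [gc_skew(genome[i:i + window]) for i in range(0, len(genome) - window, window)]
print(round(at_content(genome), 3), round(sum(skews) / len(skews), 3))
```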
Active in-database processing to support ambient assisted living systems.
de Morais, Wagner O; Lundström, Jens; Wickström, Nicholas
2014-08-12
As an alternative to the existing software architectures that underpin the development of smart homes and ambient assisted living (AAL) systems, this work presents a database-centric architecture that takes advantage of active databases and in-database processing. Current platforms supporting AAL systems use database management systems (DBMSs) exclusively for data storage. Active databases employ database triggers to detect and react to events taking place inside or outside of the database. DBMSs can be extended with stored procedures and functions that enable in-database processing. This means that the data processing is integrated and performed within the DBMS. The feasibility and flexibility of the proposed approach were demonstrated with the implementation of three distinct AAL services. The active database was used to detect bed-exits and to discover common room transitions and deviations during the night. In-database machine learning methods were used to model early night behaviors. Consequently, active in-database processing avoids transferring sensitive data outside the database, and this improves performance, security and privacy. Furthermore, centralizing the computation into the DBMS facilitates code reuse, adaptation and maintenance. These are important system properties that take into account the evolving heterogeneity of users, their needs and the devices that are characteristic of smart homes and AAL systems. Therefore, DBMSs can provide capabilities to address requirements for scalability, security, privacy, dependability and personalization in applications of smart environments in healthcare.
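The sketch below illustrates the active-database idea described above, using SQLite (which supports triggers) as a stand-in for a full DBMS with stored procedures. The schema and the "night-time bed exit" rule are invented for the example, not the system's actual implementation.

```python
# Hedged sketch of active in-database processing with a database trigger.
# SQLite stands in for a full DBMS; schema and rule are illustrative only.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sensor_events (ts TEXT, sensor TEXT, value TEXT);
CREATE TABLE alerts (ts TEXT, message TEXT);

-- React inside the database: flag bed exits between 00:00 and 05:59.
CREATE TRIGGER bed_exit_alert AFTER INSERT ON sensor_events
WHEN NEW.sensor = 'bed' AND NEW.value = 'exit'
     AND CAST(strftime('%H', NEW.ts) AS INTEGER) < 6
BEGIN
    INSERT INTO alerts VALUES (NEW.ts, 'night-time bed exit detected');
END;
""")

con.execute("INSERT INTO sensor_events VALUES ('2024-01-01 03:12:00', 'bed', 'exit')")
con.execute("INSERT INTO sensor_events VALUES ('2024-01-01 14:05:00', 'bed', 'exit')")
print(con.execute("SELECT * FROM alerts").fetchall())   # only the 03:12 event is flagged
```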
Improved Information Retrieval Performance on SQL Database Using Data Adapter
NASA Astrophysics Data System (ADS)
Husni, M.; Djanali, S.; Ciptaningtyas, H. T.; Wicaksana, I. G. N. A.
2018-02-01
The NoSQL databases, short for Not Only SQL, are increasingly being used as the number of big data applications increases. Most systems still use relational databases (RDBs), but as the amount of data increases each year, systems handle big data with NoSQL databases to analyze and access data more quickly. NoSQL emerged as a result of the exponential growth of the internet and the development of web applications. The query syntax in a NoSQL database differs from that of a SQL database, therefore requiring code changes in the application. A data adapter allows applications to keep their SQL query syntax unchanged. Data adapters provide methods that can synchronize SQL databases with NoSQL databases. In addition, the data adapter provides an interface which applications can access to run SQL queries. Hence, this research applied a data adapter system to synchronize data between a MySQL database and Apache HBase using a direct access query approach, where the system allows the application to issue queries while the synchronization process is in progress. From the tests performed using the data adapter, the results show that the data adapter can synchronize between the SQL database, MySQL, and the NoSQL database, Apache HBase. The system spends memory resources in the range of 40% to 60%, and processor usage ranges from 10% to 90%. In addition, the performance of the NoSQL database was found to be better than that of the SQL database.
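The sketch below shows the data-adapter idea in miniature: the application keeps issuing SQL while the adapter mirrors each write into a NoSQL-style store and can serve reads from either side. The dict stands in for an HBase client, and the class and method names are illustrative, not the system described above.

```python
# Hedged sketch of a data adapter: SQL stays the application-facing interface
# while writes are mirrored into a NoSQL-style store. Names are illustrative.
import sqlite3

class DataAdapter:
    def __init__(self):
        self.sql = sqlite3.connect(":memory:")
        self.sql.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
        self.nosql = {}                       # stand-in for an HBase table

    def insert_user(self, user_id, name):
        # Application-facing call keeps its SQL syntax...
        self.sql.execute("INSERT INTO users VALUES (?, ?)", (user_id, name))
        # ...while the adapter synchronises the row into the NoSQL store.
        self.nosql[f"users:{user_id}"] = {"name": name}

    def get_user(self, user_id):
        # Direct-access query: served even while synchronisation is in progress.
        row = self.sql.execute("SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
        return row[0] if row else self.nosql.get(f"users:{user_id}", {}).get("name")

adapter = DataAdapter()
adapter.insert_user(1, "alice")
print(adapter.get_user(1))   # alice
```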
This document may be of assistance in applying the Title V air operating permit regulations. This document is part of the Title V Petition Database available at www2.epa.gov/title-v-operating-permits/title-v-petition-database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Jovan, Sarah
2012-01-01
The Forest Inventory and Analysis (FIA) Program's Lichen Communities Indicator is used for tracking epiphytic macrolichen diversity and is applied for monitoring air quality and climate change effects on forest health in the United States. Started in 1994, the Epiphytic Macrolichen Community Composition Database (GIVD ID NA-US-012) now has over 8,000 surveys of...
Composite Structures Damage Tolerance Analysis Methodologies
NASA Technical Reports Server (NTRS)
Chang, James B.; Goyal, Vinay K.; Klug, John C.; Rome, Jacob I.
2012-01-01
This report presents the results of a literature review conducted as part of the development of composite hardware fracture control guidelines funded by the NASA Engineering and Safety Center (NESC) under contract NNL04AA09B. The objective of the overall development tasks is to provide broad information and a database to the designers, analysts, and testing personnel who are engaged in space flight hardware production.
78 FR 2363 - Notification of Deletion of a System of Records; Automated Trust Funds Database
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-11
... Database AGENCY: Animal and Plant Health Inspection Service, USDA. ACTION: Notice of deletion of a system... establishing the Automated Trust Funds (ATF) database system of records. The Federal Information Security... Integrity Act of 1982, Public Law 97-255, provided authority for the system. The ATF database has been...
Ab Initio Design of Potent Anti-MRSA Peptides based on Database Filtering Technology
Mishra, Biswajit; Wang, Guangshun
2012-01-01
To meet the challenge of antibiotic resistance worldwide, a new generation of antimicrobials must be developed. This communication demonstrates ab initio design of potent peptides against methicillin-resistant Staphylococcus aureus (MRSA). Our idea is that the peptide is very likely to be active when most probable parameters are utilized in each step of the design. We derived the most probable parameters (e.g. amino acid composition, peptide hydrophobic content, and net charge) from the antimicrobial peptide database by developing a database filtering technology (DFT). Different from classic cationic antimicrobial peptides usually with high cationicity, DFTamP1, the first anti-MRSA peptide designed using this technology, is a short peptide with high hydrophobicity but low cationicity. Such a molecular design made the peptide highly potent. Indeed, the peptide caused bacterial surface damage and killed community-associated MRSA USA300 in 60 minutes. Structural determination of DFTamP1 by NMR spectroscopy revealed a broad hydrophobic surface, providing a basis for its potency against MRSA known to deploy positively charged moieties on the surface as a mechanism for resistance. A combination of our ab initio design with database screening led to yet another peptide with enhanced potency. Because of simple composition, short length, stability to proteases, and membrane targeting, the designed peptides are attractive leads for developing novel anti-MRSA therapeutics. Our database-derived design concept can be applied to the design of peptide mimicries to combat MRSA as well. PMID:22803960
Ab initio design of potent anti-MRSA peptides based on database filtering technology.
Mishra, Biswajit; Wang, Guangshun
2012-08-01
To meet the challenge of antibiotic resistance worldwide, a new generation of antimicrobials must be developed. This communication demonstrates ab initio design of potent peptides against methicillin-resistant Staphylococcus aureus (MRSA). Our idea is that the peptide is very likely to be active when the most probable parameters are utilized in each step of the design. We derived the most probable parameters (e.g., amino acid composition, peptide hydrophobic content, and net charge) from the antimicrobial peptide database by developing a database filtering technology (DFT). Different from classic cationic antimicrobial peptides usually with high cationicity, DFTamP1, the first anti-MRSA peptide designed using this technology, is a short peptide with high hydrophobicity but low cationicity. Such a molecular design made the peptide highly potent. Indeed, the peptide caused bacterial surface damage and killed community-associated MRSA USA300 in 60 min. Structural determination of DFTamP1 by NMR spectroscopy revealed a broad hydrophobic surface, providing a basis for its potency against MRSA known to deploy positively charged moieties on the surface as a mechanism for resistance. Our ab initio design combined with database screening led to yet another peptide with enhanced potency. Because of the simple composition, short length, stability to proteases, and membrane targeting, the designed peptides are attractive leads for developing novel anti-MRSA therapeutics. Our database-derived design concept can be applied to the design of peptide mimicries to combat MRSA as well.
NASA Astrophysics Data System (ADS)
Dolotovskii, I. V.; Dolotovskaya, N. V.; Larin, E. A.
2018-05-01
The article presents the architecture and content of a specialized analytical system for monitoring operational conditions, planning the consumption and generation of energy resources, long-term planning of production activities, and developing a strategy for the energy complex of gas processing enterprises. A compositional model of structured data on the equipment of the main systems of the power complex is proposed. The correctness of the software modules and the database of the analytical system is confirmed by comparing measurements on the electric power system equipment with simulations at an operating gas processing plant. High accuracy in planning the consumption of fuel and energy resources has been achieved (the error does not exceed 1%). The information and program modules of the analytical system make it possible to develop a strategy for improving the energy complex under changing technological topology and partial uncertainty of economic factors.
A manufacturing database of advanced materials used in spacecraft structures
NASA Technical Reports Server (NTRS)
Bao, Han P.
1994-01-01
Cost savings opportunities over the life cycle of a product are highest in the early exploratory phase when different design alternatives are evaluated not only for their performance characteristics but also for their methods of fabrication, which really control the ultimate manufacturing costs of the product. In the past, Design-To-Cost methodologies for spacecraft design concentrated on the sizing and weight issues more than anything else at the early so-called 'Vehicle Level' (Ref: DOD/NASA Advanced Composites Design Guide). Given the impact of manufacturing cost, the objective of this study is to identify the principal cost drivers for each materials technology and propose a quantitative approach to incorporating these cost drivers into the family of optimization tools used by the Vehicle Analysis Branch of NASA LaRC to assess various conceptual vehicle designs. The advanced materials being considered include aluminum-lithium alloys, thermoplastic graphite-polyether etherketone composites, graphite-bismaleimide composites, graphite-polyimide composites, and carbon-carbon composites. Two conventional materials are added to the study to serve as baseline materials against which the other materials are compared. These two conventional materials are aircraft aluminum alloys series 2000 and series 7000, and graphite-epoxy composites T-300/934. The following information is available in the database. For each material type, the mechanical, physical, thermal, and environmental properties are first listed. Next, the principal manufacturing processes are described. Whenever possible, guidelines for optimum processing conditions for specific applications are provided. Finally, six categories of cost drivers are discussed. They include design features affecting processing, tooling, materials, fabrication, joining/assembly, and quality assurance issues. It should be emphasized that this database is not an exhaustive database. Its primary use is to make the vehicle designer aware of some of the most important aspects of manufacturing associated with his/her choice of the structural materials. The other objective of this study is to propose a quantitative method to determine a Manufacturing Complexity Factor (MCF) for each material being contemplated. This MCF is derived on the basis of the six cost drivers mentioned above plus a Technology Readiness Factor which is very closely related to the Technology Readiness Level (TRL) as defined in the Access To Space final report. Short of any manufacturing information, our MCF is equivalent to the inverse of TRL. As more manufacturing information is available, our MCF is a better representation (than TRL) of the fabrication processes involved. The most likely application for MCF is in cost modeling for trade studies. On-going work is being pursued to expand the potential applications of MCF.
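The report's exact MCF formula is not given in the abstract, so the sketch below is only an illustrative guess at how the stated behaviour could be encoded: the factor reduces to 1/TRL when no manufacturing information exists, and otherwise blends 1/TRL with scores for the six cost-driver categories (the 1-to-10 scoring scale and the equal weighting are assumptions, not the report's method).

def manufacturing_complexity_factor(trl, cost_driver_scores=None):
    # Illustrative only: equals 1/TRL when no cost-driver data are supplied.
    readiness_term = 1.0 / trl
    if not cost_driver_scores:
        return readiness_term
    # Six driver scores (design features, tooling, materials, fabrication,
    # joining/assembly, quality assurance), each 1 (simple) to 10 (complex),
    # normalized to 0..1 and averaged with the readiness term.
    driver_term = sum(cost_driver_scores) / (10.0 * len(cost_driver_scores))
    return 0.5 * (readiness_term + driver_term)


print(manufacturing_complexity_factor(trl=6))                                   # 1/TRL fallback
print(manufacturing_complexity_factor(trl=6, cost_driver_scores=[7, 5, 6, 8, 4, 6]))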
A manufacturing database of advanced materials used in spacecraft structures
NASA Astrophysics Data System (ADS)
Bao, Han P.
1994-12-01
Cost savings opportunities over the life cycle of a product are highest in the early exploratory phase when different design alternatives are evaluated not only for their performance characteristics but also for their methods of fabrication, which really control the ultimate manufacturing costs of the product. In the past, Design-To-Cost methodologies for spacecraft design concentrated on the sizing and weight issues more than anything else at the early so-called 'Vehicle Level' (Ref: DOD/NASA Advanced Composites Design Guide). Given the impact of manufacturing cost, the objective of this study is to identify the principal cost drivers for each materials technology and propose a quantitative approach to incorporating these cost drivers into the family of optimization tools used by the Vehicle Analysis Branch of NASA LaRC to assess various conceptual vehicle designs. The advanced materials being considered include aluminum-lithium alloys, thermoplastic graphite-polyether etherketone composites, graphite-bismaleimide composites, graphite-polyimide composites, and carbon-carbon composites. Two conventional materials are added to the study to serve as baseline materials against which the other materials are compared. These two conventional materials are aircraft aluminum alloys series 2000 and series 7000, and graphite-epoxy composites T-300/934. The following information is available in the database. For each material type, the mechanical, physical, thermal, and environmental properties are first listed. Next, the principal manufacturing processes are described. Whenever possible, guidelines for optimum processing conditions for specific applications are provided. Finally, six categories of cost drivers are discussed. They include design features affecting processing, tooling, materials, fabrication, joining/assembly, and quality assurance issues. It should be emphasized that this database is not an exhaustive database. Its primary use is to make the vehicle designer aware of some of the most important aspects of manufacturing associated with his/her choice of the structural materials. The other objective of this study is to propose a quantitative method to determine a Manufacturing Complexity Factor (MCF) for each material being contemplated. This MCF is derived on the basis of the six cost drivers mentioned above plus a Technology Readiness Factor which is very closely related to the Technology Readiness Level (TRL) as defined in the Access To Space final report. Short of any manufacturing information, our MCF is equivalent to the inverse of TRL. As more manufacturing information is available, our MCF is a better representation (than TRL) of the fabrication processes involved.
[The future of clinical laboratory database management system].
Kambe, M; Imidy, D; Matsubara, A; Sugimoto, Y
1999-09-01
To assess the present status of the clinical laboratory database management system, the difference between the Clinical Laboratory Information System and Clinical Laboratory System was explained in this study. Although three kinds of database management systems (DBMS) were shown including the relational model, tree model and network model, the relational model was found to be the best DBMS for the clinical laboratory database based on our experience and developments of some clinical laboratory expert systems. As a future clinical laboratory database management system, the IC card system connected to an automatic chemical analyzer was proposed for personal health data management and a microscope/video system was proposed for dynamic data management of leukocytes or bacteria.
The Data Base and Decision Making in Public Schools.
ERIC Educational Resources Information Center
Hedges, William D.
1984-01-01
Describes generic types of databases--file management systems, relational database management systems, and network/hierarchical database management systems--with their respective strengths and weaknesses; discusses factors to be considered in determining whether a database is desirable; and provides evaluative criteria for use in choosing…
23 CFR 972.204 - Management systems requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... to operate and maintain the management systems and their associated databases; and (5) A process for... systems will use databases with a geographical reference system that can be used to geolocate all database...
23 CFR 972.204 - Management systems requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... to operate and maintain the management systems and their associated databases; and (5) A process for... systems will use databases with a geographical reference system that can be used to geolocate all database...
23 CFR 972.204 - Management systems requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... to operate and maintain the management systems and their associated databases; and (5) A process for... systems will use databases with a geographical reference system that can be used to geolocate all database...
23 CFR 972.204 - Management systems requirements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... to operate and maintain the management systems and their associated databases; and (5) A process for... systems will use databases with a geographical reference system that can be used to geolocate all database...
Finglas, Paul M.; Berry, Rachel; Astley, Siân
2014-01-01
Food composition databases (FCDBs) form an integral part of nutrition and health research, patient treatment, manufacturing processes, and consumer information. FCDBs have traditionally been compiled at a national level; therefore, until recently, there was limited standardization of procedures across different data sets. Digital technologies now allow FCDB users to access a variety of information from different sources, which has emphasized the need for greater harmonization. The European Food Information Resource (EuroFIR) Network of Excellence and Nexus projects (2005–2013) have been instrumental in addressing differences in FCDBs and in producing standardized protocols and quality schemes to compile and manage them. A formal, recognized European standard for food composition data has been prepared, which will further assist in the production of comparable data. Quality schemes need to address the composition data, the methods of sampling, analysis, and calculation, and the documentation of processes. The EuroFIR data exchange platform provides a wealth of resources for composition compilers and end users and continues to develop new and innovative tools and methodologies. EuroFIR is also working in collaboration with the European Food Safety Authority and as a partner in several European projects. Through such collaborations, EuroFIR will continue to develop FCDB harmonization and to use new technologies to ensure sustainable future initiatives in the food composition activities that underpin food and health research in Europe. PMID:25469406
Advanced transportation system studies. Alternate propulsion subsystem concepts: Propulsion database
NASA Technical Reports Server (NTRS)
Levack, Daniel
1993-01-01
The Advanced Transportation System Studies alternate propulsion subsystem concepts propulsion database interim report is presented. The objective of the database development task is to produce a propulsion database which is easy to use and modify while also being comprehensive in the level of detail available. The database is to be available on the Macintosh computer system. The task is to extend across all three years of the contract. Consequently, a significant fraction of the effort in this first year of the task was devoted to the development of the database structure to ensure a robust base for the following years' efforts. Nonetheless, significant point design propulsion system descriptions and parametric models were also produced. Each of the two propulsion databases, parametric propulsion database and propulsion system database, are described. The descriptions include a user's guide to each code, write-ups for models used, and sample output. The parametric database has models for LOX/H2 and LOX/RP liquid engines, solid rocket boosters using three different propellants, a hybrid rocket booster, and a NERVA derived nuclear thermal rocket engine.
Dereymaeker, Anneleen; Ansari, Amir H; Jansen, Katrien; Cherian, Perumpillichira J; Vervisch, Jan; Govaert, Paul; De Wispelaere, Leen; Dielman, Charlotte; Matic, Vladimir; Dorado, Alexander Caicedo; De Vos, Maarten; Van Huffel, Sabine; Naulaers, Gunnar
2017-09-01
To assess interrater agreement based on majority voting in visual scoring of neonatal seizures. An online platform was designed based on a multicentre seizure EEG-database. Consensus decision based on 'majority voting' and interrater agreement was estimated using Fleiss' Kappa. The influences of different factors on agreement were determined. 1919 Events extracted from 280h EEG of 71 neonates were reviewed by 4 raters. Majority voting was applied to assign a seizure/non-seizure classification. 44% of events were classified with high, 36% with moderate, and 20% with poor agreement, resulting in a Kappa value of 0.39. 68% of events were labelled as seizures, and in 46%, all raters were convinced about electrographic seizures. The most common seizure duration was <30s. Raters agreed best for seizures lasting 60-120s. There was a significant difference in electrographic characteristics of seizures versus dubious events, with seizures having longer duration, higher power and amplitude. There is a wide variability in identifying rhythmic ictal and non-ictal EEG events, and only the most robust ictal patterns are consistently agreed upon. Database composition and electrographic characteristics are important factors that influence interrater agreement. The use of well-described databases and input of different experts will improve neonatal EEG interpretation and help to develop uniform seizure definitions, useful for evidence-based studies of seizure recognition and management. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
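The two analysis steps named above, majority voting and Fleiss' kappa, are compact enough to sketch directly; the rating matrix below is synthetic and only illustrates the computation, not the study's EEG data.

import numpy as np

# ratings[i, j] = 1 if rater j marked event i as a seizure, else 0 (4 raters).
ratings = np.array([[1, 1, 1, 1],
                    [1, 1, 0, 1],
                    [0, 0, 1, 0],
                    [1, 0, 1, 0]])

# Consensus label per event by strict majority vote (ties count as non-seizure here).
majority = (ratings.sum(axis=1) > ratings.shape[1] / 2).astype(int)

# Fleiss' kappa over the two categories (non-seizure, seizure).
n_items, n_raters = ratings.shape
counts = np.stack([n_raters - ratings.sum(axis=1), ratings.sum(axis=1)], axis=1)
p_i = ((counts ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
p_bar = p_i.mean()                                              # observed agreement
p_e = ((counts.sum(axis=0) / (n_items * n_raters)) ** 2).sum()  # chance agreement
kappa = (p_bar - p_e) / (1 - p_e)
print(majority, round(float(kappa), 3))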
Predicting Novel Bulk Metallic Glasses via High- Throughput Calculations
NASA Astrophysics Data System (ADS)
Perim, E.; Lee, D.; Liu, Y.; Toher, C.; Gong, P.; Li, Y.; Simmons, W. N.; Levy, O.; Vlassak, J.; Schroers, J.; Curtarolo, S.
Bulk metallic glasses (BMGs) are materials that may combine key properties of crystalline metals, such as high hardness, with others typically exhibited by plastics, such as easy processability. However, the cost of the known BMGs poses a significant obstacle to the development of applications, which has led to a long search for novel, economically viable BMGs. The emergence of high-throughput DFT calculations, such as the library provided by the AFLOWLIB consortium, has provided new tools for materials discovery. We have used these data to develop a new glass-forming descriptor combining structural factors with thermodynamics in order to quickly screen a large number of alloy systems in the AFLOWLIB database, identifying the most promising systems and the optimal compositions for glass formation. National Science Foundation (DMR-1436151, DMR-1435820, DMR-1436268).
Lamy, Brigitte; Kodjo, Angeli; Laurent, Frédéric
2011-09-01
We evaluated the accuracy of matrix-assisted laser desorption/ionization time-of-flight mass spectrometry for identifying aeromonads with an extraction procedure. Genus-level accuracy was 100%. Compared to rpoB gene sequencing, species-level accuracy was 90.6% (29/32) for type and reference strains and 91.4% for a collection of 139 clinical and environmental isolates, making this system one of the most accurate and rapid methods for phenotypic identification. The reliability of this technique was very promising, although some improvements in database composition, taxonomy, and discriminatory power are needed. Copyright © 2011 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Peddie, Catherine
2001-01-01
Aircraft emissions are deposited throughout the atmosphere, and in the lower stratosphere and upper troposphere they have greater potential to change ozone abundance and affect climate. There are significant uncertainties arising from incomplete knowledge of the composition and evolution of the exhaust emissions, particularly regarding reactive trace species, particles, and their gaseous precursors. NASA Glenn Research Center at Lewis Field has considered its role in answering these challenges and has been committed to strengthening its aerosol/particulate research capabilities, with initial emphasis on establishing advanced measurement systems and a particulate database. Activities currently supported by the NASA Ultra-Efficient Engine Technology (UEET) Program and accomplishments to date are described.
NASA Astrophysics Data System (ADS)
Sauzède, R.; Lavigne, H.; Claustre, H.; Uitz, J.; Schmechtig, C.; D'Ortenzio, F.; Guinet, C.; Pesant, S.
2015-10-01
In vivo chlorophyll a fluorescence is a proxy of chlorophyll a concentration, and is one of the most frequently measured biogeochemical properties in the ocean. Thousands of profiles are available from historical databases and the integration of fluorescence sensors to autonomous platforms has led to a significant increase of chlorophyll fluorescence profile acquisition. To our knowledge, this important source of environmental data has not yet been included in global analyses. A total of 268 127 chlorophyll fluorescence profiles from several databases as well as published and unpublished individual sources were compiled. Following a robust quality control procedure detailed in the present paper, about 49 000 chlorophyll fluorescence profiles were converted into phytoplankton biomass (i.e., chlorophyll a concentration) and size-based community composition (i.e., microphytoplankton, nanophytoplankton and picophytoplankton), using a method specifically developed to harmonize fluorescence profiles from diverse sources. The data span over 5 decades from 1958 to 2015, including observations from all major oceanic basins and all seasons, and depths ranging from the surface to a median maximum sampling depth of around 700 m. Global maps of chlorophyll a concentration and phytoplankton community composition are presented here for the first time. Monthly climatologies were computed for three of Longhurst's ecological provinces in order to exemplify the potential use of the data product. Original data sets (raw fluorescence profiles) as well as calibrated profiles of phytoplankton biomass and community composition are available on open access at PANGAEA, Data Publisher for Earth and Environmental Science. Raw fluorescence profiles: http://doi.pangaea.de/10.1594/PANGAEA.844212 and Phytoplankton biomass and community composition: http://doi.pangaea.de/10.1594/PANGAEA.844485
47 CFR 52.101 - General definitions.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Center (“NASC”). The entity that provides user support for the Service Management System database and administers the Service Management System database on a day-to-day basis. (b) Responsible Organization (“Resp... regional databases in the toll free network. (d) Service Management System Database (“SMS Database”). The...
47 CFR 52.101 - General definitions.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Center (“NASC”). The entity that provides user support for the Service Management System database and administers the Service Management System database on a day-to-day basis. (b) Responsible Organization (“Resp... regional databases in the toll free network. (d) Service Management System Database (“SMS Database”). The...
47 CFR 52.101 - General definitions.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Center (“NASC”). The entity that provides user support for the Service Management System database and administers the Service Management System database on a day-to-day basis. (b) Responsible Organization (“Resp... regional databases in the toll free network. (d) Service Management System Database (“SMS Database”). The...
47 CFR 52.101 - General definitions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Center (“NASC”). The entity that provides user support for the Service Management System database and administers the Service Management System database on a day-to-day basis. (b) Responsible Organization (“Resp... regional databases in the toll free network. (d) Service Management System Database (“SMS Database”). The...
47 CFR 52.101 - General definitions.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Center (“NASC”). The entity that provides user support for the Service Management System database and administers the Service Management System database on a day-to-day basis. (b) Responsible Organization (“Resp... regional databases in the toll free network. (d) Service Management System Database (“SMS Database”). The...
Computer Science Research in Europe.
1984-08-29
... receiving most attention: multi-database structure, and (3) the dependencies between databases and multi-databases. Work on distributed systems and multi-databases at the University of Newcastle, UK, and at INRIA is summarized: having completed a multi-database system for distributed data management, INRIA is now working on the communications requirements of distributed database systems and protocols for checking ...; a project called SIRIUS was established in 1977 at ...
Database systems for knowledge-based discovery.
Jagarlapudi, Sarma A R P; Kishan, K V Radha
2009-01-01
Several database systems have been developed to provide valuable information in a structured format to users ranging from the bench chemist and biologist to the medical practitioner and pharmaceutical scientist. The advent of information technology and computational power has enhanced the ability to access large volumes of data in the form of a database in which one can perform compilation, searching, archiving, analysis, and finally knowledge derivation. Although data are of variable types, the tools used for database creation, searching, and retrieval are similar. GVK BIO has been developing databases from publicly available scientific literature in specific areas such as medicinal chemistry, clinical research, and mechanism-based toxicity, so that the structured databases containing these vast data can be used in several areas of research. These databases are classified as reference-centric or compound-centric depending on the way the database systems were designed. Integration of these databases with knowledge derivation tools would enhance the value of these systems toward better drug design and discovery.
Global distribution of minerals in arid soils as lower boundary condition in dust models
NASA Astrophysics Data System (ADS)
Nickovic, Slobodan
2010-05-01
Mineral dust eroded from arid soils affects the radiation budget of the Earth system, modifies ocean bioproductivity and influences human health. Dust aerosol is a complex mixture of minerals. Dust mineral composition has several potentially important impacts on the environment and society. Iron and phosphorus embedded in mineral aerosol are essential for primary marine productivity when dust deposits over the open ocean. Dust also acts as an efficient agent for heterogeneous ice nucleation, and this process depends on the mineralogical structure of the dust. Recent findings in medical geology indicate a possible role of minerals in human health. In this study, a new 1-km global database was developed for several minerals (Illite, Kaolinite, Smectite, Calcite, Quartz, Feldspar, Hematite and Gypsum) embedded in clay and silt populations of arid soils. For the database generation, high-resolution data sets on soil textures, soil types and land cover were used. In addition to the selected minerals, phosphorus was also added, with its geographical distribution specified from compiled literature and data on soil types. The developed global database was used to specify sources of mineral fractions in the DREAM dust model and to simulate atmospheric paths of minerals and their potential impacts on marine biochemistry and tropospheric ice nucleation.
The Network Configuration of an Object Relational Database Management System
NASA Technical Reports Server (NTRS)
Diaz, Philip; Harris, W. C.
2000-01-01
The networking and implementation of the Oracle Database Management System (ODBMS) requires developers to have knowledge of the UNIX operating system as well as all the features of the Oracle Server. The server is an object relational database management system (DBMS). By using distributed processing, processes are split up between the database server and client application programs. The DBMS handles all the responsibilities of the server. The workstations running the database application concentrate on the interpretation and display of data.
A dedicated database system for handling multi-level data in systems biology.
Pornputtapong, Natapol; Wanichthanarak, Kwanjeera; Nilsson, Avlant; Nookaew, Intawat; Nielsen, Jens
2014-01-01
Advances in high-throughput technologies have enabled extensive generation of multi-level omics data. These data are crucial for systems biology research, though they are complex, heterogeneous, highly dynamic, incomplete and distributed among public databases. This leads to difficulties in data accessibility and often results in errors when data are merged and integrated from varied resources. Therefore, integration and management of systems biological data remain very challenging. To overcome this, we designed and developed a dedicated database system that addresses the vital issues in data management and thereby facilitates data integration, modeling and analysis in systems biology within a sole database. In addition, a yeast data repository was implemented as an integrated database environment that is operated by the database system. Two applications were implemented to demonstrate extensibility and utilization of the system. Both illustrate how the user can access the database via the web query function and implemented scripts. These scripts are specific for two sample cases: 1) detecting the pheromone pathway in protein interaction networks; and 2) finding metabolic reactions regulated by Snf1 kinase. In this study we present the design of a database system that offers an extensible environment to efficiently capture the majority of biological entities and relations encountered in systems biology. Critical functions and control processes were designed and implemented to ensure consistent, efficient, secure and reliable transactions. The two sample cases on the yeast integrated data clearly demonstrate the value of a sole database environment for systems biology research.
NASA Tech Briefs, February 2009
NASA Technical Reports Server (NTRS)
2009-01-01
Tech Briefs are short announcements of innovations originating from research and development activities of the National Aeronautics and Space Administration. They emphasize information considered likely to be transferable across industrial, regional, or disciplinary lines and are issued to encourage commercial application. Topics covered include: Measuring Low Concentrations of Liquid Water in Soil; The Mars Science Laboratory Touchdown Test Facility; Non-Contact Measurement of Density and Thickness Variation in Dielectric Materials; Compact Microwave Fourier Spectrum Analyzer; InP Heterojunction Bipolar Transistor Amplifiers to 255 GHz; Combinatorial Generation of Test Suites; In-Phase Power-Combined Frequency Tripler at 300 GHz; Electronic System for Preventing Airport Runway Incursions; Smaller but Fully Functional Backshell for Cable Connector; Glove-Box or Desktop Virtual-Reality System; Composite Layer Manufacturing with Fewer Interruptions; Improved Photoresist Coating for Making CNT Field Emitters; A Simplified Diagnostic Method for Elastomer Bond Durability; Complex Multifunctional Polymer/Carbon-Nanotube Composites; Very High Output Thermoelectric Devices Based on ITO Nanocomposites; Reducing Unsteady Loads on a Piggyback Miniature Submarine; Ultrasonic/Sonic Anchor; Grooved Fuel Rings for Nuclear Thermal Rocket Engines; Pulsed Operation of an Ion Accelerator; Autonomous Instrument Placement for Mars Exploration Rovers; Mission and Assets Database; TCP/IP Interface for the Satellite Orbit Analysis Program (SOAP); Trajectory Calculator for Finite-Radius Cutter on a Lathe; Integrated System Health Management Development Toolkit.
NASA Technical Reports Server (NTRS)
Moroh, Marsha
1988-01-01
A methodology for building interfaces of resident database management systems to a heterogeneous distributed database management system under development at NASA, the DAVID system, was developed. The feasibility of that methodology was demonstrated by construction of the software necessary to perform the interface task. The interface terminology developed in the course of this research is presented. The work performed and the results are summarized.
Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Lozano-Rubí, Raimundo; Serrano-Balazote, Pablo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario
2017-08-18
The objective of this research is to compare relational and non-relational (NoSQL) database system approaches for storing, recovering, querying and persisting standardized medical information in the form of ISO/EN 13606 normalized Electronic Health Record XML extracts, both in isolation and concurrently. NoSQL database systems have recently attracted much attention, but few studies in the literature address their direct comparison with relational databases when applied to build the persistence layer of a standardized medical information system. One relational and two NoSQL databases (one document-based and one native XML database) of three different sizes have been created in order to evaluate and compare the response times (algorithmic complexity) of six queries of growing complexity performed on them. Comparable results available in the literature have also been considered. Relational and non-relational NoSQL database systems both show almost linear algorithmic complexity in query execution. However, they show very different linear slopes, with the former much steeper than the latter two. Document-based NoSQL databases perform better in concurrency than in isolation, and also better than relational databases in concurrency. Non-relational NoSQL databases seem to be more appropriate than standard relational SQL databases when database size is extremely high (secondary use, research applications). Document-based NoSQL databases perform in general better than native XML NoSQL databases. EHR extract visualization and editing are also document-based tasks that are more appropriate to NoSQL database systems. However, the appropriate database solution depends greatly on each particular situation and specific problem.
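The comparison methodology described above (time the same query against databases of growing size and compare the fitted linear slopes) can be sketched briefly; SQLite stands in for a relational engine here, the study's actual relational and NoSQL back ends and queries are not reproduced, and the schema is an assumption.

import sqlite3
import time
import numpy as np


def time_query(n_rows, repeats=5):
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE extracts (id INTEGER PRIMARY KEY, patient TEXT, xml TEXT)")
    db.executemany("INSERT INTO extracts VALUES (?, ?, ?)",
                   ((i, "patient%d" % (i % 500), "<extract/>") for i in range(n_rows)))
    start = time.perf_counter()
    for _ in range(repeats):
        db.execute("SELECT COUNT(*) FROM extracts WHERE patient = ?",
                   ("patient42",)).fetchone()
    return (time.perf_counter() - start) / repeats


sizes = [10_000, 50_000, 100_000]
times = [time_query(n) for n in sizes]
slope, intercept = np.polyfit(sizes, times, 1)   # a steeper slope means worse scaling
print(dict(zip(sizes, times)), slope)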
Kent, David M; Dahabreh, Issa J; Ruthazer, Robin; Furlan, Anthony J; Weimar, Christian; Serena, Joaquín; Meier, Bernhard; Mattle, Heinrich P; Di Angelantonio, Emanuele; Paciaroni, Maurizio; Schuchlenz, Herwig; Homma, Shunichi; Lutz, Jennifer S; Thaler, David E
2015-09-14
The preferred antithrombotic strategy for secondary prevention in patients with cryptogenic stroke (CS) and patent foramen ovale (PFO) is unknown. We pooled multiple observational studies and used propensity score-based methods to estimate the comparative effectiveness of oral anticoagulation (OAC) compared with antiplatelet therapy (APT). Individual participant data from 12 databases of medically treated patients with CS and PFO were analysed with Cox regression models, to estimate database-specific hazard ratios (HRs) comparing OAC with APT, for both the primary composite outcome [recurrent stroke, transient ischaemic attack (TIA), or death] and stroke alone. Propensity scores were applied via inverse probability of treatment weighting to control for confounding. We synthesized database-specific HRs using random-effects meta-analysis models. This analysis included 2385 (OAC = 804 and APT = 1581) patients with 227 composite endpoints (stroke/TIA/death). The difference between OAC and APT was not statistically significant for the primary composite outcome [adjusted HR = 0.76, 95% confidence interval (CI) 0.52-1.12] or for the secondary outcome of stroke alone (adjusted HR = 0.75, 95% CI 0.44-1.27). Results were consistent in analyses applying alternative weighting schemes, with the exception that OAC had a statistically significant beneficial effect on the composite outcome in analyses standardized to the patient population who actually received APT (adjusted HR = 0.64, 95% CI 0.42-0.99). Subgroup analyses did not detect statistically significant heterogeneity of treatment effects across clinically important patient groups. We did not find a statistically significant difference comparing OAC with APT; our results justify randomized trials comparing different antithrombotic approaches in these patients. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2015. For permissions please email: journals.permissions@oup.com.
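A minimal sketch of the weighting step described above follows, assuming the scikit-learn and lifelines packages and hypothetical column names (oac for treatment, time_to_event and event for the composite outcome); it is not the pooled-analysis code, just the inverse-probability-of-treatment-weighted Cox idea for a single database.

from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter


def iptw_cox(df, covariates):
    # df is a pandas DataFrame with one row per patient.
    # Propensity score: P(treated with OAC | baseline covariates).
    ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["oac"])
    ps = ps_model.predict_proba(df[covariates])[:, 1]

    # Unstabilized inverse probability of treatment weights.
    df = df.assign(iptw=df["oac"] / ps + (1 - df["oac"]) / (1 - ps))

    # Weighted Cox model for the composite outcome (stroke/TIA/death).
    cph = CoxPHFitter()
    cph.fit(df[["time_to_event", "event", "oac", "iptw"]],
            duration_col="time_to_event", event_col="event",
            weights_col="iptw", robust=True)
    return cph.hazard_ratios_["oac"]

Database-specific hazard ratios obtained this way would then be pooled with a random-effects meta-analysis, as in the study.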
The longevity of lava dome eruptions: analysis of the global DomeHaz database
NASA Astrophysics Data System (ADS)
Ogburn, S. E.; Wolpert, R.; Calder, E.; Pallister, J. S.; Wright, H. M. N.
2015-12-01
The likely duration of ongoing volcanic eruptions is a topic of great interest to volcanologists, volcano observatories, and communities near volcanoes. Lava dome forming eruptions can last from days to centuries, and can produce violent, difficult-to-forecast activity including vulcanian to plinian explosions and pyroclastic density currents. Periods of active dome extrusion are often interspersed with periods of relative quiescence, during which extrusion may slow or pause altogether, but persistent volcanic unrest continues. This contribution focuses on the durations of these longer-term unrest phases, hereafter eruptions, that include periods of both lava extrusion and quiescence. A new database of lava dome eruptions, DomeHaz, provides characteristics of 228 eruptions at 127 volcanoes; for which 177 have duration information. We find that while 78% of dome-forming eruptions do not continue for more than 5 years, the remainder can be very long-lived. The probability distributions of eruption durations are shown to be heavy-tailed and vary by magma composition. For this reason, eruption durations are modeled with generalized Pareto distributions whose governing parameters depend on each volcano's composition and eruption duration to date. Bayesian predictive distributions and associated uncertainties are presented for the remaining duration of ongoing eruptions of specified composition and duration to date. Forecasts of such natural events will always have large uncertainties, but the ability to quantify such uncertainty is key to effective communication with stakeholders and to mitigation of hazards. Projections are made for the remaining eruption durations of ongoing eruptions, including those at Soufrière Hills Volcano, Montserrat and Sinabung, Indonesia. This work provides a quantitative, transferable method and rationale on which to base long-term planning decisions for dome forming volcanoes of different compositions, regardless of the quality of an individual volcano's eruptive record, by leveraging a global database.
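The core of the duration modelling described above, a generalized Pareto fit and a conditional "given it has already lasted t years" probability, can be sketched with scipy; the durations below are synthetic, and the full study's Bayesian, composition-dependent predictive distributions are not reproduced.

import numpy as np
from scipy.stats import genpareto

# Synthetic stand-in for the 177 eruption durations (years) with known values.
durations_years = genpareto.rvs(c=0.5, scale=2.0, size=177, random_state=0)

# Fit a generalized Pareto distribution (threshold fixed at zero for simplicity).
c_hat, loc_hat, scale_hat = genpareto.fit(durations_years, floc=0)


def prob_continue(t_so_far, extra, c=c_hat, scale=scale_hat):
    # P(T > t_so_far + extra | T > t_so_far) = S(t + d) / S(t).
    sf = lambda x: genpareto.sf(x, c, loc=0, scale=scale)
    return sf(t_so_far + extra) / sf(t_so_far)


# Chance that an eruption already 5 years old continues at least 5 more years.
print(round(prob_continue(t_so_far=5, extra=5), 3))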
Mei, Juan; Zhao, Ji
2018-06-14
Presynaptic and postsynaptic neurotoxins are two important classes of neurotoxins isolated from the venoms of venomous animals and have proven potentially useful in neuroscience and pharmacology. With the growing number of toxin sequences appearing in public databases, there is a need for a computational method for fast and accurate identification and classification of novel presynaptic and postsynaptic neurotoxins in large databases. In this study, a Multinomial Naive Bayes Classifier (MNBC) was developed to discriminate presynaptic from postsynaptic neurotoxins based on different kinds of features. The Minimum Redundancy Maximum Relevance (MRMR) feature selection method was used to rank 400 pseudo amino acid (PseAA) compositions, and the 50 top-ranked PseAA compositions were selected to improve the prediction results. The motif features, the 400 PseAA compositions, and the 50 selected PseAA compositions were combined and used as the input parameters of the MNBC. The best correlation coefficient (CC) value of 0.8213 was obtained when prediction quality was evaluated by the jackknife test. The algorithm presented in this study may become a useful tool for identifying presynaptic and postsynaptic neurotoxin sequences and may help in-depth investigation of their biological mechanisms. Copyright © 2018 Elsevier Ltd. All rights reserved.
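A compact sketch of the classification approach named above follows: amino-acid composition features feeding a Multinomial Naive Bayes classifier, with a mutual-information ranking used here as a simple stand-in for MRMR (which scikit-learn does not provide). The sequences and labels are toy examples, not the study's data.

from collections import Counter
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_selection import mutual_info_classif

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"


def composition(seq):
    # Count-based amino acid composition (a crude stand-in for PseAA features).
    counts = Counter(seq)
    return [counts.get(aa, 0) for aa in AMINO_ACIDS]


sequences = ["KKACRLWWG", "RRKCLWGAA", "DEDGAACLL", "EEDGGCALL"]   # toy sequences
labels = [1, 1, 0, 0]                                             # 1 = presynaptic (say)

X = np.array([composition(s) for s in sequences])
ranking = np.argsort(mutual_info_classif(X, labels, discrete_features=True))[::-1]

clf = MultinomialNB().fit(X, labels)
print(clf.predict([composition("KRKCWWLAG")]), [AMINO_ACIDS[i] for i in ranking[:5]])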
Availability and utility of crop composition data.
Kitta, Kazumi
2013-09-04
The safety assessment of genetically modified (GM) crops is mandatory in many countries. Although the most important factor to take into account in these safety assessments is the primary effects of artificially introduced transgene-derived traits, possible unintended effects attributed to the insertion of transgenes must be carefully examined in parallel. However, foods are complex mixtures of compounds characterized by wide variations in composition and nutritional values. Food components are significantly affected by various factors such as cultivars and the cultivation environment including storage conditions after harvest, and it can thus be very difficult to detect potential adverse effects caused by the introduction of a transgene. A comparative approach focusing on the identification of differences between GM foods and their conventional counterparts has been performed to reveal potential safety issues and is considered the most appropriate strategy for the safety assessment of GM foods. This concept is widely shared by authorities in many countries. For the efficient safety assessment of GM crops, an easily accessible and wide-ranging compilation of crop composition data is required for use by researchers and regulatory agencies. Thus, we developed an Internet-accessible food composition database comprising key nutrients, antinutrients, endogenous toxicants, and physiologically active substances of staple crops such as rice and soybeans. The International Life Sciences Institute has also been addressing the same matter and has provided the public a crop composition database of soybeans, maize, and cotton.
An on-line database for human milk composition in China.
Yin, Shi-An; Yang, Zhen-Yu
2016-12-01
Understanding human milk composition is critical for setting recommended nutrient intakes (RNIs) for both infants and lactating women. However, nationwide human milk composition data remain unavailable in China. Through a cross-sectional study, human milk samples from 11 provinces in China were collected and their compositions were analyzed. The nutritional and health status of the lactating women and their infants was evaluated through questionnaires, physical examination and biochemical indicators. A total of 6,481 breast milk samples including colostrum (1,859), transitional milk (1,235) and mature milk (3,387) were collected. Contents of protein, fat, lactose, total solids and energy of more than 4,500 samples were analyzed using a human milk analyzer. About 2,000 samples were randomly selected for 24 mineral analyses. Free B-vitamins including thiamin, riboflavin, pyridoxal, pyridoxine, pyridoxamine, nicotinamide, nicotinic acid, flavin adenine dinucleotide (FAD), biotin and pantothenic acid were analyzed in 1,800 samples. Amino acids (~800 samples) and proteins (alpha-lactalbumin, beta-casein, and lactoferrin) were analyzed. In addition, serum retinol and carotenoids, 25(OH)D, vitamin B-12, folic acid, ferritin and biochemical indicators (n=1,200 to 2,000) were analyzed in the lactating women who provided the breast milk. Ongoing work: fatty acids (C4-C24), fat-soluble vitamins and carotenoids are currently being analyzed. A regional breast milk compositional database is at an advanced stage of development in China with the intention that it be available on-line.
Generalized Database Management System Support for Numeric Database Environments.
ERIC Educational Resources Information Center
Dominick, Wayne D.; Weathers, Peggy G.
1982-01-01
This overview of potential for utilizing database management systems (DBMS) within numeric database environments highlights: (1) major features, functions, and characteristics of DBMS; (2) applicability to numeric database environment needs and user needs; (3) current applications of DBMS technology; and (4) research-oriented and…
A Summary of the Naval Postgraduate School Research Program
1989-08-30
Topics include: Fundamental Theory for Automatically Combining Changes to Software Systems; Database-System Approach to Software Engineering Environments (SEEs); Multilevel Database Security; Temporal Database Management and Real-Time Database Computers; The Multi-lingual, Multi-Model, Multi-Backend Database ...
Database Systems. Course Three. Information Systems Curriculum.
ERIC Educational Resources Information Center
O'Neil, Sharon Lund; Everett, Donna R.
This course is the third of seven in the Information Systems curriculum. The purpose of the course is to familiarize students with database management concepts and standard database management software. Databases and their roles, advantages, and limitations are explained. An overview of the course sets forth the condition and performance standard…
Database Management Systems: New Homes for Migrating Bibliographic Records.
ERIC Educational Resources Information Center
Brooks, Terrence A.; Bierbaum, Esther G.
1987-01-01
Assesses bibliographic databases as part of visionary text systems such as hypertext and scholars' workstations. Downloading is discussed in terms of the capability to search records and to maintain unique bibliographic descriptions, and relational database management systems, file managers, and text databases are reviewed as possible hosts for…
23 CFR 970.204 - Management systems requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases with a geographical reference system that can be used to geolocate all database information. (e...
23 CFR 970.204 - Management systems requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases with a geographical reference system that can be used to geolocate all database information. (e...
23 CFR 970.204 - Management systems requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases with a geographical reference system that can be used to geolocate all database information. (e...
23 CFR 970.204 - Management systems requirements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases with a geographical reference system that can be used to geolocate all database information. (e...
A Content Markup Language for Data Services
NASA Astrophysics Data System (ADS)
Noviello, C.; Acampa, P.; Mango Furnari, M.
Network content delivery and document sharing are possible using a variety of technologies, such as distributed databases, service-oriented applications, and so forth. The development of such systems is a complex job, because the document life cycle involves strong cooperation between domain experts and software developers. Furthermore, emerging software methodologies, such as service-oriented architecture and knowledge organization (e.g., the semantic web), have not really solved the problems faced in a real distributed and cooperative setting. In this chapter the authors' efforts to design and deploy a distributed and cooperative content management system are described. The main features of the system are a user-configurable document type definition and a management middleware layer, which allows CMS developers to orchestrate the composition of specialized software components around the structure of a document. This chapter also reports some of the experiences gained in deploying the developed framework in a cultural heritage dissemination setting.
Pyroxene Homogenization and the Isotopic Systematics of Eucrites
NASA Technical Reports Server (NTRS)
Nyquist, L. E.; Bogard, D. D.
1996-01-01
The original Mg-Fe zoning of eucritic pyroxenes has in nearly all cases been partly homogenized, an observation that has been combined with other petrographic and compositional criteria to establish a scale of thermal "metamorphism" for eucrites. To evaluate hypotheses explaining development of conditions on the HED parent body (Vesta?) leading to pyroxene homogenization against their chronological implications, it is necessary to know whether pyroxene metamorphism was recorded in the isotopic systems. However, identifying the effects of the thermal metamorphism with specific effects in the isotopic systems has been difficult, due in part to a lack of correlated isotopic and mineralogical studies of the same eucrites. Furthermore, isotopic studies often place high demands on analytical capabilities, resulting in slow growth of the isotopic database. Additionally, some isotopic systems would not respond in a direct and sensitive way to pyroxene homogenization. Nevertheless, sufficient data exist to generalize some observations, and to identify directions of potentially fruitful investigations.
Thermodynamic assessment of the LiF-ThF4-PuF3-UF4 system
NASA Astrophysics Data System (ADS)
Capelli, E.; Beneš, O.; Konings, R. J. M.
2015-07-01
The LiF-ThF4-PuF3-UF4 system is the reference salt mixture considered for the Molten Salt Fast Reactor (MSFR) concept started with PuF3. In order to obtain the complete thermodynamic description of this quaternary system, two binary systems (ThF4-PuF3 and UF4-PuF3) and two ternary systems (LiF-ThF4-PuF3 and LiF-UF4-PuF3) have been assessed for the first time. The similarities between CeF3/PuF3 and ThF4/UF4 compounds have been taken into account for the presented optimization as well as in the experimental measurements performed, which have confirmed the temperatures predicted by the model. Moreover, the experimental results and the thermodynamic database developed have been used to identify potential compositions for the MSFR fuel and to evaluate the influence of partial substitution of ThF4 by UF4 in the salt.
Geologic Map and Map Database of Eastern Sonoma and Western Napa Counties, California
Graymer, R.W.; Brabb, E.E.; Jones, D.L.; Barnes, J.; Nicholson, R.S.; Stamski, R.E.
2007-01-01
Introduction This report contains a new 1:100,000-scale geologic map, derived from a set of geologic map databases (Arc-Info coverages) containing information at 1:62,500-scale resolution, and a new description of the geologic map units and structural relations in the map area. Prepared as part of the San Francisco Bay Region Mapping Project, the study area includes the north-central part of the San Francisco Bay region, and forms the final piece of the effort to generate new, digital geologic maps and map databases for an area which includes Alameda, Contra Costa, Marin, Napa, San Francisco, San Mateo, Santa Clara, Santa Cruz, Solano, and Sonoma Counties. Geologic mapping in Lake County in the north-central part of the map extent was not within the scope of the Project. The map and map database integrate both previously published reports and new geologic mapping and field checking by the authors (see Sources of Data index map on the map sheet or the Arc-Info coverage eswn-so and the textfile eswn-so.txt). This report contains new ideas about the geologic structures in the map area, including the active San Andreas Fault system, as well as the geologic units and their relations. Together, the map (or map database) and the unit descriptions in this report describe the composition, distribution, and orientation of geologic materials and structures within the study area at regional scale. Regional geologic information is important for analysis of earthquake shaking, liquefaction susceptibility, landslide susceptibility, engineering materials properties, mineral resources and hazards, as well as groundwater resources and hazards. These data also assist in answering questions about the geologic history and development of the California Coast Ranges.
Pacific walrus coastal haulout database, 1852-2016— Background report
Fischbach, Anthony S.; Kochnev, Anatoly A.; Garlich-Miller, Joel L.; Jay, Chadwick V.
2016-01-01
Walruses are large benthic predators that rest out of water between foraging bouts. Coastal “haulouts” (places where walruses rest) are formed by adult males in summer and sometimes by females and young when sea ice is absent, and are often used repeatedly across seasons and years. Understanding the geography and historical use of haulouts provides a context for conservation efforts. We summarize information on Pacific walrus haulouts from available reports (n =151), interviews with coastal residents and aviators, and personal observations of the authors. We provide this in the form of a georeferenced database that can be queried and displayed with standard geographic information system and database management software. The database contains 150 records of Pacific walrus haulouts, with a summary of basic characteristics on maximum haulout aggregation size, age-sex composition, season of use, and decade of most recent use. Citations to reports are provided in the appendix and as a bibliographic database. Haulouts were distributed across the coasts of the Pacific walrus range; however, the largest (maximum >10,000 walruses) of the haulouts reported in the recent 4 decades (n=19) were concentrated on the Russian shores in regions near the Bering Strait and northward into the western Chukchi Sea (n=17). Haulouts of adult female and young walruses primarily occurred in the Bering Strait region and areas northward, with others occurring in the central Bering Sea, Gulf of Anadyr, and Saint Lawrence Island regions. The Gulf of Anadyr was the only region to contain female and young walrus haulouts, which formed after the northward spring migration and prior to autumn ice formation.
Applications of Database Machines in Library Systems.
ERIC Educational Resources Information Center
Salmon, Stephen R.
1984-01-01
Characteristics and advantages of database machines are summarized and their applications to library functions are described. The ability to attach multiple hosts to the same database and flexibility in choosing operating and database management systems for different functions without loss of access to common database are noted. (EJS)
23 CFR 971.204 - Management systems requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...
23 CFR 971.204 - Management systems requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...
23 CFR 971.204 - Management systems requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...
23 CFR 971.204 - Management systems requirements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...
Microcomputer Database Management Systems for Bibliographic Data.
ERIC Educational Resources Information Center
Pollard, Richard
1986-01-01
Discusses criteria for evaluating microcomputer database management systems (DBMS) used for storage and retrieval of bibliographic data. Two popular types of microcomputer DBMS--file management systems and relational database management systems--are evaluated with respect to these criteria. (Author/MBR)
PrionScan: an online database of predicted prion domains in complete proteomes.
Espinosa Angarica, Vladimir; Angulo, Alfonso; Giner, Arturo; Losilla, Guillermo; Ventura, Salvador; Sancho, Javier
2014-02-05
Prions are a particular type of amyloid related to a large variety of important processes in cells, but also responsible for serious diseases in mammals and humans. The number of experimentally characterized prions is still low and corresponds to a handful of examples in microorganisms and mammals. Prion aggregation is mediated by specific protein domains with a remarkable compositional bias towards glutamine/asparagine and against charged residues and prolines. These compositional features have been used to predict new prion proteins in the genomes of different organisms. Despite these efforts, there are only a few available data sources containing prion predictions at a genomic scale. Here we present PrionScan, a new database of predicted prion-like domains in complete proteomes. We have previously developed a predictive methodology to identify and score prionogenic stretches in protein sequences. In the present work, we exploit this approach to scan all the protein sequences in public databases and compile a repository containing relevant information of proteins bearing prion-like domains. The database is updated regularly alongside UniprotKB and in its present version contains approximately 28000 predictions in proteins from different functional categories in more than 3200 organisms from all the taxonomic subdivisions. PrionScan can be used in two different ways: database query and analysis of protein sequences submitted by the users. In the first mode, simple queries allow users to retrieve a detailed description of the properties of a defined protein. Queries can also be combined to generate more complex and specific searching patterns. In the second mode, users can submit and analyze their own sequences. It is expected that this database would provide relevant insights into prion functions and regulation from a genome-wide perspective, allowing researchers to perform cross-species prion biology studies. Our database might also be useful for guiding experimentalists in the identification of new candidates for further experimental characterization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langan, Roisin T.; Archibald, Richard K.; Lamberti, Vincent
We have applied a new imputation-based method for analyzing incomplete data, called Monte Carlo Bayesian Database Generation (MCBDG), to the Spent Fuel Isotopic Composition (SFCOMPO) database. About 60% of the entries are absent for SFCOMPO. The method estimates missing values of a property from a probability distribution created from the existing data for the property, and then generates multiple instances of the completed database for training a machine learning algorithm. Uncertainty in the data is represented by an empirical or an assumed error distribution. The method makes few assumptions about the underlying data, and compares favorably against results obtained by replacing missing information with constant values.
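As an illustration of the imputation idea summarized above (a generic sketch, not the authors' MCBDG implementation), the following Python fragment fills missing cells by drawing from the empirical distribution of each column's observed values and produces several completed copies of a toy table; the column names, the toy data, and the use of pandas/numpy are assumptions made purely for illustration.

    import numpy as np
    import pandas as pd

    def generate_completed_instances(df, n_instances=10, rng=None):
        # Replace every missing cell with a random draw from the empirical
        # distribution of the non-missing values in the same column, and repeat
        # the process to yield several completed instances for model training.
        rng = rng or np.random.default_rng()
        instances = []
        for _ in range(n_instances):
            filled = df.copy()
            for col in filled.columns:
                observed = filled[col].dropna().to_numpy()
                missing = filled[col].isna()
                if missing.any() and observed.size:
                    filled.loc[missing, col] = rng.choice(observed, size=missing.sum())
            instances.append(filled)
        return instances

    # Hypothetical toy table with roughly 60% of one column missing.
    table = pd.DataFrame({"enrichment": [3.2, np.nan, 4.1, np.nan, np.nan],
                          "burnup": [30.0, 45.0, np.nan, 50.0, 38.0]})
    completed = generate_completed_instances(table, n_instances=5)

An empirical or assumed error distribution, as mentioned in the abstract, could be layered on top of this by adding noise to each drawn value.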
NASA Astrophysics Data System (ADS)
Nakagawa, Y.; Kawahara, S.; Araki, F.; Matsuoka, D.; Ishikawa, Y.; Fujita, M.; Sugimoto, S.; Okada, Y.; Kawazoe, S.; Watanabe, S.; Ishii, M.; Mizuta, R.; Murata, A.; Kawase, H.
2017-12-01
Analyses of large ensemble data are quite useful in order to produce probabilistic projections of climate change effects. Ensemble data of "+2K future climate simulations" are currently produced by the Japanese national project "Social Implementation Program on Climate Change Adaptation Technology (SI-CAT)" as a part of a database for Policy Decision making for Future climate change (d4PDF; Mizuta et al. 2016) produced by the Program for Risk Information on Climate Change. Those data consist of global warming simulations and regional downscaling simulations. Considering that those data volumes are too large (a few petabytes) to download to a user's local computer, a user-friendly system is required to search and download data that satisfy the users' requests. We develop "a database system for near-future climate change projections" for providing functions to find necessary data for the users under SI-CAT. The database system for near-future climate change projections mainly consists of a relational database, a data download function and a user interface. The relational database using PostgreSQL is a key function among them. Temporally and spatially compressed data are registered in the relational database. As a first step, we develop the relational database for precipitation, temperature and typhoon track data according to requests by SI-CAT members. The data download function using the Open-source Project for a Network Data Access Protocol (OPeNDAP) provides a function to download temporally and spatially extracted data based on search results obtained from the relational database. We also develop the web-based user interface for using the relational database and the data download function. A prototype of the database system for near-future climate change projections is currently in operational testing on our local server. The database system for near-future climate change projections will be released on the Data Integration and Analysis System Program (DIAS) in fiscal year 2017. Techniques of the database system for near-future climate change projections might be quite useful for simulation and observational data in other research fields. We report the current status of development and some case studies of the database system for near-future climate change projections.
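To make the role of the relational database concrete, here is a minimal sketch of the kind of metadata query such a system could serve before any large files are transferred; the table layout, column names, and connection parameters are hypothetical, and psycopg2 is assumed only as one common way to reach PostgreSQL from Python.

    import psycopg2

    # Hypothetical connection and schema; the actual SI-CAT database layout is not described here.
    conn = psycopg2.connect(host="localhost", dbname="climate_index", user="reader")
    cur = conn.cursor()

    # Find ensemble members whose spatially averaged summer precipitation over a
    # region exceeds a threshold, so that only the matching files need to be
    # downloaded instead of the full multi-petabyte archive.
    cur.execute(
        """
        SELECT member_id, file_url
        FROM precipitation_summary
        WHERE region = %s AND season = %s AND mean_precip_mm > %s
        """,
        ("Kanto", "JJA", 300.0),
    )
    for member_id, file_url in cur.fetchall():
        print(member_id, file_url)

    cur.close()
    conn.close()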
An automated system for terrain database construction
NASA Technical Reports Server (NTRS)
Johnson, L. F.; Fretz, R. K.; Logan, T. L.; Bryant, N. A.
1987-01-01
An automated Terrain Database Preparation System (TDPS) for the construction and editing of terrain databases used in computerized wargaming simulation exercises has been developed. The TDPS system operates under the TAE executive, and it integrates VICAR/IBIS image processing and Geographic Information System software with CAD/CAM data capture and editing capabilities. The terrain database includes such features as roads, rivers, vegetation, and terrain roughness.
Manheim, Frank T.; Lane-Bostwick, Candice M.
1989-01-01
A comprehensive database of chemical and mineralogical properties for ferromanganese crusts collected throughout the Atlantic, Pacific, and Indian Oceans has been assembled from published and unpublished sources that provide collection and analytical information for these samples. These crusts, their chemical compositions and natural distribution, have been a topic of interest to scientific research, as well as to industrial and military applications. Unlike abyssal ferromanganese nodules, which form in areas of low disturbance and high sediment accumulation, crusts have been found to contain three to five times more cobalt than these nodules, and can be found on harder, steeper substrates that are too steep for permanent sediment accumulation. They have also been documented on seamounts and plateaus within the U.S. exclusive economic zone in both the Pacific and Atlantic Oceans, and they are therefore of strategic importance to the United States Government, as well as to civilian mining and metallurgical industries. The data tables provided in this report were digitized and uploaded to the National Oceanic and Atmospheric Administration National Geophysical Data Center in 1991 for online distribution in plain text format. The 2014 update to the original U.S. Geological Survey open-file report published in 1989 provides these data tables in a slightly reformatted version that is easier to ingest into geographic information system software, converts them to shapefiles, and adds completed metadata.
77 FR 24925 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-26
... CES Personnel Information System database of NIFA. This database is updated annually from data provided by 1862 and 1890 land-grant universities. This database is maintained by the Agricultural Research... reviewer. NIFA maintains a database of potential reviewers. Information in the database is used to match...
Functional integration of automated system databases by means of artificial intelligence
NASA Astrophysics Data System (ADS)
Dubovoi, Volodymyr M.; Nikitenko, Olena D.; Kalimoldayev, Maksat; Kotyra, Andrzej; Gromaszek, Konrad; Iskakova, Aigul
2017-08-01
The paper presents approaches to the functional integration of automated system databases by means of artificial intelligence. The peculiarities of exploiting databases in systems that use fuzzy implementations of functions were analyzed. Requirements for the normalization of such databases were defined. The question of data equivalence under uncertainty, and of collisions arising when databases are functionally integrated, is considered, and a model to reveal their possible occurrence is devised. The paper also presents an evaluation method for the normalization of the integrated database.
NASA Astrophysics Data System (ADS)
Kang, Youn-Bae; Jung, In-Ho
2017-06-01
A critical evaluation and thermodynamic modeling for thermodynamic properties of all oxide phases and phase diagrams in the Fe-Mn-Si-O system (MnO-Mn2O3-SiO2 and FeO-Fe2O3-MnO-Mn2O3-SiO2 systems) are presented. Optimized Gibbs energy parameters for the thermodynamic models of the oxide phases were obtained which reproduce all available and reliable experimental data within error limits from 298 K (25°C) to above the liquidus temperatures at all compositions covering the known oxide phases, and oxygen partial pressures from metal saturation to 0.21 bar. The optimized thermodynamic properties and phase diagrams are believed to be the best estimates presently available. Slag (molten oxide) was modeled using the modified quasichemical model in the pair approximation. Olivine (Fe2SiO4-Mn2SiO4) was modeled using a two-sublattice model in the framework of the compound energy formalism (CEF), while rhodonite (MnSiO3-FeSiO3) and braunite (Mn7SiO12 with excess Mn2O3) were modeled as simple Henrian solutions. It is shown that the already developed models and databases of two spinel phases (cubic- and tetragonal-(Fe, Mn)3O4) using the CEF [Kang and Jung, J. Phys. Chem. Solids (2016), vol. 98, pp. 237-246] can successfully be integrated into a larger thermodynamic database to be used in practically important higher-order systems such as silicates. The database of the model parameters can be used along with software for Gibbs energy minimization in order to calculate any type of phase diagram section and thermodynamic properties.
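For readers unfamiliar with the compound energy formalism mentioned above, the molar Gibbs energy of a generic two-sublattice solution takes the standard textbook form below; this is a reference expression, not one reproduced from the paper, with y'_i and y''_j the site fractions on the two sublattices and a' and a'' the numbers of sites per formula unit:

    G_m = \sum_i \sum_j y'_i \, y''_j \, {}^{\circ}G_{i:j}
          + RT \left( a' \sum_i y'_i \ln y'_i + a'' \sum_j y''_j \ln y''_j \right)
          + {}^{\mathrm{ex}}G_m

Here {}^{\circ}G_{i:j} is the Gibbs energy of the end member with the sublattices fully occupied by species i and j, and {}^{\mathrm{ex}}G_m collects the interaction (excess) terms fitted during the optimization.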
Software Engineering Laboratory (SEL) database organization and user's guide, revision 2
NASA Technical Reports Server (NTRS)
Morusiewicz, Linda; Bristow, John
1992-01-01
The organization of the Software Engineering Laboratory (SEL) database is presented. Included are definitions and detailed descriptions of the database tables and views, the SEL data, and system support data. The mapping from the SEL and system support data to the base table is described. In addition, techniques for accessing the database through the Database Access Manager for the SEL (DAMSEL) system and via the ORACLE structured query language (SQL) are discussed.
Software Engineering Laboratory (SEL) database organization and user's guide
NASA Technical Reports Server (NTRS)
So, Maria; Heller, Gerard; Steinberg, Sandra; Spiegel, Douglas
1989-01-01
The organization of the Software Engineering Laboratory (SEL) database is presented. Included are definitions and detailed descriptions of the database tables and views, the SEL data, and system support data. The mapping from the SEL and system support data to the base tables is described. In addition, techniques for accessing the database, through the Database Access Manager for the SEL (DAMSEL) system and via the ORACLE structured query language (SQL), are discussed.
A Review of the Composition of the Essential Oils and Biological Activities of Angelica Species.
Sowndhararajan, Kandasamy; Deepa, Ponnuvel; Kim, Minju; Park, Se Jin; Kim, Songmun
2017-09-20
A number of Angelica species have been used in traditional systems of medicine to treat many ailments. Especially, essential oils (EOs) from the Angelica species have been used for the treatment of various health problems, including malaria, gynecological diseases, fever, anemia, and arthritis. EOs are complex mixtures of low molecular weight compounds, especially terpenoids and their oxygenated compounds. These components deliver specific fragrance and biological properties to essential oils. In this review, we summarized the chemical composition and biological activities of EOs from different species of Angelica. For this purpose, a literature search was carried out to obtain information about the EOs of Angelica species and their bioactivities from electronic databases such as PubMed, Science Direct, Wiley, Springer, ACS, Google, and other journal publications. There is considerable variation in EO composition among different Angelica species. EOs from Angelica species were reported for different kinds of biological activities, such as antioxidant, anti-inflammatory, antimicrobial, immunotoxic, and insecticidal activities. The present review is an attempt to consolidate the available data for different Angelica species on the basis of major constituents in the EOs and their biological activities.
Influence of condensed species on thermo-physical properties of LTE and non-LTE SF6-Cu mixture
NASA Astrophysics Data System (ADS)
Chen, Zhexin; Wu, Yi; Yang, Fei; Sun, Hao; Rong, Mingzhe; Wang, Chunlin
2017-10-01
SF6-Cu mixtures are frequently formed in high-voltage circuit breakers due to electrode erosion and metal vapor diffusion. During the interruption process, the multiphase effect and the deviation from local thermal equilibrium (non-LTE assumption) can both affect the thermo-physical properties of the arc plasma and further influence the performance of the circuit breaker. In this paper, thermo-physical properties, namely composition, thermodynamic properties and transport coefficients, are calculated for the multiphase SF6-Cu mixture with and without the LTE assumption. The composition is determined by combining the classical two-temperature mass action law with the phase equilibrium condition deduced from the second law of thermodynamics. The thermodynamic properties and transport coefficients are calculated using the multiphase composition result. The influence of condensed species on thermo-physical properties is discussed at different temperatures, pressures (0.1-10 atm), non-equilibrium degrees (1-10), and copper molar proportions (0-50%). It is found that the multiphase effect has a significant influence on specific enthalpy, specific heat and heavy-species thermal conductivity in both the LTE and non-LTE SF6-Cu systems. This paper provides a more accurate database for computational fluid dynamics calculations.
REFLEAK: NIST Leak/Recharge Simulation Program for Refrigerant Mixtures
National Institute of Standards and Technology Data Gateway
SRD 73 NIST REFLEAK: NIST Leak/Recharge Simulation Program for Refrigerant Mixtures (PC database for purchase) REFLEAK estimates composition changes of zeotropic mixtures in leak and recharge processes.
NASA Astrophysics Data System (ADS)
Florea, R. M.
2017-06-01
The basic material concept, technology and some results of studies on an aluminum matrix composite with dispersed aluminum nitride reinforcement are presented. The studied composites were manufactured by an "in situ" technique. Aluminum nitride (AlN) has recently attracted considerable interest as a material for hybrid integrated circuit substrates because of its high thermal conductivity, good dielectric properties, high flexural strength, a thermal expansion coefficient that matches that of Si, and its non-toxic nature. AlMg alloys are the best matrix for obtaining AlN. The Al2O3-AlMg, AlN-Al2O3, and AlN-AlMg binary diagrams were thermodynamically modelled. The obtained Gibbs free energies of components, solution parameters and stoichiometric phases were used to build a thermodynamic database of the AlN-Al2O3-AlMg system. The formation of AlN with liquid-phase AlMg as the matrix has been studied and compared with the thermodynamic results. The secondary phase microstructure has a significant effect on the final thermal conductivity of the obtained AlN. Thermodynamic modelling of the AlN-Al2O3-AlMg system provided an important basis for understanding the formation behavior and interpreting the experimental results.
Geer, Lewis Y.; Marchler-Bauer, Aron; Geer, Renata C.; Han, Lianyi; He, Jane; He, Siqian; Liu, Chunlei; Shi, Wenyao; Bryant, Stephen H.
2010-01-01
The NCBI BioSystems database, found at http://www.ncbi.nlm.nih.gov/biosystems/, centralizes and cross-links existing biological systems databases, increasing their utility and target audience by integrating their pathways and systems into NCBI resources. This integration allows users of NCBI’s Entrez databases to quickly categorize proteins, genes and small molecules by metabolic pathway, disease state or other BioSystem type, without requiring time-consuming inference of biological relationships from the literature or multiple experimental datasets. PMID:19854944
The database design of LAMOST based on MYSQL/LINUX
NASA Astrophysics Data System (ADS)
Li, Hui-Xian; Sang, Jian; Wang, Sha; Luo, A.-Li
2006-03-01
The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) will be set up in the coming years. A fully automated software system for reducing and analyzing the spectra has to be developed along with the telescope. The database system is an important part of this software system. The requirements for the LAMOST database, the design of the LAMOST database system based on MYSQL/LINUX, and performance tests of this system are described in this paper.
An Introduction to Database Management Systems.
ERIC Educational Resources Information Center
Warden, William H., III; Warden, Bette M.
1984-01-01
Description of database management systems for microcomputers highlights system features and factors to consider in microcomputer system selection. A method for ranking database management systems is explained and applied to a defined need, i.e., software support for indexing a weekly newspaper. A glossary of terms and 32-item bibliography are…
Heterogeneous database integration in biomedicine.
Sujansky, W
2001-08-01
The rapid expansion of biomedical knowledge, reduction in computing costs, and spread of internet access have created an ocean of electronic data. The decentralized nature of our scientific community and healthcare system, however, has resulted in a patchwork of diverse, or heterogeneous, database implementations, making access to and aggregation of data across databases very difficult. The database heterogeneity problem applies equally to clinical data describing individual patients and biological data characterizing our genome. Specifically, databases are highly heterogeneous with respect to the data models they employ, the data schemas they specify, the query languages they support, and the terminologies they recognize. Heterogeneous database systems attempt to unify disparate databases by providing uniform conceptual schemas that resolve representational heterogeneities, and by providing querying capabilities that aggregate and integrate distributed data. Research in this area has applied a variety of database and knowledge-based techniques, including semantic data modeling, ontology definition, query translation, query optimization, and terminology mapping. Existing systems have addressed heterogeneous database integration in the realms of molecular biology, hospital information systems, and application portability.
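To make the notion of a uniform conceptual schema concrete, the sketch below translates one mediated query into the differing schemas and terminologies of two source databases; the source names, field mappings, and vocabulary table are entirely hypothetical and are meant only to illustrate schema and terminology mapping.

    # Minimal mediator-style query translation across heterogeneous schemas.
    FIELD_MAP = {
        "clinical_db": {"patient_id": "pat_no", "diagnosis": "dx_code"},
        "genomic_db": {"patient_id": "subject", "diagnosis": "phenotype"},
    }

    TERMINOLOGY_MAP = {
        # Map a mediated concept to each source's local vocabulary.
        "type 2 diabetes": {"clinical_db": "E11", "genomic_db": "T2D"},
    }

    def translate(mediated_query, source):
        # Rewrite a mediated query {field: value} into a source-specific filter.
        local = {}
        for field, value in mediated_query.items():
            local_field = FIELD_MAP[source][field]
            local_value = TERMINOLOGY_MAP.get(value, {}).get(source, value)
            local[local_field] = local_value
        return local

    query = {"diagnosis": "type 2 diabetes"}
    for src in FIELD_MAP:
        print(src, translate(query, src))

A full heterogeneous database system would add query decomposition, optimization, and result aggregation on top of this mapping step.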
Selecting Data-Base Management Software for Microcomputers in Libraries and Information Units.
ERIC Educational Resources Information Center
Pieska, K. A. O.
1986-01-01
Presents a model for the evaluation of database management systems software from the viewpoint of librarians and information specialists. The properties of data management systems, database management systems, and text retrieval systems are outlined and compared. (10 references) (CLB)
Databases for the Global Dynamics of Multiparameter Nonlinear Systems
2014-03-05
AFRL-OSR-VA-TR-2014-0078. Konstantin Mischaikow, Rutgers, The State University of New Jersey, ASB III, Rutgers Plaza, New Brunswick, NJ 08807. ...dynamical systems. We refer to the output as a Database for Global Dynamics since it allows the user to query for information about the existence and...
Fajardo, Violeta; Alonso-Aperte, Elena; Varela-Moreiras, Gregorio
2015-02-15
Ready-to-eat foods have become a significant portion of the diet. Accordingly, the nutritional composition of these food categories should be well known, in particular their folate content. However, there is a broad lack of folate data in food composition tables and databases. A total of 21 fresh-cut vegetable and fruit packed products were analysed for total folate (TF) content using a validated method that relies on the folate-dependent growth of chloramphenicol-resistant Lactobacillus casei subspecies rhamnosus (NCIMB 10463). Mean TF content ranged from 10.0 to 140.9 μg/100 g for the different matrices on a fresh weight basis. The highest TF contents, 70.1-140.9 μg/100 g, were found in spinach, rocket, watercress, chard and broccoli. Significant differences were observed between available data for fresh vegetables and fruits from food composition tables or databases and the analysed results for fresh-cut packed products. The supplied data support the potential of folate-rich fresh-cut ready-to-eat vegetables to increase folate intake significantly. Copyright © 2014 Elsevier Ltd. All rights reserved.
Tariqul Islam Shajib, Mohammad; Kawser, Mahbuba; Nuruddin Miah, Mohammad; Begum, Parveen; Bhattacharjee, Lalita; Hossain, A; Fomsgaard, Inge S; Islam, Sheikh Nazrul
2013-10-01
In line with the development of a food composition database for Bangladesh, 10 minor indigenous fruits were analysed for their nutrient composition comprising ascorbic acid, carotenoid and mineral values. The nutrient data obtained were compared with published data reported in the literature, books and the United States Department of Agriculture National Nutrient Database for Standard Reference. Ascorbic acid was highest in Wood apple and lowest in Roselle. Monkey jack contained the highest amounts of carotenoids, zinc and copper. Contents of calcium, magnesium and phosphorus were highest in Antidesma velutinum. Potassium was highest in Wood apple followed by Monkey jack. It was noted that most of the minor fruits have a much higher ascorbic acid content than the national fruit of Bangladesh, ripe Jack fruit, the king fruit, ripe Mango, and exotic fruits such as Apple and Grapes. The nutrient values of these minor fruits should raise awareness among people to consume them more widely for a healthy life and to grow more minor fruit trees, protecting them from extinction in order to maintain biodiversity. Copyright © 2012 Elsevier Ltd. All rights reserved.
Software Application for Supporting the Education of Database Systems
ERIC Educational Resources Information Center
Vágner, Anikó
2015-01-01
The article introduces an application which supports the education of database systems, particularly the teaching of SQL and PL/SQL in Oracle Database Management System environment. The application has two parts, one is the database schema and its content, and the other is a C# application. The schema is to administrate and store the tasks and the…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-13
... information. Access to any such database system is limited to system administrators, individuals responsible... during the certification process. The above information will be contained in one or more databases (such as Lotus Notes) that reside on servers in EPA offices. The database(s) may be specific to one...
Lin, Hongli; Wang, Weisheng; Luo, Jiawei; Yang, Xuedong
2014-12-01
The aim of this study was to develop a personalized training system using the Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) database, because collecting, annotating, and marking a large number of appropriate computed tomography (CT) scans, and providing the capability of dynamically selecting suitable training cases based on the performance levels of trainees and the characteristics of cases, are critical for developing an efficient training system. A novel approach is proposed to develop a personalized radiology training system for the interpretation of lung nodules in CT scans using the LIDC/IDRI database. The system provides a Content-Boosted Collaborative Filtering (CBCF) algorithm for predicting the difficulty level of each case for each trainee, so that suitable cases can be selected to meet individual needs, and a diagnostic simulation tool that enables trainees to analyze and diagnose lung nodules with the help of an image processing tool and a nodule retrieval tool. Preliminary evaluation of the system shows that developing a personalized training system for the interpretation of lung nodules is needed and useful for enhancing the professional skills of trainees. The approach of developing personalized training systems using the LIDC/IDRI database is a feasible solution to the challenges of constructing specific training programs in terms of cost and training efficiency. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
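As a rough illustration of the content-boosted collaborative filtering idea (a generic sketch under assumed data, not the authors' algorithm), a content-based step first fills each trainee's unobserved difficulty ratings from case features, and the densified matrix then drives a user-based collaborative prediction; the ratings, case features, and similarity choices below are hypothetical.

    import numpy as np

    # Rows = trainees, columns = CT cases; entries are observed difficulty ratings
    # (np.nan where a trainee has not yet read the case). Values are made up.
    ratings = np.array([[3.0, np.nan, 4.0],
                        [2.0, 3.0, np.nan],
                        [np.nan, 4.0, 5.0]])
    case_features = np.array([[0.2, 0.8],   # e.g. nodule size, subtlety (illustrative)
                              [0.5, 0.5],
                              [0.9, 0.1]])

    def content_fill(ratings, features):
        # Fill missing ratings with a per-trainee, feature-weighted average
        # of the cases that trainee has already rated.
        filled = ratings.copy()
        for u in range(ratings.shape[0]):
            rated = ~np.isnan(ratings[u])
            for c in np.where(~rated)[0]:
                sims = features[rated] @ features[c]
                filled[u, c] = np.average(ratings[u, rated], weights=sims + 1e-9)
        return filled

    def predict(filled, user, case):
        # User-based collaborative step on the densified matrix.
        others = [u for u in range(filled.shape[0]) if u != user]
        sims = np.nan_to_num([np.corrcoef(filled[user], filled[u])[0, 1] for u in others])
        return float(np.average(filled[others, case], weights=np.abs(sims) + 1e-9))

    dense = content_fill(ratings, case_features)
    print(predict(dense, user=0, case=1))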
Building Vietnamese Herbal Database Towards Big Data Science in Nature-Based Medicine
2018-01-04
...plants, metabolites, diseases, and geography in order to convey a composite description of each individual species. VHO consists of 2881 species, 10887 metabolites... feature descriptions are extremely diverse and highly redundant. Besides the original words or the key words for description, there are millions of...
WFIRST: Update on the Coronagraph Science Requirements
NASA Astrophysics Data System (ADS)
Douglas, Ewan S.; Cahoy, Kerri; Carlton, Ashley; Macintosh, Bruce; Turnbull, Margaret; Kasdin, Jeremy; WFIRST Coronagraph Science Investigation Teams
2018-01-01
The WFIRST Coronagraph instrument (CGI) will enable direct imaging and low resolution spectroscopy of exoplanets in reflected light and imaging polarimetry of circumstellar disks. The CGI science investigation teams were tasked with developing a set of science requirements which advance our knowledge of exoplanet occurrence and atmospheric composition, as well as the composition and morphology of exozodiacal debris disks, cold Kuiper Belt analogs, and protoplanetary systems. We present the initial content, rationales, validation, and verification plans for the WFIRST CGI, informed by detailed and still-evolving instrument and observatory performance models. We also discuss our approach to the requirements development and management process, including the collection and organization of science inputs, open source approach to managing the requirements database, and the range of models used for requirements validation. These tools can be applied to requirements development processes for other astrophysical space missions, and may ease their management and maintenance. These WFIRST CGI science requirements allow the community to learn about and provide insights and feedback on the expected instrument performance and science return.
Tensile Properties of Polymeric Matrix Composites Subjected to Cryogenic Environments
NASA Technical Reports Server (NTRS)
Whitley, Karen S.; Gates, Thomas S.
2004-01-01
Polymer matrix composites (PMCs) have seen limited use as structural materials in cryogenic environments. One reason for the limited use of PMCs in cryogenic structures is a design philosophy that typically requires a large, validated database of material properties in order to ensure a reliable and defect-free structure. It is the intent of this paper to provide an initial set of mechanical properties developed from experimental data of an advanced PMC (IM7/PETI-5) exposed to cryogenic temperatures and mechanical loading. The application of this data is to assist in the materials down-select and design of cryogenic fuel tanks for future reusable space vehicles. The details of the material system, test program, and experimental methods will be outlined. Tension modulus and strength were measured at room temperature, -196 C, and -269 C on five different laminates. These properties were also tested after aging at -186 C with and without loading applied. Microcracking was observed in one laminate.
TRENDS: The aeronautical post-test database management system
NASA Technical Reports Server (NTRS)
Bjorkman, W. S.; Bondi, M. J.
1990-01-01
TRENDS, an engineering-test database operating system developed by NASA to support rotorcraft flight tests, is described. Capabilities and characteristics of the system are presented, with examples of its use in recalling and analyzing rotorcraft flight-test data from a TRENDS database. The importance of system user-friendliness in gaining users' acceptance is stressed, as is the importance of integrating supporting narrative data with numerical data in engineering-test databases. Considerations relevant to the creation and maintenance of flight-test databases are discussed and TRENDS' solutions to database management problems are described. Requirements, constraints, and other considerations which led to the system's configuration are discussed and some of the lessons learned during TRENDS' development are presented. Potential applications of TRENDS to a wide range of aeronautical and other engineering tests are identified.
Validation of Framework Code Approach to a Life Prediction System for Fiber Reinforced Composites
NASA Technical Reports Server (NTRS)
Gravett, Phillip
1997-01-01
The grant was conducted by the MMC Life Prediction Cooperative, an industry/government collaborative team; Ohio Aerospace Institute (OAI) acted as the prime contractor on behalf of the Cooperative for this grant effort. See Figure I for the organization and responsibilities of team members. The technical effort was conducted during the period August 7, 1995 to June 30, 1996 in cooperation with Erwin Zaretsky, the LERC Program Monitor. Phil Gravett of Pratt & Whitney was the principal technical investigator. Table I documents all meeting-related coordination memos during this period. The effort under this grant was closely coordinated with an existing USAF sponsored program focused on putting into practice a life prediction system for turbine engine components made of metal matrix composites (MMC). The overall architecture of the MMC life prediction system was defined in the USAF sponsored program (prior to this grant). The efforts of this grant were focused on implementing and tailoring the life prediction system, the framework code within it and the damage modules within it to meet the specific requirements of the Cooperative. The tailoring of the life prediction system provides the basis for pervasive and continued use of this capability by the industry/government cooperative. The outputs of this grant are: 1. Definition of the framework code to analysis modules interfaces, 2. Definition of the interface between the materials database and the finite element model, and 3. Definition of the integration of the framework code into an FEM design tool.
NASA Technical Reports Server (NTRS)
2012-01-01
The topics include: 1) Spectral Profiler Probe for In Situ Snow Grain Size and Composition Stratigraphy; 2) Portable Fourier Transform Spectroscopy for Analysis of Surface Contamination and Quality Control; 3) In Situ Geochemical Analysis and Age Dating of Rocks Using Laser Ablation-Miniature Mass Spectrometer; 4) Physics Mining of Multi-Source Data Sets; 5) Photogrammetry Tool for Forensic Analysis; 6) Connect Global Positioning System RF Module; 7) Simple Cell Balance Circuit; 8) Miniature EVA Software Defined Radio; 9) Remotely Accessible Testbed for Software Defined Radio Development; 10) System-of-Systems Technology-Portfolio-Analysis Tool; 11) VESGEN Software for Mapping and Quantification of Vascular Regulators; 12) Constructing a Database From Multiple 2D Images for Camera Pose Estimation and Robot Localization; 13) Adaption of G-TAG Software for Validating Touch and Go Asteroid Sample Return Design Methodology; 14) 3D Visualization for Phoenix Mars Lander Science Operations; 15) RxGen General Optical Model Prescription Generator; 16) Carbon Nanotube Bonding Strength Enhancement Using Metal Wicking Process; 17) Multi-Layer Far-Infrared Component Technology; 18) Germanium Lift-Off Masks for Thin Metal Film Patterning; 19) Sealing Materials for Use in Vacuum at High Temperatures; 20) Radiation Shielding System Using a Composite of Carbon Nanotubes Loaded With Electropolymers; 21) Nano Sponges for Drug Delivery and Medicinal Applications; 22) Molecular Technique to Understand Deep Microbial Diversity; 23) Methods and Compositions Based on Culturing Microorganisms in Low Sedimental Fluid Shear Conditions; 24) Secure Peer-to-Peer Networks for Scientific Information Sharing; 25) Multiplexer/Demultiplexer Loading Tool (MDMLT); 26) High-Rate Data-Capture for an Airborne Lidar System; 27) Wavefront Sensing Analysis of Grazing Incidence Optical Systems; 28) Foam-on-Tile Damage Model; 29) Instrument Package Manipulation Through the Generation and Use of an Attenuated-Fluent Gas Fold; 30) Multicolor Detectors for Ultrasensitive Long-Wave Imaging Cameras; 31) Lunar Reconnaissance Orbiter (LRO) Command and Data Handling Flight Electronics Subsystem; and 32) Electro-Optic Segment-Segment Sensors for Radio and Optical Telescopes.
Portuguese food composition database quality management system.
Oliveira, L M; Castanheira, I P; Dantas, M A; Porto, A A; Calhau, M A
2010-11-01
The harmonisation of food composition databases (FCDB) has been a recognised need among users, producers and stakeholders of food composition data (FCD). To reach harmonisation of FCDBs among the national compiler partners, the European Food Information Resource (EuroFIR) Network of Excellence set up a series of guidelines and quality requirements, together with recommendations to implement quality management systems (QMS) in FCDBs. The Portuguese National Institute of Health (INSA) is the national FCDB compiler in Portugal and is also a EuroFIR partner. INSA's QMS complies with ISO/IEC (International Organization for Standardisation/International Electrotechnical Commission) 17025 requirements. The purpose of this work is to report on the strategy used and progress made for extending INSA's QMS to the Portuguese FCDB in alignment with EuroFIR guidelines. A stepwise approach was used to extend INSA's QMS to the Portuguese FCDB. The approach included selection of reference standards and guides and the collection of relevant quality documents directly or indirectly related to the compilation process; selection of the adequate quality requirements; assessment of adequacy and level of requirement implementation in the current INSA's QMS; implementation of the selected requirements; and EuroFIR's preassessment 'pilot' auditing. The strategy used to design and implement the extension of INSA's QMS to the Portuguese FCDB is reported in this paper. The QMS elements have been established by consensus. ISO/IEC 17025 management requirements (except 4.5) and 5.2 technical requirements, as well as all EuroFIR requirements (including technical guidelines, FCD compilation flowchart and standard operating procedures), have been selected for implementation. The results indicate that the quality management requirements of ISO/IEC 17025 in place in INSA fit the needs for document control, audits, contract review, non-conformity work and corrective actions, and users' (customers') comments, complaints and satisfaction, with minor adaptation. Implementation of the FCDB QMS proved to be a way of reducing the subjectivity of the compilation process and fully documenting it, and also facilitates training of new compilers. Furthermore, it has strengthened cooperation and trust among FCDB actors, as all of them were called to be involved in the process. On the basis of our practical results, we can conclude that ISO/IEC 17025 management requirements are an adequate reference for the implementation of INSA's FCDB QMS with the advantages of being well known to all members of staff and also being a common quality language among laboratories producing FCD. Combining quality systems and food composition activities endows the FCDB compilation process with flexibility, consistency and transparency, and facilitates its monitoring and assessment, providing the basis for strengthening confidence among users, data producers and compilers.
23 CFR 972.204 - Management systems requirements.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION FEDERAL LANDS HIGHWAYS FISH AND... to operate and maintain the management systems and their associated databases; and (5) A process for... systems will use databases with a geographical reference system that can be used to geolocate all database...
Fernandez-Caldas, Enrique; Cases, Barbara; Tudela, Jose Ignacio; Fernandez, Eva Abel; Casanovas, Miguel; Subiza, Jose Luis
2012-01-01
Background Allergoids have been successfully used in the treatment of respiratory allergic diseases. They are modified allergen extracts that allow the administration of high allergen doses, due to their reduced IgE binding capacity. They maintain allergen-specific T-cell recognition. Since they are native allergen extracts that have been polymerized with glutaraldehyde, identification of the allergenic molecules requires more complicated methods. The aim of the study was to determine the qualitative composition of different polymerized extracts and investigate the presence of defined allergenic molecules using mass spectrometry. Methods Proteomic analysis was carried out at the Proteomics Facility of the Hospital Nacional de Parapléjicos (Toledo, Spain). After reduction and alkylation, proteins were digested with trypsin and the resulting peptides were cleaned using the C18 SpinTips Sample Prep Kit; peptides were separated on an Ultimate nano-LC system using a Monolithic C18 column in combination with a precolumn for salt removal. Fractionation of the peptides was performed with a Probot microfraction collector, and MS and MS/MS analysis of offline spotted peptide samples were performed using the Applied Biosystems 4800 plus MALDI TOF/TOF Analyzer mass spectrometer. ProteinPilot Software V 2.0.1 and the Paragon algorithm were used for the identification of the proteins. Each MS/MS spectrum was searched against the SwissProt 2010_10 database, Uniprot-Viridiplantae database and Uniprot_Betula database. Results Analysis of the peptides revealed the presence of native allergens in the polymerized extracts: Der p 1, Der p 2, Der p 3, Der p 8 and Der p 11 in D. pteronyssinus; Bet v 2, Bet v 6, Bet v 7 and several Bet v 1 isoforms in B. verrucosa; and Phl p 1, Phl p 3, Phl p 5, Phl p 11 and Phl p 12 in P. pratense allergoids. In all cases, potential allergenic proteins were also identified, including ubiquitin, actin, enolase, fructose-bisphosphate aldolase, luminal-binding protein (Heat shock protein 70), and calmodulin, among others. Conclusions The characterization of the allergenic composition of allergoids is possible using MS/MS analysis. The analysis confirms the presence of native allergens in the allergoids. Major allergens are preserved during polymerization.
NASA Astrophysics Data System (ADS)
Boulanger, D.; Thouret, V.
2016-12-01
IAGOS (In-service Aircraft for a Global Observing System) is a European Research Infrastructure which aims at the provision of long-term, regular and spatially resolved in situ observations of the atmospheric composition. IAGOS observation systems are deployed on a fleet of commercial aircraft and measure aerosols, cloud particles, greenhouse gases, ozone, water vapor and nitrogen oxides from the surface to the lower stratosphere. The IAGOS database is an essential part of the global atmospheric monitoring network. It contains IAGOS-core and IAGOS-CARIBIC data. The IAGOS Data Portal (http://www.iagos.fr) is part of the French atmospheric chemistry data center AERIS (http://www.aeris-data.fr). In 2016 the new IAGOS Data Portal was released. In addition to data download, the portal provides improved and new services such as download in NetCDF or NASA Ames formats and plotting tools (maps, time series, vertical profiles). New added-value products are available through the portal: back trajectories, origin of air masses, and co-location with satellite data. Web services allow users to download IAGOS metadata such as flight and airport information. Administration tools have been implemented for user management and instrument monitoring. A major improvement is the interoperability with international portals and other databases in order to improve IAGOS data discovery. In the frame of the IGAS project (IAGOS for the Copernicus Atmospheric Service), a data network has been set up. It is composed of three data centers: the IAGOS database in Toulouse, the HALO research aircraft database at DLR (https://halo-db.pa.op.dlr.de) and the CAMS (Copernicus Atmosphere Monitoring Service) data center in Jülich (http://join.iek.fz-juelich.de). The link with the CAMS data center, through the JOIN interface, makes it possible to combine model outputs with IAGOS data for inter-comparison. The CAMS project is a prominent user of the IGAS data network. During the next year, IAGOS will improve metadata standardization and dissemination through different collaborations with the AERIS data center, GAW, for which IAGOS is a contributing network, and the ENVRI+ European project. Measurement traceability and quality metadata will be available and DOIs will be implemented.
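Since the portal offers downloads in NetCDF format, a downloaded file can be inspected with standard Python tools; the sketch below is only an assumption-laden example, as the actual file names and variable names depend on the product obtained from the portal.

    import xarray as xr

    # Hypothetical file and variable names for a downloaded IAGOS NetCDF product.
    ds = xr.open_dataset("iagos_flight_sample.nc")
    ozone = ds["O3"]              # assumed name of an ozone variable
    print(float(ozone.mean()))    # e.g. mean ozone over the flight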
Background: Trends in male reproductive health have been reported for increased rates of testicular germ cell tumors, low semen quality, cryptorchidism, and hypospadias, which have been associated with prenatal environmental chemical exposure based on human and animal studies. Objective: In the present study we aimed to identify significant correlations between environmental chemicals, molecular targets, and adverse outcomes across a broad chemical landscape with emphasis on developmental toxicity of the male reproductive system. Methods: We used U.S. EPA's animal study database (ToxRefDB) and a comprehensive literature analysis to identify 774 chemicals that have been evaluated for adverse effects on male reproductive parameters, and then used U.S. EPA's in vitro high-throughput screening (HTS) database (ToxCastDB) to profile their bioactivity across approximately 800 molecular and cellular features. Results: A phenotypic hierarchy of testicular atrophy, sperm effects, tumors, and malformations, a composite resembling the human testicular dysgenesis syndrome (TDS) hypothesis, was observed in 281 chemicals. A subset of 54 chemicals with male developmental consequences had in vitro bioactivity on molecular targets that could be condensed into 156 gene annotations in a bipartite network. Conclusion: Computational modeling of available in vivo and in vitro data for chemicals that produce adverse effects on male reproductive end points revealed a phenotypic hierarchy.
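A bipartite network of the kind described (chemicals on one side, gene annotations on the other) can be represented directly in a graph library; the node names in the sketch below are invented purely to show the structure and a simple projection query, and do not come from ToxRefDB or ToxCastDB.

    import networkx as nx

    # Illustrative bipartite graph: chemical nodes on one side, gene-annotation
    # nodes on the other; edges mark in vitro bioactivity. Names are hypothetical.
    B = nx.Graph()
    chemicals = ["chem_A", "chem_B", "chem_C"]
    genes = ["ESR1", "AR", "CYP19A1"]
    B.add_nodes_from(chemicals, bipartite="chemical")
    B.add_nodes_from(genes, bipartite="gene")
    B.add_edges_from([("chem_A", "ESR1"), ("chem_A", "AR"),
                      ("chem_B", "AR"), ("chem_C", "CYP19A1")])

    # Project onto the chemical side: two chemicals become linked when they
    # share at least one molecular target.
    projection = nx.bipartite.weighted_projected_graph(B, chemicals)
    print(list(projection.edges(data=True)))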
XML: James Webb Space Telescope Database Issues, Lessons, and Status
NASA Technical Reports Server (NTRS)
Detter, Ryan; Mooney, Michael; Fatig, Curtis
2003-01-01
This paper will present the current concept of using extensible Markup Language (XML) as the underlying structure for the James Webb Space Telescope (JWST) database. The purpose of using XML is to provide a JWST database, independent of any portion of the ground system, yet still compatible with the various systems using a variety of different structures. The testing of the JWST Flight Software (FSW) started in 2002, yet the launch is scheduled for 2011 with a planned 5-year mission and a 5-year follow-on option. The initial database and ground system elements, including the commands, telemetry, and ground system tools, will be used for 19 years, plus post-mission activities. During the Integration and Test (I&T) phases of the JWST development, 24 distinct laboratories, each geographically dispersed, will have local database tools with an XML database. Each of these laboratories' database tools will be used for the exporting and importing of data both locally and to a central database system, inputting data to the database certification process, and providing various reports. A centralized certified database repository will be maintained by the Space Telescope Science Institute (STScI), in Baltimore, Maryland, USA. One of the challenges for the database is to be flexible enough to allow for the upgrade, addition or changing of individual items without affecting the entire ground system. Also, using XML should allow for altering the import and export formats needed by the various elements, tracking the verification/validation of each database item, allowing many organizations to provide database inputs, and merging the many existing database processes into one central database structure throughout the JWST program. Many National Aeronautics and Space Administration (NASA) projects have attempted to take advantage of open source and commercial technology. Often this causes a greater reliance on the use of Commercial-Off-The-Shelf (COTS) software, which is often limiting. In our review of the database requirements and the COTS software available, only very expensive COTS software would meet 90% of the requirements. Even with the high projected initial cost of COTS, the development and support for custom code over the 19-year mission period was forecast to be higher than the total licensing costs. A group did look at reusing existing database tools and formats. If the JWST database were already in a mature state, the reuse would make sense, but with the database still needing to handle the addition of different types of command and telemetry structures, define new spacecraft systems, and accept input from and export to systems that have not been defined yet, XML provided the desired flexibility. It remains to be determined whether the XML database will reduce the overall cost for the JWST mission.
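To make the XML approach concrete, here is a minimal, hypothetical command-definition record parsed with Python's standard ElementTree; the element and attribute names are invented for illustration and do not reflect the actual JWST database schema.

    import xml.etree.ElementTree as ET

    # Hypothetical command-definition record; the real JWST schema is not shown here.
    record = """
    <command mnemonic="HEATER_ON" subsystem="THERMAL">
      <parameter name="zone" type="uint8" min="1" max="4"/>
      <verification status="certified" date="2003-05-01"/>
    </command>
    """

    root = ET.fromstring(record)
    print(root.get("mnemonic"), root.get("subsystem"))
    for param in root.findall("parameter"):
        print(" parameter:", param.get("name"), param.get("type"))
    print(" verification:", root.find("verification").get("status"))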
GIS-project: geodynamic globe for global monitoring of geological processes
NASA Astrophysics Data System (ADS)
Ryakhovsky, V.; Rundquist, D.; Gatinsky, Yu.; Chesalova, E.
2003-04-01
A multilayer geodynamic globe at the scale 1:10,000,000 was created at the end of the nineties in the GIS Center of the Vernadsky Museum. A special software and hardware complex was developed for its visualization, with a set of multitarget object-directed databases. The globe includes separate thematic covers represented by digital sets of spatial geological, geochemical, and geophysical information (maps, schemes, profiles, stratigraphic columns, arranged databases, etc.). At present the largest databases included in the globe program are connected with petrochemical and isotopic data on magmatic rocks of the World Ocean and with large and superlarge mineral deposits. Software by the Environmental Scientific Research Institute (ESRI), USA, as well as the ArcScan vectorizer, were used for digitizing the covers and adapting the databases (ARC/INFO 7.0, 8.0). All layers of the geoinformational project were obtained by scanning separate objects and transferring them to the real geographic co-ordinates of an equidistant conic projection. The covers were then projected onto plane degree-system geographic co-ordinates. Attributive databases were formed for each thematic layer, and in the last stage all covers were combined into a single information system. Separate digital covers represent mathematical descriptions of geological objects and the relations between them, such as the Earth's altimetry, active fault systems, seismicity, etc. Principles of cartographic generalization were taken into consideration during cover compilation, with projection and co-ordinate systems precisely matching the given scale. The globe allows us to carry out, in an interactive regime, the formation of mutually coordinated object-oriented databases and the thematic covers directly connected with them. They can be extended to the whole Earth and the near-Earth space, and to the best known parts of divergent and convergent boundaries of the lithosphere plates. Such covers and time series reflect in diagram form the total combination and dynamics of data on the geological structure, geophysical fields, seismicity, geomagnetism, composition of rock complexes, and metallogeny of different areas on the Earth's surface. They give us the possibility to scale, detail, and develop 3D spatial visualizations. The information filling the covers can be replenished with new data, both in the existing and in newly formed databases. The integrated analysis of the data allows us to define more precisely our ideas on regularities in the development of lithosphere and mantle inhomogeneities using some original technologies. It also enables us to work out 3D digital models of the geodynamic development of tectonic zones at convergent and divergent plate boundaries, with the purpose of integrated monitoring of mineral resources and establishing correlations between seismicity, magmatic activity, and metallogeny in time-spatial co-ordinates. The created multifold geoinformation system makes it possible to execute an integrated analysis of geoinformation flows in the interactive regime and, in particular, to establish regularities in the time-spatial distribution and dynamics of the main structural units of the lithosphere, as well as to illuminate the connection between stages of their development and epochs of large and superlarge mineral deposit formation. We are now trying to use the system for the prediction of large oil and gas concentrations in the main sedimentary basins.
The work was supported by RFBR (grants 93-07-14680, 96-07-89499, 99-07-90030, 00-15-98535, 02-07-90140) and MTC.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-18
... 1974; Department of Homeland Security United States Coast Guard-024 Auxiliary Database System of... Security/United States Coast Guard-024 Auxiliary Database (AUXDATA) System of Records.'' This system of... titled, ``DHS/USCG-024 Auxiliary Database (AUXDATA) System of Records.'' The AUXDATA system is the USCG's...
Ho, Lap; Cheng, Haoxiang; Wang, Jun; Simon, James E; Wu, Qingli; Zhao, Danyue; Carry, Eileen; Ferruzzi, Mario G; Faith, Jeremiah; Valcarcel, Breanna; Hao, Ke; Pasinetti, Giulio M
2018-03-05
The development of a given botanical preparation for eventual clinical application requires extensive, detailed characterizations of the chemical composition, as well as the biological availability, biological activity, and safety profiles of the botanical. These issues are typically addressed using diverse experimental protocols and model systems. Based on this consideration, in this study we established a comprehensive database and analysis framework for the collection, collation, and integrative analysis of diverse, multiscale data sets. Using this framework, we conducted an integrative analysis of heterogeneous data from in vivo and in vitro investigation of a complex bioactive dietary polyphenol-rich preparation (BDPP) and built an integrated network linking data sets generated from this multitude of diverse experimental paradigms. We established a comprehensive database and analysis framework as well as a systematic and logical means to catalogue and collate the diverse array of information gathered, which is securely stored and added to in a standardized manner to enable fast query. We demonstrated the utility of the database in (1) a statistical ranking scheme to prioritize response to treatments and (2) in-depth reconstruction of functionality studies. By examination of these data sets, the system allows analytical querying of heterogeneous data and the access of information related to interactions, mechanisms of action, functions, etc., which ultimately provide a global overview of complex biological responses. Collectively, we present an integrative analysis framework that leads to novel insights into the biological activities of a complex botanical such as BDPP that is based on data-driven characterizations of interactions between BDPP-derived phenolic metabolites and their mechanisms of action, as well as synergism and/or potential cancellation of biological functions. Our integrative analytical approach provides novel means for a systematic integrative analysis of heterogeneous data types in the development of complex botanicals such as polyphenols for eventual clinical and translational applications.
Development of a Tsunami Scenario Database for Marmara Sea
NASA Astrophysics Data System (ADS)
Ozer Sozdinler, Ceren; Necmioglu, Ocal; Meral Ozel, Nurcan
2016-04-01
Due to the very short travel times in the Marmara Sea, a Tsunami Early Warning System (TEWS) has to be strongly coupled with the earthquake early warning system and should be supported with a pre-computed tsunami scenario database to be queried in near real-time based on the initial earthquake parameters. To address this problem, 30 different composite earthquake scenarios with maximum credible Mw values based on 32 fault segments have been identified to produce a detailed scenario database for all possible earthquakes in the Marmara Sea with a tsunamigenic potential. The bathymetry/topography data of the Marmara Sea were prepared using GEBCO and ASTER data, bathymetric measurements along the Bosphorus (Istanbul) and the Dardanelles (Canakkale), and a coastline digitized from satellite images. The coarse domain with a 90-m grid size was divided into 11 sub-regions with a 30-m grid size in order to increase the data resolution and the precision of the calculation results. The analyses were performed in nested domains with the numerical model NAMIDANCE using non-linear shallow water equations. In order to cover all the residential areas, industrial facilities and touristic locations, more than 1000 numerical gauge points were selected along the coasts of the Marmara Sea, located at water depths of 5 to 10 m in the finer domains. The distributions of tsunami hydrodynamic parameters were investigated together with the change of water surface elevations, current velocities, momentum fluxes and other important parameters at the gauge points. This work is funded by the project MARsite - New Directions in Seismic Hazard assessment through Focused Earth Observation in the Marmara Supersite (FP7-ENV.2012 6.4-2, Grant 308417 - see NH2.3/GMPV7.4/SM7.7) and supported by the SATREPS-MarDim Project (Earthquake and Tsunami Disaster Mitigation in the Marmara Region and Disaster Education in Turkey) and JICA (Japan International Cooperation Agency). The authors would like to acknowledge Ms. Basak Firat for her assistance in the preparation of the database.
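The near-real-time query step described above amounts to matching the initial earthquake parameters against the pre-computed scenarios. A minimal Python sketch of such a lookup follows; the record layout, field names, matching tolerances, and example values are hypothetical and not taken from the MARsite/SATREPS database.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """One pre-computed tsunami scenario (hypothetical record layout)."""
    scenario_id: str
    segment: str            # composite fault segment the scenario ruptures
    epicentre: tuple        # (lat, lon) of the scenario source centre
    mw: float               # maximum credible moment magnitude used in the run

def select_scenarios(database, quake_lat, quake_lon, quake_mw,
                     max_dist_deg=0.5, mw_tol=0.3):
    """Return scenarios whose source location and magnitude bracket the initial
    earthquake parameters; a real TEWS would match against fault geometry
    rather than this simple epicentral-distance filter."""
    hits = []
    for s in database:
        d = ((s.epicentre[0] - quake_lat) ** 2 +
             (s.epicentre[1] - quake_lon) ** 2) ** 0.5
        if d <= max_dist_deg and abs(s.mw - quake_mw) <= mw_tol:
            hits.append((d, s))
    hits.sort(key=lambda pair: (pair[0], -pair[1].mw))   # closest first, then largest Mw
    return [s for _, s in hits]

# Hypothetical example: one scenario in the database, one incoming event.
marmara_db = [Scenario("S07", "Central Marmara", (40.85, 28.20), 7.4)]
print(select_scenarios(marmara_db, quake_lat=40.82, quake_lon=28.30, quake_mw=7.2))
```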
Intrusion Detection in Database Systems
NASA Astrophysics Data System (ADS)
Javidi, Mohammad M.; Sohrabi, Mina; Rafsanjani, Marjan Kuchaki
Data today represent a valuable asset for organizations and companies and must be protected. Ensuring the security and privacy of data assets is a crucial and very difficult problem in our modern networked world. Despite the necessity of protecting information stored in database systems (DBS), existing security models are insufficient to prevent misuse, especially insider abuse by legitimate users. One mechanism to safeguard the information in these databases is to use an intrusion detection system (IDS). The purpose of intrusion detection in database systems is to detect transactions that access data without permission. In this paper several database intrusion detection approaches are evaluated.
NASA Astrophysics Data System (ADS)
Schleicher, David G.; Bair, Allison Nicole
2016-10-01
As remnants from the epoch of early solar system formation, comet nuclei are less processed than any other class of objects currently available for detailed study. Compositional and physical studies can therefore be used to investigate primordial conditions across the region of comet formation and/or subsequent evolutionary effects. With these goals, a long-duration program of comet narrowband photometry was begun in 1976 and results for 85 comets were published by A'Hearn et al. (1995; Icarus 118, 223). Observations continued and we performed a new set of analyses of data obtained through mid-2011. Following a hiatus due to lack of funding and other competing priorities, we have now resumed our efforts at completing this project while also incorporating the most recent five years of data. The database now includes 191 comets obtained over 848 nights. A restricted subset of 116 objects were observed multiple times and are considered well-determined; these form the basis of our compositional studies. Using a variety of taxonomic techniques, we identified seven compositional classes for the data up to 2011 and anticipate no changes with the newest additions. Several classes are simply sub-groups of the original carbon-chain depleted class found by A'Hearn et al.; all evidence continues to indicate that carbon-chain depletion reflects the primordial composition at the time and location of cometary accretion and is not associated with evolution. Another new class contains five comets depleted in ammonia but not depleted in carbon-chain molecules; it is unclear if this group is primordial or not. In comparison, clear evidence for evolutionary effects is seen in the active fractions for comet nuclei -- decreasing with age -- and with the dust-to-gas ratio -- decreasing with age and perihelion distance, implying an evolution of the surface of the nucleus associated with the peak temperature attained and how often such temperatures have been reached. Updates of these and other results including data from the last five years will be presented. Support was provided by NASA Planetary Atmospheres grant NNX08AG19G.
School District Evaluation: Database Warehouse Support.
ERIC Educational Resources Information Center
Adcock, Eugene P.; Haseltine, Reginald
The Prince George's County (Maryland) school system has developed a database warehouse system as an evaluation data support tool for fulfilling the system's information demands. This paper describes the Research and Evaluation Assimilation Database (READ) warehouse support system and considers the requirements for data used in evaluation and how…
A Dynamic Approach to Make CDS/ISIS Databases Interoperable over the Internet Using the OAI Protocol
ERIC Educational Resources Information Center
Jayakanth, F.; Maly, K.; Zubair, M.; Aswath, L.
2006-01-01
Purpose: A dynamic approach to making legacy databases, like CDS/ISIS, interoperable with OAI-compliant digital libraries (DLs). Design/methodology/approach: There are many bibliographic databases that are being maintained using legacy database systems. CDS/ISIS is one such legacy database system. It was designed and developed specifically for…
Multi-Sensor Scene Synthesis and Analysis
1981-09-01
Table-of-contents excerpt: Quad Trees for Image Representation and Processing, 126; 2.6.2 Databases, 138; 2.6.2.1 Definitions and Basic Concepts, 138; 2.6.3 Use of Databases in Hierarchical Scene Analysis, 147; 2.6.4 Use of Relational Tables, ...; Multisensor Image Database Systems (MIDAS), 161; 2.7.2 Relational Database System for Pictures, 168; 2.7.3 Relational Pictorial Database
A Real-Time MODIS Vegetation Composite for Land Surface Models and Short-Term Forecasting
NASA Technical Reports Server (NTRS)
Case, Jonathan L.; LaFontaine, Frank J.; Kumar, Sujay V.; Jedlovec, Gary J.
2011-01-01
The NASA Short-term Prediction Research and Transition (SPoRT) Center is producing real-time, 1-km resolution Normalized Difference Vegetation Index (NDVI) gridded composites over a Continental U.S. domain. These composites are updated daily based on swath data from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor aboard the polar-orbiting NASA Aqua and Terra satellites, with a product time lag of about one day. A simple time-weighting algorithm is applied to the NDVI swath data that queries the previous 20 days of data to ensure a continuous grid of data populated at all pixels. The daily composites exhibited good continuity both spatially and temporally during June and July 2010. The composites also nicely depicted high greenness anomalies that resulted from significant rainfall over southwestern Texas, Mexico, and New Mexico during July due to early-season tropical cyclone activity. The SPoRT Center is in the process of computing greenness vegetation fraction (GVF) composites from the MODIS NDVI data at the same spatial and temporal resolution for use in the NASA Land Information System (LIS). The new daily GVF dataset would replace the monthly climatological GVF database (based on Advanced Very High Resolution Radiometer [AVHRR] observations from 1992-93) currently available to the Noah land surface model (LSM) in both LIS and the public version of the Weather Research and Forecasting (WRF) model. The much higher spatial resolution (1 km versus 0.15 degree) and daily updates based on real-time satellite observations have the capability to greatly improve the simulation of the surface energy budget in the Noah LSM within LIS and WRF. Once code is developed in LIS to incorporate the daily updated GVFs, the SPoRT Center will conduct simulation sensitivity experiments to quantify the impacts and improvements realized by the MODIS real-time GVF data. This presentation will describe the methodology used to develop the 1-km MODIS NDVI composites and show sample output from summer 2010, compare the MODIS GVF data to the AVHRR monthly climatology, and illustrate the sensitivity of the Noah LSM within LIS and/or the coupled LIS/WRF system to the new MODIS GVF dataset.
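The time-weighting step (querying up to the previous 20 days of swath NDVI so that every pixel is populated) can be illustrated with a per-pixel weighted average. The exponential down-weighting and the e-folding time below are assumptions for illustration only, not the SPoRT algorithm.

```python
import numpy as np

def composite_pixel(ndvi_stack, ages_days, e_fold_days=5.0):
    """Time-weighted NDVI for one pixel.

    ndvi_stack : NDVI samples from the last 20 days (NaN = cloudy/missing)
    ages_days  : age of each sample in days (0 = today)
    Newer valid observations dominate; older ones only fill gaps.
    """
    ndvi = np.asarray(ndvi_stack, dtype=float)
    age = np.asarray(ages_days, dtype=float)
    valid = ~np.isnan(ndvi)
    if not valid.any():
        return np.nan                      # pixel stays unfilled
    w = np.exp(-age[valid] / e_fold_days)  # assumed exponential down-weighting
    return float(np.sum(w * ndvi[valid]) / np.sum(w))

# e.g. three swath samples for one pixel, the newest one cloud-contaminated:
print(composite_pixel([np.nan, 0.62, 0.58], [0, 3, 12]))
```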
Technical Aspects of Interfacing MUMPS to an External SQL Relational Database Management System
Kuzmak, Peter M.; Walters, Richard F.; Penrod, Gail
1988-01-01
This paper describes an interface connecting InterSystems MUMPS (M/VX) to an external relational DBMS, the SYBASE Database Management System. The interface enables MUMPS to operate in a relational environment and gives the MUMPS language full access to a complete set of SQL commands. MUMPS generates SQL statements as ASCII text and sends them to the RDBMS. The RDBMS executes the statements and returns ASCII results to MUMPS. The interface suggests that the language features of MUMPS make it an attractive tool for use in the relational database environment. The approach described in this paper separates MUMPS from the relational database. Positioning the relational database outside of MUMPS promotes data sharing and permits a number of different options to be used for working with the data. Other languages like C, FORTRAN, and COBOL can access the RDBMS database. Advanced tools provided by the relational database vendor can also be used. SYBASE is an advanced high-performance transaction-oriented relational database management system for the VAX/VMS and UNIX operating systems. SYBASE is designed using a distributed open-systems architecture, and is relatively easy to interface with MUMPS.
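The heart of the interface is a text-only contract: the host language emits SQL as ASCII and parses ASCII rows coming back. The sketch below illustrates that exchange pattern in Python, with SQLite standing in for the external SYBASE server; it is not the MUMPS interface itself, only the shape of the request/response.

```python
import sqlite3

def execute_ascii(conn, sql_text):
    """Take a SQL statement as plain text and return the results as
    tab-delimited ASCII lines, mimicking the text-only exchange the
    host-language-to-RDBMS interface relies on."""
    cur = conn.execute(sql_text)
    if cur.description is None:            # DDL / DML: no result rows
        conn.commit()
        return "OK"
    header = "\t".join(col[0] for col in cur.description)
    rows = ["\t".join(str(v) for v in row) for row in cur.fetchall()]
    return "\n".join([header] + rows)

conn = sqlite3.connect(":memory:")         # stand-in for the external RDBMS
print(execute_ascii(conn, "CREATE TABLE patient (id INTEGER, name TEXT)"))
print(execute_ascii(conn, "INSERT INTO patient VALUES (1, 'DOE,JOHN')"))
print(execute_ascii(conn, "SELECT id, name FROM patient"))
```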
U.S. Geological Survey mineral databases; MRDS and MAS/MILS
McFaul, E.J.; Mason, G.T.; Ferguson, W.B.; Lipin, B.R.
2000-01-01
These two CD-ROMs contain the latest version of the Mineral Resources Data System (MRDS) database and the Minerals Availability System/Minerals Industry Location System (MAS/MILS) database for coverage of North America and the world outside North America. The records in the MRDS database each contain almost 200 data fields describing metallic and nonmetallic mineral resources, deposits, and commodities. The records in the MAS/MILS database each contain almost 100 data fields describing mines and mineral processing plants.
TWRS technical baseline database manager definition document
DOE Office of Scientific and Technical Information (OSTI.GOV)
Acree, C.D.
1997-08-13
This document serves as a guide for using the TWRS Technical Baseline Database Management Systems Engineering (SE) support tool in performing SE activities for the Tank Waste Remediation System (TWRS). This document will provide a consistent interpretation of the relationships between the TWRS Technical Baseline Database Management software and the present TWRS SE practices. The Database Manager currently utilized is the RDD-1000 System manufactured by the Ascent Logic Corporation. In other documents, the term RDD-1000 may be used interchangeably with TWRS Technical Baseline Database Manager.
Contribution to the thermodynamic description of the corium - The U-Zr-O system
NASA Astrophysics Data System (ADS)
Quaini, A.; Guéneau, C.; Gossé, S.; Dupin, N.; Sundman, B.; Brackx, E.; Domenger, R.; Kurata, M.; Hodaj, F.
2018-04-01
In order to understand the stratification process that may occur in the late phase of fuel degradation during a severe accident in a PWR, thermodynamic knowledge of the U-Zr-O system is crucial. The presence of a miscibility gap in the U-Zr-O liquid phase may lead to a stratified configuration, which would impact accident scenario management. The aim of this work was to obtain new experimental data in the U-Zr-O liquid miscibility gap. New tie-line data were provided at 2567 ± 25 K. The related thermodynamic models were reassessed using the present data and literature values. The reassessed model will be implemented in the TAF-ID international database. The composition and density of the phases potentially formed during stratification will be predicted by coupling the current thermodynamic model with thermal-hydraulics codes.
Symposium Review: Metal and Polymer Matrix Composites at MS&T 2013
NASA Astrophysics Data System (ADS)
Gupta, Nikhil; Paramsothy, Muralidharan
2014-06-01
This article reflects on the presentations made during the Metal and Polymer Matrix Composites symposium at Materials Science and Technology 2013 (MS&T'13) held in Montreal (Quebec, Canada) from October 27 to 31. The symposium had three sessions on metal matrix composites and one session on polymer matrix composites containing a total of 23 presentations. While the abstracts and full-text papers are available through databases, the discussion that took place during the symposium is often not captured in writing and gets immediately lost. We have tried to recap some of the discussion in this article and hope that it will supplement the information present in the proceedings. The strong themes in the symposium were porous composites, aluminum matrix composites, and nanocomposites. The development of processing methods was also of interest to the speakers and attendees.
The relational clinical database: a possible solution to the star wars in registry systems.
Michels, D K; Zamieroski, M
1990-12-01
In summary, having data from other service areas available in a relational clinical database could resolve many of the problems existing in today's registry systems. Uniting sophisticated information systems into a centralized database system could definitely be a corporate asset in managing the bottom line.
Teaching Database Management System Use in a Library School Curriculum.
ERIC Educational Resources Information Center
Cooper, Michael D.
1985-01-01
Description of database management systems course being taught to students at School of Library and Information Studies, University of California, Berkeley, notes course structure, assignments, and course evaluation. Approaches to teaching concepts of three types of database systems are discussed and systems used by students in the course are…
Dunham, Annie L; Ramirez, Luis D; Vang, Choua A; Linebarger, Jared H; Landercasper, Jeffrey
2018-07-01
Patients want information when searching for a destination of care for breast-conserving surgery (BCS). To inform patients wanting a lumpectomy, we aimed to develop a pilot project that communicated composite quality measure (QM) results using a '4-star' rating system. Two patient-centered QMs were included in the model: reoperation rate (ROR) and cosmetic outcome (COSM). A prospective database was reviewed for stage 0-3 patients undergoing initial lumpectomy by three surgeons from 2010 to 2015. Self-reported COSM was assessed by survey. Multivariate analyses were used to test for interactions between surgeon and other variables known to influence RORs and COSMs. Models of surgeon profiling were developed that summed the ROR and COSM performance scores, then reported results using a Centers for Medicare and Medicaid Services (CMS) star-type system. Functionality for a patient to 'weight' the importance of the ratio of ROR:COSM before profiling was introduced. The unadjusted ROR for stage 1-3 patients for the three surgeons was 9.5, 13.0, and 16.3%, respectively (p = 0.179) [overall rate 10.4% (38/366)]. After risk adjustment, differences between surgeons were observed for RORs, but not COSMs. Overall, patients reported excellent, good, fair, and poor COSMs of 55, 30, 11 and 4%, respectively. Composite star scores reflected differences in performance by surgeon, which could increase, or even disappear, depending on the patient's weighting of the ROR:COSM ratio. Composite measures of performance can be developed that allow patients to input their weighted preferences and values into surgeon profiling before they consider a destination of care for BCS.
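A composite star score of the kind described, with ROR and COSM sub-scores combined under a patient-chosen weight and then binned into four stars, can be sketched as follows. The 0-1 scoring scale, the linear weighting, and the equal-width star cut-offs are illustrative assumptions, not the published model.

```python
def composite_stars(ror_score, cosm_score, ror_weight=0.5, n_stars=4):
    """Combine two 0-1 performance scores (higher = better) into a
    1..n_stars rating after the patient weights ROR vs. COSM.

    ror_score, cosm_score : risk-adjusted performance on each measure
    ror_weight            : patient-chosen importance of ROR (0..1);
                            COSM gets the remaining weight.
    """
    if not 0.0 <= ror_weight <= 1.0:
        raise ValueError("ror_weight must be between 0 and 1")
    combined = ror_weight * ror_score + (1.0 - ror_weight) * cosm_score
    # bin the combined score into equal-width star bands
    return min(n_stars, int(combined * n_stars) + 1)

# A surgeon strong on cosmesis but weaker on reoperations can gain or lose
# a star purely through the patient's chosen ROR:COSM weighting:
print(composite_stars(0.55, 0.90, ror_weight=0.2))  # cosmesis-weighted
print(composite_stars(0.55, 0.90, ror_weight=0.8))  # reoperation-weighted
```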
NASA Astrophysics Data System (ADS)
van Acken, D.; Luguet, A.; Pearson, D. G.; Nowell, G. M.; Fonseca, R. O. C.; Nagel, T. J.; Schulz, T.
2017-04-01
Heterogeneity in highly siderophile element (HSE) concentrations and 187Os/188Os isotopic compositions has been observed on various scales in the Earth's mantle. Interaction of residual mantle peridotite with infiltrating melts has been suggested to overprint primary bulk-rock HSE signatures originating from partial melting, contributing to the heterogeneity seen in the global peridotite database. Here we present a detailed study of harzburgitic xenolith 474527 from the Kangerlussuaq suite, West Greenland, coupling Re-Os isotope geochemistry with petrography of both base metal sulfides (BMS) and silicates to assess the impact of overprinting induced by melt-rock reaction on the Re-Os isotope system. Garnet harzburgite sample 474527 shows considerable heterogeneity in the composition of its major phases, most notably olivine and Cr-rich garnet, suggesting formation through multiple stages of partial melting and subsequent metasomatic events. The major BMS phases show a fairly homogeneous pentlandite-rich composition typical of BMS formed via metasomatic reaction, whereas the 187Os/188Os compositions determined for 17 of these BMS are extremely heterogeneous, ranging between 0.1037 and 0.1981. Analyses by LA-ICP-MS reveal at least two populations of BMS grains characterized by contrasting HSE patterns. One type of pattern is strongly enriched in the more compatible HSE Os, Ir, and Ru over the typically incompatible Pt, Pd, and Re, while the other type shows moderate enrichment of the more incompatible HSE and has an overall lower compatible-HSE/incompatible-HSE composition. The small-scale heterogeneity observed in these BMS highlights the need for caution when using the Re-Os system to date mantle events, as even depleted harzburgite samples such as 474527 are likely to have experienced a complex history of metasomatic overprinting, with uncertain effects on the HSE.
Space Station Freedom environmental database system (FEDS) for MSFC testing
NASA Technical Reports Server (NTRS)
Story, Gail S.; Williams, Wendy; Chiu, Charles
1991-01-01
The Water Recovery Test (WRT) at Marshall Space Flight Center (MSFC) is the first demonstration of integrated water recovery systems for potable and hygiene water reuse as envisioned for Space Station Freedom (SSF). In order to satisfy the safety and health requirements placed on the SSF program and facilitate test data assessment, an extensive laboratory analysis database was established to provide a central archive and data retrieval function. The database is required to store analysis results for physical, chemical, and microbial parameters measured from water, air and surface samples collected at various locations throughout the test facility. The Oracle Relational Database Management System (RDBMS) was utilized to implement a secured on-line information system with the ECLSS WRT program as the foundation for this system. The database is supported on a VAX/VMS 8810 series mainframe and is accessible from the Marshall Information Network System (MINS). This paper summarizes the database requirements, system design, interfaces, and future enhancements.
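The central archive described here, holding analysis results for physical, chemical, and microbial parameters from water, air, and surface samples taken throughout the facility, reduces to a sample table plus a results table keyed by sample and parameter. A minimal relational sketch follows, with SQLite standing in for the Oracle RDBMS; the table names, column names, and example rows are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")   # stand-in for the Oracle archive
conn.executescript("""
CREATE TABLE sample (
    sample_id   INTEGER PRIMARY KEY,
    matrix      TEXT NOT NULL,          -- 'water', 'air', or 'surface'
    location    TEXT NOT NULL,          -- sampling point in the test facility
    collected   TEXT NOT NULL           -- ISO date/time of collection
);
CREATE TABLE analysis_result (
    sample_id   INTEGER REFERENCES sample(sample_id),
    parameter   TEXT NOT NULL,          -- e.g. 'pH', 'TOC', 'total coliform'
    value       REAL,
    units       TEXT,
    method      TEXT                    -- laboratory method reference
);
""")
conn.execute("INSERT INTO sample VALUES (1, 'water', 'hygiene loop outlet', '1991-03-14T09:30')")
conn.execute("INSERT INTO analysis_result VALUES (1, 'TOC', 0.42, 'mg/L', 'assumed method')")
for row in conn.execute("""SELECT s.location, r.parameter, r.value, r.units
                           FROM sample s JOIN analysis_result r USING (sample_id)"""):
    print(row)
```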
Changes in the biomass and species composition of macroalgae in a eutrophic estuary
NASA Astrophysics Data System (ADS)
Lavery, Paul S.; Lukatelich, R. J.; McComb, A. J.
1991-07-01
More than 20 years of data are presented on the macroalgal biomass, species composition and water quality of Peel-Harvey estuary in south-western Australia. The occurrence of macroalgal blooms was a sudden event in the late 1960s, and appears to have resulted from nutrient availability surpassing a threshold of some kind. Cladophora dominated the system until 1979 and appears to have had a competitive advantage in deep-water areas because of its morphology. A catastrophic event compounded by a series of unfavourable conditions resulted in the loss of Cladophora from the deep areas and its estuary-wide replacement by Chaetomorpha, which was more competitive in the shallows. Since 1979, changes in water quality have been reflected in changes in biomass and species composition in the system. Average annual biomass is linearly related to average light attenuation over the summer growth period. Periods of high nutrient concentrations favour Ulva and Enteromorpha, while Chaetomorpha resumes dominance during periods of lower mean nutrient concentrations. Nutrient concentrations appear to be more influential on an inter-annual than seasonal scale, except in the case of Ulva which, on the basis of tissue N and P concentrations, is seasonally nitrogen-limited. Light attenuation appears to have seasonal and long-term effects. The data support the hypothesis of other workers that inter-annual differences in hydrographic events and phytoplankton dynamics influence macroalgal dynamics. The concept is examined further in light of this extensive database.
US and foreign alloy cross-reference database
NASA Technical Reports Server (NTRS)
Springer, John M.; Morgan, Steven H.
1991-01-01
Marshall Space Flight Center and other NASA installations have a continuing requirement for materials data from other countries involved with the development of joint international Spacelab experiments and other hardware. This need includes collecting data for common alloys to ascertain composition, physical properties, specifications, and designations. This data is scattered throughout a large number of specification statements, standards, handbooks, and other technical literature which make a manual search both tedious and often limited in extent. In recognition of this problem, a computerized database of information on alloys was developed along with the software necessary to provide the desired functions to access this data. The intention was to produce an initial database covering aluminum alloys, along with the program to provide a user-interface to the data, and then later to extend and refine the database to include other nonferrous and ferrous alloys.
Lansdale, Mark W; Oliff, Lynda; Baguley, Thom S
2005-06-01
The authors investigated whether memory for object locations in pictures could be exploited to address known difficulties of designing query languages for picture databases. M. W. Lansdale's (1998) model of location memory was adapted to 4 experiments observing memory for everyday pictures. These experiments showed that location memory is quantified by 2 parameters: a probability that memory is available and a measure of its precision. Availability is determined by controlled attentional processes, whereas precision is mostly governed by picture composition beyond the viewer's control. Additionally, participants' confidence judgments were good predictors of availability but were insensitive to precision. This research suggests that databases using location memory are feasible. The implications of these findings for database design and for further research and development are discussed. (c) 2005 APA
De-MA: a web Database for electron Microprobe Analyses to assist EMP lab manager and users
NASA Astrophysics Data System (ADS)
Allaz, J. M.
2012-12-01
Lab managers and users of electron microprobe (EMP) facilities require comprehensive, yet flexible documentation structures, as well as an efficient scheduling mechanism. A single on-line database system for managing reservations and for providing information on standards, quantitative and qualitative setups (element mapping, etc.), and X-ray data has been developed for this purpose. This system is particularly useful in multi-user facilities where experience ranges from beginners to the highly experienced. New users and occasional facility users will find these tools extremely useful in developing and maintaining high quality, reproducible, and efficient analyses. This user-friendly database is available through the web, and uses MySQL as the database and PHP/HTML as the scripting language (dynamic website). The database includes several tables for standards information, X-ray lines, X-ray element mapping, PHA, element setups, and the agenda. It is configurable for up to five different EMPs in a single lab, each of them having up to five spectrometers and as many diffraction crystals as required. The installation should be done on a web server supporting PHP/MySQL, although installation on a personal computer is possible using third-party freeware to create a local Apache server and to enable PHP/MySQL. Since it is web-based, any user outside the EMP lab can access this database anytime, through any web browser and on any operating system. Access can be secured using general password protection (e.g. htaccess). The web interface consists of 6 main menus. (1) "Standards" lists the standards defined in the database and displays detailed information on each (e.g. material type, name, reference, comments, and analyses). Images such as EDS spectra or BSE can be associated with a standard. (2) "Analyses" lists typical setups to use for quantitative analyses, and allows calculation of mineral composition based on a mineral formula, or calculation of a mineral formula based on a fixed amount of oxygen or of cations (using an analysis in element or oxide weight-%); the latter includes re-calculation of H2O/CO2 based on stoichiometry, and oxygen correction for F and Cl. Another option offers a list of available standards and possible peak or background interferences for a series of elements. (3) "X-ray maps" lists the different setups recommended for element mapping using WDS, and a map calculator to facilitate map setups and to estimate the total mapping time. (4) "X-ray data" lists all x-ray lines for a specific element (K, L, M, absorption edges, and satellite peaks) in terms of energy, wavelength and peak position. A check for possible interferences on peak or background is also possible. Theoretical x-ray peak positions for each crystal are calculated based on the 2d spacing of each crystal and the wavelength of each line. (5) "Agenda" displays the reservation dates for each month and for each EMP lab defined. It also offers a reservation request option, this request being sent by email to the EMP manager for approval. (6) Finally, "Admin" is password restricted, and contains all necessary options to manage the database through user-friendly forms. Installation of this database is straightforward, and knowledge of HTML, PHP, or MySQL is unnecessary to install, configure, manage, or use it. A working database is accessible at http://cub.geoloweb.ch.
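The theoretical peak positions mentioned in menu (4) follow from Bragg's law, n*lambda = 2d*sin(theta). The sketch below computes the diffraction angle for a line on a given crystal and flags lines the crystal cannot reach; the 2d spacings are nominal values, and the conversion from sin(theta) to an instrument L-value, which depends on spectrometer geometry, is deliberately left out.

```python
import math

# Nominal 2d spacings in angstroms for a few common EMP crystals
CRYSTAL_2D = {"TAP": 25.757, "PET": 8.742, "LIF": 4.0267}

def bragg_angle(wavelength_A, crystal, order=1):
    """Return the Bragg angle theta (degrees) for a given x-ray wavelength,
    diffracting crystal, and reflection order, or None if the line is out
    of range for that crystal (sin(theta) would exceed 1)."""
    two_d = CRYSTAL_2D[crystal]
    s = order * wavelength_A / two_d          # n*lambda = 2d*sin(theta)
    if s >= 1.0:
        return None
    return math.degrees(math.asin(s))

# Si K-alpha (~7.126 A) can be measured on TAP or PET but is unreachable on LIF:
for xtal in CRYSTAL_2D:
    print(xtal, bragg_angle(7.126, xtal))
```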
Microcomputer-Based Genetics Office Database System
Cutts, James H.; Mitchell, Joyce A.
1985-01-01
A database management system (Genetics Office Automation System, GOAS) has been developed for the Medical Genetics Unit of the University of Missouri. The system, which records patients' visits to the Unit's genetic and prenatal clinics, has been implemented on an IBM PC/XT microcomputer. A description of the system, the reasons for implementation, its databases, and uses are presented.
Archetype relational mapping - a practical openEHR persistence solution.
Wang, Li; Min, Lingtong; Wang, Rui; Lu, Xudong; Duan, Huilong
2015-11-05
One of the primary obstacles to the widespread adoption of openEHR methodology is the lack of practical persistence solutions for future-proof electronic health record (EHR) systems as described by the openEHR specifications. This paper presents an archetype relational mapping (ARM) persistence solution for the archetype-based EHR systems to support healthcare delivery in the clinical environment. First, the data requirements of the EHR systems are analysed and organized into archetype-friendly concepts. The Clinical Knowledge Manager (CKM) is queried for matching archetypes; when necessary, new archetypes are developed to reflect concepts that are not encompassed by existing archetypes. Next, a template is designed for each archetype to apply constraints related to the local EHR context. Finally, a set of rules is designed to map the archetypes to data tables and provide data persistence based on the relational database. A comparison study was conducted to investigate the differences among the conventional database of an EHR system from a tertiary Class A hospital in China, the generated ARM database, and the Node + Path database. Five data-retrieving tests were designed based on clinical workflow to retrieve exams and laboratory tests. Additionally, two patient-searching tests were designed to identify patients who satisfy certain criteria. The ARM database achieved better performance than the conventional database in three of the five data-retrieving tests, but was less efficient in the remaining two tests. The time difference of query executions conducted by the ARM database and the conventional database is less than 130 %. The ARM database was approximately 6-50 times more efficient than the conventional database in the patient-searching tests, while the Node + Path database requires far more time than the other two databases to execute both the data-retrieving and the patient-searching tests. The ARM approach is capable of generating relational databases using archetypes and templates for archetype-based EHR systems, thus successfully adapting to changes in data requirements. ARM performance is similar to that of conventionally-designed EHR systems, and can be applied in a practical clinical environment. System components such as ARM can greatly facilitate the adoption of openEHR architecture within EHR systems.
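The ARM rules map archetype nodes onto relational tables and columns. As a minimal illustration of that idea, and not the published rule set, the sketch below turns a flat list of an archetype's leaf nodes into a CREATE TABLE statement; the node paths, type mapping, and column-naming scheme are simplified assumptions.

```python
# Simplified leaf-node description of an archetype (illustrative only):
# each entry is (node path, reference-model data type).
BLOOD_PRESSURE_NODES = [
    ("data/events/systolic/magnitude",  "DV_QUANTITY"),
    ("data/events/diastolic/magnitude", "DV_QUANTITY"),
    ("data/events/position",            "DV_CODED_TEXT"),
    ("data/events/time",                "DV_DATE_TIME"),
]

TYPE_MAP = {          # assumed RM-type to SQL-type mapping
    "DV_QUANTITY":   "REAL",
    "DV_CODED_TEXT": "TEXT",
    "DV_DATE_TIME":  "TEXT",
}

def archetype_to_ddl(table_name, nodes):
    """Map each archetype leaf node to one column of a relational table,
    which is the essence of an archetype relational mapping."""
    cols = ["ehr_id TEXT NOT NULL", "composition_id TEXT NOT NULL"]
    for path, rm_type in nodes:
        parts = path.split("/")
        col = parts[-2] + "_" + parts[-1]      # crude column-name rule
        cols.append(f"{col} {TYPE_MAP[rm_type]}")
    return f"CREATE TABLE {table_name} (\n    " + ",\n    ".join(cols) + "\n);"

print(archetype_to_ddl("obs_blood_pressure", BLOOD_PRESSURE_NODES))
```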
Wilshire, Howard G.; Bedford, David R.; Coleman, Teresa
2002-01-01
3. Plottable map representations of the database at 1:24,000 scale in PostScript and Adobe PDF formats. The plottable files consist of a color geologic map derived from the spatial database, composited with a topographic base map in the form of the USGS Digital Raster Graphic for the map area. Color symbology from each of these datasets is maintained, which can cause plot file sizes to be large.
Adopting a corporate perspective on databases. Improving support for research and decision making.
Meistrell, M; Schlehuber, C
1996-03-01
The Veterans Health Administration (VHA) is at the forefront of designing and managing health care information systems that accommodate the needs of clinicians, researchers, and administrators at all levels. Rather than using a single-site, centralized corporate database, VHA has constructed several large databases with different configurations to meet the needs of users with different perspectives. The largest VHA database is the Decentralized Hospital Computer Program (DHCP), a multisite, distributed data system that uses decoupled hospital databases. The centralization of DHCP policy has promoted data coherence, whereas the decentralization of DHCP management has permitted system development to be done with maximum relevance to the users' local practices. A more recently developed VHA data system, the Event Driven Reporting system (EDR), uses multiple, highly coupled databases to provide workload data at facility, regional, and national levels. The EDR automatically posts a subset of DHCP data to local and national VHA management. The development of the EDR illustrates how adoption of a corporate perspective can offer significant database improvements at reasonable cost and with modest impact on the legacy system.
Sakurai, Tetsuya; Kondou, Youichi; Akiyama, Kenji; Kurotani, Atsushi; Higuchi, Mieko; Ichikawa, Takanari; Kuroda, Hirofumi; Kusano, Miyako; Mori, Masaki; Saitou, Tsutomu; Sakakibara, Hitoshi; Sugano, Shoji; Suzuki, Makoto; Takahashi, Hideki; Takahashi, Shinya; Takatsuji, Hiroshi; Yokotani, Naoki; Yoshizumi, Takeshi; Saito, Kazuki; Shinozaki, Kazuo; Oda, Kenji; Hirochika, Hirohiko; Matsui, Minami
2011-02-01
Identification of gene function is important not only for basic research but also for applied science, especially with regard to improvements in crop production. For rapid and efficient elucidation of useful traits, we developed a system named FOX hunting (Full-length cDNA Over-eXpressor gene hunting) using full-length cDNAs (fl-cDNAs). A heterologous expression approach provides a solution for the high-throughput characterization of gene functions in agricultural plant species. Since fl-cDNAs contain all the information of functional mRNAs and proteins, we introduced rice fl-cDNAs into Arabidopsis plants for systematic gain-of-function mutation. We generated >30,000 independent Arabidopsis transgenic lines expressing rice fl-cDNAs (rice FOX Arabidopsis mutant lines). These rice FOX Arabidopsis lines were screened systematically for various criteria such as morphology, photosynthesis, UV resistance, element composition, plant hormone profile, metabolite profile/fingerprinting, bacterial resistance, and heat and salt tolerance. The information obtained from these screenings was compiled into a database named 'RiceFOX'. This database contains around 18,000 records of rice FOX Arabidopsis lines and allows users to search against all the observed results, ranging from morphological to invisible traits. The number of searchable items is approximately 100; moreover, the rice FOX Arabidopsis lines can be searched by rice and Arabidopsis gene/protein identifiers, sequence similarity to the introduced rice fl-cDNA and traits. The RiceFOX database is available at http://ricefox.psc.riken.jp/.
Nuclear Forensics Analysis with Missing and Uncertain Data
Langan, Roisin T.; Archibald, Richard K.; Lamberti, Vincent
2015-10-05
We have applied a new imputation-based method for analyzing incomplete data, called Monte Carlo Bayesian Database Generation (MCBDG), to the Spent Fuel Isotopic Composition (SFCOMPO) database. About 60% of the entries are absent for SFCOMPO. The method estimates missing values of a property from a probability distribution created from the existing data for the property, and then generates multiple instances of the completed database for training a machine learning algorithm. Uncertainty in the data is represented by an empirical or an assumed error distribution. The method makes few assumptions about the underlying data, and compares favorably against results obtained by replacing missing information with constant values.
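The imputation idea, estimating each missing value from a distribution built on the existing entries for that property and generating several completed copies of the database for training, can be sketched generically as below. This is a plain empirical-resampling version for illustration, not the MCBDG code, and the toy table is invented.

```python
import random

def complete_database(records, n_copies=5, seed=0):
    """Fill missing entries (None) of each column by drawing from the
    empirical distribution of that column's observed values, producing
    several plausible completed instances of the whole table."""
    rng = random.Random(seed)
    columns = records[0].keys()
    observed = {c: [r[c] for r in records if r[c] is not None] for c in columns}
    copies = []
    for _ in range(n_copies):
        copies.append([
            {c: (r[c] if r[c] is not None else rng.choice(observed[c]))
             for c in columns}
            for r in records
        ])
    return copies

# Toy table with missing entries, loosely echoing SFCOMPO's sparsity:
table = [{"burnup": 30.1, "u235": 0.8}, {"burnup": None, "u235": 1.1},
         {"burnup": 45.0, "u235": None}, {"burnup": None, "u235": None}]
for copy in complete_database(table, n_copies=2):
    print(copy)
```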
Idea of Identification of Copper Ore with the Use of Process Analyser Technology Sensors
NASA Astrophysics Data System (ADS)
Jurdziak, Leszek; Kaszuba, Damian; Kawalec, Witold; Król, Robert
2016-10-01
The Polish copper ore resources exploited by the KGHM S.A. underground mines are considered among the most complex in the world and, consequently, among the most difficult to process. The ore consists of three lithological forms: dolomites, shales and sandstones, but in varying proportions, which has a significant impact on the effectiveness of the grinding and flotation processes. The lithological composition of the ore is generally recognised in situ, but after being mined it is blended on its long way from the various mining fields to the processing plant by a complex transportation system consisting of belt conveyors with numerous switching points, ore bunkers and shafts. Identification of the lithological composition of the ore being supplied to the processing plant should improve the adjustment of the ore processing machinery, aiming to decrease the specific processing (mainly grinding) energy consumption as well as to increase metal recovery. The novel idea of Process Analyser Technology (PAT) sensors - information-carrying pellets dropped into the transported or processed bulk material, which can be read directly when needed - is investigated for various applications within the DISIRE project (a part of the SPIRE initiative, acting under the Horizon 2020 framework programme) and is adopted here for annotating the transported copper ore for the needs of ore processing plant control. The identification of the lithological composition of ore blended on its way to the processing plant can be achieved by an information system consisting of pellets that keep the information about the original location of the portions of conveyed ore, the digital geological database keeping the data on in-situ lithology, and the simulation models of the transportation system, necessary to evaluate the composition of the blended ore. The assumptions of the proposed solution and the plan of necessary in-situ tests (with special respect to the harsh environment of
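Evaluating the lithological composition of ore blended along the conveyor network reduces, in the simplest case, to a mass-weighted average over the pellet-tagged ore portions, each carrying the in-situ lithology fractions recorded in the geological database for its mining field. The sketch below shows that calculation; the portion masses and fractions are hypothetical.

```python
def blended_lithology(portions):
    """Mass-weighted lithological composition of blended ore.

    portions: list of (mass_t, {'dolomite': f, 'shale': f, 'sandstone': f}),
    where each fraction dict comes from the geological database entry for
    the mining field a pellet-tagged portion originated from.
    """
    total_mass = sum(mass for mass, _ in portions)
    blend = {}
    for mass, fractions in portions:
        for lith, f in fractions.items():
            blend[lith] = blend.get(lith, 0.0) + mass * f
    return {lith: m / total_mass for lith, m in blend.items()}

# Two tagged ore portions arriving at the plant from different mining fields:
print(blended_lithology([
    (400, {"dolomite": 0.30, "shale": 0.10, "sandstone": 0.60}),
    (600, {"dolomite": 0.55, "shale": 0.25, "sandstone": 0.20}),
]))
```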
Heterogeneous distributed query processing: The DAVID system
NASA Technical Reports Server (NTRS)
Jacobs, Barry E.
1985-01-01
The objective of the Distributed Access View Integrated Database (DAVID) project is the development of an easy to use computer system with which NASA scientists, engineers and administrators can uniformly access distributed heterogeneous databases. Basically, DAVID will be a database management system that sits alongside already existing database and file management systems. Its function is to enable users to access the data in other languages and file systems without having to learn the data manipulation languages. Given here is an outline of a talk on the DAVID project and several charts.
A digital geologic map database for the state of Oklahoma
Heran, William D.; Green, Gregory N.; Stoeser, Douglas B.
2003-01-01
This dataset is a composite of part or all of the 12 1:250,000 scale quadrangles that make up Oklahoma. The result looks like a geologic map of the State of Oklahoma. But it is only an Oklahoma shaped map clipped from the 1:250,000 geologic maps. This is not a new geologic map. No new mapping took place. The geologic information from each quadrangle is available within the composite dataset.
ERIC Educational Resources Information Center
Dumay, Xavier; Dupriez, Vincent
2007-01-01
In the scientific literature, the debate around the school and class effect is polarized, because the variance between schools and classes is explained either by school and class process or by group composition. The first aim of this article is to shed light on this debate and move beyond it by reviewing qualitative and quantitative studies…
A searchable database for the genome of Phomopsis longicolla (isolate MSPL 10-6).
Darwish, Omar; Li, Shuxian; May, Zane; Matthews, Benjamin; Alkharouf, Nadim W
2016-01-01
Phomopsis longicolla (syn. Diaporthe longicolla) is an important seed-borne fungal pathogen that primarily causes Phomopsis seed decay (PSD) in most soybean production areas worldwide. This disease severely decreases soybean seed quality by reducing seed viability and oil quality, altering seed composition, and increasing frequencies of moldy and/or split beans. To facilitate investigation of the genetic base of fungal virulence factors and understand the mechanism of disease development, we designed and developed a database for P. longicolla isolate MSPL 10-6 that contains information about the genome assemblies (contigs), gene models, gene descriptions and GO functional ontologies. A web-based front end to the database was built using ASP.NET, which allows researchers to search and mine the genome of this important fungus. This database represents the first reported genome database for a seed-borne fungal pathogen in the Diaporthe-Phomopsis complex. The database will also be a valuable resource for research and agricultural communities. It will aid in the development of new control strategies for this pathogen. http://bioinformatics.towson.edu/Phomopsis_longicolla/HomePage.aspx.
Hewitt, Robin; Gobbi, Alberto; Lee, Man-Ling
2005-01-01
Relational databases are the current standard for storing and retrieving data in the pharmaceutical and biotech industries. However, retrieving data from a relational database requires specialized knowledge of the database schema and of the SQL query language. At Anadys, we have developed an easy-to-use system for searching and reporting data in a relational database to support our drug discovery project teams. This system is fast and flexible and allows users to access all data without having to write SQL queries. This paper presents the hierarchical, graph-based metadata representation and SQL-construction methods that, together, are the basis of this system's capabilities.
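The graph-based metadata representation drives SQL construction by recording join conditions on its edges, so a query only needs a path between the tables of interest. A minimal sketch of that idea follows; the table names and join keys are invented and do not reflect the Anadys schema.

```python
from collections import deque

# Metadata graph: edge (table_a, table_b) -> join condition (hypothetical schema)
JOINS = {
    ("compound", "batch"):  "compound.compound_id = batch.compound_id",
    ("batch", "assay_run"): "batch.batch_id = assay_run.batch_id",
}
GRAPH = {}
for (a, b), cond in JOINS.items():
    GRAPH.setdefault(a, []).append(b)
    GRAPH.setdefault(b, []).append(a)

def join_path(start, goal):
    """Breadth-first search over the metadata graph for a join path."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in GRAPH.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def build_sql(columns, start, goal):
    """Assemble a SELECT over every table on the discovered join path."""
    path = join_path(start, goal)
    conds = [JOINS.get((a, b)) or JOINS[(b, a)] for a, b in zip(path, path[1:])]
    return (f"SELECT {', '.join(columns)} FROM {', '.join(path)} "
            f"WHERE {' AND '.join(conds)}")

print(build_sql(["compound.name", "assay_run.ic50"], "compound", "assay_run"))
```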
Performance related issues in distributed database systems
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi
1991-01-01
The key elements of research performed during the year-long effort of this project are: Investigate the effects of heterogeneity in distributed real-time systems; Study the requirements to TRAC towards building a heterogeneous database system; Study the effects of performance modeling on distributed database performance; and Experiment with an ORACLE-based heterogeneous system.
75 FR 72873 - Privacy Act Of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-26
...) is amending two existing systems of records 121VA19, ``National Patient Databases--VA'', and 136VA19E... being amended for additional databases. DATES: Comments on the amendment of these systems of records... system identified as 121VA19, ``National Patient Databases--VA,'' as set forth in the Federal Register...
Database on Performance of Neutron Irradiated FeCrAl Alloys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Field, Kevin G.; Briggs, Samuel A.; Littrell, Ken
The present report summarizes and discusses the database on radiation tolerance for Generation I, Generation II, and commercial FeCrAl alloys. This database has been built upon mechanical testing and microstructural characterization of selected alloys irradiated within the High Flux Isotope Reactor (HFIR) at Oak Ridge National Laboratory (ORNL) up to doses of 13.8 dpa at temperatures ranging from 200°C to 550°C. The structure and performance of these irradiated alloys were characterized using advanced microstructural characterization techniques and mechanical testing. The primary objective of developing this database is to enhance the rapid development of a mechanistic understanding of the radiation tolerance of FeCrAl alloys, thereby enabling informed decisions on the optimization of composition and microstructure of FeCrAl alloys for application as an accident tolerant fuel (ATF) cladding. This report is structured to provide a brief summary of critical results related to the database on radiation tolerance of FeCrAl alloys.
Asynchronous data change notification between database server and accelerator controls system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, W.; Morris, J.; Nemesure, S.
2011-10-10
Database data change notification (DCN) is a commonly used feature. Not all database management systems (DBMS) provide an explicit DCN mechanism. Even for those DBMS's which support DCN (such as Oracle and MS SQL server), some server side and/or client side programming may be required to make the DCN system work. This makes the setup of DCN between database server and interested clients tedious and time consuming. In accelerator control systems, there are many well established software client/server architectures (such as CDEV, EPICS, and ADO) that can be used to implement data reflection servers that transfer data asynchronously to any client using the standard SET/GET API. This paper describes a method for using such a data reflection server to set up asynchronous DCN (ADCN) between a DBMS and clients. This method works well for all DBMS systems which provide database trigger functionality. Asynchronous data change notification (ADCN) between database server and clients can be realized by combining the use of a database trigger mechanism, which is supported by major DBMS systems, with server processes that use client/server software architectures that are familiar in the accelerator controls community (such as EPICS, CDEV or ADO). This approach makes the ADCN system easy to set up and integrate into an accelerator controls system. Several ADCN systems have been set up and used in the RHIC-AGS controls system.
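The trigger-based pattern described above, in which a database trigger records each change and a server process picks the change up and pushes it to controls clients, can be sketched with SQLite standing in for the DBMS and a simple polling function standing in for the CDEV/EPICS/ADO reflection server. The table names and the polling approach are illustrative assumptions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE setpoint (name TEXT PRIMARY KEY, value REAL);
CREATE TABLE change_log (seq INTEGER PRIMARY KEY AUTOINCREMENT,
                         name TEXT, value REAL);
-- The trigger is the DBMS-side half of the notification chain:
CREATE TRIGGER setpoint_changed AFTER UPDATE ON setpoint
BEGIN
    INSERT INTO change_log (name, value) VALUES (NEW.name, NEW.value);
END;
""")
conn.execute("INSERT INTO setpoint VALUES ('magnet_current', 100.0)")

last_seq = 0
def poll_and_publish():
    """Reflection-server side: pick up new rows from change_log and hand
    them to clients (printed here) via the usual SET/GET-style API."""
    global last_seq
    rows = conn.execute("SELECT seq, name, value FROM change_log WHERE seq > ?",
                        (last_seq,)).fetchall()
    for seq, name, value in rows:
        last_seq = seq
        print(f"notify clients: {name} -> {value}")

conn.execute("UPDATE setpoint SET value = 102.5 WHERE name = 'magnet_current'")
poll_and_publish()
```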
Labay, Ben; Cohen, Adam E.; Sissel, Blake; Hendrickson, Dean A.; Martin, F. Douglas; Sarkar, Sahotra
2011-01-01
Accurate establishment of baseline conditions is critical to successful management and habitat restoration. We demonstrate the ability to robustly estimate historical fish community composition and assess the current status of the urbanized Barton Creek watershed in central Texas, U.S.A. Fish species were surveyed in 2008 and the resulting data compared to three sources of fish occurrence information: (i) historical records from a museum specimen database and literature searches; (ii) a nearly identical survey conducted 15 years earlier; and (iii) a modeled historical community constructed with species distribution models (SDMs). This holistic approach, and especially the application of SDMs, allowed us to discover that the fish community in Barton Creek was more diverse than the historical data and survey methods alone indicated. Sixteen native species with high modeled probability of occurrence within the watershed were not found in the 2008 survey, seven of these were not found in either survey or in any of the historical collection records. Our approach allowed us to more rigorously establish the true baseline for the pre-development fish fauna and then to more accurately assess trends and develop hypotheses regarding factors driving current fish community composition to better inform management decisions and future restoration efforts. Smaller, urbanized freshwater systems, like Barton Creek, typically have a relatively poor historical biodiversity inventory coupled with long histories of alteration, and thus there is a propensity for land managers and researchers to apply inaccurate baseline standards. Our methods provide a way around that limitation by using SDMs derived from larger and richer biodiversity databases of a broader geographic scope. Broadly applied, we propose that this technique has potential to overcome limitations of popular bioassessment metrics (e.g., IBI) to become a versatile and robust management tool for determining status of freshwater biotic communities. PMID:21966438
Mangericao, Tatiana C; Peng, Zhanhao; Zhang, Xuegong
2016-01-11
CRISPR has become a hot topic as a powerful technique for genome editing in humans and other higher organisms. The original CRISPR-Cas (Clustered Regularly Interspaced Short Palindromic Repeats coupled with CRISPR-associated proteins) is an important adaptive defence system for prokaryotes that provides resistance against invading elements such as viruses and plasmids. A CRISPR cassette contains short nucleotide sequences called spacers. These unique regions retain a history of the interactions between prokaryotes and their invaders in individual strains and ecosystems. One important ecosystem in the human body is the human gut, a rich habitat populated by a great diversity of microorganisms. Gut microbiomes are important for human physiology and health. Metagenome sequencing has been widely applied for studying gut microbiomes. Most efforts in metagenome studies have been focused on profiling taxa compositions and gene catalogues and identifying their associations with human health. Less attention has been paid to the analysis of the ecosystems of microbiomes themselves, especially their CRISPR composition. We conducted a preliminary analysis of CRISPR sequences in a human gut metagenomic data set of Chinese individuals comprising type-2 diabetes patients and healthy controls. Applying an available CRISPR-identification algorithm, PILER-CR, we identified 3169 CRISPR cassettes in the data, from which we constructed a set of 1302 unique repeat sequences and 36,709 spacers. A more extensive analysis was made for the CRISPR repeats: these repeats were submitted to a more comprehensive clustering and classification using the web server tool CRISPRmap. All repeats were compared with known CRISPRs in the database CRISPRdb. A total of 784 repeats had matches in the database, and the remaining 518 repeats from our set are potentially novel ones. Computational analysis of CRISPR composition based on contigs of metagenome sequencing data is feasible. It provides an efficient approach for finding potential novel CRISPR arrays and for analysing the ecosystem and history of human microbiomes.
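Flagging potentially novel repeats comes down to comparing each assembled repeat against the known repeats and keeping those with no acceptable match. A simplified sketch follows; it uses a crude mismatch count in place of the actual CRISPRmap/CRISPRdb matching, and the sequences are invented.

```python
def hamming_ok(a, b, max_mismatch=2):
    """Crude similarity test: equal length and at most max_mismatch differences."""
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) <= max_mismatch

def split_known_vs_novel(repeats, known_db, max_mismatch=2):
    """Partition repeat sequences into those matching a known database entry
    and potential novel repeats (a real pipeline would also check the
    reverse complement and allow indels)."""
    known, novel = [], []
    for rep in repeats:
        if any(hamming_ok(rep, k, max_mismatch) for k in known_db):
            known.append(rep)
        else:
            novel.append(rep)
    return known, novel

# Invented example sequences:
db = {"GTTTCAATCCACGCGCCCGT", "GTCGCGCCTTCACGGGCGCG"}
reps = ["GTTTCAATCCACGCGCCCGA",   # one mismatch from a known repeat
        "ATTTAAATCCTTTTTTTGGA"]   # no database match -> potentially novel
print(split_known_vs_novel(reps, db))
```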
Analysis and preliminary design of Kunming land use and planning management information system
NASA Astrophysics Data System (ADS)
Li, Li; Chen, Zhenjie
2007-06-01
This article analyzes the Kunming land use planning and management information system from the perspectives of system building objectives and requirements, and clarifies the system's users, functional requirements and construction requirements. On this basis, a three-tier system architecture based on C/S and B/S is defined: the user interface layer, the business logic layer and the data services layer. According to the requirements for the construction of a land use planning and management information database, derived from standards of the Ministry of Land and Resources and the construction program of the Golden Land Project, this paper divides the system databases into a planning document database, a planning implementation database, a working map database and a system maintenance database. In the design of the system interface, this paper uses various methods and data formats for data transmission and sharing between upper and lower levels. According to the system analysis results, the main modules of the system are designed as follows: planning data management, planning and annual plan preparation and control, day-to-day planning management, planning revision management, decision-making support, thematic query statistics, and planning public participation. In addition, the system realization technologies are discussed in terms of the system operation mode, development platform and other aspects.
Gratia, Audrey; Merlet, Denis; Ducruet, Violette; Lyathaud, Cédric
2015-01-01
A nuclear magnetic resonance (NMR) methodology was assessed regarding the identification and quantification of additives in three types of polylactide (PLA) intended as food contact materials. Additives were identified using the LNE/NMR database which clusters NMR datasets on more than 130 substances authorized by European Regulation No. 10/2011. Of the 12 additives spiked in the three types of PLA pellets, 10 were rapidly identified by the database and correlated with spectral comparison. The levels of the 12 additives were estimated using quantitative NMR combined with graphical computation. A comparison with chromatographic methods tended to prove the sensitivity of NMR by demonstrating an analytical difference of less than 15%. Our results therefore demonstrated the efficiency of the proposed NMR methodology for rapid assessment of the composition of PLA. Copyright © 2014 Elsevier B.V. All rights reserved.
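Quantification by NMR rests on the proportionality between a signal integral and the number of nuclei it represents. The internal-standard relation below is one plausible form of such a calculation, shown for illustration; the use of an internal standard, the compounds, and all numbers are assumptions, not details taken from the study.

```python
def additive_amount_mg(area_additive, n_protons_additive, molar_mass_additive,
                       area_standard, n_protons_standard, molar_mass_standard,
                       mass_standard_mg, purity_standard=1.0):
    """Internal-standard qNMR: mass of analyte in the tube from the ratio of
    per-proton signal integrals, scaled by molar masses and the weighed
    mass and purity of the reference standard."""
    norm_a = area_additive / n_protons_additive
    norm_s = area_standard / n_protons_standard
    return (norm_a / norm_s) * (molar_mass_additive / molar_mass_standard) \
           * mass_standard_mg * purity_standard

# Hypothetical example: an antioxidant signal (2H) measured against a maleic
# acid standard (2 vinyl protons, 116.07 g/mol, 10 mg weighed in, 99% purity):
print(additive_amount_mg(area_additive=0.85, n_protons_additive=2,
                         molar_mass_additive=531.0,
                         area_standard=1.00, n_protons_standard=2,
                         molar_mass_standard=116.07,
                         mass_standard_mg=10.0, purity_standard=0.99))
```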
EasyKSORD: A Platform of Keyword Search Over Relational Databases
NASA Astrophysics Data System (ADS)
Peng, Zhaohui; Li, Jing; Wang, Shan
Keyword Search Over Relational Databases (KSORD) enables casual users to use keyword queries (a set of keywords) to search relational databases just like searching the Web, without any knowledge of the database schema or any need of writing SQL queries. Based on our previous work, we design and implement a novel KSORD platform named EasyKSORD for users and system administrators to use and manage different KSORD systems in a novel and simple manner. EasyKSORD supports advanced queries, efficient data-graph-based search engines, multiform result presentations, and system logging and analysis. Through EasyKSORD, users can search relational databases easily and read search results conveniently, and system administrators can easily monitor and analyze the operations of KSORD and manage KSORD systems much better.
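Keyword search over a relational database begins by locating the tuples that contain the keywords, without the user needing to know the schema. The sketch below shows only that first step, probing every text column of every table, using SQLite; EasyKSORD's data-graph search engines then connect such tuples into answer trees, which is beyond this fragment.

```python
import sqlite3

def keyword_hits(conn, keyword):
    """Return (table, rowid, column) for every text cell containing the keyword,
    discovered without any schema knowledge on the user's part."""
    hits = []
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for table in tables:
        cols = [r[1] for r in conn.execute(f"PRAGMA table_info({table})")
                if r[2].upper() in ("TEXT", "")]          # text-typed columns only
        for col in cols:
            rows = conn.execute(
                f"SELECT rowid FROM {table} WHERE {col} LIKE ?", (f"%{keyword}%",))
            hits += [(table, rowid, col) for (rowid,) in rows]
    return hits

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE paper  (id INTEGER PRIMARY KEY, title TEXT, author_id INTEGER);
INSERT INTO author VALUES (1, 'Codd');
INSERT INTO paper  VALUES (1, 'A relational model of data', 1);
""")
print(keyword_hits(conn, "relational"))
```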
Automating Relational Database Design for Microcomputer Users.
ERIC Educational Resources Information Center
Pu, Hao-Che
1991-01-01
Discusses issues involved in automating the relational database design process for microcomputer users and presents a prototype of a microcomputer-based system (RA, Relation Assistant) that is based on expert systems technology and helps avoid database maintenance problems. Relational database design is explained and the importance of easy input…
ERIC Educational Resources Information Center
Moore, Pam
2010-01-01
The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…
A service-based framework for pharmacogenomics data integration
NASA Astrophysics Data System (ADS)
Wang, Kun; Bai, Xiaoying; Li, Jing; Ding, Cong
2010-08-01
Data are central to scientific research and practices. The advance of experimental methods and information retrieval technologies leads to explosive growth of scientific data and databases. However, due to heterogeneity in data formats, structures and semantics, it is hard to integrate the diversified data that grow explosively and analyse them comprehensively. As more and more public databases are accessible through standard protocols like programmable interfaces and Web portals, Web-based data integration becomes a major trend to manage and synthesise data that are stored in distributed locations. Mashup, a Web 2.0 technique, presents a new way to compose content and software from multiple resources. The paper proposes a layered framework for integrating pharmacogenomics data in a service-oriented approach using the mashup technology. The framework separates the integration concerns from three perspectives including data, process and Web-based user interface. Each layer encapsulates the heterogeneous issues of one aspect. To facilitate the mapping and convergence of data, the ontology mechanism is introduced to provide consistent conceptual models across different databases and experiment platforms. To support user-interactive and iterative service orchestration, a context model is defined to capture information of users, tasks and services, which can be used for service selection and recommendation during a dynamic service composition process. A prototype system is implemented and case studies are presented to illustrate the promising capabilities of the proposed approach.
Design of Integrated Database on Mobile Information System: A Study of Yogyakarta Smart City App
NASA Astrophysics Data System (ADS)
Nurnawati, E. K.; Ermawati, E.
2018-02-01
An integration database is a database that acts as the data store for multiple applications and thus integrates data across these applications (in contrast to an application database). An integration database needs a schema that takes all of its client applications into account. The benefit of such a shared schema is that sharing data among applications does not require an extra layer of integration services on the applications: any change to the data made by one application becomes available to all applications at the time of database commit, keeping the applications' data use better synchronized. This study aims to design and build an integrated database that can be used by various applications on a mobile platform within a smart city system. The resulting database can be used by the various applications either together or separately. The design and development of the database emphasize flexibility, security, and completeness of the attributes to be shared by the applications to be built. The method used in this study is to choose an appropriate logical database structure (patterns of data) and to build the relational database model (database design), then to test the resulting design with prototype apps and analyze system performance with test data. The integrated database can be utilized by both the admin and the user in an integral and comprehensive platform. This system can help the admin, manager, and operator manage the application easily and efficiently. The Android-based app is built on a dynamic client-server model in which data are extracted from an external MySQL database, so any change of data in the database is also reflected in the Android applications. The app assists users in searching for information related to Yogyakarta (as a smart city), especially on culture, government, hotels, and transportation.
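The integration-database idea can be sketched as follows, with an illustrative table standing in for the actual Yogyakarta schema (names and the sample record are invented): several applications write and read the same shared tables, so a change committed by one is immediately visible to the others without an extra integration layer.

```python
# Sketch of an integration database: one schema shared by several client apps.
# Table, column names and the sample data are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE place(
        id INTEGER PRIMARY KEY,
        category TEXT,          -- e.g. 'hotel', 'culture', 'transportation'
        name TEXT,
        lat REAL, lon REAL
    );
""")

def admin_app_add_place(category, name, lat, lon):
    # The admin/manager app writes through the shared schema ...
    conn.execute("INSERT INTO place(category, name, lat, lon) VALUES (?,?,?,?)",
                 (category, name, lat, lon))
    conn.commit()

def mobile_app_search(category):
    # ... and the mobile client reads the same tables, so changes are visible
    # to all applications at commit time, with no extra integration layer.
    return conn.execute(
        "SELECT name, lat, lon FROM place WHERE category = ?",
        (category,)).fetchall()

admin_app_add_place("hotel", "Example Hotel", -7.79, 110.37)
print(mobile_app_search("hotel"))
```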
CardioOp: an integrated approach to teleteaching in cardiac surgery.
Friedl, R; Preisack, M; Schefer, M; Klas, W; Tremper, J; Rose, T; Bay, J; Albers, J; Engels, P; Guilliard, P; Vahl, C F; Hannekum, A
2000-01-01
The complexity of cardiac surgery requires continuous training, education and information addressing different individuals: physicians (cardiac surgeons, residents, anaesthesiologists, cardiologists), medical students, perfusionists and patients. Efficacy and efficiency of education and training will likely be improved by the use of multimedia information systems. Nevertheless, computer-based education faces some serious disadvantages: 1) multimedia productions require tremendous financial and time resources; 2) the resulting multimedia data are usable only for one specific target user group in one specific instructional context; 3) computer-based learning programs often show deficiencies in supporting individual learning styles and in providing information adjusted to the learner's individual needs. In this paper we describe a computer system that provides multiple re-use of multimedia data in different instructional settings and flexible composition of content for different target user groups. The ZYX document model has been developed, allowing the modelling and flexible on-the-fly composition of multimedia fragments. It has been implemented as a DataBlade module in the object-relational database system Informix Dynamic Server and allows for presentation-neutral storage of multimedia content from the application domain, delivery and presentation of multimedia material, content-based retrieval, and re-use and composition of multimedia material for different instructional settings. Multimedia data stored in the repository, which can be processed and authored according to the identified needs, are created using a next-generation authoring environment called CardioOP-Wizard. High-quality intra-operative video is recorded using a video robot. Difficult surgical procedures are visualized with generic and CT-based 3D animations. An on-line architecture for multiple re-use and flexible composition of media data has been established. The system contains the following instructional applications (prototypically implemented): a multimedia textbook on operative techniques, an interactive module for problem-based training, a module for the creation and presentation of lectures, and a module for patient information. Principles of cognitive psychology and knowledge management have been employed in the program. These instructional applications provide information ranging from basic knowledge at the beginner's level and procedural knowledge at the advanced level to implicit knowledge at the professional level. For media annotation with metadata, a meta-information system, the CardioOP-Clas, has been developed. The prototype focuses on aortocoronary bypass grafting and heart transplantation. The demonstrated system reflects an integrated approach in terms of information technology and teaching, by means of multiple re-use and composition of stored media items tailored to the individual user and the chosen educational setting on different instructional levels.
Schurr, K.M.; Cox, S.E.
1994-01-01
The Pesticide-Application Data-Base Management System was created as a demonstration project and was tested with data submitted to the Washington State Department of Agriculture by pesticide applicators from a small geographic area. These data were entered into the Department's relational data-base system and uploaded into the system's ARC/INFO files. Locations for pesticide applications are assigned within the Public Land Survey System grids, and ARC/INFO programs in the Pesticide-Application Data-Base Management System can subdivide each survey section into sixteen idealized quarter-quarter sections for display map grids. The system provides data retrieval and geographic information system plotting capabilities from a menu of seven basic retrieval options. Additionally, ARC/INFO coverages can be created from the retrieved data when required for particular applications. The Pesticide-Application Data-Base Management System, or the general principles used in the system, could be adapted to other applications or to other states.
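The quarter-quarter subdivision used for the display map grids can be sketched as a small routine; this assumes the idealized square-section geometry described above, and the 'NW NE'-style labels follow the usual PLSS reading (quarter-quarter of quarter) rather than anything specific to the ARC/INFO programs.

```python
# Sketch of subdividing one idealized (square) Public Land Survey System
# section into its sixteen quarter-quarter sections for a display map grid.
# Real sections are not perfectly square; this mirrors the "idealized"
# subdivision described above, with labels such as 'NW NE' meaning the NW
# quarter-quarter of the NE quarter.

def quarter_quarters(x0, y0, size):
    """Yield (label, (xmin, ymin, xmax, ymax)) for the 16 cells of a section."""
    half, quarter = size / 2.0, size / 4.0
    quarters = {"NW": (x0, y0 + half), "NE": (x0 + half, y0 + half),
                "SW": (x0, y0),        "SE": (x0 + half, y0)}
    offsets = {"NW": (0, quarter), "NE": (quarter, quarter),
               "SW": (0, 0),       "SE": (quarter, 0)}
    for qlabel, (qx, qy) in quarters.items():
        for qqlabel, (dx, dy) in offsets.items():
            xmin, ymin = qx + dx, qy + dy
            yield f"{qqlabel} {qlabel}", (xmin, ymin, xmin + quarter, ymin + quarter)

# One idealized section of one mile (5280 ft) on a side:
for label, bbox in quarter_quarters(0.0, 0.0, 5280.0):
    print(label, bbox)
```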
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krupka, K.M.; Serne, R.J.
The US Nuclear Regulatory Commission is developing a technical position document that provides guidance regarding the performance assessment of low-level radioactive waste disposal facilities. This guidance considers the effects that the chemistry of the vault disposal system may have on radionuclide release. The geochemistry of pore waters buffered by cementitious materials in the disposal system will be different from the local ground water. Therefore, the cement-buffered environment needs to be considered within the source term calculations if credit is taken for solubility limits and/or sorption of dissolved radionuclides within disposal units. A literature review was conducted on methods to model pore-water compositions resulting from reactions with cement, experimental studies of cement/water systems, natural analogue studies of cement and concrete, and radionuclide solubilities experimentally determined in cement pore waters. Based on this review, geochemical modeling was used to calculate maximum concentrations for americium, neptunium, nickel, plutonium, radium, strontium, thorium, and uranium for pore-water compositions buffered by cement and local ground water. Another literature review was completed on radionuclide sorption behavior onto fresh cement/concrete where the pore water pH will be greater than or equal to 10. Based on this review, a database was developed of preferred minimum distribution coefficient values for these radionuclides in cement/concrete environments.
NASA Technical Reports Server (NTRS)
2005-01-01
Topics covered include: Apparatus Characterizes Transient Voltages in Real Time; Measuring Humidity in Sealed Glass Encasements; Adaptable System for Vehicle Health and Usage Monitoring; Miniature Focusing Time-of-Flight Mass Spectrometer; Cryogenic High-Sensitivity Magnetometer; Wheel Electrometer System; Carbon-Nanotube Conductive Layers for Thin-Film Solar Cells; Patch Antenna Fed via Unequal-Crossed-Arm Aperture; LC Circuits for Diagnosing Embedded Piezoelectric Devices; Nanowire Thermoelectric Devices; Code for Analyzing and Designing Spacecraft Power System Radiators; Decision Support for Emergency Operations Centers; NASA Records Database; Real-Time Principal-Component Analysis; Fuzzy/Neural Software Estimates Costs of Rocket-Engine Tests; Multicomponent, Rare-Earth-Doped Thermal-Barrier Coatings; Reactive Additives for Phenylethynyl-Containing Resins; Improved Gear Shapes for Face Worm Gear Drives; Alternative Way of Shifting Mass to Move a Spherical Robot; Parylene C as a Sacrificial Material for Microfabrication; In Situ Electrochemical Deposition of Microscopic Wires; Improved Method of Manufacturing SiC Devices; Microwave Treatment of Prostate Cancer and Hyperplasia; Ferroelectric Devices Emit Charged Particles and Radiation; Dusty-Plasma Particle Accelerator; Frozen-Plug Technique for Liquid-Oxygen Plumbing; Shock Waves in a Bose-Einstein Condensate; Progress on a Multichannel, Dual-Mixer Stability Analyzer; Development of Carbon-Nanotube/Polymer Composites; Thermal Imaging of Earth for Accurate Pointing of Deep-Space Antennas; Modifications of a Composite-Material Combustion Chamber; Modeling and Diagnostic Software for Liquefying-Fuel Rockets; and Spacecraft Antenna Clusters for High EIRP.
NASA Astrophysics Data System (ADS)
Alhroob, M.; Battistin, M.; Berry, S.; Bitadze, A.; Bonneau, P.; Boyd, G.; Crespo-Lopez, O.; Degeorge, C.; Deterre, C.; Di Girolamo, B.; Doubek, M.; Favre, G.; Hallewell, G.; Katunin, S.; Lombard, D.; Madsen, A.; McMahon, S.; Nagai, K.; O'Rourke, A.; Pearson, B.; Robinson, D.; Rossi, C.; Rozanov, A.; Stanecka, E.; Strauss, M.; Vacek, V.; Vaglio, R.; Young, J.; Zwalinski, L.
2017-01-01
The development of custom ultrasonic instrumentation was motivated by the need for continuous real-time monitoring of possible leaks and mass flow measurement in the evaporative cooling systems of the ATLAS silicon trackers. The instruments use pairs of ultrasonic transducers transmitting sound bursts and measuring transit times in opposite directions. The gas flow rate is calculated from the difference in transit times, while the sound velocity is deduced from their average. The gas composition is then evaluated by comparison with a molar composition vs. sound velocity database, based on the direct dependence between sound velocity and component molar concentration in a gas mixture at a known temperature and pressure. The instrumentation has been developed in several geometries, with five instruments now integrated and in continuous operation within the ATLAS Detector Control System (DCS) and its finite state machine. One instrument monitors C3F8 coolant leaks into the Pixel detector N2 envelope with a molar resolution better than 2 × 10⁻⁵, and has indicated a level of 0.14 % when all the cooling loops of the recently re-installed Pixel detector are operational. Another instrument monitors air ingress into the C3F8 condenser of the new C3F8 thermosiphon coolant recirculator, with sub-percent precision. The recent effect of the introduction of a small quantity of N2 into the 9.5 m³ total volume of the thermosiphon system was clearly seen with this instrument. Custom microcontroller-based readout has been developed for the instruments, allowing readout into the ATLAS DCS via Modbus TCP/IP on Ethernet. The instrumentation has many potential applications where continuous binary gas composition measurement is required, including in hydrocarbon and anaesthetic gas mixtures.
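The transit-time relations behind the instrument can be sketched under standard time-of-flight assumptions: sound speed from the average of the reciprocal transit times, axial flow velocity from their difference, and composition by interpolation in a sound-velocity versus molar-fraction table. The path length, angle and calibration table in the sketch are placeholders, not the ATLAS instrument parameters.

```python
# Sketch of the transit-time computation described above. The path length,
# angle and the sound-velocity-vs-composition table are placeholder values,
# not those of the ATLAS instruments.
import math

L = 0.30                      # acoustic path length between transducers [m] (placeholder)
THETA = math.radians(45.0)    # angle between sound path and flow axis (placeholder)

def sound_speed(t_down, t_up):
    """Sound speed from the average of the reciprocal transit times."""
    return 0.5 * L * (1.0 / t_down + 1.0 / t_up)

def flow_velocity(t_down, t_up):
    """Axial gas velocity from the difference of the reciprocal transit times."""
    return L / (2.0 * math.cos(THETA)) * (1.0 / t_down - 1.0 / t_up)

def molar_fraction(c, table):
    """Linear interpolation in a (sound speed, molar fraction) lookup table."""
    table = sorted(table)
    for (c0, x0), (c1, x1) in zip(table, table[1:]):
        if c0 <= c <= c1:
            return x0 + (x1 - x0) * (c - c0) / (c1 - c0)
    raise ValueError("sound speed outside calibration table")

# Placeholder calibration: sound speed [m/s] vs. molar fraction of the heavy
# component at fixed temperature and pressure.
CAL = [(115.0, 1.00), (170.0, 0.50), (260.0, 0.10), (350.0, 0.00)]

t_down, t_up = 1.95e-3, 2.05e-3          # example transit times [s]
c = sound_speed(t_down, t_up)
print(f"c = {c:.1f} m/s, v = {flow_velocity(t_down, t_up):.3f} m/s, "
      f"x = {molar_fraction(c, CAL):.3f}")
```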
NASA Astrophysics Data System (ADS)
Griffin, W. L.; Fisher, N. I.; Friedman, J. H.; O'Reilly, Suzanne Y.; Ryan, C. G.
2002-12-01
Three novel statistical approaches (Cluster Analysis by Regressive Partitioning [CARP], Patient Rule Induction Method [PRIM], and ModeMap) have been used to define compositional populations within a large database (n > 13,000) of Cr-pyrope garnets from the subcontinental lithospheric mantle (SCLM). The variables used are the major oxides and proton-microprobe data for Zn, Ga, Sr, Y, and Zr. Because the rules defining these populations (classes) are expressed in simple compositional variables, they are easily applied to new samples and other databases. The classes defined by the three methods show strong similarities and correlations, suggesting that they are statistically meaningful. The geological significance of the classes has been tested by classifying garnets from 184 mantle-derived peridotite xenoliths and from a smaller database (n > 5400) of garnets analyzed for >20 trace elements by laser ablation microprobe-inductively coupled plasma-mass spectrometry (LAM-ICPMS). The relative abundances of these classes in the lithospheric mantle vary widely across different tectonic settings, and some classes are absent or very rare in either Archean or Phanerozoic SCLM. Their distribution with depth also varies widely within individual lithospheric sections and between different sections of similar tectonothermal age. These garnet classes therefore are a useful tool for mapping the geology of the SCLM. Archean SCLM sections show high degrees of depletion and varying degrees of metasomatism, and they are commonly strongly layered. Several Proterozoic SCLM sections show a concentration of more depleted material near their base, grading upward into more fertile lherzolites. The distribution of garnet classes reflecting low-T phlogopite-related metasomatism and high-T melt-related metasomatism suggests that many of these Proterozoic SCLM sections consist of strongly metasomatized Archean SCLM. The garnet-facies SCLM beneath Phanerozoic terrains is only mildly depleted relative to Primitive Upper Mantle (PUM) compositions. These data emphasize the secular evolution of SCLM composition defined earlier [Griffin et al., 1998, 1999a] and suggest that at least part of this evolutionary trend reflects reworking and refertilization of SCLM formed in the Archean time.
Kumari, Sangita; Pundhir, Sachin; Priya, Piyush; Jeena, Ganga; Punetha, Ankita; Chawla, Konika; Firdos Jafaree, Zohra; Mondal, Subhasish; Yadav, Gitanjali
2014-01-01
Plant essential oils are complex mixtures of volatile organic compounds, which play indispensable roles in the environment, for the plant itself, as well as for humans. The potential biological information stored in essential oil composition data can provide an insight into the silent language of plants, and the roles of these chemical emissions in defense, communication and pollinator attraction. In order to decipher volatile profile patterns from a global perspective, we have developed the ESSential OIL DataBase (EssOilDB), a continually updated, freely available electronic database designed to provide knowledge resource for plant essential oils, that enables one to address a multitude of queries on volatile profiles of native, invasive, normal or stressed plants, across taxonomic clades, geographical locations and several other biotic and abiotic influences. To our knowledge, EssOilDB is the only database in the public domain providing an opportunity for context based scientific research on volatile patterns in plants. EssOilDB presently contains 123 041 essential oil records spanning a century of published reports on volatile profiles, with data from 92 plant taxonomic families, spread across diverse geographical locations all over the globe. We hope that this huge repository of VOCs will facilitate unraveling of the true significance of volatiles in plants, along with creating potential avenues for industrial applications of essential oils. We also illustrate the use of this database in terpene biology and show how EssOilDB can be used to complement data from computational genomics to gain insights into the diversity and variability of terpenoids in the plant kingdom. EssOilDB would serve as a valuable information resource, for students and researchers in plant biology, in the design and discovery of new odor profiles, as well as for entrepreneurs—the potential for generating consumer specific scents being one of the most attractive and interesting topics in the cosmetic industry. Database URL: http://nipgr.res.in/Essoildb/ PMID:25534749
Simple re-instantiation of small databases using cloud computing.
Tan, Tin Wee; Xie, Chao; De Silva, Mark; Lim, Kuan Siong; Patro, C Pawan K; Lim, Shen Jean; Govindarajan, Kunde Ramamoorthy; Tong, Joo Chuan; Choo, Khar Heng; Ranganathan, Shoba; Khan, Asif M
2013-01-01
Small bioinformatics databases, unlike institutionally funded large databases, are vulnerable to discontinuation and many reported in publications are no longer accessible. This leads to irreproducible scientific work and redundant effort, impeding the pace of scientific progress. We describe a Web-accessible system, available online at http://biodb100.apbionet.org, for archival and future on demand re-instantiation of small databases within minutes. Depositors can rebuild their databases by downloading a Linux live operating system (http://www.bioslax.com), preinstalled with bioinformatics and UNIX tools. The database and its dependencies can be compressed into an ".lzm" file for deposition. End-users can search for archived databases and activate them on dynamically re-instantiated BioSlax instances, run as virtual machines over the two popular full virtualization standard cloud-computing platforms, Xen Hypervisor or vSphere. The system is adaptable to increasing demand for disk storage or computational load and allows database developers to use the re-instantiated databases for integration and development of new databases. Herein, we demonstrate that a relatively inexpensive solution can be implemented for archival of bioinformatics databases and their rapid re-instantiation should the live databases disappear.
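A minimal sketch of the deposition step is given below, with a plain xz-compressed tar archive standing in for the BioSlax ".lzm" module (the real module is built with the Slax/BioSlax tooling, not reproduced here); the paths are placeholders.

```python
# Sketch of packaging a small database directory for deposition. A plain
# xz-compressed tar archive stands in here for the BioSlax ".lzm" module.
import tarfile
from pathlib import Path

def package_database(db_dir: str, archive: str) -> str:
    """Compress db_dir (data files, scripts, dependencies) into one archive."""
    with tarfile.open(archive, mode="w:xz") as tar:
        tar.add(db_dir, arcname=Path(db_dir).name)
    return archive

def restore_database(archive: str, target_dir: str) -> None:
    """Unpack the archive on a freshly re-instantiated (virtual) machine."""
    with tarfile.open(archive, mode="r:xz") as tar:
        tar.extractall(path=target_dir)

# Example usage (paths are placeholders):
# package_database("mydb", "mydb.tar.xz")
# restore_database("mydb.tar.xz", "/srv/databases")
```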
NASA Astrophysics Data System (ADS)
Khan, A.; Shankland, T. J.
2012-02-01
This paper applies electromagnetic sounding methods for Earth's mantle to constrain its thermal state, chemical composition, and "water" content. We consider long-period inductive response functions in the form of C-responses from four stations distributed across the Earth (Europe, North America, Asia and Australia) covering a period range from 3.9 to 95.2 days and sensitivity to ~ 1200 km depth. We invert C-responses directly for thermo-chemical state using a self-consistent thermodynamic method that computes phase equilibria as functions of pressure, temperature, and composition (in the Na2O-CaO-FeO-MgO-Al2O3-SiO2 model system). Computed mineral modes are combined with recent laboratory-based electrical conductivity models from independent experimental research groups (Yoshino (2010) and Karato (2011)) to compute bulk conductivity structure beneath each of the four stations from which C-responses are estimated. To reliably allocate water between the various mineral phases we include laboratory-measured water partition coefficients for major upper mantle and transition zone minerals. This scheme is interfaced with a sampling-based algorithm to solve the resulting non-linear inverse problem. This approach has two advantages: (1) It anchors temperatures, composition, electrical conductivities, and discontinuities that are in laboratory-based forward models, and (2) At the same time it permits the use of geophysical inverse methods to optimize conductivity profiles to match geophysical data. The results show lateral variations in upper mantle temperatures beneath the four stations that appear to persist throughout the upper mantle and parts of the transition zone. Calculated mantle temperatures at 410 and 660 km depth lie in the range 1250-1650 °C and 1500-1750 °C, respectively, and generally agree with the experimentally-determined temperatures at which the measured phase reactions olivine → β-spinel and γ-spinel → ferropericlase + perovskite occur. The retrieved conductivity structures beneath the various stations tend to follow trends observed for temperature with the strongest lateral variations in the uppermost mantle; for depths > 300 km conductivities appear to depend less on the particular conductivity database. Conductivities at 410 km and at 660 km depth are found to agree overall with purely geophysically-derived global and semi-global one-dimensional conductivity models. Both electrical conductivity databases point to < 0.01 wt.% H2O in the upper mantle. For transition zone minerals results from the laboratory database of Yoshino (2010) suggest that a much higher water content (up to 2 wt.% H2O) is required than in the other database (Karato, 2011), which favors a relatively "dry" transition zone (< 0.01 wt.% H2O). Incorporating laboratory measurements of hydrous silicate melting relations and available conductivity data allows us to consider the possibility of hydration melting and a high-conductivity melt layer above the 410-km discontinuity. The latter appears to be 1) regionally localized and 2) principally a feature from the Yoshino (2010) database. Further, there is evidence of lateral heterogeneity: The mantle beneath southwestern North America and central China appears "wetter" than that beneath central Europe or Australia.
High-precision isotopic characterization of USGS reference materials by TIMS and MC-ICP-MS
NASA Astrophysics Data System (ADS)
Weis, Dominique; Kieffer, Bruno; Maerschalk, Claude; Barling, Jane; de Jong, Jeroen; Williams, Gwen A.; Hanano, Diane; Pretorius, Wilma; Mattielli, Nadine; Scoates, James S.; Goolaerts, Arnaud; Friedman, Richard M.; Mahoney, J. Brian
2006-08-01
The Pacific Centre for Isotopic and Geochemical Research (PCIGR) at the University of British Columbia has undertaken a systematic analysis of the isotopic (Sr, Nd, and Pb) compositions and concentrations of a broad compositional range of U.S. Geological Survey (USGS) reference materials, including basalt (BCR-1, 2; BHVO-1, 2), andesite (AGV-1, 2), rhyolite (RGM-1, 2), syenite (STM-1, 2), granodiorite (GSP-2), and granite (G-2, 3). USGS rock reference materials are geochemically well characterized, but there is neither a systematic methodology nor a database for radiogenic isotopic compositions, even for the widely used BCR-1. This investigation represents the first comprehensive, systematic analysis of the isotopic composition and concentration of USGS reference materials and provides an important database for the isotopic community. In addition, the range of equipment at the PCIGR, including a Nu Instruments Plasma MC-ICP-MS, a Thermo Finnigan Triton TIMS, and a Thermo Finnigan Element2 HR-ICP-MS, permits an assessment and comparison of the precision and accuracy of isotopic analyses determined by both the TIMS and MC-ICP-MS methods (e.g., Nd isotopic compositions). For each of the reference materials, 5 to 10 complete replicate analyses provide coherent isotopic results, all with external precision below 30 ppm (2 SD) for Sr and Nd isotopic compositions (27 and 24 ppm for TIMS and MC-ICP-MS, respectively). Our results also show that the first- and second-generation USGS reference materials have homogeneous Sr and Nd isotopic compositions. Nd isotopic compositions by MC-ICP-MS and TIMS agree to within 15 ppm for all reference materials. Interlaboratory MC-ICP-MS comparisons show excellent agreement for Pb isotopic compositions; however, the reproducibility is not as good as for Sr and Nd. A careful, sequential leaching experiment of three first- and second-generation reference materials (BCR, BHVO, AGV) indicates that the heterogeneity in Pb isotopic compositions, and concentrations, could be directly related to contamination by the steel (mortar/pestle) used to process the materials. Contamination also accounts for the high concentrations of certain other trace elements (e.g., Li, Mo, Cd, Sn, Sb, W) in various USGS reference materials.
Calculation of the relative metastabilities of proteins using the CHNOSZ software package
Dick, Jeffrey M
2008-01-01
Background: Proteins of various compositions are required by organisms inhabiting different environments. The energetic demands for protein formation are a function of the compositions of proteins as well as geochemical variables including temperature, pressure, oxygen fugacity and pH. The purpose of this study was to explore the dependence of metastable equilibrium states of protein systems on changes in the geochemical variables. Results: A software package called CHNOSZ implementing the revised Helgeson-Kirkham-Flowers (HKF) equations of state and group additivity for ionized unfolded aqueous proteins was developed. The program can be used to calculate standard molal Gibbs energies and other thermodynamic properties of reactions and to make chemical speciation and predominance diagrams that represent the metastable equilibrium distributions of proteins. The approach takes account of the chemical affinities of reactions in open systems characterized by the chemical potentials of basis species. The thermodynamic database included with the package permits application of the software to mineral and other inorganic systems as well as systems of proteins or other biomolecules. Conclusion: Metastable equilibrium activity diagrams were generated for model cell-surface proteins from archaea and bacteria adapted to growth in environments that differ in temperature and chemical conditions. The predicted metastable equilibrium distributions of the proteins can be compared with the optimal growth temperatures of the organisms and with geochemical variables. The results suggest that a thermodynamic assessment of protein metastability may be useful for integrating bio- and geochemical observations. PMID:18834534
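The chemical affinity that such calculations evaluate for each formation reaction from basis species can be written in standard thermodynamic notation (this is the textbook relation, not code from the package):

$$ A \;=\; RT\,\ln\frac{K}{Q} \;=\; -\Delta G_{r}, \qquad \log Q \;=\; \sum_{i}\nu_{i}\,\log a_{i}, $$

where K is the equilibrium constant of the reaction, Q its activity quotient formed from the activities a_i of the species with stoichiometric coefficients ν_i, R the gas constant and T the temperature. On a metastable-equilibrium or predominance diagram, the protein whose formation reaction has the highest affinity (commonly normalized per residue) at the chosen temperature, pressure and basis-species activities is shown as predominant.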
EPAUS9R - An Energy Systems Database for use with the Market Allocation (MARKAL) Model
EPA’s MARKAL energy system databases estimate future-year technology dispersals and associated emissions. These databases are valuable tools for exploring a variety of future scenarios for the U.S. energy-production systems that can impact climate change c
Evaluation of Graphite Fiber/Polyimide PMCs from Hot Melt vs Solution Prepreg
NASA Technical Reports Server (NTRS)
Shin, E. Eugene; Sutter, James K.; Eakin, Howard; Inghram, Linda; McCorkle, Linda; Scheiman, Dan; Papadopoulos, Demetrios; Thesken, John; Fink, Jeffrey E.
2002-01-01
Carbon fiber reinforced high temperature polymer matrix composites (PMC) have been extensively investigated as potential weight reduction replacements of various metallic components in next generation high performance propulsion rocket engines. The initial phase involves development of comprehensive composite material-process-structure-design-property-in-service performance correlations and database, especially for a high stiffness facesheet of various sandwich structures. Overview of the program plan, technical approaches and current multi-team efforts will be presented. During composite fabrication, it was found that the two large volume commercial prepregging methods (hot-melt vs. solution) resulted in considerably different composite cure behavior. Details of the process-induced physical and chemical modifications in the prepregs, their effects on composite processing, and systematic cure cycle optimization studies will be discussed. The combined effects of prepregging method and cure cycle modification on composite properties and isothermal aging performance were also evaluated.
Phase Equilibria Diagrams Database
National Institute of Standards and Technology Data Gateway
SRD 31 NIST/ACerS Phase Equilibria Diagrams Database (PC database for purchase) The Phase Equilibria Diagrams Database contains commentaries and more than 21,000 diagrams for non-organic systems, including those published in all 21 hard-copy volumes produced as part of the ACerS-NIST Phase Equilibria Diagrams Program (formerly titled Phase Diagrams for Ceramists): Volumes I through XIV (blue books); Annuals 91, 92, 93; High Tc Superconductors I & II; Zirconium & Zirconia Systems; and Electronic Ceramics I. Materials covered include oxides as well as non-oxide systems such as chalcogenides and pnictides, phosphates, salt systems, and mixed systems of these classes.
Full value documentation in the Czech Food Composition Database.
Machackova, M; Holasova, M; Maskova, E
2010-11-01
The aim of this project was to launch a new Food Composition Database (FCDB) Programme in the Czech Republic; to implement a methodology for food description and value documentation according to the standards designed by the European Food Information Resource (EuroFIR) Network of Excellence; and to start the compilation of a pilot FCDB. Foods for the initial data set were selected from the list of foods included in the Czech Food Consumption Basket. Selection of 24 priority components was based on the range of components used in former Czech tables. The priority list was extended with components for which original Czech analytical data or calculated data were available. Values that were input into the compiled database were documented according to the EuroFIR standards within the entities FOOD, COMPONENT, VALUE and REFERENCE using Excel sheets. Foods were described using the LanguaL Thesaurus. A template for documentation of data according to the EuroFIR standards was designed. The initial data set comprised documented data for 162 foods. Values were based on original Czech analytical data (available for traditional and fast foods, milk and milk products, wheat flour types), data derived from literature (for example, fruits, vegetables, nuts, legumes, eggs) and calculated data. The Czech FCDB programme has been successfully relaunched. Inclusion of the Czech data set into the EuroFIR eSearch facility confirmed compliance of the database format with the EuroFIR standards. Excel spreadsheets are applicable for full value documentation in the FCDB.
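One fully documented value of the kind described above might be held as a flat record before export to the compilation spreadsheets; the sketch below is illustrative only, loosely following the FOOD, COMPONENT, VALUE and REFERENCE entities, and the field names, codes and figures are examples rather than the exact EuroFIR column set.

```python
# Illustrative layout for one fully documented value, loosely following the
# FOOD / COMPONENT / VALUE / REFERENCE entities; field names, codes and
# figures are examples, not the exact EuroFIR column set.
from dataclasses import dataclass, asdict
import csv, sys

@dataclass
class DocumentedValue:
    food_code: str        # FOOD: national food code (placeholder)
    food_name: str
    langual_codes: str    # FOOD: LanguaL facet terms describing the food (placeholders)
    component: str        # COMPONENT: identifier of the nutrient (placeholder)
    unit: str
    value: float          # VALUE: selected value per 100 g edible portion
    value_type: str       # VALUE: analytical / calculated / borrowed
    method: str           # VALUE: acquisition or analytical method
    reference_id: str     # REFERENCE: citation of the data source

row = DocumentedValue("CZ0001", "Wheat flour, plain", "A0001 B0001",
                      "thiamin", "mg", 0.11, "analytical", "HPLC", "REF-0001")

writer = csv.DictWriter(sys.stdout, fieldnames=list(asdict(row)))
writer.writeheader()
writer.writerow(asdict(row))
```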
Spotting L3 slice in CT scans using deep convolutional network and transfer learning.
Belharbi, Soufiane; Chatelain, Clément; Hérault, Romain; Adam, Sébastien; Thureau, Sébastien; Chastan, Mathieu; Modzelewski, Romain
2017-08-01
In this article, we present a complete automated system for spotting a particular slice in a complete 3D Computed Tomography exam (CT scan). Our approach does not require any assumptions on which part of the patient's body is covered by the scan. It relies on an original machine learning regression approach. Our models are learned using transfer learning, by exploiting deep architectures that have been pre-trained on the ImageNet database, and therefore require very little annotation for their training. The whole pipeline consists of three steps: i) conversion of the CT scans into Maximum Intensity Projection (MIP) images, ii) prediction from a Convolutional Neural Network (CNN) applied in a sliding window fashion over the MIP image, and iii) robust analysis of the prediction sequence to predict the height of the desired slice within the whole CT scan. Our approach is applied to the detection of the third lumbar vertebra (L3) slice that has been found to be representative of whole-body composition. Our system is evaluated on a database collected in our clinical center, containing 642 CT scans from different patients. We obtained an average localization error of 1.91±2.69 slices (less than 5 mm) in an average time of less than 2.5 s/CT scan, allowing integration of the proposed system into daily clinical routines. Copyright © 2017 Elsevier Ltd. All rights reserved.
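The three-step pipeline can be sketched as follows; `predict_offset` is a placeholder for the transfer-learned CNN regressor of the paper (no pre-trained weights are bundled here), and the window size, stride and median-based fusion are illustrative choices rather than the published configuration.

```python
# Sketch of the three-step pipeline described above. `predict_offset` stands
# in for the transfer-learned CNN regressor; it is any callable mapping an
# image patch to a signed offset (in slices) from the patch centre to L3.
import numpy as np

def frontal_mip(volume):
    """Step i: Maximum Intensity Projection of a CT volume (z, y, x) onto the
    frontal plane, giving a 2-D image indexed by (slice height z, x)."""
    return volume.max(axis=1)

def sliding_window_predictions(mip, predict_offset, win=64, stride=16):
    """Step ii: apply the regressor over vertical windows of the MIP image and
    collect absolute slice estimates (window centre + predicted offset)."""
    estimates = []
    for top in range(0, mip.shape[0] - win + 1, stride):
        patch = mip[top:top + win, :]
        centre = top + win // 2
        estimates.append(centre + predict_offset(patch))
    return np.array(estimates)

def locate_l3(volume, predict_offset):
    """Step iii: robust fusion of the per-window estimates (median here)."""
    est = sliding_window_predictions(frontal_mip(volume), predict_offset)
    return int(np.median(est))

# Toy usage with a dummy regressor that always answers "L3 is at the window
# centre" (a real model would be trained by transfer learning):
volume = np.random.rand(300, 128, 128)
print(locate_l3(volume, predict_offset=lambda patch: 0.0))
```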
Composition of ready cooked foods sampled in southern Thailand.
Kajadphai-Taungbodhitham, Anocha
2007-01-01
This study investigated the nutrient composition of ready cooked foods commonly consumed in southern Thailand. Four samples of fourteen types; eight curry dishes, one sweet and sour curry, a soup dish, one stir-fried curry, one stir-fried dish and two single plate dishes were each purchased from 4 different shops around Hat Yai district. The edible part was blended and analysed for its nutrients content per 100 g edible portion. Cassia curry, Thai noodle salad, Ark shell curry and Fermented fish gut dish were a good source of vitamin B1 (145 microg), vitamin C (2.20 mg), calcium (0.23 g) and iron (6.07 mg), respectively. Moisture, ash, fat, protein and carbohydrate were high in Mungbean noodle soup (92.6 g), Fermented fish gut dish (4.1 g), Cassia curry (9.9 g), Stingray stir-fried curry (16.7 g) and Thai noodle salad (24.2 g). Results also showed that the main ingredients and cooking process determined the nutritional values of the foods. A new set of 4 samples of Round noodle in southern curry was purchased, each separated into its edible components and nutrient values estimated using the Thai single ingredient databases. Their nutrient content was also calculated using the data of similar food obtained from this study. Considerable differences amongst the values from the 2 sets of calculation were observed. Problems inherent in using the single ingredient databases were highlighted. This work demonstrates a need to create a food composition database of whole cooked meals ready for serving that reflects real life consumption.
Landwehr, Jurate M.; Coplen, Tyler B.; Stewart, David W.
2013-01-01
To assess spatial, seasonal, and source variability in stable isotopic composition of human drinking waters throughout the entire USA, we have constructed a database of δ18O and δ2H of US tap waters. An additional purpose was to create a publicly available dataset useful for evaluating the forensic applicability of these isotopes for human tissue source geolocation. Samples were obtained at 349 sites, from diverse population centres, grouped by surface hydrologic units for regional comparisons. Samples were taken concurrently during two contrasting seasons, summer and winter. Source supply (surface, groundwater, mixed, and cistern) and system (public and private) types were noted. The isotopic composition of tap waters exhibits large spatial and regional variation within each season as well as significant at-site differences between seasons at many locations, consistent with patterns found in environmental (river and precipitation) waters deriving from hydrologic processes influenced by geographic factors. However, anthropogenic factors, such as the population of a tap’s surrounding community and local availability from diverse sources, also influence the isotopic composition of tap waters. Even within a locale as small as a single metropolitan area, tap waters with greatly differing isotopic compositions can be found, so that tap water within a region may not exhibit the spatial or temporal coherence predicted for environmental water. Such heterogeneities can be confounding factors when attempting forensic inference of source water location, and they underscore the necessity of measurements, not just predictions, with which to characterize the isotopic composition of regional tap waters. Published 2013. This article is a U.S. Government work and is in the public domain in the USA.
Dynamic composition of medical support services in the ICU: Platform and algorithm design details.
Hristoskova, Anna; Moeyersoon, Dieter; Van Hoecke, Sofie; Verstichel, Stijn; Decruyenaere, Johan; De Turck, Filip
2010-12-01
The Intensive Care Unit (ICU) is an extremely data-intensive environment where each patient needs to be monitored 24/7. Bedside monitors continuously register vital patient values (such as serum creatinine, systolic blood pressure) which are recorded frequently in the hospital database (e.g. every 2 min in the ICU of the Ghent University Hospital), laboratories generate hundreds of results of blood and urine samples, and nurses measure blood pressure and temperature up to 4 times an hour. The processing of such a large amount of data requires an automated system to support the physicians' daily work. The Intensive Care Service Platform (ICSP) offers the needed support through the development of medical support services for processing and monitoring patients' data. With an increased deployment of these medical support services, reusing existing services as building blocks to create new services offers flexibility to the developer and accelerates the design process. This paper presents a new addition to the ICSP, the Dynamic Composer for Web services. Based on a semantic description of the medical support services, this Composer enables a service to be executed by creating a composition of medical services that provide the needed calculations. The composition is achieved using various algorithms satisfying certain quality of service (QoS) constraints and requirements. In addition to automatic composition, the paper also proposes a recovery mechanism in case of unavailable services. When executing the composition of medical services, unavailable services are dynamically replaced by equivalent services or a new composition achieving the same result. The presented platform and QoS algorithms are put through extensive performance and scalability tests for typical ICU scenarios, in which basic medical services are composed into a complex patient monitoring service. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
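The two mechanisms described above, QoS-constrained selection among equivalent services and dynamic replacement of an unavailable service at execution time, can be sketched as follows; the registry, service names and QoS figures are hypothetical, and the ICSP API itself is not reproduced.

```python
# Sketch of QoS-aware selection with runtime replacement of unavailable
# services. Registry contents and QoS figures are hypothetical.
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    provides: str          # the medical calculation the service implements
    latency_ms: float      # advertised QoS figure
    available: bool = True

    def call(self, patient_data):
        if not self.available:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.provides} computed by {self.name}"

REGISTRY = [
    Service("creatinine-clearance-v1", "creatinine_clearance", 120.0),
    Service("creatinine-clearance-v2", "creatinine_clearance", 60.0),
    Service("sepsis-score", "sepsis_score", 200.0),
]

def select(provides, max_latency_ms):
    """Pick the registered services meeting the QoS constraint, fastest first."""
    candidates = [s for s in REGISTRY
                  if s.provides == provides and s.latency_ms <= max_latency_ms]
    return sorted(candidates, key=lambda s: s.latency_ms)

def invoke(provides, patient_data, max_latency_ms=500.0):
    """Try candidates in QoS order; fall back to an equivalent service if one
    turns out to be unavailable at execution time."""
    for service in select(provides, max_latency_ms):
        try:
            return service.call(patient_data)
        except ConnectionError:
            continue                      # dynamic replacement
    raise RuntimeError(f"no available service provides {provides}")

REGISTRY[1].available = False             # simulate an outage of the best choice
print(invoke("creatinine_clearance", {"serum_creatinine": 1.1}))
```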
ERIC Educational Resources Information Center
Klemperer, Katharina; And Others
1989-01-01
Each of three articles describes an academic library's online catalog that includes locally created databases. Topics covered include database and software selection; systems design and development; database producer negotiations; problems encountered during implementation; database loading; training and documentation; and future plans. (CLB)
A storage scheme for the real-time database supporting the on-line commitment
NASA Astrophysics Data System (ADS)
Dai, Hong-bin; Jing, Yu-jian; Wang, Hui
2013-07-01
Modern SCADA (Supervisory Control and Data Acquisition) systems have been applied to many aspects of everyday life. As time goes on, the requirements of the applications of these systems change, so the data structure of the real-time database, which is the core of a SCADA system, often needs modification. As a result, a commitment, consisting of a sequence of configuration operations that modify the data structure of the real-time database, is performed from time to time. Although it is simple to perform an off-line commitment by first stopping and then restarting the system, during which all the data in the real-time database are reconstructed, it is much preferred, and in some cases even necessary, to perform the commitment on-line, during which the real-time database still provides real-time service and the system continues working normally. In this paper, a storage scheme for the data in the real-time database is proposed; it helps the real-time database support on-line commitment, during which real-time service remains available.
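One possible storage layout with this property can be sketched as follows (an illustrative scheme, not necessarily the one proposed in the paper): each record carries the version of the structure it was written under, so the database keeps serving reads and writes while a new structure is committed and records are migrated lazily.

```python
# Illustrative record layout that tolerates an on-line commitment: every
# record carries the structure version it was written under, so readers keep
# working while a new structure is committed. Not the paper's actual scheme.
import threading

class RealTimeDB:
    def __init__(self, fields):
        self.lock = threading.RLock()
        self.versions = {1: list(fields)}   # structure version -> field list
        self.current = 1
        self.records = {}                   # tag -> (version, values dict)

    def write(self, tag, values):
        with self.lock:
            self.records[tag] = (self.current, dict(values))

    def read(self, tag):
        with self.lock:
            version, values = self.records[tag]
            # expose the record through the *current* structure, defaulting
            # fields that did not exist when the record was written
            return {f: values.get(f) for f in self.versions[self.current]}

    def commit_structure(self, new_fields):
        """On-line commitment: register the new structure and switch over
        without stopping reads or writes; old records stay readable."""
        with self.lock:
            self.current += 1
            self.versions[self.current] = list(new_fields)

db = RealTimeDB(["value", "timestamp"])
db.write("pump1.pressure", {"value": 3.2, "timestamp": 1700000000})
db.commit_structure(["value", "timestamp", "quality"])   # add a field on-line
print(db.read("pump1.pressure"))   # -> 'quality' defaults to None, no restart
```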
ERIC Educational Resources Information Center
Freeman, Carla; And Others
In order to understand how the database software or online database functioned in the overall curricula, the use of database management (DBMs) systems was studied at eight elementary and middle schools through classroom observation and interviews with teachers and administrators, librarians, and students. Three overall areas were addressed:…
A User's Applications of Imaging Techniques: The University of Maryland Historic Textile Database.
ERIC Educational Resources Information Center
Anderson, Clarita S.
1991-01-01
Describes the incorporation of textile images into the University of Maryland Historic Textile Database by a computer user rather than a computer expert. Selection of a database management system is discussed, and PICTUREPOWER, a system that integrates photographic quality images with text and numeric information in databases, is described. (three…
Shah, Rahul K; Stey, Anne M; Jatana, Kris R; Rangel, Shawn J; Boss, Emily F
2014-11-01
Despite increased emphasis on measuring safety outcomes and quality indicators for surgical care, little is known regarding which operative procedures should be prioritized for quality-improvement initiatives in pediatric otolaryngology. To describe the 30-day adverse event rates and relative contributions to morbidity for procedures in pediatric otolaryngology surgery using data from the American College of Surgeons' National Surgical Quality Improvement Program Pediatric database (ACS-NSQIP-P). Analysis of records contained in the ACS-NSQIP-P 2011-2012 clinical database. The ACS-NSQIP-P is a nationwide risk-adjusted, clinical outcomes-based program aimed at measuring and improving pediatric surgical care. Fifty hospitals participated in the 2011-2012 ACS-NSQIP-P program. Medical records of patients who underwent tracked otolaryngologic procedures were accrued in the ACS-NSQIP-P database. These were inclusive of specific otolaryngologic surgical procedures and do not represent the entire spectrum of pediatric otolaryngology surgical procedures. Individual 30-day adverse events, composite morbidity, composite serious adverse events, and composite hospital-acquired infections were compiled. Clinically related procedure groups were used to broadly evaluate outcomes. Procedures and groups were evaluated according to their relative contribution to otolaryngologic morbidity and their incidence of major complications. A total of 8361 patients underwent 1 of 40 selected otolaryngology procedures; 90% were elective; 76% were performed on an outpatient or ambulatory basis; and 46% were American Society of Anesthesiologists (ASA) class 2 cases. Individual 30-day adverse event rates were highest for return to the operating room (4%), surgical site infection (2%), pneumonia (1%), sepsis (1%), and reintubation (1%). The highest rates of composite morbidity were seen for tracheostomy in patients younger than 2 years (23%), airway reconstruction (19%), and tympanoplasty with mastoidectomy (2%). Airway reconstruction procedures had the highest rates of composite serious adverse events (16%), followed by tracheostomy (13%) and abscess drainage (5%). Tracheostomy (31%) and airway reconstruction (16%) made the largest relative contributions to composite morbidity rate of the procedures studied. Tracheostomy in patients younger than 2 years had the highest composite hospital-acquired infection rate (14%), followed by airway reconstruction procedures (11%) and tympanoplasty with mastoidectomy (2%). While the overall rate of major postoperative morbidity in pediatric otolaryngology is low, areas for targeted quality-improvement interventions include tracheostomy, airway reconstruction, mastoidectomy, and abscess drainage. Measurement of outcomes specific to otolaryngologic procedures will be necessary to further identify and measure the impact of quality-improvement initiatives in pediatric otolaryngology.
Manja, Veena; AlBashir, Siwar; Guyatt, Gordon
2017-02-01
Composite end points are frequently used in reports of clinical trials. One rationale for the use of composite end points is to account for competing risks. In the presence of competing risks, the event rate of a specific event depends on the rates of other competing events. One proposed solution is to include all important competing events in one composite end point. Clinical trialists require guidance regarding when this approach is appropriate. To identify publications describing criteria for use of composite end points for competing risk and to offer guidance regarding when a composite end point is appropriate on the basis of competing risks. We searched MEDLINE, CINAHL, EMBASE, The Cochrane's Central & Systematic Review databases including the Health Technology Assessment database, and the Cochrane's Methodology register from inception to April 2015, and candidate textbooks, to identify all articles providing guidance on this issue. Eligible publications explicitly addressed the issue of a composite outcome to address competing risks. Two reviewers independently screened the titles and abstracts for full-text review; independently reviewed full-text publications; and abstracted specific criteria authors offered for use of composite end points to address competing risks. Of 63,645 titles and abstracts, 166 proved potentially relevant of which 43 publications were included in the final review. Most publications note competing risks as a reason for using composite end points without further elaboration. None of the articles or textbook chapters provide specific criteria for use of composite end points for competing risk. Some advocate using composite end points to avoid bias due to competing risks and others suggest that composite end points seldom or never be used for this purpose. We recommend using composite end points for competing risks only if the competing risk is plausible and if it occurs with sufficiently high frequency to influence the interpretation of the effect of intervention on the end point of interest. These criteria will seldom be met. Review of heart failure trials published in the New England Journal of Medicine revealed that many of them use the composite end point of death or hospitalization; none of the trials, however, satisfied our criteria. The existing literature fails to provide clear guidance regarding use of composite end point for competing risks. We recommend using composite end points for competing risks only if the competing risk is plausible and if it occurs sufficiently often. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Sloma, Tanya Noel
When representing the behavior of commercial spent nuclear fuel (SNF), credit is sought for the reduced reactivity associated with the net depletion of fissile isotopes and the creation of neutron-absorbing isotopes, a process that begins when a commercial nuclear reactor is first operated at power. Burnup credit accounts for the reduced reactivity potential of a fuel assembly and varies with the fuel burnup, cooling time, and the initial enrichment of fissile material in the fuel. With regard to long-term SNF disposal and transportation, tremendous benefits, such as increased capacity, flexibility of design and system operations, and reduced overall costs, provide an incentive to seek burnup credit for criticality safety evaluations. The Nuclear Regulatory Commission issued Interim Staff Guidance 8, Revision 2 in 2002, endorsing burnup credit of actinide composition changes only; credit due to actinides encompasses approximately 30% of exiting pressurized water reactor SNF inventory and could potentially be increased to 90% if fission product credit were accepted. However, one significant issue for utilizing full burnup credit, compensating for actinide and fission product composition changes, is establishing a set of depletion parameters that produce an adequately conservative representation of the fuel's isotopic inventory. Depletion parameters can have a significant effect on the isotopic inventory of the fuel, and thus the residual reactivity. This research seeks to quantify the reactivity impact on a system from dominant depletion parameters (i.e., fuel temperature, moderator density, burnable poison rod, burnable poison rod history, and soluble boron concentration). Bounding depletion parameters were developed by statistical evaluation of a database containing reactor operating histories. The database was generated from summary reports of commercial reactor criticality data. Through depletion calculations, utilizing the SCALE 6 code package, several light water reactor assembly designs and in-core locations are analyzed in establishing a combination of depletion parameters that conservatively represent the fuel's isotopic inventory as an initiative to take credit for fuel burnup in criticality safety evaluations for transportation and storage of SNF.
SORTEZ: a relational translator for NCBI's ASN.1 database.
Hart, K W; Searls, D B; Overton, G C
1994-07-01
The National Center for Biotechnology Information (NCBI) has created a database collection that includes several protein and nucleic acid sequence databases, a biosequence-specific subset of MEDLINE, as well as value-added information such as links between similar sequences. Information in the NCBI database is modeled in Abstract Syntax Notation 1 (ASN.1), an Open Systems Interconnection protocol designed for exchanging structured data between software applications rather than as a data model for database systems. While the NCBI database is distributed with an easy-to-use information retrieval system, ENTREZ, the ASN.1 data model currently lacks an ad hoc query language for general-purpose data access. For that reason, we have developed a software package, SORTEZ, that transforms the ASN.1 database (or other databases with nested data structures) to a relational data model and subsequently to a relational database management system (Sybase) where information can be accessed through the relational query language, SQL. Because the need to transform data from one data model and schema to another arises naturally in several important contexts, including efficient execution of specific applications, access to multiple databases and adaptation to database evolution, this work also serves as a practical study of the issues involved in the various stages of database transformation. We show that transformation from the ASN.1 data model to a relational data model can be largely automated, but that schema transformation and data conversion require considerable domain expertise and would greatly benefit from additional support tools.
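The core of such a transformation, flattening nested records into parent and child tables linked by foreign keys so that ad hoc SQL becomes possible, can be sketched as below; the nested record merely stands in for an ASN.1 value, and all table, column and field names are invented for the example.

```python
# Sketch of the core nested-to-relational transformation: a nested record
# (standing in for an ASN.1 value) is flattened into parent/child tables
# linked by a foreign key, after which ad hoc SQL queries become possible.
import sqlite3

nested = {
    "accession": "EX000001",
    "title": "Example sequence entry",
    "citations": [
        {"author": "Author A", "year": 1994},
        {"author": "Author B", "year": 1994},
    ],
}

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE entry(id INTEGER PRIMARY KEY, accession TEXT, title TEXT);
    CREATE TABLE citation(id INTEGER PRIMARY KEY, entry_id INTEGER
                          REFERENCES entry(id), author TEXT, year INTEGER);
""")

def load(record):
    cur = conn.execute("INSERT INTO entry(accession, title) VALUES (?, ?)",
                       (record["accession"], record["title"]))
    entry_id = cur.lastrowid
    for c in record["citations"]:           # nested repeated field -> child rows
        conn.execute("INSERT INTO citation(entry_id, author, year) VALUES (?,?,?)",
                     (entry_id, c["author"], c["year"]))
    conn.commit()

load(nested)
print(conn.execute("""SELECT e.accession, c.author
                      FROM entry e JOIN citation c ON c.entry_id = e.id
                      WHERE c.year = 1994""").fetchall())
```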
NASA Astrophysics Data System (ADS)
Neubauer, Thomas A.; Harzhauser, Mathias; Mandic, Oleg; Kroh, Andreas
2013-04-01
Globally, about 4000 extant species of freshwater gastropods have been described. In contrast, only 225 species are listed by MollBase2012 for North and Central Europe. Many of these are rare species limited to certain springs, and in fact the typical diversity of gastropods in lakes of North and Central Europe is much lower; the high number is boosted by several highly speciose endemic radiations in long-lived ancient lakes, which are hotspots of biodiversity. These long-lived ancient lakes provide key examples for understanding evolutionary processes and therefore are intensively studied. During the Neogene, Europe's geodynamic history gave rise to several such long-lived lakes with conspicuous endemic radiations. However, these lacustrine systems are rare today, as they were in the past, compared with the enormous numbers of "normal" lakes. Most extant European lakes are mainly results of the Ice Ages and, owing to their geologically temporary nature, are largely confined to the Pleistocene-Holocene. Deposits of streams, springs, and groundwater, which today are inhabited by species-rich gastropod assemblages, are also rarely preserved. Thus, the pre-Quaternary lacustrine record is biased towards long-lived systems. Apart from a few general overviews, precise studies on the γ-diversities of the post-Oligocene European lake systems and on the shifting biodiversity in European freshwater systems through space and time are entirely missing. Even for the modern faunas, literature on large-scale freshwater gastropod diversity in extant lakes is scarce and lacks a statistical approach. Building upon a large body of existing literature, a new project will provide the first detailed assessment of the composition of European freshwater gastropods during the Neogene and Quaternary at species, genus and family levels, with emphasis on lake faunas. The γ-diversity of several hundred modern and fossil European lakes will be evaluated. Data will be made permanently available to the public via the FreshGEN database (Freshwater Gastropods of the European Neogene). The most important topics to be tackled with these data are to search for factors that explain the γ-diversities through time and to look for geographic gradients in species richness and/or faunal composition. Diversity data and inter-lake comparisons will allow estimating endemism rates and quantitatively defining biodiversity hotspots in present and past lakes. Shell sizes of all taxa will be evaluated to search for general patterns and to define phases of conspicuous "gigantism". The well-resolved climate history of Europe during the last 23 million years will provide a frame for linking species- and supraspecific compositions with climatic trends and events. Ideally, the project will shed light on the origin of modern lake faunas through close cooperation between zoologists and paleontologists. A major aim is to map and define a statistics-based pan-European biogeography and palaeobiogeography of Neogene to Quaternary freshwater systems. Once established, this database will be open for geographic and/or stratigraphic expansion.
Computer Security Products Technology Overview
1988-10-01
The products this overview addresses fall into the areas of multi-user hosts, database management systems (DBMS), workstations, networks, and guards and gateways; individual mechanisms such as a password scheme, a file protection mechanism, or a secure database management system each provide only a portion of the required protection.
Building a generalized distributed system model
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi
1991-01-01
A number of topics related to building a generalized distributed system model are discussed. The effects of distributed database modeling on evaluation of transaction rollbacks, the measurement of effects of distributed database models on transaction availability measures, and a performance analysis of static locking in replicated distributed database systems are covered.
Performance analysis of different databases in a new internet mapping system
NASA Astrophysics Data System (ADS)
Yao, Xing; Su, Wei; Gao, Shuai
2017-03-01
In the mapping system of the New Internet, massive numbers of mapping entries between AID and RID need to be stored, added, updated, and deleted. To better handle large numbers of mapping entry updates and query requests, the mapping system of the New Internet must use a high-performance database. In this paper, we focus on the performance of three typical databases, Redis, SQLite, and MySQL, and the results show that mapping systems based on different databases can adapt to different needs according to the actual situation.
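As a rough illustration of how such a comparison can be set up, the sketch below times bulk inserts and point lookups of AID-to-RID mappings against SQLite; the table layout and entry counts are invented for illustration, and the same loop could be pointed at Redis or MySQL client libraries for the other back ends.

```python
import sqlite3, time

# Hypothetical AID -> RID mapping entries; sizes chosen only for illustration.
entries = [(f"aid-{i}", f"rid-{i % 1000}") for i in range(100_000)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mapping (aid TEXT PRIMARY KEY, rid TEXT)")

t0 = time.perf_counter()
with conn:                                   # one transaction for the bulk load
    conn.executemany("INSERT INTO mapping VALUES (?, ?)", entries)
insert_s = time.perf_counter() - t0

t0 = time.perf_counter()
for i in range(0, 100_000, 97):              # sampled point lookups
    conn.execute("SELECT rid FROM mapping WHERE aid = ?", (f"aid-{i}",)).fetchone()
lookup_s = time.perf_counter() - t0

print(f"bulk insert: {insert_s:.3f}s, sampled lookups: {lookup_s:.3f}s")
```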
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoon, Su-Jong; Rabiti, Cristian; Sackett, John
2014-08-01
1. Objectives. To produce a validation database from the recorded signals, it will also be necessary to identify the documents needed to reconstruct the status of the reactor at the time the recordings began. This comprises the core loading specification (assembly type, location and burn-up); along with these data, the assembly drawings and the core drawings will be identified. The first task of the project will be to identify the location of the sensors with respect to the reactor plant layout, and the physical quantities recorded by the Experimental Breeder Reactor-II (EBR-II) data acquisition system. This first task will guide and prioritize the selection of drawings needed to numerically reproduce those signals. 1.1 Scope and Deliverables. The deliverables of this project are the list of sensors in the EBR-II system, the identification of the storage location of those sensors, and the identification of the core isotopic composition at the moment system recording began. Information on the sensors in the EBR-II reactor system was summarized from the EBR-II system design descriptions listed in Section 1.2.
A Multi-Purpose Data Dissemination Infrastructure for the Marine-Earth Observations
NASA Astrophysics Data System (ADS)
Hanafusa, Y.; Saito, H.; Kayo, M.; Suzuki, H.
2015-12-01
To open the data from a variety of observations, the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) has developed a multi-purpose data dissemination infrastructure. Although many observations have been made in the earth sciences, not all of the data are fully open. We think data centers may provide researchers with a universal data dissemination service which can handle various kinds of observation data with little effort. For this purpose the JAMSTEC Data Management Office has developed the "Information Catalog Infrastructure System (Catalog System)". This is a kind of catalog management system which can create, renew and delete catalogs (= databases) and has the following features: the Catalog System does not depend on data types or the granularity of data records; by registering a new metadata schema to the system, a new database can be created on the same system without system modification; as web pages are defined by cascading style sheets, each database can have its own look and feel and operability; the Catalog System provides databases with basic search tools (search by text, selection from a category tree, and selection from a time-line chart); and, for domestic users, it creates the Japanese and English pages at the same time and has a dictionary to control terminology and proper nouns. As of August 2015 JAMSTEC operates 7 databases on the Catalog System. We expect to transfer existing databases to this system, or to create new databases on it. In comparison with a dedicated database developed for a specific dataset, the Catalog System is suitable for the dissemination of small datasets with minimum cost. Metadata held in the catalogs may be transferred to other metadata schemas for exchange with global databases or portals. Examples: JAMSTEC Data Catalog: http://www.godac.jamstec.go.jp/catalog/data_catalog/metadataList?lang=en ; JAMSTEC Document Catalog: http://www.godac.jamstec.go.jp/catalog/doc_catalog/metadataList?lang=en&tab=category ; Research Information and Data Access Site of TEAMS: http://www.i-teams.jp/catalog/rias/metadataList?lang=en&tab=list
Forsell, M; Häggström, M; Johansson, O; Sjögren, P
2008-11-08
To develop a personal digital assistant (PDA) application for oral health assessment fieldwork, including back-office and database systems (MobilDent). System design, construction and implementation of PDA, back-office and database systems. System requirements for MobilDent were collected, analysed and translated into system functions. User interfaces were implemented and the system architecture was outlined. MobilDent was based on a platform with .NET (Microsoft) components, using an SQL Server 2005 (Microsoft) database for data storage with the Windows Mobile (Microsoft) operating system. The PDA devices were Dell Axim. System functions and user interfaces were specified for MobilDent. User interfaces for the PDA, back-office and database systems were based on .NET programming. The PDA user interface was based on Windows and suited to a PDA display, whereas the back-office interface was designed for a normal-sized computer screen. A synchronisation module (MS ActiveSync, Microsoft) was used to enable download of field data from the PDA to the database. MobilDent is a feasible application for oral health assessment fieldwork, and the oral health assessment database may prove a valuable source for care planning, educational and research purposes. Further development of the MobilDent system will include wireless connectivity with download-on-demand technology.
Surviving the Glut: The Management of Event Streams in Cyberphysical Systems
NASA Astrophysics Data System (ADS)
Buchmann, Alejandro
Alejandro Buchmann is Professor in the Department of Computer Science, Technische Universität Darmstadt, where he heads the Databases and Distributed Systems Group. He received his MS (1977) and PhD (1980) from the University of Texas at Austin. He was an Assistant/Associate Professor at the Institute for Applied Mathematics and Systems IIMAS/UNAM in Mexico, doing research on databases for CAD, geographic information systems, and object-oriented databases. At Computer Corporation of America (later Xerox Advanced Information Systems) in Cambridge, Mass., he worked in the areas of active databases and real-time databases, and at GTE Laboratories, Waltham, in the areas of distributed object systems and the integration of heterogeneous legacy systems. In 1991 he returned to academia and joined T.U. Darmstadt. His current research interests are at the intersection of middleware, databases, event-based distributed systems, ubiquitous computing, and very large distributed systems (P2P, WSN). Much of the current research is concerned with guaranteeing quality-of-service and reliability properties in these systems, for example scalability, performance, transactional behaviour, consistency, and end-to-end security. Many research projects involve collaboration with industry and cover a broad spectrum of application domains. Further information can be found at http://www.dvs.tu-darmstadt.de
Tranchard, Pauline; Samyn, Fabienne; Duquesne, Sophie; Estèbe, Bruno; Bourbigot, Serge
2017-05-04
Thermophysical properties of a carbon-reinforced epoxy composite laminate (T700/M21 composite for aircraft structures) were evaluated using different innovative characterisation methods. Thermogravimetric Analysis (TGA), Simultaneous Thermal Analysis (STA), Laser Flash Analysis (LFA), and Fourier Transform Infrared (FTIR) analysis were used to measure the thermal decomposition, the specific heat capacity, the anisotropic thermal conductivity of the composite, the heats of decomposition and the specific heat capacity of the released gases. These measurements provide the input data needed to feed a three-dimensional (3D) model of the temperature profile and the mass loss obtained during well-defined fire scenarios (the model is presented in Part II of this paper). The measurements were optimised to obtain accurate data, which also make it possible to create a public database on an aeronautical carbon fibre/epoxy composite for fire safety engineering.
NASA Technical Reports Server (NTRS)
Schrader, Christian M.; Rickman, Doug; Stoeser, Douglas; Wentworth, Susan; McKay, Dave S.; Botha, Pieter; Butcher, Alan R.; Horsch, Hanna E.; Benedictus, Aukje; Gottlieb, Paul
2008-01-01
This slide presentation reviews the work to analyze the lunar highland regolith samples that came from the Apollo 16 core sample 64001/2 and simulants of lunar regolith, and build a comparative database. The work is part of a larger effort to compile an internally consistent database on lunar regolith (Apollo Samples) and lunar regolith simulants. This is in support of a future lunar outpost. The work is to characterize existing lunar regolith and simulants in terms of particle type, particle size distribution, particle shape distribution, bulk density, and other compositional characteristics, and to evaluate the regolith simulants by the same properties in comparison to the Apollo sample lunar regolith.
Comprehensive T-Matrix Reference Database: A 2012 - 2013 Update
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Videen, Gorden; Khlebtsov, Nikolai G.; Wriedt, Thomas
2013-01-01
The T-matrix method is one of the most versatile, efficient, and accurate theoretical techniques widely used for numerically exact computer calculations of electromagnetic scattering by single and composite particles, discrete random media, and particles imbedded in complex environments. This paper presents the fifth update to the comprehensive database of peer-reviewed T-matrix publications initiated by us in 2004 and includes relevant publications that have appeared since 2012. It also lists several earlier publications not incorporated in the original database, including Peter Waterman's reports from the 1960s illustrating the history of the T-matrix approach and demonstrating that John Fikioris and Peter Waterman were the true pioneers of the multi-sphere method otherwise known as the generalized Lorenz-Mie theory.
A Support Database System for Integrated System Health Management (ISHM)
NASA Technical Reports Server (NTRS)
Schmalzel, John; Figueroa, Jorge F.; Turowski, Mark; Morris, John
2007-01-01
The development, deployment, operation and maintenance of Integrated Systems Health Management (ISHM) applications require the storage and processing of tremendous amounts of low-level data. This data must be shared in a secure and cost-effective manner between developers, and processed within several heterogeneous architectures. Modern database technology allows this data to be organized efficiently, while ensuring the integrity and security of the data. The extensibility and interoperability of the current database technologies also allows for the creation of an associated support database system. A support database system provides additional capabilities by building applications on top of the database structure. These applications can then be used to support the various technologies in an ISHM architecture. This presentation and paper propose a detailed structure and application description for a support database system, called the Health Assessment Database System (HADS). The HADS provides a shared context for organizing and distributing data as well as a definition of the applications that provide the required data-driven support to ISHM. This approach provides another powerful tool for ISHM developers, while also enabling novel functionality. This functionality includes: automated firmware updating and deployment, algorithm development assistance and electronic datasheet generation. The architecture for the HADS has been developed as part of the ISHM toolset at Stennis Space Center for rocket engine testing. A detailed implementation has begun for the Methane Thruster Testbed Project (MTTP) in order to assist in developing health assessment and anomaly detection algorithms for ISHM. The structure of this implementation is shown in Figure 1. The database structure consists of three primary components: the system hierarchy model, the historical data archive and the firmware codebase. The system hierarchy model replicates the physical relationships between system elements to provide the logical context for the database. The historical data archive provides a common repository for sensor data that can be shared between developers and applications. The firmware codebase is used by the developer to organize the intelligent element firmware into atomic units which can be assembled into complete firmware for specific elements.
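As a rough sketch of how the three components described above might be laid out relationally, the hypothetical schema below models a system hierarchy table, a historical sensor-data archive, and a firmware codebase table; the table and column names are assumptions for illustration, not the actual HADS design.

```python
import sqlite3

# Hypothetical, minimal layout for the three HADS components described in the
# abstract; table and column names are illustrative only.
schema = """
CREATE TABLE system_element (          -- system hierarchy model
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    parent_id INTEGER REFERENCES system_element(id)
);
CREATE TABLE sensor_reading (          -- historical data archive
    element_id INTEGER REFERENCES system_element(id),
    ts REAL NOT NULL,
    value REAL NOT NULL
);
CREATE TABLE firmware_unit (           -- firmware codebase
    id INTEGER PRIMARY KEY,
    element_id INTEGER REFERENCES system_element(id),
    version TEXT NOT NULL,
    blob BLOB
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(schema)

# Walk the hierarchy with a recursive query to list an element and its children.
conn.execute("INSERT INTO system_element VALUES (1, 'test stand', NULL), (2, 'LOX valve', 1)")
rows = conn.execute("""
    WITH RECURSIVE tree(id, name) AS (
        SELECT id, name FROM system_element WHERE id = 1
        UNION ALL
        SELECT e.id, e.name FROM system_element e JOIN tree t ON e.parent_id = t.id)
    SELECT * FROM tree""").fetchall()
print(rows)
```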
Na, Hyuntae; Lee, Seung-Yub; Üstündag, Ersan; ...
2013-01-01
This paper introduces a recent development and application of a noncommercial artificial neural network (ANN) simulator with a graphical user interface (GUI) to assist in rapid data modeling and analysis in the engineering diffraction field. The real-time network training/simulation monitoring tool has been customized for the study of the constitutive behavior of engineering materials, and it has improved the data mining and forecasting capabilities of neural networks. This software has been used to train and simulate the finite element modeling (FEM) data for a fiber composite system, both forward and inverse. The forward neural network simulation precisely reproduces FEM results several orders of magnitude faster than the slow original FEM. The inverse simulation is more challenging; yet, material parameters can be meaningfully determined with the aid of parameter sensitivity information. The simulator GUI also reveals that the output node size for material parameters and the input normalization method for strain data are critical training conditions in the inverse network. The successful use of ANN modeling and the simulator GUI has been validated through engineering neutron diffraction experimental data by determining constitutive laws of real fiber composite materials via a mathematically rigorous and physically meaningful parameter search process, once the networks are successfully trained from the FEM database.
Efficiency of polymerization of bulk-fill composite resins: a systematic review.
Reis, André Figueiredo; Vestphal, Mariana; Amaral, Roberto Cesar do; Rodrigues, José Augusto; Roulet, Jean-François; Roscoe, Marina Guimarães
2017-08-28
This systematic review assessed the literature to evaluate the efficiency of polymerization of bulk-fill composite resins at 4 mm restoration depth. The PubMed, Cochrane, Scopus and Web of Science databases were searched with no restrictions on year, publication status, or article language. Selection criteria included studies that evaluated bulk-fill composite resin inserted in a minimum thickness of 4 mm, followed by curing according to the manufacturers' instructions; presented sound statistical data; and included comparison with a control group and/or a reference measurement of quality of polymerization. The evidence level was evaluated by a qualitative scoring system and classified as high, moderate or low. A total of 534 articles were retrieved in the initial search. After the review process, only 10 full-text articles met the inclusion criteria. Most of the included articles (80%) were classified as high evidence level. Among several techniques, microhardness was the method most frequently used by the studies included in this systematic review. Irrespective of the in vitro method performed, bulk-fill RBCs were partially able to fulfill the important requirement of curing properly at 4 mm of cavity depth, as measured by depth of cure and/or degree of conversion. In general, low-viscosity BFCs performed better with regard to polymerization efficiency than high-viscosity BFCs.
Clinical effects of probiotics in cystic fibrosis patients: A systematic review.
Van Biervliet, Stephanie; Declercq, Dimitri; Somerset, Shawn
2017-04-01
Cystic fibrosis (CF) is characterised by a build-up of thick, intransient mucus linings of the digestive and respiratory mucosa, which disrupts digestive system functioning and microbiota composition. In view of the potential for probiotics to enhance microbiota composition in other contexts, this study investigated the current evidence for probiotics as an adjunct to usual therapy for CF. Electronic clinical databases were interrogated for human randomised controlled intervention trials (1985-2015) testing the effects of probiotics on clinical endpoints in CF. From 191 articles identified in the initial searches, six studies met the critical inclusion criteria and were reviewed in detail. These studies varied in size (n = 22 to 61) but were generally small and showed substantial diversity in protocol, specific probiotic species used and range of clinical outcomes measured. Probiotic administration showed beneficial effects on fecal calprotectin levels, pulmonary exacerbation risk, and quality of life indicators. In one study, such changes were associated with variations in gut microbiota composition. Despite encouraging preliminary results, the limited number of small and highly varied studies to date does not justify the addition of probiotics as an adjunct to current CF treatment protocols. Importantly, very minimal adverse effects of probiotics have been reported. Copyright © 2017 European Society for Clinical Nutrition and Metabolism. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Duren, Jeroen K; Koch, Carl; Luo, Alan
The primary limitation of today's lightweight structural alloys is that specific yield strengths (SYS) higher than 200 MPa x cc/g (typical value for titanium alloys) are extremely difficult to achieve. This holds true especially at a cost lower than 5 dollars/kg (typical value for magnesium alloys). Recently, high-entropy alloys (HEA) have shown promising SYS, yet the large composition space of HEA makes screening compositions complex and time-consuming. Over the course of this 2-year project we started from 150 billion compositions and reduced the number of potential low-density (<5 g/cc), low-cost (<5 dollars/kg) high-entropy alloy (LDHEA) candidates that are single-phase, disordered, solid-solution (SPSS) to a few thousand compositions. This was accomplished by means of machine learning to guide design of SPSS LDHEA based on a combination of recursive partitioning, an extensive experimental HEA database compiled from 24 literature sources, and 91 calculated parameters serving as phenomenological selection rules. Machine learning shows an accuracy of 82% in identifying which compositions of a separate, smaller, experimental HEA database are SPSS HEA. Calculation of Phase Diagrams (CALPHAD) shows an accuracy of 71-77% for the alloys supported by the CALPHAD database, where 30% of the compiled HEA database is not supported by CALPHAD. In addition to machine learning and CALPHAD, a third tool was developed to aid the design of SPSS LDHEA: phase diagrams were calculated by constructing the Gibbs-free-energy convex hull based on easily accessible enthalpy and entropy terms. Surprisingly, accuracy was 78%. Pursuing these LDHEA candidates by high-throughput experimental methods resulted in SPSS LDHEA composed of transition metals (e.g. Cr, Mn, Fe, Ni, Cu) alloyed with Al, yet the high concentration of Al necessary to bring the mass density below 5.0 g/cc makes these materials hard and brittle body-centered-cubic (BCC) alloys. A related, yet multi-phase, BCC alloy based on Al-Cr-Fe-Ni shows compressive strain >10% and a specific compressive yield strength of 229 MPa x cc/g, yet does not show ductility in tensile tests due to cleavage. When Cr in Al-Cr-Fe-based 4- and 5-element LDHEA is replaced with Mn, hardness drops 2x. Combined with compression test results, including those on the ternaries Al-Cr-Fe and Al-Mn-Fe, this suggests that Al-Mn-Fe-based LDHEA are still worth pursuing. These initial results only represent one compressive stress-strain curve per composition without any property optimization. As such, reproducibility needs to be followed by optimization to show their full potential. When including Li, Mg, and Zn, a single-phase Li-Mg-Al-Ti-Zn LDHEA has been found with a specific ultimate compressive strength of 289 MPa x cc/g. Al-Ti-Mn-Zn showed a specific ultimate compressive strength of 73 MPa x cc/g. These initial results after hot isostatic pressing (HIP) of the ball-milled powders represent the lower end of what is possible, since no secondary processing (e.g. extrusion) has been performed to optimize strength and ductility. Compositions for multi-phase (e.g. dual-phase) LDHEA were identified largely by automated searches through CALPHAD databases, while screening for large face-centered-cubic (FCC) volume fractions, followed by experimental verification. This resulted in several new alloys. Li-Mg-Al-Mn-Fe and Mg-Mn-Fe-Co ball-milled powders upon HIP show specific ultimate compressive strengths of 198 MPa x cc/g and 45 MPa x cc/g, respectively.
Several malleable quaternary Al-Zn-based alloys have been found upon arc/induction melting, yet with limited specific compressive yield strength (<75 MPa x cc/g). These initial results are all without any optimization for strength and/or ductility. High-throughput experimentation allowed us to triple the existing experimental HEA database as published in the past 10 years in less than 2 years, at a rate 10x higher than previous methods. Furthermore, we showed that high-throughput thin-film combinatorial methods can be used to gain insight into isothermal phase-diagram slices. Although it is straightforward to map hardness as a function of composition for sputtered thin-film compositional gradients by nano-indentation and compare the results to micro-indentation on bulk samples, the simultaneous impact of composition, roughness, film density, and microstructure on hardness requires monitoring all these properties as a function of location on the compositional gradient, including dissecting the impact of these four factors on the hardness map. These additional efforts impact throughput significantly. This work shows that a lot of progress has been made over the years in predicting phase formation to aid the discovery of new alloys, yet that much work remains to be done to predict phases more accurately for LDHEA, whether by CALPHAD or by other means. More importantly, more work needs to be done to predict mechanical properties of novel alloys, such as yield strength and ductility. Furthermore, this work shows that there is a need for the generation of an empirical alloy database covering strategic points in a multi-dimensional composition space to allow for faster and more accurate predictive interpolations to identify the oasis in the desert more quickly. Finally, this work suggests that it is worth pursuing a ductile alloy with a SYS > 300 MPa x cc/g in a mass density range of 6-7 g/cc, since the chances for a single-phase or majority-phase FCC increase significantly. Today's lightweight steels are in this density range.
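The recursive-partitioning step described above can be illustrated with a generic classification-tree sketch; the feature names (mixing entropy, atomic size mismatch, mixing enthalpy), the synthetic data, and the labeling rule are assumptions for illustration, not the project's actual 91-parameter model.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Toy stand-ins for phenomenological selection-rule parameters computed per alloy:
# configurational entropy, atomic size mismatch, and mixing enthalpy (all synthetic).
n = 400
X = np.column_stack([
    rng.uniform(8, 18, n),      # delta_S_mix [J/(mol K)]
    rng.uniform(0, 10, n),      # delta (atomic size mismatch, %)
    rng.uniform(-30, 10, n),    # delta_H_mix [kJ/mol]
])
# Synthetic label: "single-phase solid solution" when mismatch is small and
# entropy is high -- a caricature of published empirical rules, for demo only.
y = ((X[:, 1] < 6.0) & (X[:, 0] > 11.0)).astype(int)

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
clf.fit(X, y)
print("feature importances:", clf.feature_importances_)
```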
Characterizing the genetic structure of a forensic DNA database using a latent variable approach.
Kruijver, Maarten
2016-07-01
Several problems in forensic genetics require a representative model of a forensic DNA database. Obtaining an accurate representation of the offender database can be difficult, since databases typically contain groups of persons with unregistered ethnic origins in unknown proportions. We propose to estimate the allele frequencies of the subpopulations comprising the offender database and their proportions from the database itself using a latent variable approach. We present a model for which parameters can be estimated using the expectation maximization (EM) algorithm. This approach does not rely on relatively small and possibly unrepresentative population surveys, but is driven by the actual genetic composition of the database only. We fit the model to a snapshot of the Dutch offender database (2014), which contains close to 180,000 profiles, and find that three subpopulations suffice to describe a large fraction of the heterogeneity in the database. We demonstrate the utility and reliability of the approach with three applications. First, we use the model to predict the number of false leads obtained in database searches. We assess how well the model predicts the number of false leads obtained in mock searches in the Dutch offender database, both for the case of familial searching for first degree relatives of a donor and searching for contributors to three-person mixtures. Second, we study the degree of partial matching between all pairs of profiles in the Dutch database and compare this to what is predicted using the latent variable approach. Third, we use the model to provide evidence to support that the Dutch practice of estimating match probabilities using the Balding-Nichols formula with a native Dutch reference database and θ=0.03 is conservative. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
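The latent-variable idea, estimating subpopulation allele frequencies and mixing proportions from the database itself, can be sketched with a generic EM loop for a finite mixture over independent loci; the allele counts, number of subpopulations, and variable names below are illustrative assumptions, not the authors' implementation or the Dutch data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "database": n profiles, L loci, each locus recorded as one allele
# index in 0..A-1 (a simplification of diploid STR genotypes, for demo only).
n, L, A, K = 2000, 5, 4, 3
true_f = rng.dirichlet(np.ones(A), size=(K, L))        # allele freqs per subpop/locus
true_pi = np.array([0.5, 0.3, 0.2])
z = rng.choice(K, size=n, p=true_pi)
data = np.stack([[rng.choice(A, p=true_f[z[i], l]) for l in range(L)] for i in range(n)])

# EM for the mixture: estimate pi (proportions) and f (allele frequencies).
pi = np.full(K, 1.0 / K)
f = rng.dirichlet(np.ones(A), size=(K, L))
for _ in range(200):
    # E-step: responsibility of each subpopulation for each profile.
    logp = np.log(pi)[None, :] + sum(np.log(f[:, l, data[:, l]]).T for l in range(L))
    logp -= logp.max(axis=1, keepdims=True)
    resp = np.exp(logp)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update mixing proportions and per-locus allele frequencies.
    pi = resp.mean(axis=0)
    for l in range(L):
        counts = np.array([np.bincount(data[:, l], weights=resp[:, k], minlength=A)
                           for k in range(K)])
        f[:, l, :] = counts / counts.sum(axis=1, keepdims=True)

print("estimated proportions:", np.round(np.sort(pi)[::-1], 3))
```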
NASA Astrophysics Data System (ADS)
Naumann, G.; Barbosa, P.; Garrote, L.; Iglesias, A.; Vogt, J.
2014-05-01
We propose a composite drought vulnerability indicator (DVI) that reflects different aspects of drought vulnerability evaluated at the Pan-African level for four components: the renewable natural capital, the economic capacity, the human and civic resources, and the infrastructure and technology. The selection of variables and weights reflects the assumption that a society with institutional capacity and coordination, as well as with mechanisms for public participation, is less vulnerable to drought; furthermore, we consider that agriculture is only one of the many sectors affected by drought. The quality and accuracy of a composite indicator depend on the theoretical framework, on the data collection and quality, and on how the different components are aggregated. This kind of approach can lead to some degree of scepticism; to overcome this problem, a sensitivity analysis was done in order to measure the degree of uncertainty associated with the construction of the composite indicator. Although the proposed drought vulnerability indicator relies on a number of theoretical assumptions and some degree of subjectivity, the sensitivity analysis showed that it is a robust indicator and hence capable of representing the complex processes that lead to drought vulnerability. According to the DVI computed at the country level, the African countries classified with higher relative vulnerability are Somalia, Burundi, Niger, Ethiopia, Mali and Chad. The analysis of the renewable natural capital component at sub-basin level shows that the basins with high to moderate drought vulnerability can be subdivided into the following geographical regions: the Mediterranean coast of Africa; the Sahel region and the Horn of Africa; the Serengeti and the Eastern Miombo woodlands in eastern Africa; the western part of the Zambezi Basin, the southeastern border of the Congo Basin, and the belt of Fynbos in the Western Cape province of South Africa. The results of the DVI at the country level were compared with drought disaster information from the EM-DAT disaster database. Even if a cause-effect relationship cannot be established between the DVI and the drought disaster database, good agreement is observed between the drought vulnerability maps and the number of persons affected by droughts. These results are expected to contribute to the discussion on how to assess drought vulnerability and hopefully contribute to the development of drought early warning systems in Africa.
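A composite indicator of this kind is typically built by normalising each component to a common scale and aggregating with weights; the sketch below shows that generic construction with made-up component values and equal weights, followed by a crude weight-perturbation check, and is not the authors' actual data, weighting scheme, or sensitivity analysis.

```python
import numpy as np

# Hypothetical component scores for three countries (rows) across the four DVI
# components (columns): renewable natural capital, economic capacity,
# human/civic resources, infrastructure and technology. Values are invented.
raw = np.array([
    [0.20, 0.35, 0.40, 0.30],
    [0.60, 0.55, 0.50, 0.45],
    [0.80, 0.70, 0.65, 0.75],
])
weights = np.array([0.25, 0.25, 0.25, 0.25])   # equal weights, for illustration

# Min-max normalise each component across countries, then aggregate.
norm = (raw - raw.min(axis=0)) / (raw.max(axis=0) - raw.min(axis=0))
dvi = norm @ weights
print("composite DVI:", np.round(dvi, 3))

# Crude sensitivity check: perturb the weights and see how often rankings shift.
rng = np.random.default_rng(0)
rank_changes = 0
for _ in range(1000):
    w = rng.dirichlet(np.ones(4))
    rank_changes += not np.array_equal(np.argsort(norm @ w), np.argsort(dvi))
print("fraction of weight draws changing the ranking:", rank_changes / 1000)
```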
Alternative treatment technology information center computer database system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sullivan, D.
1995-10-01
The Alternative Treatment Technology Information Center (ATTIC) computer database system was developed pursuant to the 1986 Superfund law amendments. It provides up-to-date information on innovative treatment technologies to clean up hazardous waste sites. ATTIC v2.0 provides access to several independent databases as well as a mechanism for retrieving full-text documents of key literature. It can be accessed with a personal computer and modem 24 hours a day, and there are no user fees. ATTIC provides "one-stop shopping" for information on alternative treatment options by accessing several databases: (1) treatment technology database; this contains abstracts from the literature on all types of treatment technologies, including biological, chemical, physical, and thermal methods. The best literature as viewed by experts is highlighted. (2) treatability study database; this provides performance information on technologies to remove contaminants from wastewaters and soils. It is derived from treatability studies. This database is available through ATTIC or separately as a disk that can be mailed to you. (3) underground storage tank database; this presents information on underground storage tank corrective actions, surface spills, emergency response, and remedial actions. (4) oil/chemical spill database; this provides abstracts on treatment and disposal of spilled oil and chemicals. In addition to these separate databases, ATTIC allows immediate access to other disk-based systems such as the Vendor Information System for Innovative Treatment Technologies (VISITT) and the Bioremediation in the Field Search System (BFSS). Users may download these programs to their own PC via a high-speed modem. Also via modem, users are able to download entire documents through the ATTIC system. Currently, about fifty publications are available, including Superfund Innovative Technology Evaluation (SITE) program documents.
Thermal Signature Measurements for Ammonium Nitrate/Fuel Mixtures by Laser Heating.
Nazarian, Ashot; Presser, Cary
2016-01-10
Measurements were carried out to obtain thermal signatures of several ammonium nitrate/fuel (ANF) mixtures, using a laser-heating technique referred to as the laser-driven thermal reactor (LDTR). The mixtures were ammonium nitrate (AN)/kerosene, AN/ethylene glycol, AN/paraffin wax, AN/petroleum jelly, AN/confectioner's sugar, AN/cellulose (tissue paper), nitromethane/cellulose, nitrobenzene/cellulose, AN/cellulose/nitromethane, AN/cellulose/nitrobenzene. These mixtures were also compared with AN/nitromethane and AN/diesel fuel oil, obtained from an earlier investigation. Thermograms for the mixtures, as well as individual constituents, were compared to better understand how the sample thermal signature changes with mixture composition. This is the first step in development of a thermal-signature database, to be used along with other signature databases, to improve identification of energetic substances of unknown composition. The results indicated that each individual thermal signature was associated unambiguously with a particular mixture composition. The signature features of a particular mixture were shaped by the individual constituent signatures. It was also uncovered that the baseline signature was modified after an experiment due to coating of unreacted residue on the substrate surface and a change in the reactor sphere oxide layer. Thus, care was required to pre-oxidize the sphere prior to an experiment. A minimum sample mass (which was dependent on composition) was required to detect the signature characteristics. Increased laser power served to magnify signal strength while preserving the signature features. For the mixtures examined, the thermal response of each ANF mixture was found to be different, which was based on the mixture composition and the thermal behavior of each mixture constituent.
SQL is Dead; Long-live SQL: Relational Database Technology in Science Contexts
NASA Astrophysics Data System (ADS)
Howe, B.; Halperin, D.
2014-12-01
Relational databases are often perceived as a poor fit in science contexts: rigid schemas, poor support for complex analytics, unpredictable performance, and significant maintenance and tuning requirements make databases unattractive in settings characterized by heterogeneous data sources, complex analysis tasks, rapidly changing requirements, and limited IT budgets. In this talk, I'll argue that although the value proposition of typical relational database systems is weak in science, the core ideas that power relational databases have become incredibly prolific in open-source science software and are emerging as a universal abstraction for both big data and small data. In addition, I'll talk about two open-source systems we are building to "jailbreak" the core technology of relational databases and adapt it for use in science. The first is SQLShare, a Database-as-a-Service system supporting collaborative data analysis and exchange by reducing database use to an Upload-Query-Share workflow with no installation, schema design, or configuration required. The second is Myria, a service that supports much larger-scale data and complex analytics, and supports multiple back-end systems. Finally, I'll describe some of the ways our collaborators in oceanography, astronomy, biology, fisheries science, and more are using these systems to replace script-based workflows for reasons of performance, flexibility, and convenience.
Search extension transforms Wiki into a relational system: a case for flavonoid metabolite database.
Arita, Masanori; Suwa, Kazuhiro
2008-09-17
In computer science, database systems are based on the relational model founded by Edgar Codd in 1970. On the other hand, in the area of biology the word 'database' often refers to loosely formatted, very large text files. Although such bio-databases may describe conflicts or ambiguities (e.g. a protein pair reported both to interact and not to interact, or unknown parameters) in a positive sense, the flexibility of the data format sacrifices a systematic query mechanism equivalent to the widely used SQL. To overcome this disadvantage, we propose embeddable string-search commands on a Wiki-based system and designed a half-formatted database. As proof of principle, a database of flavonoids with 6902 molecular structures from over 1687 plant species was implemented on MediaWiki, the background system of Wikipedia. Registered users can describe any information in an arbitrary format. The structured part is subject to text-string searches that realize relational operations. The system was written in the PHP language as an extension of MediaWiki. All modifications are open source and publicly available. This scheme benefits from both the free-formatted Wiki style and the concise and structured relational-database style. MediaWiki supports multi-user environments for document management, and the cost of database maintenance is alleviated.
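The idea of realising relational operations with text-string searches over half-formatted wiki pages can be sketched generically: the pattern below selects and projects records stored as key=value lines inside page text. The page format, field names, and example data are illustrative assumptions, not the actual MediaWiki extension commands.

```python
import re

# Hypothetical half-formatted wiki pages: free text plus structured key=value lines.
pages = {
    "Quercetin": "A common flavonol.\n|formula=C15H10O7\n|species=Allium cepa\n",
    "Naringenin": "A flavanone.\n|formula=C15H12O5\n|species=Citrus paradisi\n",
}

def select(pages, field, pattern):
    """Selection: return page names whose structured field matches a regex."""
    hits = []
    for name, text in pages.items():
        m = re.search(rf"^\|{field}=(.+)$", text, flags=re.MULTILINE)
        if m and re.search(pattern, m.group(1)):
            hits.append(name)
    return hits

def project(pages, name, fields):
    """Projection: extract selected structured fields from one page."""
    text = pages[name]
    return {f: (re.search(rf"^\|{f}=(.+)$", text, flags=re.MULTILINE) or [None, None])[1]
            for f in fields}

print(select(pages, "species", "Citrus"))          # ['Naringenin']
print(project(pages, "Quercetin", ["formula"]))    # {'formula': 'C15H10O7'}
```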
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Yubin; Shankar, Mallikarjun; Park, Byung H.
Designing a database system for both efficient data management and data services has been one of the enduring challenges in the healthcare domain. In many healthcare systems, data services and data management are often viewed as two orthogonal tasks; data services refer to retrieval and analytic queries such as search, joins, statistical data extraction, and simple data mining algorithms, while data management refers to building error-tolerant and non-redundant database systems. The gap between service and management has resulted in rigid database systems and schemas that do not support effective analytics. We compose a rich graph structure from an abstracted healthcare RDBMS to illustrate how we can fill this gap in practice. We show how a healthcare graph can be automatically constructed from a normalized relational database using the proposed 3NF Equivalent Graph (3EG) transformation. We discuss a set of real-world graph queries such as finding self-referrals, shared providers, and collaborative filtering, and evaluate their performance over a relational database and its 3EG-transformed graph. Experimental results show that the graph representation serves as multiple de-normalized tables, thus reducing complexity in a database and enhancing data accessibility for users. Based on this finding, we propose an ensemble framework of databases for healthcare applications.
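A generic relational-to-graph transformation of the kind described (rows become nodes, foreign keys become edges) is sketched below; the tables, columns, and the use of networkx are assumptions for illustration and not the paper's 3EG algorithm.

```python
import networkx as nx

# Hypothetical normalized tables: rows become nodes, foreign keys become edges.
patients = [{"patient_id": 1, "name": "P1"}, {"patient_id": 2, "name": "P2"}]
providers = [{"provider_id": 10, "name": "Dr. A"}]
visits = [  # each visit references one patient and one provider
    {"visit_id": 100, "patient_id": 1, "provider_id": 10},
    {"visit_id": 101, "patient_id": 2, "provider_id": 10},
]

g = nx.Graph()
for row in patients:
    g.add_node(("patient", row["patient_id"]), **row)
for row in providers:
    g.add_node(("provider", row["provider_id"]), **row)
for row in visits:
    v = ("visit", row["visit_id"])
    g.add_node(v, **row)
    g.add_edge(v, ("patient", row["patient_id"]))    # foreign key -> edge
    g.add_edge(v, ("provider", row["provider_id"]))  # foreign key -> edge

# "Shared provider" style query: patients reachable from one provider via visits.
shared = [n for n in nx.node_connected_component(g, ("provider", 10)) if n[0] == "patient"]
print(shared)
```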
Design of special purpose database for credit cooperation bank business processing network system
NASA Astrophysics Data System (ADS)
Yu, Yongling; Zong, Sisheng; Shi, Jinfa
2011-12-01
With the popularization of e-finance in cities, its construction is now extending to the vast rural market and developing quickly in depth. Developing a business processing network system suitable for rural credit cooperative banks can make business processing convenient and has good application prospects. In this paper, we analyse the necessity of adopting a special-purpose distributed database in a credit cooperation bank system, give the corresponding distributed database system structure, and design the special-purpose database and interface technology. The application in Tongbai Rural Credit Cooperatives has shown that the system has better performance and higher efficiency.
Construction of a Linux based chemical and biological information system.
Molnár, László; Vágó, István; Fehér, András
2003-01-01
A chemical and biological information system with a Web-based, easy-to-use interface and corresponding databases has been developed. The constructed system incorporates all chemical, numerical and textual data related to the chemical compounds, including numerical biological screening results. Users can search the database by traditional textual/numerical and/or substructure or similarity queries through the web interface. To build our chemical database management system, we utilized existing IT components such as ORACLE or Tripos SYBYL for database management and the Zope application server for the web interface. We chose Linux as the main platform; however, almost every component can be used under various operating systems.
NASA Astrophysics Data System (ADS)
Meric de Bellefon, G.; van Duysen, J. C.; Sridharan, K.
2017-08-01
The stacking fault energy (SFE) plays an important role in deformation behavior and radiation damage of FCC metals and alloys such as austenitic stainless steels. In the present communication, existing expressions to calculate SFE in those steels from chemical composition are reviewed and an improved multivariate linear regression with random intercepts is used to analyze a new database of 144 SFE measurements collected from 30 literature references. It is shown that the use of random intercepts can account for experimental biases in these literature references. A new expression to predict SFE from austenitic stainless steel compositions is proposed.
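A random-intercept regression of this kind can be sketched with a generic mixed-effects model in which the literature reference is the grouping variable; the element columns, coefficients, and synthetic data below are illustrative assumptions, not the fit or the database reported in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic SFE "measurements": composition in wt.% plus a per-reference bias.
n_refs, per_ref = 10, 12
refs = np.repeat(np.arange(n_refs), per_ref)
ni = rng.uniform(8, 14, refs.size)
cr = rng.uniform(16, 20, refs.size)
mn = rng.uniform(0, 2, refs.size)
bias = rng.normal(0, 4, n_refs)[refs]                  # random intercept per reference
sfe = 20 + 2.0 * ni - 1.0 * cr + 3.0 * mn + bias + rng.normal(0, 2, refs.size)

df = pd.DataFrame({"sfe": sfe, "Ni": ni, "Cr": cr, "Mn": mn, "ref": refs})

# Mixed-effects linear model: fixed composition effects, random intercept by reference.
model = smf.mixedlm("sfe ~ Ni + Cr + Mn", df, groups=df["ref"])
result = model.fit()
print(result.params[["Ni", "Cr", "Mn"]])
```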
Emissivity Results on High Temperature Coatings for Refractory Composite Materials
NASA Technical Reports Server (NTRS)
Ohlhorst, Craig W.; Vaughn, Wallace L.; Daryabeigi, Kamran; Lewis, Ronald K.; Rodriguez, Alvaro C.; Milhoan, James D.; Koenig, John R.
2007-01-01
The directional emissivity of various refractory composite materials considered for application for reentry and hypersonic vehicles was investigated. The directional emissivity was measured at elevated temperatures of up to 3400 F using a directional spectral radiometric technique during arc-jet test runs. A laboratory-based relative total radiance method was also used to measure total normal emissivity of some of the refractory composite materials. The data from the two techniques are compared. The paper will also compare the historical database of Reinforced Carbon-Carbon emissivity measurements with emissivity values generated recently on the material using the two techniques described in the paper.
Cuevas, Francisco Julián; Moreno-Rojas, José Manuel; Ruiz-Moreno, María José
2017-04-15
A targeted approach using HS-SPME-GC-MS was performed to compare flavour compounds of 'Navelina' and 'Salustiana' orange cultivars from organic and conventional management systems. Both varieties of conventional oranges showed higher content of ester compounds. On the other hand, higher content of some compounds related with the geranyl-diphosphate pathway (neryl and geranyl acetates) and some terpenoids were found in the organic samples. Furthermore, the partial least square discriminant analysis (PLS-DA) achieved an effective classification for oranges based on the farming system using their volatile profiles (90 and 100% correct classification). To our knowledge, it is the first time that a comparative study dealing with farming systems and orange aroma profile has been performed. These new insights, taking into account local databases, cultivars and advanced analytical tools, highlight the potential of volatile composition for organic orange discrimination. Copyright © 2016 Elsevier Ltd. All rights reserved.
Monitoring SLAC High Performance UNIX Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lettsome, Annette K.; /Bethune-Cookman Coll. /SLAC
2005-12-15
Knowledge of the effectiveness and efficiency of computers is important when working with high-performance systems. Monitoring such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high-performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process followed in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.
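A script-driven setup of the kind described can be sketched as a small schema plus an insert loop; the table layout, metric names, connection parameters, and use of the mysql-connector-python client are assumptions for illustration, not SLAC's actual implementation.

```python
# Minimal sketch of a script-driven MySQL store for Ganglia-style host metrics.
# Connection parameters, schema, and metric names are hypothetical.
import time
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="ganglia",
                               password="secret", database="monitoring")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE IF NOT EXISTS metric (
        host VARCHAR(64) NOT NULL,
        name VARCHAR(64) NOT NULL,
        ts   DOUBLE      NOT NULL,
        value DOUBLE     NOT NULL,
        INDEX (host, name, ts)
    )
""")

# In a real deployment these samples would be parsed from gmond/gmetad output.
samples = [("node01", "load_one", time.time(), 0.42),
           ("node01", "mem_free", time.time(), 1.2e9)]
cur.executemany("INSERT INTO metric (host, name, ts, value) VALUES (%s, %s, %s, %s)", samples)
conn.commit()

# Unlike RRD's downsampled archives, the full-resolution history stays queryable.
cur.execute("SELECT ts, value FROM metric WHERE host=%s AND name=%s ORDER BY ts",
            ("node01", "load_one"))
print(cur.fetchall())
conn.close()
```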
Construction of an image database for newspaper articles using CTS
NASA Astrophysics Data System (ADS)
Kamio, Tatsuo
Nihon Keizai Shimbun, Inc. developed a system that builds an image database of articles automatically by use of CTS (Computer Typesetting System). Besides the articles and headlines input into CTS, it reproduces images of elements such as photographs and graphs for each article in accordance with information on their position on the page. So to speak, the computer itself clips the articles out of the newspaper. The image database is accumulated in magnetic and optical files and is output to users' facsimile machines. With the diffusion of CTS, the number of newspaper companies starting to build article databases is increasing rapidly, and the system described here is the first attempt to build such a database automatically. This paper describes the CTS devices that support this system and gives an outline of it.
Estimation of pyrethroid pesticide intake using regression ...
Population-based estimates of pesticide intake are needed to characterize exposure for particular demographic groups based on their dietary behaviors. Regression modeling performed on measurements of selected pesticides in composited duplicate diet samples allowed (1) estimation of pesticide intakes for a defined demographic community, and (2) comparison of dietary pesticide intakes between the composite and individual samples. Extant databases were useful for assigning individual samples to composites, but they could not provide the breadth of information needed to facilitate measurable levels in every composite. Composite sample measurements were found to be good predictors of pyrethroid pesticide levels in their individual sample constituents where sufficient measurements are available above the method detection limit. Statistical inference shows little evidence of differences between individual and composite measurements and suggests that regression modeling of food groups based on composite dietary samples may provide an effective tool for estimating dietary pesticide intake for a defined population. The research presented in the journal article will improve the community's ability to determine exposures through the dietary route with a less burdensome and costly method.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-06
... Terrorist Screening Database System of Records AGENCY: Privacy Office, DHS. ACTION: Notice of proposed... Use of the Terrorist Screening Database System of Records'' and this proposed rulemaking. In this... Use of the Terrorist Screening Database (TSDB) System of Records.'' DHS is maintaining a mirror copy...
Social Science Data Bases and Data Banks in the United States and Canada.
ERIC Educational Resources Information Center
Black, John B.
This overview of North American social science databases, including scope and services, identifies five trends: (1) growth--in the number of databases, subjects covered, and system availability; (2) increased competition in the retrieval systems marketplace with more databases being offered on multiple systems, improvements being made to the…
Tephrabase: A tephrochronological database
NASA Astrophysics Data System (ADS)
Newton, Anthony
2015-04-01
Development of Tephrabase, a tephrochronological database, began over 20 years ago, and it was launched in June 1995 as one of the earliest scientific databases on the web. Tephrabase was designed from the start to include a wide range of tephrochronological data including location, depth of the layer, geochemical composition (major to trace elements), physical properties (colour, grain size, and mineral components), dating (both absolute/historical and radiometric), details of eruptions and the history of volcanic centres, as well as a reference database. Currently, Tephrabase contains details of over 1000 sites where tephra layers have been found, 3500 tephra layers, 3500 geochemical analyses and 2500 references. Tephrabase was originally developed to include tephra layers in Iceland and those of Icelandic origin found in NW Europe; it now also includes data on tephra layers from central Mexico and from the Laacher See eruption. The latter was developed as a supplement to the Iceland-centric nature of the rest of Tephrabase. A further extension to Tephrabase has seen the development of an automated method of producing tephra stratigraphic columns, calculating sediment accumulation rates between dated tephra layers in multiple profiles and mapping tephra layers across the landscape. Whilst Tephrabase has been successful and continues to be developed and updated, there are several issues which need to be addressed. More tephrochronological databases need to be developed, and these should allow connected/shared searches. This would provide worldwide coverage, but also the flexibility to develop spin-off small-scale extensions, such as those described above. Data uploading needs to be improved and simplified. This includes the need to clarify issues of quality control; again, a common standards-led approach seems appropriate. Researchers also need to be encouraged to contribute data to these databases. Tephrabase was designed to include a variety of data, including physical properties and trace element compositions of the tephra layers; however, Tephrabase is conspicuous in not yet containing these data, and Tephrabase and other databases need to include them. Tephra databases need not only to record details about tephra layers, but should also be tools for understanding environmental change and volcanic histories. This can be achieved through development of the databases themselves and through the creation of portals which draw data from multiple data sources.
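The sediment accumulation rate calculation mentioned above amounts to dividing the depth separation of two dated tephra layers by their age difference; the sketch below shows that arithmetic on invented depths and ages, not data drawn from Tephrabase.

```python
# Hypothetical profile: two dated tephra layers at known depths (cm) and ages
# (calendar years before present). Values are invented for illustration.
layers = [
    {"name": "Tephra A", "depth_cm": 35.0, "age_bp": 1104},
    {"name": "Tephra B", "depth_cm": 118.0, "age_bp": 3830},
]

upper, lower = layers
rate_cm_per_yr = (lower["depth_cm"] - upper["depth_cm"]) / (lower["age_bp"] - upper["age_bp"])
print(f"mean accumulation rate between {upper['name']} and {lower['name']}: "
      f"{rate_cm_per_yr:.3f} cm/yr ({rate_cm_per_yr * 1000:.1f} cm/kyr)")

# Interpolate an age for an undated horizon lying between the two layers.
depth = 80.0
age = upper["age_bp"] + (depth - upper["depth_cm"]) / rate_cm_per_yr
print(f"interpolated age at {depth} cm: {age:.0f} yr BP")
```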
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Richard A.; Panyala, Ajay R.; Glass, Kevin A.
MerCat is a parallel, highly scalable and modular software package for robust analysis of features in next-generation sequencing data. MerCat inputs include assembled contigs and raw sequence reads from any platform, resulting in feature abundance count tables. MerCat allows for direct analysis of data properties, without the reference sequence database dependency common to search tools such as BLAST and/or DIAMOND, for compositional analysis of whole-community shotgun sequencing data (e.g. metagenomes and metatranscriptomes).
Cycom 977-2 Composite Material: Impact Test Results (workshop presentation)
NASA Technical Reports Server (NTRS)
Engle, Carl; Herald, Stephen; Watkins, Casey
2005-01-01
Contents include the following: Ambient (13A) tests of Cycom 977-2 impact characteristics by the Bruceton statistical method at MSFC and WSTF. Repeat (13A) tests of tested Cycom from Phase I at MSFC to expand the testing statistical database. Conduct high-pressure tests (13B) in liquid oxygen (LOX) and GOX at MSFC and WSTF to determine Cycom reaction characteristics and batch effects. Conduct expanded ambient (13A) LOX tests at MSFC and high-pressure (13B) testing to determine pressure effects in LOX. Expand the 13B GOX database.
Metagenomics of prebiotic and probiotic supplemented broilers gastrointestinal tract microbiome
USDA-ARS?s Scientific Manuscript database
Phylogenetic investigation of communities by reconstruction of unobserved states (PICRUSt) is a recently developed computational approach for prediction of functional composition of a microbiome comparing marker gene data with a reference genome database. The procedure established significant link ...
Personal Database Management System I TRIAS
NASA Astrophysics Data System (ADS)
Yamamoto, Yoneo; Kashihara, Akihiro; Kawagishi, Keisuke
The current paper describes TRIAS (TRIple Associative System), a database management system for personal use. To implement TRIAS, we have developed an associative database whose format is (e, a, v): e for entity, a for attribute, v for value. ML-TREE is used to construct the (e, a, v) store; ML-TREE is a revision of the B+-tree, which is a multiway balanced tree. The paper focuses mainly on the usage of the associative database, demonstrating how to use its basic commands, primary functions and applications.
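The (e, a, v) format is essentially a triple store; the minimal in-memory sketch below illustrates storing and querying such triples with Python dictionaries standing in for the ML-TREE index, and the example entities and attributes are invented.

```python
from collections import defaultdict

# Minimal in-memory (entity, attribute, value) store with two indexes,
# standing in for the ML-TREE-backed associative database.
class TripleStore:
    def __init__(self):
        self.by_entity = defaultdict(dict)        # e -> {a: v}
        self.by_attr_value = defaultdict(set)     # (a, v) -> {e}

    def add(self, e, a, v):
        self.by_entity[e][a] = v
        self.by_attr_value[(a, v)].add(e)

    def get(self, e, a):
        return self.by_entity[e].get(a)

    def find(self, a, v):
        """All entities having attribute a with value v."""
        return sorted(self.by_attr_value[(a, v)])

db = TripleStore()
db.add("note-1", "author", "yamamoto")
db.add("note-1", "topic", "databases")
db.add("note-2", "author", "yamamoto")

print(db.get("note-1", "topic"))        # databases
print(db.find("author", "yamamoto"))    # ['note-1', 'note-2']
```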
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kogalovskii, M.R.
This paper presents a review of problems related to statistical database systems, which are widespread in various fields of activity. Statistical databases (SDBs) are databases that are used for statistical analysis. Topics under consideration are: SDB peculiarities, properties of data models adequate for SDB requirements, metadata functions, null-value problems, SDB compromise protection problems, stored data compression techniques, and statistical data representation means. Also examined is whether present database management systems (DBMS) satisfy SDB requirements. Some current research directions in SDB systems are considered.
Development and Implementation of Kumamoto Technopolis Regional Database T-KIND
NASA Astrophysics Data System (ADS)
Onoue, Noriaki
T-KIND (Techno-Kumamoto Information Network for Data-Base) is a system for effectively searching information on the technology, human resources and industries necessary to realize Kumamoto Technopolis. It is composed of a coded database, an image database and a LAN inside the techno-research park, which is the center of R&D in the Technopolis. It constructs an on-line system by networking general-purpose computers, minicomputers, optical disk file systems and so on, and provides the service through the public telephone line. Two databases are now available, on enterprise information and human resource information; the former covers about 4,000 enterprises and the latter about 2,000 persons.
NASA Technical Reports Server (NTRS)
Snell, William H.; Turner, Anne M.; Gifford, Luther; Stites, William
2010-01-01
A quality system database (QSD), and software to administer the database, were developed to support recording of administrative nonconformance activities that involve requirements for documentation of corrective and/or preventive actions, which can include ISO 9000 internal quality audits and customer complaints.
A comparison of database systems for XML-type data.
Risse, Judith E; Leunissen, Jack A M
2010-01-01
In the field of bioinformatics, interchangeable data formats based on XML are widely used. XML-type data is also at the core of most web services. With the increasing amount of data stored in XML comes the need for storing and accessing the data. In this paper we analyse the suitability of different database systems for storing and querying large datasets in general and Medline in particular. All reviewed database systems perform well when tested with small to medium-sized datasets; however, when the full Medline dataset is queried, a large variation in query times is observed. There is not one system that is vastly superior to the others in this comparison and, depending on the database size and the query requirements, different systems are most suitable. The best all-round solution is the Oracle 11g database system using the new binary storage option. Alias-i's LingPipe is a more lightweight, customizable and sufficiently fast solution; it does, however, require more initial configuration steps. For data with a changing XML structure, Sedna and BaseX as native XML database systems, or MySQL with an XML-type column, are suitable.
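As a small illustration of the kind of query such systems must answer, the sketch below parses a Medline-like XML fragment with Python's standard library and extracts citations whose titles match a term; the record structure is a simplified stand-in, not the real Medline DTD or any of the reviewed systems' query languages.

```python
import xml.etree.ElementTree as ET

# Simplified, hypothetical Medline-like records (not the real Medline DTD).
doc = """
<MedlineCitationSet>
  <MedlineCitation><PMID>1</PMID>
    <Article><ArticleTitle>XML storage benchmarks</ArticleTitle></Article>
  </MedlineCitation>
  <MedlineCitation><PMID>2</PMID>
    <Article><ArticleTitle>Protein interaction networks</ArticleTitle></Article>
  </MedlineCitation>
</MedlineCitationSet>
"""

root = ET.fromstring(doc)
# Find citations whose title mentions "XML" (a text filter on an XPath result).
for cit in root.findall("MedlineCitation"):
    title = cit.findtext("Article/ArticleTitle", default="")
    if "XML" in title:
        print(cit.findtext("PMID"), title)
```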
2007-01-01
and a phenolic-resin based polymeric matrix. Such armor panels offer superior protection against fragmented ballistic threats when compared to...database does not contain a material model for the HJ1 composite but provides a model for a Kevlar Fiber Reinforced Polymer (KFRP) containing 53 vol... phenolic resin and epoxy yield stresses and then with a ratio of the S-2 glass and aramid fibers volume fractions. To test the validity of the
NASA Technical Reports Server (NTRS)
Fegley, Bruce, Jr. (Editor); Waenke, Heinrich (Editor)
1992-01-01
The speakers in the first session of the workshop addressed some of the continuing enigmas regarding the atmospheric composition, surface composition, and atmosphere-surface interactions on Mars; provided a description of a database of proposed payloads and instruments for SEI missions that is scheduled to be accessible in 1993; discussed potential uses of atmospheric imaging from landed stations on Mars; and advocated the collection and employment of high-spectral-resolution reflectance and emission data.
MMA-EoS: A Computational Framework for Mineralogical Thermodynamics
NASA Astrophysics Data System (ADS)
Chust, T. C.; Steinle-Neumann, G.; Dolejš, D.; Schuberth, B. S. A.; Bunge, H.-P.
2017-12-01
We present a newly developed software framework, MMA-EoS, that evaluates phase equilibria and thermodynamic properties of multicomponent systems by Gibbs energy minimization, with application to mantle petrology. The code is versatile in terms of equation-of-state and mixing-property formulations and allows for the computation of properties of single phases, solution phases, and multiphase aggregates. Currently, the open program distribution contains widely used equation-of-state formulations, namely the Caloric-Murnaghan, Caloric-Modified-Tait, and Birch-Murnaghan-Mie-Grüneisen-Debye models, with published databases included. Through its modular design and easily scripted database, MMA-EoS can readily be extended with new equation-of-state formulations and with changes or extensions to thermodynamic data sets. We demonstrate the application of the program by reproducing and comparing physical properties of mantle phases and assemblages with previously published work and experimental data, successively increasing complexity up to computing phase equilibria of six-component compositions. Chemically complex systems allow us to trace the budget of minor chemical components in order to explore whether they lead to the formation of new phases or extend the stability fields of existing ones. Self-consistently computed thermophysical properties for a homogeneous mantle and for a mechanical mixture of slab lithologies show no discernible differences that would require a heterogeneous mantle structure, as has been suggested previously. Such examples illustrate how the thermodynamics of mantle mineralogy can advance the study of Earth's interior.
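For orientation only, the isothermal third-order Birch-Murnaghan relation that underlies the last of the model families named above gives pressure as a function of compression V/V0, the reference bulk modulus K0 and its pressure derivative K0'. The following sketch is not code from MMA-EoS; the parameter values are illustrative (roughly MgO-like) and not taken from its databases.

    def birch_murnaghan_pressure(v_over_v0, k0, k0_prime):
        """Third-order Birch-Murnaghan pressure (same units as k0) at compression V/V0."""
        x = v_over_v0 ** (-1.0 / 3.0)                      # (V0/V)^(1/3)
        finite_strain_term = x**7 - x**5                   # (V0/V)^(7/3) - (V0/V)^(5/3)
        correction = 1.0 + 0.75 * (k0_prime - 4.0) * (x**2 - 1.0)
        return 1.5 * k0 * finite_strain_term * correction

    # Example: K0 = 160 GPa, K0' = 4 at 20% compression -> roughly 56 GPa.
    print(birch_murnaghan_pressure(0.80, k0=160.0, k0_prime=4.0))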
Draft secure medical database standard.
Pangalos, George
2002-01-01
Medical database security is a particularly important issue for all healthcare establishments. Medical information systems are intended to support a wide range of pertinent health issues today, for example: assuring the quality of care, supporting effective management of health services institutions, monitoring and containing the cost of care, introducing technology into care without violating social values, ensuring the equity and availability of care, and preserving humanity despite the proliferation of technology. In this context, medical database security aims primarily to support high availability, accuracy and consistency of the stored data, medical professional secrecy and confidentiality, and the protection of the privacy of the patient. These properties, though of a technical nature, basically require that the system is actually helpful for medical care and not harmful to patients. The latter properties require in turn not only that fundamental ethical principles are not violated by employing database systems, but that they are effectively enforced by technical means. This document reviews existing and emerging work on the security of medical database systems. It presents in detail the problems and requirements related to medical database security, and it addresses medical database security policies, secure design methodologies and implementation techniques. It also describes the current legal framework and regulatory requirements for medical database security. The issue of medical database security guidelines is examined in detail, and the current national and international efforts in the area are studied, together with an overview of the research work in the area. Finally, the document presents in detail the most complete set of security guidelines known to us for the development and operation of medical database systems.
Development of Databases on Iodine in Foods and Dietary Supplements
Ershow, Abby G.; Skeaff, Sheila A.; Merkel, Joyce M.; Pehrsson, Pamela R.
2018-01-01
Iodine is an essential micronutrient required for normal growth and neurodevelopment; thus, an adequate intake of iodine is particularly important for pregnant and lactating women, and throughout childhood. Low levels of iodine in the soil and groundwater are common in many parts of the world, often leading to diets that are low in iodine. Widespread salt iodization has eradicated severe iodine deficiency, but mild-to-moderate deficiency is still prevalent even in many developed countries. To understand patterns of iodine intake and to develop strategies for improving intake, it is important to characterize all sources of dietary iodine, and national databases on the iodine content of major dietary contributors (including foods, beverages, water, salts, and supplements) provide a key information resource. This paper discusses the importance of well-constructed databases on the iodine content of foods, beverages, and dietary supplements; the availability of iodine databases worldwide; and factors related to variability in iodine content that should be considered when developing such databases. We also describe current efforts in iodine database development in the United States, the use of iodine composition data to develop food fortification policies in New Zealand, and how iodine content databases might be used when considering the iodine intake and status of individuals and populations. PMID:29342090
High-throughput STR analysis for DNA database using direct PCR.
Sim, Jeong Eun; Park, Su Jeong; Lee, Han Chul; Kim, Se-Yong; Kim, Jong Yeol; Lee, Seung Hwan
2013-07-01
Since the Korean criminal DNA database was launched in 2010, we have focused on establishing an automated DNA database profiling system that analyzes short tandem repeat loci in a high-throughput and cost-effective manner. We established a DNA database profiling system without DNA purification using a direct PCR buffer system. The quality of the direct PCR procedure was compared with that of the conventional PCR system under their respective optimized conditions. The results revealed not only perfect concordance but also an excellent PCR success rate, good electropherogram quality, and an optimal intra-/inter-locus peak height ratio. In particular, the proportion of samples requiring DNA extraction due to direct PCR failure could be minimized to <3%. In conclusion, the newly developed direct PCR system can be adopted for automated DNA database profiling systems to replace or supplement the conventional PCR system in a time- and cost-saving manner. © 2013 American Academy of Forensic Sciences. Published 2013. This article is a U.S. Government work and is in the public domain in the U.S.A.
New model for distributed multimedia databases and its application to networking of museums
NASA Astrophysics Data System (ADS)
Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki
1998-02-01
This paper proposes a new distributed multimedia database system in which databases storing MPEG-2 videos and/or super-high-definition images are connected through B-ISDNs, and presents an example of networking museums on the basis of the proposed system. The proposed database system introduces the new concept of a 'retrieval manager', which functions as an intelligent controller so that the user can treat a set of image databases as one logical database. A user terminal issues a content retrieval request to the retrieval manager located nearest to it on the network; the retrieved contents are then sent directly over the B-ISDNs to the user terminal from the server that stores the designated contents. In this process, the designated logical database dynamically generates the best combination of retrieval parameters, such as the data transfer path, on the basis of the current state of the system, and the generated parameters are then used to select the most suitable data transfer path on the network for the distributed multimedia database system.
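The request flow described above can be summarized in a small conceptual sketch, with entirely hypothetical class and method names that are not from the paper: the retrieval manager resolves which server holds the requested content and picks the cheapest transfer path, after which the content flows directly from that server to the terminal.

    class RetrievalManager:
        """Conceptual stand-in for the paper's 'retrieval manager': presents
        many image databases to the user as one logical database."""

        def __init__(self, catalogue):
            # catalogue maps a content id to the servers (e.g. museums) holding it
            self.catalogue = catalogue

        def resolve(self, content_id, network_cost):
            """Pick the holding server whose transfer path currently costs least."""
            servers = self.catalogue[content_id]
            return min(servers, key=network_cost)


    # Usage sketch: the terminal asks the nearest manager, then fetches the
    # content directly from the chosen server over the broadband network.
    manager = RetrievalManager({"mpeg2:exhibit-042": ["museum-a", "museum-b"]})
    best = manager.resolve("mpeg2:exhibit-042",
                           network_cost=lambda s: {"museum-a": 3, "museum-b": 1}[s])
    print(best)  # -> museum-b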
A UML Profile for Developing Databases that Conform to the Third Manifesto
NASA Astrophysics Data System (ADS)
Eessaar, Erki
The Third Manifesto (TTM) presents the principles of a relational database language that is free of the deficiencies and ambiguities of SQL. There are database management systems that are created according to TTM, and developers need tools that support the development of databases using these database management systems. UML is a widely used visual modeling language. It provides a built-in extension mechanism that makes it possible to extend UML by creating profiles. In this paper, we introduce a UML profile for designing databases that correspond to the rules of TTM. We created the first version of the profile by translating existing profiles for SQL database design; after that, we extended and improved the profile. We implemented the profile by using the UML CASE system StarUML™. We present an example of using the new profile and describe problems that occurred during the profile development.