Building a generalized distributed system model
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi
1991-01-01
A number of topics related to building a generalized distributed system model are discussed. The effects of distributed database modeling on evaluation of transaction rollbacks, the measurement of effects of distributed database models on transaction availability measures, and a performance analysis of static locking in replicated distributed database systems are covered.
Hein, Misty J.; Waters, Martha A.; Ruder, Avima M.; Stenzel, Mark R.; Blair, Aaron; Stewart, Patricia A.
2010-01-01
Objectives: Occupational exposure assessment for population-based case–control studies is challenging due to the wide variety of industries and occupations encountered by study participants. We developed and evaluated statistical models to estimate the intensity of exposure to three chlorinated solvents—methylene chloride, 1,1,1-trichloroethane, and trichloroethylene—using a database of air measurement data and associated exposure determinants. Methods: A measurement database was developed after an extensive review of the published industrial hygiene literature. The database of nearly 3000 measurements or summary measurements included sample size, measurement characteristics (year, duration, and type), and several potential exposure determinants associated with the measurements: mechanism of release (e.g. evaporation), process condition, temperature, usage rate, type of ventilation, location, presence of a confined space, and proximity to the source. The natural log-transformed measurement levels in the exposure database were modeled as a function of the measurement characteristics and exposure determinants using maximum likelihood methods. Assuming a single lognormal distribution of the measurements, an arithmetic mean exposure intensity level was estimated for each unique combination of exposure determinants and decade. Results: The proportions of variability in the measurement data explained by the modeled measurement characteristics and exposure determinants were 36, 38, and 54% for methylene chloride, 1,1,1-trichloroethane, and trichloroethylene, respectively. Model parameter estimates for the exposure determinants were in the anticipated direction. Exposure intensity estimates were plausible and exhibited internal consistency, but the ability to evaluate validity was limited. Conclusions: These prediction models can be used to estimate chlorinated solvent exposure intensity for jobs reported by population-based case–control study participants that have sufficiently detailed information regarding the exposure determinants. PMID:20418277
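The arithmetic-mean step in this approach follows from the lognormal assumption: if ln Y is normal with mean μ and variance σ², then E[Y] = exp(μ + σ²/2). A minimal sketch of that workflow, with invented exposure determinants and toy data standing in for the actual measurement database:

```python
# Sketch of the lognormal exposure-intensity modeling described above.
# Determinants, coefficients, and data are illustrative, not from the real database.
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Hypothetical determinants: ventilation (0/1), confined space (0/1), decade index
X = np.column_stack([
    np.ones(n),                    # intercept
    rng.integers(0, 2, n),         # local exhaust ventilation present
    rng.integers(0, 2, n),         # confined space
    rng.integers(0, 4, n),         # decade (0 = 1970s, ..., 3 = 2000s)
])
beta_true = np.array([3.0, -0.8, 1.2, -0.5])
ln_y = X @ beta_true + rng.normal(0, 0.9, n)    # natural-log measurement levels

# For a lognormal model with normal errors, maximum likelihood reduces to OLS on ln(y)
beta_hat, *_ = np.linalg.lstsq(X, ln_y, rcond=None)
resid = ln_y - X @ beta_hat
sigma2_hat = resid @ resid / n                  # ML variance estimate

# Arithmetic mean intensity for one determinant/decade combination:
# E[Y] = exp(mu + sigma^2 / 2) under the lognormal assumption
combo = np.array([1, 1, 0, 2])                  # ventilated, not confined, 1990s
am_intensity = np.exp(combo @ beta_hat + sigma2_hat / 2)
print(f"estimated arithmetic mean intensity: {am_intensity:.2f}")
```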
Effects of distributed database modeling on evaluation of transaction rollbacks
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi
1991-01-01
Data distribution, degree of data replication, and transaction access patterns are key factors in determining the performance of distributed database systems. In order to simplify the evaluation of performance measures, database designers and researchers tend to make simplistic assumptions about the system. The effect of modeling assumptions on the evaluation of one such measure, the number of transaction rollbacks in a partitioned distributed database system, is studied. Six probabilistic models are developed, along with expressions for the number of rollbacks under each model. Essentially, the models differ in terms of the available system information. The analytical results so obtained are compared to results from simulation. It is concluded that most of the probabilistic models yield overly conservative estimates of the number of rollbacks. The effect of transaction commutativity on system throughput is also grossly underestimated when such models are employed.
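A rough illustration of how rollback counts can be checked against simulation, assuming a simple static-locking conflict rule and invented workload parameters (not the paper's six models):

```python
# Illustrative Monte Carlo sketch of estimating transaction rollbacks under
# lock conflicts in a partitioned database; all parameters are invented.
import random

random.seed(1)
N_ITEMS, N_TXN, ITEMS_PER_TXN = 1000, 500, 5

def simulate_rollbacks(trials=200):
    total = 0
    for _ in range(trials):
        locked, rollbacks = set(), 0
        for _ in range(N_TXN):
            need = set(random.sample(range(N_ITEMS), ITEMS_PER_TXN))
            if need & locked:        # conflict with a concurrently held lock
                rollbacks += 1
            else:
                locked |= need       # static locking: acquire all locks at once
        total += rollbacks
    return total / trials

print(f"mean rollbacks per batch: {simulate_rollbacks():.1f}")
```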
NASA Astrophysics Data System (ADS)
Berner, L. T.; Law, B. E.
2015-12-01
Plant traits include physiological, morphological, and biogeochemical characteristics that in combination determine a species' sensitivity to environmental conditions. Standardized, co-located, and geo-referenced species- and plot-level measurements are needed to address variation in species sensitivity to climate change impacts and for ecosystem process model development, parameterization and testing. We present a new database of plant trait, forest carbon cycling, and soil property measurements derived from multiple TERRA-PNW projects in the Pacific Northwest US, spanning 2000-2014. The database includes measurements from over 200 forest plots across Oregon and northern California, where the data were explicitly collected for scaling and modeling regional terrestrial carbon processes with models such as Biome-BGC and the Community Land Model. Some of the data are co-located at AmeriFlux sites in the region. The database currently contains leaf trait measurements (specific leaf area, leaf longevity, leaf carbon and nitrogen) from over 1,200 branch samples and 30 species, as well as plot-level biomass and productivity components, and soil carbon and nitrogen. Standardized protocols were used across projects, as summarized in an FAO protocols document. The database continues to expand and will include agricultural crops. The database will be hosted by the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). We hope that other regional databases will become publicly available to help enable Earth system models to simulate species-level sensitivity to climate at regional to global scales.
A database of aerothermal measurements in hypersonic flow for CFD validation
NASA Technical Reports Server (NTRS)
Holden, M. S.; Moselle, J. R.
1992-01-01
This paper presents an experimental database selected and compiled from aerothermal measurements obtained on basic model configurations on which fundamental flow phenomena could be most easily examined. The experimental studies were conducted in hypersonic flows in 48-inch, 96-inch, and 6-foot shock tunnels. A special computer program was constructed to provide easy access to the measurements in the database as well as the means to plot the measurements and compare them with imported data. The database contains tabulations of model configurations, freestream conditions, and measurements of heat transfer, pressure, and skin friction for each of the studies selected for inclusion. The first segment contains measurements in laminar flow emphasizing shock-wave boundary-layer interaction. In the second segment, measurements in transitional flows over flat plates and cones are given. The third segment comprises measurements in regions of shock-wave/turbulent-boundary-layer interactions. Studies of the effects of surface roughness of nosetips and conical afterbodies are presented in the fourth segment of the database. Detailed measurements in regions of shock/shock boundary layer interaction are contained in the fifth segment. Measurements in regions of wall jet and transpiration cooling are presented in the final two segments.
A Web-based tool for UV irradiance data: predictions for European and Southeast Asian sites.
Kift, Richard; Webb, Ann R; Page, John; Rimmer, John; Janjai, Serm
2006-01-01
There is a range of UV models available, but significant pre-existing knowledge and experience are needed in order to use them. In this article a comparatively simple Web-based model developed for the SoDa (Integration and Exploitation of Networked Solar Radiation Databases for Environment Monitoring) project is presented. This is a clear-sky model with modifications for cloud effects. To determine whether the model produces realistic UV data, the output is compared with 1-year sets of hourly measurements at sites in the United Kingdom and Thailand. The accuracy of the output depends on the input, but reasonable results were obtained with the use of the default database inputs, and these improved when pyranometer data rather than modeled data provided the global radiation input needed to estimate the UV. The average modeled values of UV for the UK site were found to be within 10% of measurements. For the tropical sites in Thailand the average modeled values were within 11–20% of measurements for the four sites with the use of the default SoDa database values. These results improved when pyranometer data and TOMS ozone data from 2002 replaced the standard SoDa database values, reducing the error range for all four sites to less than 15%.
Burn Injury Assessment Tool with Morphable 3D Human Body Models
2017-04-21
waist, arms and legs measurements) as stored in most anthropometry databases. To improve on burn area estimations, the burn tool will allow the user to... different algorithm for morphing that relies on searching an extensive anthropometric database, which is created from thousands of randomly... interpolation methods are required. Develop Patient Database: Patient data entered (name, gender, age, anthropometric measurements), collected (photographic
High-quality unsaturated zone hydraulic property data for hydrologic applications
Perkins, Kimberlie; Nimmo, John R.
2009-01-01
In hydrologic studies, especially those using dynamic unsaturated zone moisture modeling, calculations based on property transfer models informed by hydraulic property databases are often used in lieu of measured data from the site of interest. Reliance on database-informed predicted values has become increasingly common with the use of neural networks. High-quality data are needed for databases used in this way and for theoretical and property transfer model development and testing. Hydraulic properties predicted on the basis of existing databases may be adequate in some applications but not others. An obvious problem occurs when the available database has few or no data for samples that are closely related to the medium of interest. The data set presented in this paper includes saturated and unsaturated hydraulic conductivity, water retention, particle-size distributions, and bulk properties. All samples are minimally disturbed, all measurements were performed using the same state-of-the-art techniques, and the environments represented are diverse.
ERIC Educational Resources Information Center
Kim, Deok-Hwan; Chung, Chin-Wan
2003-01-01
Discusses the collection fusion problem of image databases, concerned with retrieving relevant images by content-based retrieval from image databases distributed on the Web. Focuses on a metaserver which selects image databases supporting similarity measures and proposes a new algorithm which exploits a probabilistic technique using Bayesian…
The Monitoring Erosion of Agricultural Land and spatial database of erosion events
NASA Astrophysics Data System (ADS)
Kapicka, Jiri; Zizala, Daniel
2013-04-01
In 2011, the Monitoring Erosion of Agricultural Land project originated in the Czech Republic as a joint project of the State Land Office (SLO) and the Research Institute for Soil and Water Conservation (RISWC). The aim of the project is to collect and keep records of information about erosion events on agricultural land and to evaluate them. The main idea is the creation of a spatial database that will be a source of data and information for evaluating and modeling the erosion process, for proposing preventive measures, and for measures to reduce the negative impacts of erosion events. The subject of monitoring is the manifestations of water erosion, wind erosion, and slope deformation that cause damage to agricultural land. A website, available at http://me.vumop.cz, is used as a tool for keeping and browsing information about monitored events. SLO employees carry out the record keeping. RISWC is the specialist institute in the Monitoring Erosion of Agricultural Land project; it maintains the spatial database, runs the website, manages the record keeping of events, analyzes the causes of events, and performs statistical evaluations of recorded events and proposed measures. Records are inserted into the database using the user interface of the website, which includes a map server as a component. The website is built on the PostgreSQL database technology with the PostGIS extension and UMN MapServer. Each record in the database is spatially localized by a drawing and contains descriptive information about the character of the event (date, situation description, etc.); information about land cover and grown crops is also recorded. Part of the database is photodocumentation taken during field reconnaissance, which is performed within two days after notification of an event. Another part of the database is information about precipitation from accessible precipitation gauges. The website allows simple spatial analyses such as area calculation, slope calculation, percentage representation of GAEC, etc. The database structure was designed on the basis of an analysis of the input needs of mathematical models. Mathematical models are used for detailed analysis of chosen erosion events, which includes soil analysis. By the end of 2012 the database held 135 events. The content of the database continues to grow, giving rise to an extensive source of data usable for testing mathematical models.
Integration of Web-based and PC-based clinical research databases.
Brandt, C A; Sun, K; Charpentier, P; Nadkarni, P M
2004-01-01
We have created a Web-based repository or data library of information about measurement instruments used in studies of multi-factorial geriatric health conditions (the Geriatrics Research Instrument Library - GRIL) based upon existing features of two separate clinical study data management systems. GRIL allows browsing, searching, and selecting measurement instruments based upon criteria such as keywords and areas of applicability. Measurement instruments selected can be printed and/or included in an automatically generated standalone microcomputer database application, which can be downloaded by investigators for use in data collection and data management. Integration of database applications requires the creation of a common semantic model, and mapping from each system to this model. Various database schema conflicts at the table and attribute level must be identified and resolved prior to integration. Using a conflict taxonomy and a mapping schema facilitates this process. Critical conflicts at the table level that required resolution included name and relationship differences. A major benefit of integration efforts is the sharing of features and cross-fertilization of applications created for similar purposes in different operating environments. Integration of applications mandates some degree of metadata model unification.
Das, Dhrubajyoti D.; St. John, Peter C.; McEnally, Charles S.; ...
2017-12-27
Databases of sooting indices, based on measuring some aspect of sooting behavior in a standardized combustion environment, are useful in providing information on the comparative sooting tendencies of different fuels or pure compounds. However, newer biofuels have varied chemical structures including both aromatic and oxygenated functional groups, which expands the chemical space of relevant compounds. In this work, we propose a unified sooting tendency database for pure compounds, including both regular and oxygenated hydrocarbons, which is based on combining two disparate databases of yield-based sooting tendency measurements in the literature. Unification of the different databases was made possible by leveraging the greater dynamic range of the color ratio pyrometry soot diagnostic. This unified database contains a substantial number of pure compounds (≥ 400 total) from multiple categories of hydrocarbons important in modern fuels and establishes the sooting tendencies of aromatic and oxygenated hydrocarbons on the same numeric scale for the first time. Then, using this unified sooting tendency database, we have developed a predictive model for sooting behavior applicable to a broad range of hydrocarbons and oxygenated hydrocarbons. The model decomposes each compound into single-carbon fragments and assigns a sooting tendency contribution to each fragment based on regression against the unified database. The model’s predictive accuracy (as demonstrated by leave-one-out cross-validation) is comparable to a previously developed, more detailed predictive model. The fitted model provides insight into the effects of chemical structure on soot formation, and cases where its predictions fail reveal the presence of more complicated kinetic sooting mechanisms. Our work will therefore enable the rational design of low-sooting fuel blends from a wide range of feedstocks and chemical functionalities.
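The fragment-decomposition regression described here reduces, in its simplest form, to a linear least-squares fit of per-fragment contributions against measured sooting indices. A toy sketch with fabricated fragment types and index values, including the leave-one-out check mentioned above:

```python
# Sketch of a group-contribution regression: compounds are represented by counts
# of single-carbon fragment types, and per-fragment sooting contributions are
# fit by least squares. Fragment categories and index values are fabricated.
import numpy as np

# rows: compounds; columns: counts of hypothetical fragment types
# [aromatic C, aliphatic CH2, terminal CH3, carbonyl C]
fragments = np.array([
    [6, 0, 0, 0],   # benzene-like
    [0, 4, 2, 0],   # n-hexane-like
    [0, 2, 2, 1],   # ketone-like
    [4, 2, 2, 0],
    [2, 3, 1, 1],
])
ysi = np.array([30.0, 5.2, 3.1, 22.4, 12.8])   # invented sooting-index values

contrib, *_ = np.linalg.lstsq(fragments, ysi, rcond=None)

# leave-one-out cross-validation, as used above to gauge predictive accuracy
loo_err = []
for i in range(len(ysi)):
    mask = np.arange(len(ysi)) != i
    c, *_ = np.linalg.lstsq(fragments[mask], ysi[mask], rcond=None)
    loo_err.append(abs(fragments[i] @ c - ysi[i]))

print("per-fragment contributions:", np.round(contrib, 2))
print("LOO mean abs error:", round(float(np.mean(loo_err)), 2))
```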
Detecting signals of drug-drug interactions in a spontaneous reports database.
Thakrar, Bharat T; Grundschober, Sabine Borel; Doessegger, Lucette
2007-10-01
The spontaneous reports database is widely used for detecting signals of ADRs. We have extended the methodology to include the detection of signals of ADRs that are associated with drug-drug interactions (DDI). In particular, we have investigated two different statistical assumptions for detecting signals of DDI. Using the FDA's spontaneous reports database, we investigated two models, a multiplicative and an additive model, to detect signals of DDI. We applied the models to four known DDIs (methotrexate-diclofenac and bone marrow depression, simvastatin-ciclosporin and myopathy, ketoconazole-terfenadine and torsades de pointes, and cisapride-erythromycin and torsades de pointes) and to four drug-event combinations where there is currently no evidence of a DDI (fexofenadine-ketoconazole and torsades de pointes, methotrexate-rofecoxib and bone marrow depression, fluvastatin-ciclosporin and myopathy, and cisapride-azithromycin and torsades de pointes) and estimated the measure of interaction on the two scales. The additive model correctly identified all four known DDIs by giving a statistically significant (P < 0.05) positive measure of interaction. The multiplicative model identified the first two of the known DDIs as having a statistically significant or borderline significant (P < 0.1) positive measure of interaction term, gave a nonsignificant positive trend for the third interaction (P = 0.27), and a negative trend for the last interaction. Both models correctly identified the four known non-interactions by estimating a negative measure of interaction. The spontaneous reports database is a valuable resource for detecting signals of DDIs. In particular, the additive model is more sensitive in detecting such signals. The multiplicative model may further help qualify the strength of the signal detected by the additive model.
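On a 2x2x2 table of reports, the two scales can be contrasted directly: the additive model asks whether the joint relative reporting rate exceeds the sum of the individual excesses, while the multiplicative model asks whether it exceeds their product. A sketch with invented counts (a simplified contrast, not the paper's fitted models):

```python
# Sketch of additive vs. multiplicative interaction screening from a 2x2x2
# table of spontaneous reports; the counts below are invented for illustration.
def rate(events, reports):
    return events / reports

# reports of the event / total reports, by drug-exposure stratum
r00 = rate(20, 10000)   # neither drug
r10 = rate(15, 3000)    # drug A only
r01 = rate(12, 2500)    # drug B only
r11 = rate(30, 800)     # both drugs

rr10, rr01, rr11 = r10 / r00, r01 / r00, r11 / r00

additive_excess = rr11 - rr10 - rr01 + 1        # > 0 suggests a DDI signal
multiplicative_ratio = rr11 / (rr10 * rr01)     # > 1 suggests a DDI signal

print(f"additive interaction:       {additive_excess:.2f}")
print(f"multiplicative interaction: {multiplicative_ratio:.2f}")
```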
Molecular Oxygen in the Thermosphere: Issues and Measurement Strategies
NASA Astrophysics Data System (ADS)
Picone, J. M.; Hedin, A. E.; Drob, D. P.; Meier, R. R.; Bishop, J.; Budzien, S. A.
2002-05-01
We review the state of empirical knowledge regarding the distribution of molecular oxygen in the lower thermosphere (100-200 km), as embodied by the new NRLMSISE-00 empirical atmospheric model, its predecessors, and the underlying databases. For altitudes above 120 km, the two major classes of data (mass spectrometer and solar ultraviolet [UV] absorption) disagree significantly regarding the magnitude of the O2 density and the dependence on solar activity. As a result, the addition of the Solar Maximum Mission (SMM) data set (based on solar UV absorption) to the NRLMSIS database has directly impacted the new model, increasing the complexity of the model's formulation and generally reducing the thermospheric O2 density relative to MSISE-90. Beyond interest in the thermosphere itself, this issue materially affects detailed models of ionospheric chemistry and dynamics as well as modeling of the upper atmospheric airglow. Because these are key elements of both experimental and operational systems which measure and forecast the near-Earth space environment, we present strategies for augmenting the database through analysis of existing data and through future measurements in order to resolve this issue.
Rapid Prototyping-Unmanned Combat Air Vehicle (UCAV)/Sensorcraft
2008-01-01
model. RP may prove to be the fastest means to create a bridge between these CFD and experimental ground testing databases. In the past, it took... UCAV X-45A wind tunnel model within the... CFD results provide a database of global surface and off-body measurements. It is imperative to... extend the knowledge database for a given aircraft configuration beyond the ground test envelope and into the flight regime. Working in tandem, in an
Measurement and modeling of unsaturated hydraulic conductivity
Perkins, Kim S.; Elango, Lakshmanan
2011-01-01
The unsaturated zone plays an extremely important hydrologic role that influences water quality and quantity, ecosystem function and health, the connection between atmospheric and terrestrial processes, nutrient cycling, soil development, and natural hazards such as flooding and landslides. Unsaturated hydraulic conductivity is one of the main properties considered to govern flow; however, it is very difficult to measure accurately. Knowledge of the highly nonlinear relationship between unsaturated hydraulic conductivity (K) and volumetric water content is required for widely used models of water flow and solute transport processes in the unsaturated zone. Measurement of unsaturated hydraulic conductivity of sediments is costly and time-consuming; therefore, the use of models that estimate this property from more easily measured bulk-physical properties is common. In hydrologic studies, calculations based on property-transfer models informed by hydraulic property databases are often used in lieu of measured data from the site of interest. Reliance on database-informed predicted values with the use of neural networks has become increasingly common. Hydraulic properties predicted using databases may be adequate in some applications, but not others. This chapter will discuss, by way of examples, various techniques used to measure and model hydraulic conductivity as a function of water content (the K curve). The parameters that describe the K curve obtained by different methods are used directly in Richards' equation-based numerical models, which have some degree of sensitivity to those parameters. This chapter will explore the complications of using laboratory-measured or estimated properties for field-scale investigations to shed light on how adequately the processes are represented. Additionally, some more recent concepts for representing unsaturated-zone flow processes will be discussed.
NASA Technical Reports Server (NTRS)
Morelli, E. A.; Proffitt, M. S.
1999-01-01
The data for longitudinal non-dimensional, aerodynamic coefficients in the High Speed Research Cycle 2B aerodynamic database were modeled using polynomial expressions identified with an orthogonal function modeling technique. The discrepancy between the tabular aerodynamic data and the polynomial models was tested and shown to be less than 15 percent for drag, lift, and pitching moment coefficients over the entire flight envelope. Most of this discrepancy was traced to smoothing local measurement noise and to the omission of mass case 5 data in the modeling process. A simulation check case showed that the polynomial models provided a compact and accurate representation of the nonlinear aerodynamic dependencies contained in the HSR Cycle 2B tabular aerodynamic database.
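As a rough illustration of replacing a tabular aerodynamic database with polynomial expressions and measuring the discrepancy, the sketch below fits a low-order polynomial in angle of attack and Mach number to a synthetic lookup table; the HSR data and the orthogonal-function identification technique itself are not reproduced here.

```python
# Sketch: fit a compact polynomial model to a synthetic "tabular" lift-coefficient
# database and report the worst-case discrepancy relative to the data range.
import numpy as np

alpha = np.linspace(-4, 12, 33)                       # angle of attack (deg)
mach = np.linspace(0.3, 2.4, 22)                      # Mach number
A, M = np.meshgrid(alpha, mach)
CL_table = 0.08 * A - 0.002 * A**2 + 0.01 * M * A     # stand-in tabular data

# design matrix of low-order polynomial terms in (alpha, Mach)
X = np.column_stack([np.ones(A.size), A.ravel(), A.ravel()**2,
                     M.ravel(), (M * A).ravel()])
coef, *_ = np.linalg.lstsq(X, CL_table.ravel(), rcond=None)
resid = CL_table.ravel() - X @ coef

print(f"max |error| relative to data range: "
      f"{100 * np.abs(resid).max() / np.ptp(CL_table):.2f}%")
```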
NASA Astrophysics Data System (ADS)
Michel-Sendis, Franco; Martinez-González, Jesus; Gauld, Ian
2017-09-01
SFCOMPO-2.0 is a database of experimental isotopic concentrations measured in destructive radiochemical analysis of spent nuclear fuel (SNF) samples. The database includes corresponding design description of the fuel rods and assemblies, relevant operating conditions and characteristics of the host reactors necessary for modelling and simulation. Aimed at establishing a thorough, reliable, and publicly available resource for code and data validation of safety-related applications, SFCOMPO-2.0 is developed and maintained by the OECD Nuclear Energy Agency (NEA). The SFCOMPO-2.0 database is a Java application which is downloadable from the NEA website.
Nohara, Ryuki; Endo, Yui; Murai, Akihiko; Takemura, Hiroshi; Kouchi, Makiko; Tada, Mitsunori
2016-08-01
Individual human models are usually created by direct 3D scanning or by deforming a template model according to measured dimensions. In this paper, we propose a method to estimate all the necessary dimensions (full set) for human model individualization from a small number of measured dimensions (subset) and a human dimension database. For this purpose, we solved a multiple regression equation from the dimension database, with the full set dimensions as the objective variables and the subset dimensions as the explanatory variables. Thus, the full set dimensions are obtained by simply multiplying the subset dimensions by the coefficient matrix of the regression equation. We verified the accuracy of our method by imputing hand, foot, and whole body dimensions from their dimension databases. Leave-one-out cross validation was employed in this evaluation. The mean absolute errors (MAE) between the measured and estimated dimensions were computed from 4 dimensions (hand length, hand breadth, middle finger breadth at proximal, and middle finger depth at proximal) for the hand, 3 dimensions (foot length, foot breadth, and lateral malleolus height) for the foot, and height and weight for the whole body. The average MAE of the non-measured dimensions was 4.58% for the hand, 4.42% for the foot, and 3.54% for the whole body, while that of the measured dimensions was 0.00%.
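The estimation step is ordinary multiple regression, so a compact sketch of the imputation plus the leave-one-out MAE evaluation, using a synthetic stand-in for the anthropometric database, looks like this:

```python
# Minimal sketch of imputing a non-measured dimension from a measured subset via
# multiple regression, with leave-one-out evaluation as described above.
# The synthetic data stand in for a real anthropometric database.
import numpy as np

rng = np.random.default_rng(42)
n_subjects = 60
hand_length = rng.normal(180, 12, n_subjects)                        # subset dim (mm)
hand_breadth = 0.45 * hand_length + rng.normal(0, 4, n_subjects)     # subset dim (mm)
finger_depth = 0.10 * hand_length + rng.normal(0, 1.5, n_subjects)   # full-set target

X = np.column_stack([np.ones(n_subjects), hand_length, hand_breadth])
abs_err = []
for i in range(n_subjects):                     # leave-one-out cross validation
    mask = np.arange(n_subjects) != i
    coef, *_ = np.linalg.lstsq(X[mask], finger_depth[mask], rcond=None)
    abs_err.append(abs(X[i] @ coef - finger_depth[i]))

mae_pct = 100 * np.mean(abs_err) / finger_depth.mean()
print(f"LOO mean absolute error: {mae_pct:.2f}% of mean dimension")
```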
An Update of the Bodeker Scientific Vertically Resolved, Global, Gap-Free Ozone Database
NASA Astrophysics Data System (ADS)
Kremser, S.; Bodeker, G. E.; Lewis, J.; Hassler, B.
2016-12-01
High vertical resolution ozone measurements from multiple satellite-based instruments have been merged with measurements from the global ozonesonde network to calculate monthly mean ozone values in 5° latitude zones. Ozone number densities and ozone mixing ratios are provided on 70 altitude levels (1 to 70 km) and on 70 pressure levels spaced approximately 1 km apart (878.4 hPa to 0.046 hPa). These data are sparse and do not cover the entire globe or altitude range. To provide a gap-free database, a least squares regression model is fitted to these data and then evaluated globally. By applying a single fit at each level, and using the approach of allowing the regression fits to change only slightly from one level to the next, the regression is less sensitive to measurement anomalies at individual stations or to individual satellite-based instruments. Particular attention is paid to ensuring that the low ozone abundances in the polar regions are captured. This presentation reports on updates to an earlier version of the vertically resolved ozone database, including the incorporation of new ozone measurements and new techniques for combining the data. Compared to previous versions of the database, particular attention is paid to avoiding spatial and temporal sampling biases and tracing uncertainties through to the final product. This updated database, developed within the New Zealand Deep South National Science Challenge, is suitable for assessing ozone fields from chemistry-climate model simulations or for providing the ozone boundary conditions for global climate model simulations that do not treat stratospheric chemistry interactively.
NASA Astrophysics Data System (ADS)
Qiu, Xin; Cheng, Irene; Yang, Fuquan; Horb, Erin; Zhang, Leiming; Harner, Tom
2018-03-01
Two speciated and spatially resolved emissions databases for polycyclic aromatic compounds (PACs) in the Athabasca oil sands region (AOSR) were developed. The first database was derived from volatile organic compound (VOC) emissions data provided by the Cumulative Environmental Management Association (CEMA) and the second database was derived from additional data collected within the Joint Canada-Alberta Oil Sands Monitoring (JOSM) program. CALPUFF modelling results for atmospheric polycyclic aromatic hydrocarbons (PAHs), alkylated PAHs, and dibenzothiophenes (DBTs), obtained using each of the emissions databases, are presented and compared with measurements from a passive air monitoring network. The JOSM-derived emissions resulted in better model-measurement agreement in the total PAH concentrations and for most PAH species concentrations compared to results using CEMA-derived emissions. At local sites near oil sands mines, the percent error of the model compared to observations decreased from 30 % using the CEMA-derived emissions to 17 % using the JOSM-derived emissions. The improvement at local sites was likely attributed to the inclusion of updated tailings pond emissions estimated from JOSM activities. In either the CEMA-derived or JOSM-derived emissions scenario, the model underestimated PAH concentrations by a factor of 3 at remote locations. Potential reasons for the disagreement include forest fire emissions, re-emissions of previously deposited PAHs, and long-range transport not considered in the model. Alkylated PAH and DBT concentrations were also significantly underestimated. The CALPUFF model is expected to predict higher concentrations because of the limited chemistry and deposition modelling. Thus the model underestimation of PACs is likely due to gaps in the emissions database for these compounds and uncertainties in the methodology for estimating the emissions. Future work is required that focuses on improving the PAC emissions estimation and speciation methodologies and reducing the uncertainties in VOC emissions which are subsequently used in PAC emissions estimation.
NASA Astrophysics Data System (ADS)
Cheng, Tao; Rivard, Benoit; Sánchez-Azofeifa, Arturo G.; Féret, Jean-Baptiste; Jacquemoud, Stéphane; Ustin, Susan L.
2014-01-01
Leaf mass per area (LMA), the ratio of leaf dry mass to leaf area, is a trait of central importance to the understanding of plant light capture and carbon gain. It can be estimated from leaf reflectance spectroscopy in the infrared region, by making use of information about the absorption features of dry matter. This study reports on the application of continuous wavelet analysis (CWA) to the estimation of LMA across a wide range of plant species. We compiled a large database of leaf reflectance spectra acquired within the framework of three independent measurement campaigns (ANGERS, LOPEX and PANAMA) and generated a simulated database using the PROSPECT leaf optical properties model. CWA was applied to the measured and simulated databases to extract wavelet features that correlate with LMA. These features were assessed in terms of predictive capability and robustness while transferring predictive models from the simulated database to the measured database. The assessment was also conducted with two existing spectral indices, namely the Normalized Dry Matter Index (NDMI) and the Normalized Difference index for LMA (NDLMA). Five common wavelet features were determined from the two databases, which showed significant correlations with LMA (R2: 0.51-0.82, p < 0.0001). The best robustness (R2 = 0.74, RMSE = 18.97 g/m2 and Bias = 0.12 g/m2) was obtained using a combination of two low-scale features (1639 nm, scale 4) and (2133 nm, scale 5), the first being predominantly important. The transferability of the wavelet-based predictive model to the whole measured database was either better than or comparable to those based on spectral indices. Additionally, only the wavelet-based model showed consistent predictive capabilities among the three measured data sets. In comparison, the models based on spectral indices were sensitive to site-specific data sets. Integrating the NDLMA spectral index and the two robust wavelet features improved the LMA prediction. One of the bands used by this spectral index, 1368 nm, was located in a strong atmospheric water absorption region and replacing it with the next available band (1340 nm) led to lower predictive accuracies. However, the two wavelet features were not affected by data quality in the atmospheric absorption regions and therefore showed potential for canopy-level investigations. The wavelet approach provides a different perspective into spectral responses to LMA variation than the traditional spectral indices and holds greater promise for implementation with airborne or spaceborne imaging spectroscopy data for mapping canopy foliar dry biomass.
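Both kinds of predictors discussed above are simple to compute from a reflectance spectrum: a normalized difference of two bands, and a wavelet coefficient at a chosen wavelength and scale. A sketch with a synthetic spectrum follows; the 1368/1722 nm band pairing and the Mexican-hat kernel are illustrative assumptions (the text confirms only that 1368 nm is one NDLMA band and that features such as (2133 nm, scale 5) were used):

```python
# Sketch of two LMA predictor types: a normalized-difference spectral index and
# a continuous-wavelet feature at one (wavelength, dyadic scale) pair.
# The reflectance spectrum and band choices are illustrative only.
import numpy as np

wl = np.arange(1000, 2401)                 # wavelengths (nm)
refl = 0.4 + 0.1 * np.sin(wl / 150.0)      # toy reflectance spectrum

def norm_diff(r, wl, b1, b2):
    """Normalized difference of two bands: (R_b1 - R_b2) / (R_b1 + R_b2)."""
    r1, r2 = r[wl == b1][0], r[wl == b2][0]
    return (r1 - r2) / (r1 + r2)

def wavelet_feature(r, wl, band, scale):
    """Mexican-hat wavelet coefficient at one wavelength and dyadic scale."""
    width = 2**scale
    t = (np.arange(10 * width) - 5 * width) / width
    kernel = (1 - t**2) * np.exp(-t**2 / 2)            # Mexican-hat wavelet
    coeffs = np.convolve(r, kernel / np.linalg.norm(kernel), mode="same")
    return coeffs[wl == band][0]

print("index from bands 1368/1722 nm:", round(norm_diff(refl, wl, 1368, 1722), 4))
print("feature at (2133 nm, scale 5):", round(wavelet_feature(refl, wl, 2133, 5), 4))
```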
Development of database of real-world diesel vehicle emission factors for China.
Shen, Xianbao; Yao, Zhiliang; Zhang, Qiang; Wagner, David Vance; Huo, Hong; Zhang, Yingzhi; Zheng, Bo; He, Kebin
2015-05-01
A database of real-world diesel vehicle emission factors, based on type and technology, has been developed following tests on more than 300 diesel vehicles in China using a portable emission measurement system. The database provides better understanding of diesel vehicle emissions under actual driving conditions. We found that although new regulations have reduced real-world emission levels of diesel trucks and buses significantly for most pollutants in China, NOx emissions have been inadequately controlled by the current standards, especially for diesel buses, because of bad driving conditions in the real world. We also compared the emission factors in the database with those calculated by emission factor models and used in inventory studies. The emission factors derived from COPERT (COmputer Programme to calculate Emissions from Road Transport) and MOBILE may both underestimate real emission factors, whereas the updated COPERT and PART5 (Highway Vehicle Particulate Emission Modeling Software) models may overestimate emission factors in China. Real-world measurement results and emission factors used in recent emission inventory studies are inconsistent, which has led to inaccurate estimates of emissions from diesel trucks and buses over recent years. This suggests that emission factors derived from European or US-based models will not truly represent real-world emissions in China. Therefore, it is useful and necessary to conduct systematic real-world measurements of vehicle emissions in China in order to obtain the optimum inputs for emission inventory models.
The MAREDAT Global Database of High Performance Liquid Chromatography Marine Pigment Measurements
NASA Technical Reports Server (NTRS)
Peloquin, J.; Swan, C.; Gruber, N.; Vogt, M.; Claustre, H.; Ras, J.; Uitz, J.; Barlow, R.; Behrenfeld, M.; Bidigare, R.;
2013-01-01
A global pigment database consisting of 35 634 pigment suites measured by high performance liquid chromatography was assembled in support of the MARine Ecosystem DATa (MAREDAT) initiative. These data originate from 136 field surveys within the global ocean, were solicited from investigators and databases, compiled, and then quality controlled. Nearly one quarter of the data originates from the Laboratoire d'Océanographie de Villefranche (LOV), with an additional 17% and 19% stemming from the US JGOFS and LTER programs, respectively. The MAREDAT pigment database provides high quality measurements of the major taxonomic pigments including chlorophylls a and b, 19'-butanoyloxyfucoxanthin, 19'-hexanoyloxyfucoxanthin, alloxanthin, divinyl chlorophyll a, fucoxanthin, lutein, peridinin, prasinoxanthin, violaxanthin and zeaxanthin, which may be used in varying combinations to estimate phytoplankton community composition. Quality control measures consisted of flagging samples that had a total chlorophyll a concentration of zero, had fewer than four reported accessory pigments, or exceeded two standard deviations of the log-linear regression of total chlorophyll a with total accessory pigment concentrations. We anticipate the MAREDAT pigment database to be of use in the marine ecology, remote sensing and ecological modeling communities, where it will support model validation and advance our global perspective on marine biodiversity. The original dataset together with quality control flags as well as the gridded MAREDAT pigment data may be downloaded from PANGAEA: http://doi.pangaea.de/10.1594/PANGAEA.793246.
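The three quality-control rules described above are straightforward to express in code. A sketch over a toy pigment table (all values invented):

```python
# Sketch of the three MAREDAT-style QC rules: flag zero total chlorophyll a,
# fewer than four accessory pigments, or >2 SD from the log-linear regression
# of total chl a on total accessory pigments. Data are synthetic.
import numpy as np

rng = np.random.default_rng(7)
n = 100
tchla = rng.lognormal(0, 1, n)                   # total chlorophyll a
taccp = tchla * rng.lognormal(0.1, 0.3, n)       # total accessory pigments
n_accessory = rng.integers(2, 10, n)             # number of reported accessory pigments
tchla[0] = 0.0                                   # inject one zero-TChl-a sample

flags = np.zeros(n, dtype=bool)
flags |= tchla == 0                              # rule 1: zero total chl a
flags |= n_accessory < 4                         # rule 2: < 4 accessory pigments

ok = ~flags                                      # regression on unflagged samples only
slope, intercept = np.polyfit(np.log10(taccp[ok]), np.log10(tchla[ok]), 1)
resid = np.log10(tchla[ok]) - (slope * np.log10(taccp[ok]) + intercept)
outlier = np.abs(resid) > 2 * resid.std()        # rule 3: > 2 SD from regression
flags[np.where(ok)[0][outlier]] = True

print(f"{flags.sum()} of {n} samples flagged")
```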
Raebel, Marsha A; Schmittdiel, Julie; Karter, Andrew J; Konieczny, Jennifer L; Steiner, John F
2013-08-01
To propose a unifying set of definitions for prescription adherence research utilizing electronic health record prescribing databases, prescription dispensing databases, and pharmacy claims databases and to provide a conceptual framework to operationalize these definitions consistently across studies. We reviewed recent literature to identify definitions in electronic database studies of prescription-filling patterns for chronic oral medications. We then develop a conceptual model and propose standardized terminology and definitions to describe prescription-filling behavior from electronic databases. The conceptual model we propose defines 2 separate constructs: medication adherence and persistence. We define primary and secondary adherence as distinct subtypes of adherence. Metrics for estimating secondary adherence are discussed and critiqued, including a newer metric (New Prescription Medication Gap measure) that enables estimation of both primary and secondary adherence. Terminology currently used in prescription adherence research employing electronic databases lacks consistency. We propose a clear, consistent, broadly applicable conceptual model and terminology for such studies. The model and definitions facilitate research utilizing electronic medication prescribing, dispensing, and/or claims databases and encompasses the entire continuum of prescription-filling behavior. Employing conceptually clear and consistent terminology to define medication adherence and persistence will facilitate future comparative effectiveness research and meta-analytic studies that utilize electronic prescription and dispensing records.
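As an illustration of the gap-based metrics discussed here, the sketch below computes gap days and a proportion-of-days-covered-style estimate from dispensing records; it is a generic simplification for illustration, not the published New Prescription Medication Gap specification:

```python
# Sketch of a gap-based adherence estimate from dispensing records for one
# patient and one chronic medication. Dates and supplies are invented, and
# stockpiling (carryover of early refills) is deliberately ignored.
from datetime import date

fills = [(date(2023, 1, 1), 30),   # (dispense date, days supplied)
         (date(2023, 2, 5), 30),
         (date(2023, 4, 1), 30)]
observation_end = date(2023, 4, 30)

covered = 0
for i, (d, supply) in enumerate(fills):
    # coverage runs until the supply is exhausted, the next fill, or study end
    end = min(d.toordinal() + supply,
              fills[i + 1][0].toordinal() if i + 1 < len(fills)
              else observation_end.toordinal())
    covered += max(0, end - d.toordinal())

total = observation_end.toordinal() - fills[0][0].toordinal()
print(f"gap days: {total - covered}; "
      f"proportion of days covered: {covered / total:.2f}")
```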
NUCFRG2: An evaluation of the semiempirical nuclear fragmentation database
NASA Technical Reports Server (NTRS)
Wilson, J. W.; Tripathi, R. K.; Cucinotta, F. A.; Shinn, J. L.; Badavi, F. F.; Chun, S. Y.; Norbury, J. W.; Zeitlin, C. J.; Heilbronn, L.; Miller, J.
1995-01-01
A semiempirical abrasion-ablation model has been successful in generating a large nuclear database for the study of high charge and energy (HZE) ion beams, radiation physics, and galactic cosmic ray shielding. The cross sections that are generated are compared with measured HZE fragmentation data from various experimental groups. A research program for improvement of the database generator is also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kruger, Albert A.; Muller, I.; Gilbo, K.
2013-11-13
The objectives of this work are aimed at the development of enhanced LAW property-composition models that expand the composition region covered by the models. The models of interest include PCT, VHT, viscosity, and electrical conductivity. This is planned as a multi-year effort that will be performed in phases, with the objectives listed below for the current phase. Incorporate property-composition data from the new glasses into the database. Assess the database and identify composition spaces in the database that need augmentation. Develop statistically-designed composition matrices to cover the composition regions identified in the above analysis. Prepare crucible melts of glass compositions from the statistically-designed composition matrix and measure the properties of interest. Incorporate the above property-composition data into the database. Assess existing models against the complete dataset and, as necessary, start development of new models.
Development of a System Model for Non-Invasive Quantification of Bilirubin in Jaundice Patients
NASA Astrophysics Data System (ADS)
Alla, Suresh K.
Neonatal jaundice is a medical condition which occurs in newborns as a result of an imbalance between the production and elimination of bilirubin. Excess bilirubin in the blood stream diffuses into the surrounding tissue, leading to a yellowing of the skin. An optical system integrated with a signal processing system is used as a platform to noninvasively quantify bilirubin concentration through the measurement of diffuse skin reflectance. Initial studies have led to the generation of a clinical analytical model for neonatal jaundice which generates spectral reflectance data for jaundiced skin with varying levels of bilirubin concentration in the tissue. The spectral database built using the clinical analytical model is then used as a test database to validate the signal processing system in real time. This evaluation forms the basis for understanding the translation of this research to human trials. The clinical analytical model and signal processing system have been successfully validated on three spectral databases. The first spectral database was constructed using a porcine model as a surrogate for neonatal skin tissue. Samples of pig skin were soaked in bilirubin solutions of varying concentrations to simulate jaundiced skin conditions. The resulting skin samples were analyzed with our skin reflectance systems, producing bilirubin concentration values that show a high correlation (R2 = 0.94) to the concentration of the bilirubin solution in which each porcine tissue sample was soaked. The second spectral database consists of spectral measurements collected on human volunteers to quantify the different chromophores and other physical properties of the tissue, such as hematocrit and hemoglobin. The third spectral database is the spectral data collected at different time periods from the moment a bruise is induced.
ERIC Educational Resources Information Center
Deutsch, Donald R.
This report describes a research effort that was carried out over a period of several years to develop and demonstrate a methodology for evaluating proposed Database Management System designs. The major proposition addressed by this study is embodied in the thesis statement: Proposed database management system designs can be evaluated best through…
Seismic Search Engine: A distributed database for mining large scale seismic data
NASA Astrophysics Data System (ADS)
Liu, Y.; Vaidya, S.; Kuzma, H. A.
2009-12-01
The International Monitoring System (IMS) of the CTBTO collects terabytes worth of seismic measurements from many receiver stations situated around the earth with the goal of detecting underground nuclear testing events and distinguishing them from other benign, but more common events such as earthquakes and mine blasts. The International Data Center (IDC) processes and analyzes these measurements, as they are collected by the IMS, to summarize event detections in daily bulletins. Thereafter, the data measurements are archived into a large format database. Our proposed Seismic Search Engine (SSE) will facilitate a framework for data exploration of the seismic database as well as the development of seismic data mining algorithms. Analogous to GenBank, the annotated genetic sequence database maintained by NIH, through SSE, we intend to provide public access to seismic data and a set of processing and analysis tools, along with community-generated annotations and statistical models to help interpret the data. SSE will implement queries as user-defined functions composed from standard tools and models. Each query is compiled and executed over the database internally before reporting results back to the user. Since queries are expressed with standard tools and models, users can easily reproduce published results within this framework for peer-review and making metric comparisons. As an illustration, an example query is “what are the best receiver stations in East Asia for detecting events in the Middle East?” Evaluating this query involves listing all receiver stations in East Asia, characterizing known seismic events in that region, and constructing a profile for each receiver station to determine how effective its measurements are at predicting each event. The results of this query can be used to help prioritize how data is collected, identify defective instruments, and guide future sensor placements.
Power Plant Model Validation Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
The PPMV is used to validate generator models using disturbance recordings. The PPMV tool contains a collection of power plant models and model validation studies, as well as disturbance recordings from a number of historic grid events. The user can import data from a new disturbance into the database, which converts PMU and SCADA data into GE PSLF format, and then run the tool to validate (or invalidate) the model for a specific power plant against its actual performance. The PNNL PPMV tool enables the automation of the process of power plant model validation using disturbance recordings. The tool uses PMU and SCADA measurements as input information. The tool automatically adjusts all required EPCL scripts and interacts with GE PSLF in batch mode. The main tool features include: interaction with GE PSLF; use of the GE PSLF Play-In Function for generator model validation; a database of projects (model validation studies); a database of historic events; a database of power plants; advanced visualization capabilities; and automatic report generation.
NASA MEaSUREs Combined ASTER and MODIS Emissivity over Land (CAMEL)
NASA Astrophysics Data System (ADS)
Borbas, E. E.; Hulley, G. C.; Feltz, M.; Knuteson, R. O.; Hook, S. J.
2016-12-01
A land surface emissivity product of the NASA MEaSUREs project called Combined ASTER and MODIS Emissivity over Land (CAMEL) is being made available as part of the Unified and Coherent Land Surface Temperature and Emissivity (LST&E) Earth System Data Record (ESDR). The CAMEL database has been created by merging the UW MODIS-based baseline-fit emissivity database (UWIREMIS), developed at the University of Wisconsin-Madison, and the ASTER Global Emissivity Database (ASTER GED V4), produced at JPL. This poster introduces the beta version of the database, which is available globally for the period 2003 through 2015 at 5 km resolution, in mean monthly time steps, for 13 bands from 3.6 to 14.3 micron. An algorithm to create high spectral resolution emissivity at 417 wavenumbers is also provided for high spectral resolution IR applications. On this poster, the CAMEL database is evaluated against the IASI Emissivity Atlas (Zhou et al., 2010) and laboratory measurements, and also through simulation of IASI BTs with the RTTOV forward model.
Validation of a common data model for active safety surveillance research
Ryan, Patrick B; Reich, Christian G; Hartzema, Abraham G; Stang, Paul E
2011-01-01
Objective Systematic analysis of observational medical databases for active safety surveillance is hindered by the variation in data models and coding systems. Data analysts often find robust clinical data models difficult to understand and ill suited to support their analytic approaches. Further, some models do not facilitate the computations required for systematic analysis across many interventions and outcomes for large datasets. Translating the data from these idiosyncratic data models to a common data model (CDM) could facilitate both the analysts' understanding and the suitability for large-scale systematic analysis. In addition to facilitating analysis, a suitable CDM has to faithfully represent the source observational database. Before beginning to use the Observational Medical Outcomes Partnership (OMOP) CDM and a related dictionary of standardized terminologies for a study of large-scale systematic active safety surveillance, the authors validated the model's suitability for this use by example. Validation by example To validate the OMOP CDM, the model was instantiated into a relational database, data from 10 different observational healthcare databases were loaded into separate instances, a comprehensive array of analytic methods that operate on the data model was created, and these methods were executed against the databases to measure performance. Conclusion There was acceptable representation of the data from 10 observational databases in the OMOP CDM using the standardized terminologies selected, and a range of analytic methods was developed and executed with sufficient performance to be useful for active safety surveillance. PMID:22037893
A Maneuvering Flight Noise Model for Helicopter Mission Planning
NASA Technical Reports Server (NTRS)
Greenwood, Eric; Rau, Robert; May, Benjamin; Hobbs, Christopher
2015-01-01
A new model for estimating the noise radiation during maneuvering flight is developed in this paper. The model applies the Quasi-Static Acoustic Mapping (Q-SAM) method to a database of acoustic spheres generated using the Fundamental Rotorcraft Acoustics Modeling from Experiments (FRAME) technique. A method is developed to generate a realistic flight trajectory from a limited set of waypoints and is used to calculate the quasi-static operating condition and corresponding acoustic sphere for the vehicle throughout the maneuver. By using a previously computed database of acoustic spheres, the acoustic impact of proposed helicopter operations can be rapidly predicted for use in mission-planning. The resulting FRAME-QS model is applied to near-horizon noise measurements collected for the Bell 430 helicopter undergoing transient pitch up and roll maneuvers, with good agreement between the measured data and the FRAME-QS model.
NASA Technical Reports Server (NTRS)
Bose, Deepak
2012-01-01
The design of entry vehicles requires predictions of the aerothermal environment during the hypersonic phase of their flight trajectories. These predictions are made using computational fluid dynamics (CFD) codes that often rely on physics and chemistry models of nonequilibrium processes. The primary processes of interest are gas phase chemistry, internal energy relaxation, electronic excitation, nonequilibrium emission and absorption of radiation, and gas-surface interaction leading to surface recession and catalytic recombination. NASA's Hypersonics Project is advancing the state of the art in modeling of nonequilibrium phenomena by making detailed spectroscopic measurements in shock tubes and arcjets, using ab initio quantum mechanical techniques to develop fundamental chemistry and spectroscopic databases, making fundamental measurements of finite-rate gas-surface interactions, and implementing detailed mechanisms in state-of-the-art CFD codes. The development of new models is based on validation with relevant experiments. We will present the latest developments and a roadmap for the technical areas mentioned above.
Online Islamic Organizations and Measuring Web Effectiveness
2004-12-01
Internet Research 13 (2003): 17-26. Retrieved from ProQuest online database on 15 May 2004. Lee, Jae-Kwan. “A model for monitoring public sector...Web site strategy.” Internet Research: Electronic Networking Applications and Policy 13 (2003): 259-266. Retrieved from Emerald online database on
Advanced aviation environmental modeling tools to inform policymakers
DOT National Transportation Integrated Search
2012-08-19
Aviation environmental models which conform to international guidance have advanced over the past several decades. Enhancements to algorithms and databases have increasingly shown these models to compare well with gold-standard measured data. The...
Sridhar, Vishnu B; Tian, Peifang; Dale, Anders M; Devor, Anna; Saisan, Payam A
2014-01-01
We present a database client software, Neurovascular Network Explorer 1.0 (NNE 1.0), that uses a MATLAB(®)-based Graphical User Interface (GUI) for interaction with a database of 2-photon single-vessel diameter measurements from our previous publication (Tian et al., 2010). These data are of particular interest for modeling the hemodynamic response. NNE 1.0 is downloaded by the user and then runs either as a MATLAB script or as a standalone program on a Windows platform. The GUI allows browsing the database according to parameters specified by the user, simple manipulation and visualization of the retrieved records (such as averaging and peak normalization), and export of the results. Further, we provide the NNE 1.0 source code. With this source code, the user can build a database of their own experimental results, given the appropriate data structure and naming conventions, and thus share their data in a user-friendly format with other investigators. NNE 1.0 provides an example of a seamless and low-cost solution for the sharing of experimental data by a regular-size neuroscience laboratory and may serve as a general template, facilitating the dissemination of biological results and accelerating data-driven modeling approaches.
Measurements of near-IR water vapor absorption at high pressure and temperature
NASA Astrophysics Data System (ADS)
Rieker, G. B.; Liu, X.; Li, H.; Jeffries, J. B.; Hanson, R. K.
2007-03-01
Tunable diode lasers (TDLs) are used to measure high resolution (0.1 cm-1), near-infrared (NIR) water vapor absorption spectra at 700 K and pressures up to 30 atm within a high-pressure and -temperature optical cell in a high-uniformity tube furnace. Both direct absorption and wavelength modulation with second harmonic detection (WMS-2f) spectra are obtained for 6 cm-1 regions near 7204 cm-1 and 7435 cm-1. Direct absorption measurements at 700 K and 10 atm are compared with simulations using spectral parameters from HITRAN and a hybrid database combining HITRAN with measured spectral constants for transitions in the two target spectral regions. The hybrid database reduces RMS error between the simulation and the measurements by 45% for the 7204 cm-1 region and 28% for the 7435 cm-1 region. At pressures above 10 atm, the breakdown of the impact approximation inherent to the Lorentzian line shape model becomes apparent in the direct absorption spectra, and measured results are in agreement with model results and trends at elevated temperatures reported in the literature. The wavelength-modulation spectra are shown to be less affected by the breakdown of the impact approximation and measurements agree well with the hybrid database predictions to higher pressures (30 atm).
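The RMS-error comparison between simulated and measured spectra reduces to a one-line metric. The short sketch below, using invented absorbance arrays in place of the real spectra, shows how the percentage reductions reported above would be computed.

```python
import numpy as np

# Illustrative arrays standing in for measured absorbance and two simulations
# (one from HITRAN alone, one from the hybrid database). Values are made up.
measured   = np.array([0.110, 0.240, 0.410, 0.300, 0.150])
sim_hitran = np.array([0.090, 0.300, 0.350, 0.330, 0.120])
sim_hybrid = np.array([0.105, 0.255, 0.395, 0.310, 0.140])

def rms_error(sim, meas):
    """Root-mean-square difference between simulation and measurement."""
    return np.sqrt(np.mean((sim - meas) ** 2))

e_hitran = rms_error(sim_hitran, measured)
e_hybrid = rms_error(sim_hybrid, measured)
print(f"RMS error, HITRAN: {e_hitran:.4f}")
print(f"RMS error, hybrid: {e_hybrid:.4f}")
print(f"reduction: {100 * (1 - e_hybrid / e_hitran):.0f}%")
```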
Data Mining the Ogle-II I-band Database for Eclipsing Binary Stars
NASA Astrophysics Data System (ADS)
Ciocca, M.
2013-08-01
The OGLE I-band database is a searchable database of quality photometric data available to the public. During Phase 2 of the experiment, known as "OGLE-II", I-band observations were made over a period of approximately 1,000 days, resulting in over 10^10 measurements of more than 40 million stars. This was accomplished by using a filter with a passband near the standard Cousins Ic. The database of these observations is fully searchable using the MySQL database engine, and provides the magnitude measurements and their uncertainties. In this work, a program of data mining the OGLE I-band database was performed, resulting in the discovery of 42 previously unreported eclipsing binaries. Using the software package Peranso (Vanmunster 2011) to analyze the light curves obtained from OGLE-II, the eclipsing types, the epochs and the periods of these eclipsing variables were determined, to one part in 10^6. A preliminary attempt to model the physical parameters of these binaries was also performed, using the Binary Maker 3 software (Bradstreet and Steelman 2004).
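As an illustration of period-finding on irregularly sampled photometry (the study itself used Peranso), the following sketch recovers a period from a synthetic light curve with astropy's Lomb-Scargle periodogram. The period, magnitudes, and sampling below are invented; note that for eclipsing binaries a periodogram of this kind often peaks at half the orbital period.

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(0)

# Synthetic I-band light curve: irregular sampling over ~1000 days, a
# sinusoidal stand-in for the eclipse signal, plus photometric noise.
t = np.sort(rng.uniform(0.0, 1000.0, 400))          # days
true_period = 2.875                                  # days (made up)
mag = (15.0 + 0.3 * np.sin(2 * np.pi * t / true_period)
       + rng.normal(0, 0.02, t.size))
dmag = np.full_like(t, 0.02)

frequency, power = LombScargle(t, mag, dmag).autopower(
    minimum_frequency=0.01, maximum_frequency=5.0)   # cycles per day
best_period = 1.0 / frequency[np.argmax(power)]
print(f"recovered period: {best_period:.4f} d (true {true_period} d)")
```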
SSME environment database development
NASA Technical Reports Server (NTRS)
Reardon, John
1987-01-01
The internal environment of the Space Shuttle Main Engine (SSME) is being determined from hot firings of the prototype engines and from model tests using either air or water as the test fluid. The objectives are to develop a database system to facilitate management and analysis of test measurements and results, to enter available data into the database, and to analyze available data to establish conventions and procedures to provide consistency in data normalization and configuration geometry references.
Pavelko, Michael T.
2010-01-01
The water-level database for the Death Valley regional groundwater flow system in Nevada and California was updated. The database includes more than 54,000 water levels collected from 1907 to 2007, from more than 1,800 wells. Water levels were assigned a primary flag and multiple secondary flags that describe hydrologic conditions and trends at the time of the measurement and identify pertinent information about the well or water-level measurement. The flags provide a subjective measure of the relative accuracy of the measurements and are used to identify which water levels are appropriate for calculating head observations in a regional transient groundwater flow model. Included in the report appendix are all water-level data and their flags, selected well data, and an interactive spreadsheet for viewing hydrographs and well locations.
Evaluation of Galactic Cosmic Ray Models
NASA Technical Reports Server (NTRS)
Adams, James H., Jr.; Heiblim, Samuel; Malott, Christopher
2009-01-01
Models of the galactic cosmic ray spectra have been tested by comparing their predictions to an evaluated database containing more than 380 measured cosmic ray spectra extending from 1960 to the present.
NASA Astrophysics Data System (ADS)
Brunini, Claudio; Azpilicueta, Francisco; Nava, Bruno
2013-09-01
Well-credited and widely used ionospheric models, such as the International Reference Ionosphere or NeQuick, describe the variation of the electron density with height by means of a piecewise profile tied to the F2-peak parameters: the electron density, NmF2, and the height, hmF2. Accurate values of these parameters are crucial for retrieving reliable electron density estimations from those models. When direct measurements of these parameters are not available, the models compute the parameters using the so-called ITU-R database, which was established in the early 1960s. This paper presents a technique aimed at routinely updating the ITU-R database using radio occultation electron density profiles derived from GPS measurements gathered from low Earth orbit satellites. Before being used, these radio occultation profiles are validated by fitting an electron density model to them. A re-weighted Least Squares algorithm is used to down-weight unreliable measurements (occasionally, entire profiles) and to retrieve NmF2 and hmF2 values—together with their error estimates—from the profiles. These values are used to update the database monthly; the database consists of two sets of ITU-R-like coefficients that could easily be implemented in the IRI or NeQuick models. The technique was tested with radio occultation electron density profiles that are delivered to the community by the COSMIC/FORMOSAT-3 mission team. Tests were performed for solstice and equinox seasons in high- and low-solar-activity conditions. The global mean error of the resulting maps—estimated by the Least Squares technique—is between … and … elec/m^3 for the F2-peak electron density (equivalent to 7% of the value of the estimated parameter) and from 2.0 to 5.6 km for the height (2%).
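The re-weighted Least Squares step can be illustrated generically. The sketch below, a minimal stand-in rather than the authors' algorithm, fits a linear model by iteratively re-weighted least squares with Huber-type weights, so that gross outliers (standing in for unreliable occultation measurements) are down-weighted. All data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic linear problem y = X b + noise, with a few gross outliers
# standing in for unreliable radio occultation measurements.
n = 200
X = np.column_stack([np.ones(n), rng.uniform(-1, 1, n)])
b_true = np.array([2.0, -0.5])
y = X @ b_true + rng.normal(0, 0.1, n)
y[:10] += 3.0                                  # corrupted observations

b = np.linalg.lstsq(X, y, rcond=None)[0]       # ordinary LS starting point
for _ in range(10):                            # iteratively reweighted LS
    r = y - X @ b
    s = 1.4826 * np.median(np.abs(r))          # robust scale estimate (MAD)
    w = np.minimum(1.0, 1.345 * s / np.maximum(np.abs(r), 1e-12))  # Huber
    sw = np.sqrt(w)
    b = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]

print("estimate:", b, "true:", b_true)
```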
A Database of Supercooled Large Droplet Ice Accretions [Supplement
NASA Technical Reports Server (NTRS)
VanZante, Judith Foss
2007-01-01
A unique, publicly available database regarding supercooled large droplet (SLD) ice accretions has been developed in NASA Glenn's Icing Research Tunnel. Identical cloud and flight conditions were generated for five different airfoil models. The models chosen represent a variety of aircraft types from the horizontal stabilizer of a large transport aircraft to the wings of regional, business, and general aviation aircraft. In addition to the standard documentation methods of 2D ice shape tracing and imagery, ice mass measurements were also taken. This database will also be used to validate and verify the extension of the ice accretion code, LEWICE, into the SLD realm.
A Database of Supercooled Large Droplet Ice Accretions
NASA Technical Reports Server (NTRS)
VanZante, Judith Foss
2007-01-01
A unique, publicly available database regarding supercooled large droplet ice accretions has been developed in NASA Glenn's Icing Research Tunnel. Identical cloud and flight conditions were generated for five different airfoil models. The models chosen represent a variety of aircraft types from the horizontal stabilizer of a large transport aircraft to the wings of regional, business, and general aviation aircraft. In addition to the standard documentation methods of 2D ice shape tracing and imagery, ice mass measurements were also taken. This database will also be used to validate and verify the extension of the ice accretion code, LEWICE, into the SLD realm.
Using STOQS and stoqstoolbox for in situ Measurement Data Access in Matlab
NASA Astrophysics Data System (ADS)
López-Castejón, F.; Schlining, B.; McCann, M. P.
2012-12-01
This poster presents the stoqstoolbox, an extension to Matlab that simplifies the loading of in situ measurement data directly from STOQS databases. STOQS (Spatial Temporal Oceanographic Query System) is a geospatial database tool designed to provide efficient access to data following the CF-NetCDF Discrete Sampling Geometries convention. Data are loaded from CF-NetCDF files into a STOQS database where indexes are created on depth, spatial coordinates and other parameters, e.g. platform type. STOQS provides consistent, simple and efficient methods to query for data. For example, we can request all measurements with a standard_name of sea_water_temperature between two times and between two depths. Data access is simpler because the data are retrieved by parameter irrespective of platform or mission file names. Access is more efficient because data are retrieved via the index on depth and only the requested data are retrieved from the database and transferred into the Matlab workspace. Applications in the stoqstoolbox query the STOQS database via an HTTP REST application programming interface; they follow the Data Access Object pattern, enabling highly customizable query construction. Data are loaded into Matlab structures that clearly indicate latitude, longitude, depth, measurement data value, and platform name. The stoqstoolbox is designed to be used in concert with other tools, such as nctoolbox, which can load data from any OPeNDAP data source. With these two toolboxes a user can easily work with in situ and other gridded data, such as from numerical models and remote sensing platforms. To show the capability of stoqstoolbox, we present an example of model validation using data collected during the May-June 2012 field experiment conducted by the Monterey Bay Aquarium Research Institute (MBARI) in Monterey Bay, California. The data are available from the STOQS server at http://odss.mbari.org/canon/stoqs_may2012/query/. Over 14 million data points of 18 parameters from 6 platforms measured over a 3-week period are available on this server. The model used for comparison is the Regional Ocean Modeling System developed by the Jet Propulsion Laboratory for Monterey Bay. The model output is loaded into Matlab using nctoolbox from the JPL server at http://ourocean.jpl.nasa.gov:8080/thredds/dodsC/MBNowcast. Model validation with in situ measurements can be difficult because of different file formats and because data may be spread across individual data systems for each platform. With stoqstoolbox the researcher must know only the URL of the STOQS server and the OPeNDAP URL of the model output. With selected depth and time constraints, a user's Matlab program searches for all in situ measurements available for the same time, depth and variable of the model. STOQS and stoqstoolbox are open-source software projects supported by MBARI and the David and Lucile Packard Foundation. For more information please see http://code.google.com/p/stoqs.
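As a sketch of what a REST query against a STOQS server might look like from Python (rather than Matlab), consider the following. The parameter names and the response format below are assumptions made for illustration, not the documented STOQS API; consult the project pages for the real query syntax.

```python
import requests

# Sketch of a REST query against a STOQS server. The parameter names are
# illustrative assumptions, not a documented API.
BASE = "http://odss.mbari.org/canon/stoqs_may2012/query/"

params = {
    "standard_name": "sea_water_temperature",  # assumed parameter name
    "mindepth": 2.0,                           # metres (assumed)
    "maxdepth": 10.0,
    "start": "2012-05-15T00:00:00Z",           # assumed ISO-8601 time bounds
    "end": "2012-05-22T00:00:00Z",
    "format": "json",                          # assumed response format flag
}

resp = requests.get(BASE, params=params, timeout=30)
resp.raise_for_status()
records = resp.json()  # assumed: a list of measurement records
print(f"retrieved {len(records)} measurement records")
```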
NASA Astrophysics Data System (ADS)
Owens, P. R.; Libohova, Z.; Seybold, C. A.; Wills, S. A.; Peaslee, S.; Beaudette, D.; Lindbo, D. L.
2017-12-01
The measurement errors and spatial prediction uncertainties of soil properties in the modeling community are usually assessed against measured values when available. However, of equal importance is the assessment of error and uncertainty impacts on cost-benefit analysis and risk assessments. Soil pH was selected as one of the most commonly measured soil properties used for liming recommendations. The objective of this study was to assess the error size from different sources and their implications with respect to management decisions. Error sources include measurement methods, laboratory sources, pedotransfer functions, database transections, spatial aggregations, etc. Several databases of measured and predicted soil pH were used for this study, including the United States National Cooperative Soil Survey Characterization Database (NCSS-SCDB) and the US Soil Survey Geographic (SSURGO) Database. The distribution of errors among different sources, from measurement methods to spatial aggregation, showed a wide range of values. The greatest RMSE of 0.79 pH units was from spatial aggregation (SSURGO vs Kriging), while the measurement methods had the lowest RMSE of 0.06 pH units. Assuming the order of data acquisition based on the transaction distance, i.e. from measurement method to spatial aggregation, the RMSE increased from 0.06 to 0.8 pH units, suggesting an "error propagation". This has major implications for practitioners and the modeling community. Most soil liming rate recommendations are based on 0.1 pH unit increments, while the desired soil pH level increments are based on 0.4 to 0.5 pH units. Thus, even when the measured and desired target soil pH are the same, most guidelines recommend 1 ton ha-1 of lime, which translates into a cost of 111 ha-1 that the farmer has to factor into the cost-benefit analysis. However, this analysis needs to be based on uncertainty predictions (0.5-1.0 pH units) rather than measurement errors (0.1 pH units), which would translate into a 555-1,111 investment that needs to be assessed against the risk. The modeling community can benefit from such analysis; however, error size and spatial distribution for global and regional predictions need to be assessed against the variability of other drivers and their impact on management decisions.
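If the step-wise error sources were independent, their combined effect would add in quadrature. The short sketch below illustrates that bookkeeping; the two end-point RMSE values come from the abstract, while the intermediate contributions are placeholders invented for the example.

```python
import numpy as np

# RMSE contributions (pH units) at successive steps of data acquisition.
# The first and last values are reported in the abstract; the intermediate
# sources are invented placeholders.
rmse_steps = {
    "measurement method":  0.06,
    "laboratory source":   0.10,   # placeholder
    "pedotransfer":        0.20,   # placeholder
    "spatial aggregation": 0.79,
}

# Independent error sources combine in quadrature.
combined = np.sqrt(sum(v ** 2 for v in rmse_steps.values()))
print(f"combined RMSE: {combined:.2f} pH units")

# Compare with a liming guideline expressed in 0.1 pH-unit increments:
print("uncertainty spans about", round(combined / 0.1),
      "recommendation increments")
```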
76 FR 20438 - Proposed Model Performance Measures for State Traffic Records Systems
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-12
... what data elements are critical. States should take advantage of these decision-making opportunities to... single database. Error means the recorded value for some data element of interest is incorrect. Error... into the database) and the number of missing (blank) data elements in the records that are in a...
Researchers in the National Exposure Research Laboratory (NERL) have performed a number of large human exposure measurement studies during the past decade. It is the goal of the NERL to make the data available to other researchers for analysis in order to further the scientific ...
Recent advances on terrain database correlation testing
NASA Astrophysics Data System (ADS)
Sakude, Milton T.; Schiavone, Guy A.; Morelos-Borja, Hector; Martin, Glenn; Cortes, Art
1998-08-01
Terrain database correlation is a major requirement for interoperability in distributed simulation. There are numerous situations in which terrain database correlation problems can occur that, in turn, lead to lack of interoperability in distributed training simulations. Examples are the use of different run-time terrain databases derived from inconsistent source data, the use of different resolutions, and the use of different data models between databases for both terrain and culture data. IST has been developing a suite of software tools, named ZCAP, to address terrain database interoperability issues. In this paper we discuss recent enhancements made to this suite, including improved algorithms for sampling and calculating line-of-sight, an improved method for measuring terrain roughness, and the application of a sparse matrix method to the terrain remediation solution developed at the Visual Systems Lab of the Institute for Simulation and Training. We review the application of some of these new algorithms to the terrain correlation measurement processes. The application of these new algorithms improves our support for very large terrain databases, and provides the capability for performing test replications to estimate the sampling error of the tests. With this set of tools, a user can quantitatively assess the degree of correlation between large terrain databases.
NASA Astrophysics Data System (ADS)
Gentry, Jeffery D.
2000-05-01
A relational database is a powerful tool for collecting and analyzing the vast amounts of interrelated data associated with the manufacture of composite materials. A relational database contains many individual database tables that store data that are related in some fashion. Manufacturing process variables as well as quality assurance measurements can be collected and stored in database tables indexed according to lot numbers, part type or individual serial numbers. Relationships between manufacturing process and product quality can then be correlated over a wide range of product types and process variations. This paper presents details on how relational databases are used to collect, store, and analyze process variables and quality assurance data associated with the manufacture of advanced composite materials. Important considerations are covered, including how the various types of data are organized and how relationships between the data are defined. Employing relational database techniques to establish correlative relationships between process variables and quality assurance measurements is then explored. Finally, the benefits of database techniques such as data warehousing, data mining and web-based client/server architectures are discussed in the context of composite material manufacturing.
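A minimal sketch of the pattern described above: process variables and quality assurance (QA) measurements live in separate tables keyed by part serial number, so they can be joined and correlated. The schema and values below are invented for the example.

```python
import sqlite3

# Process variables and QA measurements in separate tables, keyed by the
# part serial number. All table names and values are illustrative.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE process (serial TEXT PRIMARY KEY, cure_temp_c REAL)")
cur.execute("CREATE TABLE qa (serial TEXT, porosity_pct REAL)")

cur.executemany("INSERT INTO process VALUES (?, ?)",
                [("S001", 176.0), ("S002", 182.5), ("S003", 179.0)])
cur.executemany("INSERT INTO qa VALUES (?, ?)",
                [("S001", 1.2), ("S002", 2.9), ("S003", 1.8)])

# Join process and QA data on serial number to study their relationship.
rows = cur.execute("""
    SELECT p.serial, p.cure_temp_c, q.porosity_pct
    FROM process p JOIN qa q ON p.serial = q.serial
    ORDER BY p.cure_temp_c""").fetchall()
for row in rows:
    print(row)
```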
Airframe Noise Sub-Component Definition and Model
NASA Technical Reports Server (NTRS)
Golub, Robert A. (Technical Monitor); Sen, Rahul; Hardy, Bruce; Yamamoto, Kingo; Guo, Yue-Ping; Miller, Gregory
2004-01-01
Both in-house, and jointly with NASA under the Advanced Subsonic Transport (AST) program, Boeing Commercial Aircraft Company (BCA) had begun work on systematically identifying specific components of noise responsible for total airframe noise generation and applying the knowledge gained towards the creation of a model for airframe noise prediction. This report documents the continued collection of databases of model-scale and full-scale airframe noise measurements to complement the earlier existing databases, the development of the subcomponent models, and the generation of a new empirical prediction code. The airframe subcomponent data include measurements from aircraft ranging in size from a Boeing 737 to aircraft larger than a Boeing 747. These results provide the continuity to evaluate the technology developed under the AST program consistent with the guidelines set forth in NASA CR-198298.
NASA Technical Reports Server (NTRS)
Xiang, Xuwu; Smith, Eric A.; Tripoli, Gregory J.
1992-01-01
A hybrid statistical-physical retrieval scheme is explored which combines a statistical approach with an approach based on the development of cloud-radiation models designed to simulate precipitating atmospheres. The algorithm employs the detailed microphysical information from a cloud model as input to a radiative transfer model which generates a cloud-radiation model database. Statistical procedures are then invoked to objectively generate an initial guess composite profile data set from the database. The retrieval algorithm has been tested for a tropical typhoon case using Special Sensor Microwave/Imager (SSM/I) data and has shown satisfactory results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson-Teixeira, Kristina J.; DeLucia, Evan H.; Duval, Benjamin D.
2015-10-29
To advance understanding of C dynamics of forests globally, we compiled a new database, the Forest C database (ForC-db), which contains data on ground-based measurements of ecosystem-level C stocks and annual fluxes along with disturbance history. This database currently contains 18,791 records from 2009 sites, making it the largest and most comprehensive database of C stocks and flows in forest ecosystems globally. The tropical component of the database will be published in conjunction with a manuscript that is currently under review (Anderson-Teixeira et al., in review). Database development continues, and we hope to maintain a dynamic instance of the entire (global) database.
Airport take-off noise assessment aimed at identifying responsible aircraft classes.
Sanchez-Perez, Luis A; Sanchez-Fernandez, Luis P; Shaout, Adnan; Suarez-Guerra, Sergio
2016-01-15
Assessment of aircraft noise is an important task for today's airports in the fight against environmental noise pollution, given recent findings on the negative effects of noise exposure on human health. Noise monitoring and estimation around airports mostly use aircraft noise signals only for computing statistical indicators and depend on additional data sources to determine required inputs such as the aircraft class responsible for the noise. In this sense, attempts have been made to improve noise monitoring and estimation systems by creating methods for obtaining more information from aircraft noise signals, especially real-time aircraft class recognition. Consequently, this paper proposes a multilayer neural-fuzzy model for aircraft class recognition based on take-off noise signal segmentation. It uses a fuzzy inference system to build a final response for each class p based on the aggregation of K parallel neural network outputs Op(k) with respect to Linear Predictive Coding (LPC) features extracted from K adjacent signal segments. Based on extensive experiments on two databases of real-time take-off noise measurements, the proposed model performs better than other methods in the literature, particularly when aircraft classes are strongly correlated with each other. A new, strictly cross-checked database is introduced, including more complex classes and real-time take-off noise measurements from modern aircraft. The new model is at least 5% more accurate on the previous database and successfully classifies 87% of measurements in the new database. Copyright © 2015 Elsevier B.V. All rights reserved.
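The aggregation stage can be pictured with a toy stand-in. The sketch below simply averages K per-segment class-score vectors Op(k) into a final response; the paper's actual combiner is a fuzzy inference system, and all scores here are invented.

```python
import numpy as np

# Stand-in for the aggregation stage: K parallel per-segment classifiers
# each emit a score vector over P aircraft classes; the final response
# combines them. Plain averaging is shown only to illustrate the data flow;
# the published model uses a fuzzy inference system. Scores are invented.
K, P = 4, 3                        # 4 signal segments, 3 aircraft classes
O = np.array([[0.7, 0.2, 0.1],     # O[k, p]: segment k's score for class p
              [0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.8, 0.1, 0.1]])

response = O.mean(axis=0)          # aggregate the K segment outputs
predicted_class = int(np.argmax(response))
print("per-class response:", np.round(response, 3))
print("predicted class index:", predicted_class)
```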
Empirical cost models for estimating power and energy consumption in database servers
NASA Astrophysics Data System (ADS)
Valdivia Garcia, Harold Dwight
The explosive growth in the size of data centers, coupled with the widespread use of virtualization technology, has made power and energy consumption major concerns for data center administrators. Provisioning decisions must take into consideration not only target application performance but also the power demands and total energy consumption incurred by the hardware and software to be deployed at the data center. Failure to do so will result in damaged equipment, power outages, and inefficient operation. Since database servers comprise one of the most popular and important server applications deployed in such facilities, it becomes necessary to have accurate cost models that can predict the power and energy demands that each database workload will impose on the system. In this work we present an empirical methodology to estimate the power and energy cost of database operations. Our methodology uses multiple-linear regression to derive accurate cost models that depend only on readily available statistics such as selectivity factors, tuple size, number of columns and relational cardinality. Moreover, our method does not need measurement of individual hardware components, but rather total power and energy consumption measured at a server. We have implemented our methodology, and ran experiments with several server configurations. Our experiments indicate that we can predict power and energy more accurately than alternative methods found in the literature.
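A minimal sketch of the regression idea, fitting measured server power against the kinds of readily available statistics the abstract names. The feature values, coefficients, and noise below are synthetic; a real study would use measurements from instrumented servers.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)

# Synthetic training set: each row holds readily available query statistics
# (selectivity factor, tuple size in bytes, number of columns, cardinality);
# the target is measured server power in watts. All values are invented.
n = 500
X = np.column_stack([
    rng.uniform(0.0, 1.0, n),       # selectivity factor
    rng.uniform(64, 1024, n),       # tuple size
    rng.integers(1, 40, n),         # number of columns
    rng.uniform(1e3, 1e7, n),       # relation cardinality
])
power = (120 + 15 * X[:, 0] + 0.01 * X[:, 1] + 2e-6 * X[:, 3]
         + rng.normal(0, 2, n))

model = LinearRegression().fit(X, power)
print("coefficients:", model.coef_)
print("R^2 on training data:", round(model.score(X, power), 3))
```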
Measurement of Global Radiation using Photovoltaic Panels
NASA Astrophysics Data System (ADS)
Veroustraete, Frank; Bronders, Jan; Lefevre, Filip; Mensink, Clemens
2014-05-01
The Vito Unit - Environmental and Spatial Aspects (RMA) - makes use of global solar radiation in many of its models. Because this variable is seldom measured or available at the local scale and at high multi-temporal frequencies, many models are fed with low-quality estimates of global solar radiation at local to regional scales. The SUNSPIDER project was initiated with the following objectives: to use photovoltaic (PV) solar panels to measure solar radiation at the highest spatio-temporal resolution, from local to regional scales and from minutes to years, and to integrate the measured solar fields in different application fields such as plant systems and agriculture, agro-meteorology and hydrology and, last but not least, solar energy applications. In Belgium, about 250,000 PV installations have been built, supplying about 6% of electric power from photovoltaics on a yearly basis. Last year in June, the supply reached a peak of more than 20% of the total power input on the Belgian grid. A database of Belgian residential solar panel sites will be compiled. The database will serve as input to an inverted PV model to perform radiation calculations specifically for each of the validated panel sites based on minutely logged power data. Data acquisition for these sites will start each time a site is validated and hence imported into the database. Keywords: Photovoltaic Panels; PV modelling; Global Radiation.
Storing Data from Qweak--A Precision Measurement of the Proton's Weak Charge
NASA Astrophysics Data System (ADS)
Pote, Timothy
2008-10-01
The Qweak experiment will perform a precision measurement of the proton's parity violating weak charge at low Q-squared. The experiment will do so by measuring the asymmetry in parity-violating electron scattering. The proton's weak charge is directly related to the value of the weak mixing angle--a fundamental quantity in the Standard Model. The Standard Model makes a firm prediction for the value of the weak mixing angle and thus Qweak may provide insight into shortcomings in the SM. The Qweak experiment will run at Thomas Jefferson National Accelerator Facility in Newport News, VA. A database was designed to hold data directly related to the measurement of the proton's weak charge such as detector and beam monitor yield, asymmetry, and error as well as control structures such as the voltage across photomultiplier tubes and the temperature of the liquid hydrogen target. In order to test the database for speed and stability, it was filled with fake data that mimicked the data that Qweak is expected to collect. I will give a brief overview of the Qweak experiment and database design, and present data collected during these tests.
Anthropometry of Brazilian Air Force pilots.
da Silva, Gilvan V; Halpern, Manny; Gordon, Claire C
2017-10-01
Anthropometric data are essential for the design of military equipment including sizing of aircraft cockpits and personal gear. Currently, there are no anthropometric databases specific to Brazilian military personnel. The aim of this study was to create a Brazilian anthropometric database of Air Force pilots. The methods, protocols, descriptions, definitions, landmarks, tools and measurements procedures followed the instructions outlined in Measurer's Handbook: US Army and Marine Corps Anthropometric Surveys, 2010-2011 - NATICK/TR-11/017. The participants were measured countrywide, in all five Brazilian Geographical Regions. Thirty-nine anthropometric measurements related to cockpit design were selected. The results of 2133 males and 206 females aged 16-52 years constitute a set of basic data for cockpit design, space arrangement issues and adjustments, protective gear and equipment design, as well as for digital human modelling. Another important implication is that this study can be considered a starting point for reducing gender bias in women's career as pilots. Practitioner Summary: This paper describes the first large-scale anthropometric survey of the Brazilian Air Force pilots and the development of the related database. This study provides critical data for improving aircraft cockpit design for ergonomics and comprehensive pilot accommodation, protective gear and uniform design, as well as digital human modelling.
Crowdsourcing-Assisted Radio Environment Database for V2V Communication.
Katagiri, Keita; Sato, Koya; Fujii, Takeo
2018-04-12
In order to realize reliable Vehicle-to-Vehicle (V2V) communication systems for autonomous driving, the recognition of radio propagation becomes an important technology. However, in the current wireless distributed network systems, it is difficult to accurately estimate the radio propagation characteristics because of the locality of the radio propagation caused by surrounding buildings and geographical features. In this paper, we propose a measurement-based radio environment database for improving the accuracy of the radio environment estimation in the V2V communication systems. The database first gathers measurement datasets of the received signal strength indicator (RSSI) related to the transmission/reception locations from V2V systems. By using the datasets, the average received power maps linked with transmitter and receiver locations are generated. We have performed measurement campaigns of V2V communications in the real environment to observe RSSI for the database construction. Our results show that the proposed method has higher accuracy of the radio propagation estimation than the conventional path loss model-based estimation.
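The map-construction step lends itself to a short sketch: bin RSSI samples by receiver grid cell and average them. The cell size, coordinates, and RSSI values below are invented, and a real system would additionally key each cell on the transmitter location, as the abstract describes.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Crowdsourced samples: receiver position and RSSI (dBm). Values invented.
df = pd.DataFrame({
    "x_m": rng.uniform(0, 500, 2000),
    "y_m": rng.uniform(0, 500, 2000),
    "rssi_dbm": rng.normal(-70, 6, 2000),
})

MESH = 50.0                                   # grid cell size in metres (assumed)
df["ix"] = (df["x_m"] // MESH).astype(int)    # cell indices
df["iy"] = (df["y_m"] // MESH).astype(int)

# Average received-power map: mean RSSI and sample count per cell.
rem = df.groupby(["ix", "iy"])["rssi_dbm"].agg(["mean", "count"])
print(rem.head())
```

One design note: averaging in dBm is shown for brevity; depending on the intended use, converting to linear power before averaging may be more appropriate.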
Crowdsourcing-Assisted Radio Environment Database for V2V Communication †
Katagiri, Keita; Fujii, Takeo
2018-01-01
In order to realize reliable Vehicle-to-Vehicle (V2V) communication systems for autonomous driving, the recognition of radio propagation becomes an important technology. However, in the current wireless distributed network systems, it is difficult to accurately estimate the radio propagation characteristics because of the locality of the radio propagation caused by surrounding buildings and geographical features. In this paper, we propose a measurement-based radio environment database for improving the accuracy of the radio environment estimation in the V2V communication systems. The database first gathers measurement datasets of the received signal strength indicator (RSSI) related to the transmission/reception locations from V2V systems. By using the datasets, the average received power maps linked with transmitter and receiver locations are generated. We have performed measurement campaigns of V2V communications in the real environment to observe RSSI for the database construction. Our results show that the proposed method has higher accuracy of the radio propagation estimation than the conventional path loss model-based estimation. PMID:29649174
Recent Advances in the GLIMS Glacier Database
NASA Astrophysics Data System (ADS)
Raup, Bruce; Cogley, Graham; Zemp, Michael; Glaus, Ladina
2017-04-01
Glaciers are shrinking almost without exception. Glacier losses have impacts on local water availability and hazards, and contribute to sea level rise. To understand these impacts and the processes behind them, it is crucial to monitor glaciers through time by mapping their areal extent, changes in volume, elevation distribution, snow lines, ice flow velocities, and changes to associated water bodies. The glacier database of the Global Land Ice Measurements from Space (GLIMS) initiative is the only multi-temporal glacier database capable of tracking all these glacier measurements and providing them to the scientific community and broader public. Here we present recent results in 1) expansion of the geographic and temporal coverage of the GLIMS Glacier Database by drawing on the Randolph Glacier Inventory (RGI) and other new data sets; 2) improved tools for visualizing and downloading GLIMS data in a choice of formats and data models; and 3) a new data model for handling multiple glacier records through time while avoiding double-counting of glacier number or area. The result of this work is a more complete glacier data repository that shows not only the current state of glaciers on Earth, but how they have changed in recent decades. The database is useful for tracking changes in water resources, hazards, and mass budgets of the world's glaciers.
Liao, Quan; Yao, Jianhua; Yuan, Shengang
2007-05-01
The study of toxicity prediction is important and necessary because the measurement of toxicity is typically time-consuming and expensive. In this paper, the Recursive Partitioning (RP) method was used to select descriptors. RP and Support Vector Machines (SVM) were used to construct structure-toxicity relationship models: an RP model and an SVM model, respectively. The performances of the two models differ. The prediction accuracies of the RP model are 80.2% for mutagenic compounds in MDL's toxicity database, 83.4% for compounds in the CMC and 84.9% for agrochemicals in an in-house database, respectively. Those of the SVM model are 81.4%, 87.0% and 87.3%, respectively.
Andersen, Claus E; Raaschou-Nielsen, Ole; Andersen, Helle Primdal; Lind, Morten; Gravesen, Peter; Thomsen, Birthe L; Ulbak, Kaare
2007-01-01
A linear regression model has been developed for the prediction of indoor (222)Rn in Danish houses. The model provides proxy radon concentrations for about 21,000 houses in a Danish case-control study on the possible association between residential radon and childhood cancer (primarily leukaemia). The model was calibrated against radon measurements in 3116 houses. An independent dataset with 788 house measurements was used for model performance assessment. The model includes nine explanatory variables, of which the most important ones are house type and geology. All explanatory variables are available from central databases. The model was fitted to log-transformed radon concentrations and it has an R(2) of 40%. The uncertainty associated with individual predictions of (untransformed) radon concentrations is about a factor of 2.0 (one standard deviation). The comparison with the independent test data shows that the model makes sound predictions and that errors of radon predictions are only weakly correlated with the estimates themselves (R(2) = 10%).
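To make the log-scale uncertainty concrete: a residual standard deviation sigma on log-transformed concentrations corresponds to a multiplicative factor exp(sigma) on the original scale. The sketch below fits a log-linear model to synthetic data with invented house-type and geology covariates and recovers an uncertainty factor near 2, matching the magnitude reported above.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(11)

# Synthetic stand-ins for two of the model's explanatory variables
# (house type and a geology class), one-hot encoded. All data invented.
n = 1000
house = rng.integers(0, 3, n)                 # 3 house types
geo = rng.integers(0, 4, n)                   # 4 geology classes
X = np.column_stack([np.eye(3)[house], np.eye(4)[geo]])
log_rn = (3.5 + 0.4 * (house == 0) + 0.6 * (geo == 2)
          + rng.normal(0, 0.7, n))            # log-transformed concentration

model = LinearRegression().fit(X, log_rn)
resid_sd = np.std(log_rn - model.predict(X))

# On the untransformed scale, one standard deviation corresponds to a
# multiplicative factor of exp(sigma) around the predicted concentration.
print("R^2:", round(model.score(X, log_rn), 2))
print("uncertainty factor:", round(float(np.exp(resid_sd)), 2))
```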
NASA Astrophysics Data System (ADS)
Mo, Yunjeong
The purpose of this research is to support the development of an intelligent Decision Support System (DSS) by integrating quantitative information with expert knowledge in order to facilitate effective retrofit decision-making. To achieve this goal, the Energy Retrofit Decision Process Framework is analyzed. Expert system shell software, a retrofit measure cost database, and energy simulation software are needed for developing the DSS; Exsys Corvid, the NREM database and BEopt were chosen for implementing an integration model. This integration model demonstrates the holistic function of a residential energy retrofit system for existing homes, by providing a prioritized list of retrofit measures with cost information, energy simulation and expert advice. The users, such as homeowners and energy auditors, can acquire all of the necessary retrofit information from this unified system without having to explore several separate systems. The integration model plays the role of a prototype for the finalized intelligent decision support system. It implements all of the necessary functions for the finalized DSS, including integration of the database, energy simulation and expert knowledge.
NASA Astrophysics Data System (ADS)
Lebedeva, Liudmila; Semenova, Olga
2013-04-01
One widely cited problem in present-day hydrological modelling is the lack of available information for investigating hydrological processes and improving their representation in models. In spite of this, one can hardly claim that existing "traditional" data sources have been fully analyzed and exploited. The USSR operated a network of research watersheds, called water-balance stations, where comprehensive and extensive hydrometeorological measurements were conducted according to a more or less uniform program over the last 40-60 years. The program (where not ceased) includes observations of discharges in several, often nested and homogeneous, small watersheds, meteorological elements, evaporation, soil temperature and moisture, snow depths, etc. The network covered different climatic and landscape zones and was established in the middle of the last century with the aim of investigating runoff formation in different conditions. Until recently, the long-term observational data, accompanied by descriptions and maps, existed only in hard copy. This partly explains why these datasets are still underexploited and were rarely, if ever, used for hydrological modelling, although they appear much more promising than the deployment of completely new measuring techniques, without detracting from the latter's importance. The goal of the presented work is the development of a database of observational data and supporting materials from small research watersheds across the territory of the former Soviet Union. The first version of the database will include the following information for 12 water-balance stations across Russia, Ukraine, Kazakhstan and Turkmenistan: daily values of discharges (one or several watersheds), air temperature, humidity, precipitation (one or several gauges), soil and snow state variables, and soil and snow evaporation. The stations will cover desert and semi-desert, steppe and forest-steppe, forest, permafrost and mountainous zones. Supporting material will include maps of watershed boundaries and locations of observational sites. Text descriptions of the data, measuring techniques and hydrometeorological conditions for each water-balance station will accompany the datasets. The database is expected to be expanded over time in the number of stations (by 20) and in the available data series for each of them. It will be published on the internet with open access to everyone interested. Such a database allows one to test hydrological models and separate modules for their adequacy and workability in different conditions and can serve as a basis for model comparison and evaluation. Models that do not rely on calibration but on adequate process representation and the use of observable parameters will benefit especially from the database. One such model, the process-based Hydrograph model, will be tested against data from every watershed in the developed database. The aim of applying the Hydrograph model to as many data-rich research watersheds as possible in different climatic zones is both to amend the algorithms and to create and adjust model parameters that allow using the model across the geographic spectrum.
Hydroacoustic propagation grids for the CTBT knowledge database BBN technical memorandum W1303
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. Angell
1998-05-01
The Hydroacoustic Coverage Assessment Model (HydroCAM) has been used to develop components of the hydroacoustic knowledge database required by operational monitoring systems, particularly the US National Data Center (NDC). The database, which consists of travel time, amplitude correction and travel time standard deviation grids, is planned to support source location, discrimination and estimation functions of the monitoring network. The grids will also be used under the current BBN subcontract to support an analysis of the performance of the International Monitoring System (IMS) and national sensor systems. This report describes the format and contents of the hydroacoustic knowledge base grids, and the procedures and model parameters used to generate these grids. Comparisons between the knowledge grids, measured data and other modeled results are presented to illustrate the strengths and weaknesses of the current approach. A recommended approach for augmenting the knowledge database with a database of expected spectral/waveform characteristics is provided in the final section of the report.
NASA Astrophysics Data System (ADS)
Heffernan, Julieanne; Biedermann, Eric; Mayes, Alexander; Livings, Richard; Jauriqui, Leanne; Goodlet, Brent; Aldrin, John C.; Mazdiyasni, Siamack
2018-04-01
Process Compensated Resonant Testing (PCRT) is a full-body nondestructive testing (NDT) method that measures the resonance frequencies of a part and correlates them to the part's material and/or damage state. PCRT testing is used in the automotive, aerospace, and power generation industries via automated PASS/FAIL inspections to distinguish parts with nominal process variation from those with the defect(s) of interest. Traditional PCRT tests are created through the statistical analysis of populations of "good" and "bad" parts. However, gathering a statistically significant number of parts can be costly and time-consuming, and the availability of defective parts may be limited. This work uses virtual databases of good and bad parts to create two targeted PCRT inspections for single crystal (SX) nickel-based superalloy turbine blades. Using finite element (FE) models, populations were modeled to include variations in geometric dimensions, material properties, crystallographic orientation, and creep damage. Model results were verified by comparing the frequency variation in the modeled populations with the measured frequency variations of several physical blade populations. Additionally, creep modeling results were verified through the experimental evaluation of coupon geometries. A virtual database of resonance spectra was created from the model data. The virtual database was used to create PCRT inspections to detect crystallographic defects and creep strain. Quantification of creep strain values using the PCRT inspection results was also demonstrated.
NASA Astrophysics Data System (ADS)
Löbling, L.
2017-03-01
Aluminum (Al) nucleosynthesis takes place during the asymptotic-giant-branch (AGB) phase of stellar evolution. Al abundance determinations in hot white dwarf stars provide constraints to understand this process. Precise abundance measurements require advanced non-local thermodynamic equilibrium stellar-atmosphere models and reliable atomic data. In the framework of the German Astrophysical Virtual Observatory (GAVO), the Tübingen Model-Atom Database (TMAD) contains ready-to-use model atoms for elements from hydrogen to barium. A revised, elaborated Al model atom has recently been added. We present preliminary stellar-atmosphere models and emergent Al line spectra for the hot white dwarfs G191-B2B and RE 0503-289.
How Accurate Is A Hydraulic Model?
Symposium paper. Network hydraulic models are widely used, but their overall accuracy is often unknown. Models are developed to give utilities better insight into system hydraulic behavior, and increasingly the ability to predict the fate and transport of chemicals. Without an accessible and consistent means of validating a given model against the system it is meant to represent, the value of those supposed benefits should be questioned. Supervisory Control And Data Acquisition (SCADA) databases, though ubiquitous, are underused data sources for this type of task. Integrating a network model with a measurement database would offer professionals the ability to assess the model's assumptions in an automated fashion by leveraging enormous amounts of data.
Application of kernel functions for accurate similarity search in large chemical databases.
Wang, Xiaohong; Huan, Jun; Smalter, Aaron; Lushington, Gerald H
2010-04-29
Similarity search in chemical structure databases is an important problem with many applications in chemical genomics, drug design, and efficient chemical probe screening, among others. It is widely believed that structure-based methods provide an efficient way to perform the query. Recently, various graph kernel functions have been designed to capture the intrinsic similarity of graphs. Though successful in constructing accurate predictive and classification models, graph kernel functions cannot be applied to large chemical compound databases due to their high computational complexity and the difficulty of indexing similarity search for large databases. To bridge graph kernel functions and similarity search in chemical databases, we applied a novel kernel-based similarity measurement, developed by our team, to measure the similarity of graph-represented chemicals. In our method, we utilize a hash table to support a new graph kernel function definition, efficient storage and fast search. We have applied our method, named G-hash, to large chemical databases. Our results show that the G-hash method achieves state-of-the-art performance for k-nearest neighbor (k-NN) classification. Moreover, the similarity measurement and the index structure are scalable to large chemical databases, with smaller indexing size and faster query processing time as compared to state-of-the-art indexing methods such as Daylight fingerprints, C-tree and GraphGrep. Efficient similarity query processing for large chemical databases is challenging since running-time efficiency must be balanced against similarity search accuracy. Our similarity search method, G-hash, provides a new way to perform similarity search in chemical databases. An experimental study validates the utility of G-hash on chemical databases.
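The hashing idea can be illustrated generically (this is not the published G-hash algorithm): hash substructure features into a fixed-size binary fingerprint and rank database entries by a simple similarity such as Tanimoto. All feature strings below are invented, and Python's built-in hash is salted per process, so a real index would use a stable hash function.

```python
import numpy as np

N_BITS = 64  # fingerprint length (illustrative)

def fingerprint(features):
    """Hash substructure features into a fixed-size binary fingerprint."""
    fp = np.zeros(N_BITS, dtype=bool)
    for f in features:
        fp[hash(f) % N_BITS] = True   # hash table maps features to bits
    return fp

def tanimoto(a, b):
    """Tanimoto similarity between two binary fingerprints."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

# Toy "database" of molecules described by substructure feature strings.
db = {"mol1": ["C-C", "C=O", "C-N"],
      "mol2": ["C-C", "C-O"],
      "mol3": ["C=O", "C-N", "ring6"]}
fps = {name: fingerprint(f) for name, f in db.items()}

query = fingerprint(["C-C", "C=O"])
ranked = sorted(fps, key=lambda n: tanimoto(query, fps[n]), reverse=True)
print("nearest neighbours:", ranked)
```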
NASA Astrophysics Data System (ADS)
Bliefernicht, Jan; Waongo, Moussa; Annor, Thompson; Laux, Patrick; Lorenz, Manuel; Salack, Seyni; Kunstmann, Harald
2017-04-01
West Africa is a data-sparse region. High-quality, long-term precipitation data are often not readily available for applications in hydrology, agriculture, meteorology and other needs. To close this gap, we use multiple data sources to develop a precipitation database with long-term daily and monthly time series. This database was compiled from 16 archives, including global databases, e.g. from the Global Historical Climatology Network (GHCN), databases from research projects (e.g. the AMMA database) and databases of the national meteorological services of some West African countries. The collection consists of more than 2000 precipitation gauges with measurements dating from 1850 to 2015. Due to erroneous measurements (e.g. temporal offsets, unit conversion errors), missing values and inconsistent meta-data, the merging of this precipitation dataset is not straightforward and requires thorough quality control and harmonization. To this end, we developed geostatistical algorithms for quality control of individual databases and harmonization to a joint database. The algorithms are based on a pairwise comparison of the correspondence of precipitation time series as a function of the distance between stations. They were tested for precipitation time series from gauges located in a rectangular domain covering Burkina Faso, Ghana, Benin and Togo. This harmonized and quality-controlled precipitation database was recently used for several applications such as the validation of a high-resolution regional climate model and the bias correction of precipitation projections provided by the Coordinated Regional Climate Downscaling Experiment (CORDEX). In this presentation, we will give an overview of the novel daily and monthly precipitation database and the algorithms used for quality control and harmonization. We will also highlight the quality of global and regional archives (e.g. GHCN, GSOD, AMMA database) in comparison to the precipitation databases provided by the national meteorological services.
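The pairwise QC idea, that correlation between station series should decay smoothly with separation and that corrupted records stand out, can be sketched on synthetic data. Station coordinates and series below are invented; station 4 is given a temporal offset of the kind the abstract mentions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy network: station coordinates (km) and monthly precipitation anomalies.
# A shared regional signal plus station-local noise gives distance-dependent
# correlation between stations.
coords = rng.uniform(0, 300, size=(6, 2))
months = 240
base = rng.normal(0, 1, months)                       # shared regional signal
series = np.array([base + rng.normal(0, 0.4 + 0.002 * i, months)
                   for i in range(len(coords))])
series[4] = np.roll(series[4], 3)                     # temporal offset error

# Pairwise check: a station whose correlations are anomalously low for its
# distances to neighbours (here station 4) is flagged for inspection.
for i in range(len(coords)):
    for j in range(i + 1, len(coords)):
        d = np.linalg.norm(coords[i] - coords[j])
        r = np.corrcoef(series[i], series[j])[0, 1]
        print(f"stations {i}-{j}: {d:6.1f} km, r = {r:5.2f}")
```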
Hudson, Lawrence N; Newbold, Tim; Contu, Sara; Hill, Samantha L L; Lysenko, Igor; De Palma, Adriana; Phillips, Helen R P; Alhusseini, Tamera I; Bedford, Felicity E; Bennett, Dominic J; Booth, Hollie; Burton, Victoria J; Chng, Charlotte W T; Choimes, Argyrios; Correia, David L P; Day, Julie; Echeverría-Londoño, Susy; Emerson, Susan R; Gao, Di; Garon, Morgan; Harrison, Michelle L K; Ingram, Daniel J; Jung, Martin; Kemp, Victoria; Kirkpatrick, Lucinda; Martin, Callum D; Pan, Yuan; Pask-Hale, Gwilym D; Pynegar, Edwin L; Robinson, Alexandra N; Sanchez-Ortiz, Katia; Senior, Rebecca A; Simmons, Benno I; White, Hannah J; Zhang, Hanbin; Aben, Job; Abrahamczyk, Stefan; Adum, Gilbert B; Aguilar-Barquero, Virginia; Aizen, Marcelo A; Albertos, Belén; Alcala, E L; Del Mar Alguacil, Maria; Alignier, Audrey; Ancrenaz, Marc; Andersen, Alan N; Arbeláez-Cortés, Enrique; Armbrecht, Inge; Arroyo-Rodríguez, Víctor; Aumann, Tom; Axmacher, Jan C; Azhar, Badrul; Azpiroz, Adrián B; Baeten, Lander; Bakayoko, Adama; Báldi, András; Banks, John E; Baral, Sharad K; Barlow, Jos; Barratt, Barbara I P; Barrico, Lurdes; Bartolommei, Paola; Barton, Diane M; Basset, Yves; Batáry, Péter; Bates, Adam J; Baur, Bruno; Bayne, Erin M; Beja, Pedro; Benedick, Suzan; Berg, Åke; Bernard, Henry; Berry, Nicholas J; Bhatt, Dinesh; Bicknell, Jake E; Bihn, Jochen H; Blake, Robin J; Bobo, Kadiri S; Bóçon, Roberto; Boekhout, Teun; Böhning-Gaese, Katrin; Bonham, Kevin J; Borges, Paulo A V; Borges, Sérgio H; Boutin, Céline; Bouyer, Jérémy; Bragagnolo, Cibele; Brandt, Jodi S; Brearley, Francis Q; Brito, Isabel; Bros, Vicenç; Brunet, Jörg; Buczkowski, Grzegorz; Buddle, Christopher M; Bugter, Rob; Buscardo, Erika; Buse, Jörn; Cabra-García, Jimmy; Cáceres, Nilton C; Cagle, Nicolette L; Calviño-Cancela, María; Cameron, Sydney A; Cancello, Eliana M; Caparrós, Rut; Cardoso, Pedro; Carpenter, Dan; Carrijo, Tiago F; Carvalho, Anelena L; Cassano, Camila R; Castro, Helena; Castro-Luna, Alejandro A; Rolando, Cerda B; Cerezo, Alexis; Chapman, Kim Alan; Chauvat, Matthieu; Christensen, Morten; Clarke, Francis M; Cleary, Daniel F R; Colombo, Giorgio; Connop, Stuart P; Craig, Michael D; Cruz-López, Leopoldo; Cunningham, Saul A; D'Aniello, Biagio; D'Cruze, Neil; da Silva, Pedro Giovâni; Dallimer, Martin; Danquah, Emmanuel; Darvill, Ben; Dauber, Jens; Davis, Adrian L V; Dawson, Jeff; de Sassi, Claudio; de Thoisy, Benoit; Deheuvels, Olivier; Dejean, Alain; Devineau, Jean-Louis; Diekötter, Tim; Dolia, Jignasu V; Domínguez, Erwin; Dominguez-Haydar, Yamileth; Dorn, Silvia; Draper, Isabel; Dreber, Niels; Dumont, Bertrand; Dures, Simon G; Dynesius, Mats; Edenius, Lars; Eggleton, Paul; Eigenbrod, Felix; Elek, Zoltán; Entling, Martin H; Esler, Karen J; de Lima, Ricardo F; Faruk, Aisyah; Farwig, Nina; Fayle, Tom M; Felicioli, Antonio; Felton, Annika M; Fensham, Roderick J; Fernandez, Ignacio C; Ferreira, Catarina C; Ficetola, Gentile F; Fiera, Cristina; Filgueiras, Bruno K C; Fırıncıoğlu, Hüseyin K; Flaspohler, David; Floren, Andreas; Fonte, Steven J; Fournier, Anne; Fowler, Robert E; Franzén, Markus; Fraser, Lauchlan H; Fredriksson, Gabriella M; Freire, Geraldo B; Frizzo, Tiago L M; Fukuda, Daisuke; Furlani, Dario; Gaigher, René; Ganzhorn, Jörg U; García, Karla P; Garcia-R, Juan C; Garden, Jenni G; Garilleti, Ricardo; Ge, Bao-Ming; Gendreau-Berthiaume, Benoit; Gerard, Philippa J; Gheler-Costa, Carla; Gilbert, Benjamin; Giordani, Paolo; Giordano, Simonetta; Golodets, Carly; Gomes, Laurens G L; Gould, Rachelle K; Goulson, Dave; Gove, Aaron D; Granjon, Laurent; Grass, Ingo; 
Gray, Claudia L; Grogan, James; Gu, Weibin; Guardiola, Moisès; Gunawardene, Nihara R; Gutierrez, Alvaro G; Gutiérrez-Lamus, Doris L; Haarmeyer, Daniela H; Hanley, Mick E; Hanson, Thor; Hashim, Nor R; Hassan, Shombe N; Hatfield, Richard G; Hawes, Joseph E; Hayward, Matt W; Hébert, Christian; Helden, Alvin J; Henden, John-André; Henschel, Philipp; Hernández, Lionel; Herrera, James P; Herrmann, Farina; Herzog, Felix; Higuera-Diaz, Diego; Hilje, Branko; Höfer, Hubert; Hoffmann, Anke; Horgan, Finbarr G; Hornung, Elisabeth; Horváth, Roland; Hylander, Kristoffer; Isaacs-Cubides, Paola; Ishida, Hiroaki; Ishitani, Masahiro; Jacobs, Carmen T; Jaramillo, Víctor J; Jauker, Birgit; Hernández, F Jiménez; Johnson, McKenzie F; Jolli, Virat; Jonsell, Mats; Juliani, S Nur; Jung, Thomas S; Kapoor, Vena; Kappes, Heike; Kati, Vassiliki; Katovai, Eric; Kellner, Klaus; Kessler, Michael; Kirby, Kathryn R; Kittle, Andrew M; Knight, Mairi E; Knop, Eva; Kohler, Florian; Koivula, Matti; Kolb, Annette; Kone, Mouhamadou; Kőrösi, Ádám; Krauss, Jochen; Kumar, Ajith; Kumar, Raman; Kurz, David J; Kutt, Alex S; Lachat, Thibault; Lantschner, Victoria; Lara, Francisco; Lasky, Jesse R; Latta, Steven C; Laurance, William F; Lavelle, Patrick; Le Féon, Violette; LeBuhn, Gretchen; Légaré, Jean-Philippe; Lehouck, Valérie; Lencinas, María V; Lentini, Pia E; Letcher, Susan G; Li, Qi; Litchwark, Simon A; Littlewood, Nick A; Liu, Yunhui; Lo-Man-Hung, Nancy; López-Quintero, Carlos A; Louhaichi, Mounir; Lövei, Gabor L; Lucas-Borja, Manuel Esteban; Luja, Victor H; Luskin, Matthew S; MacSwiney G, M Cristina; Maeto, Kaoru; Magura, Tibor; Mallari, Neil Aldrin; Malone, Louise A; Malonza, Patrick K; Malumbres-Olarte, Jagoba; Mandujano, Salvador; Måren, Inger E; Marin-Spiotta, Erika; Marsh, Charles J; Marshall, E J P; Martínez, Eliana; Martínez Pastur, Guillermo; Moreno Mateos, David; Mayfield, Margaret M; Mazimpaka, Vicente; McCarthy, Jennifer L; McCarthy, Kyle P; McFrederick, Quinn S; McNamara, Sean; Medina, Nagore G; Medina, Rafael; Mena, Jose L; Mico, Estefania; Mikusinski, Grzegorz; Milder, Jeffrey C; Miller, James R; Miranda-Esquivel, Daniel R; Moir, Melinda L; Morales, Carolina L; Muchane, Mary N; Muchane, Muchai; Mudri-Stojnic, Sonja; Munira, A Nur; Muoñz-Alonso, Antonio; Munyekenye, B F; Naidoo, Robin; Naithani, A; Nakagawa, Michiko; Nakamura, Akihiro; Nakashima, Yoshihiro; Naoe, Shoji; Nates-Parra, Guiomar; Navarrete Gutierrez, Dario A; Navarro-Iriarte, Luis; Ndang'ang'a, Paul K; Neuschulz, Eike L; Ngai, Jacqueline T; Nicolas, Violaine; Nilsson, Sven G; Noreika, Norbertas; Norfolk, Olivia; Noriega, Jorge Ari; Norton, David A; Nöske, Nicole M; Nowakowski, A Justin; Numa, Catherine; O'Dea, Niall; O'Farrell, Patrick J; Oduro, William; Oertli, Sabine; Ofori-Boateng, Caleb; Oke, Christopher Omamoke; Oostra, Vicencio; Osgathorpe, Lynne M; Otavo, Samuel Eduardo; Page, Navendu V; Paritsis, Juan; Parra-H, Alejandro; Parry, Luke; Pe'er, Guy; Pearman, Peter B; Pelegrin, Nicolás; Pélissier, Raphaël; Peres, Carlos A; Peri, Pablo L; Persson, Anna S; Petanidou, Theodora; Peters, Marcell K; Pethiyagoda, Rohan S; Phalan, Ben; Philips, T Keith; Pillsbury, Finn C; Pincheira-Ulbrich, Jimmy; Pineda, Eduardo; Pino, Joan; Pizarro-Araya, Jaime; Plumptre, A J; Poggio, Santiago L; Politi, Natalia; Pons, Pere; Poveda, Katja; Power, Eileen F; Presley, Steven J; Proença, Vânia; Quaranta, Marino; Quintero, Carolina; Rader, Romina; Ramesh, B R; Ramirez-Pinilla, Martha P; Ranganathan, Jai; Rasmussen, Claus; Redpath-Downing, Nicola A; Reid, J Leighton; Reis, Yana T; 
Rey Benayas, José M; Rey-Velasco, Juan Carlos; Reynolds, Chevonne; Ribeiro, Danilo Bandini; Richards, Miriam H; Richardson, Barbara A; Richardson, Michael J; Ríos, Rodrigo Macip; Robinson, Richard; Robles, Carolina A; Römbke, Jörg; Romero-Duque, Luz Piedad; Rös, Matthias; Rosselli, Loreta; Rossiter, Stephen J; Roth, Dana S; Roulston, T'ai H; Rousseau, Laurent; Rubio, André V; Ruel, Jean-Claude; Sadler, Jonathan P; Sáfián, Szabolcs; Saldaña-Vázquez, Romeo A; Sam, Katerina; Samnegård, Ulrika; Santana, Joana; Santos, Xavier; Savage, Jade; Schellhorn, Nancy A; Schilthuizen, Menno; Schmiedel, Ute; Schmitt, Christine B; Schon, Nicole L; Schüepp, Christof; Schumann, Katharina; Schweiger, Oliver; Scott, Dawn M; Scott, Kenneth A; Sedlock, Jodi L; Seefeldt, Steven S; Shahabuddin, Ghazala; Shannon, Graeme; Sheil, Douglas; Sheldon, Frederick H; Shochat, Eyal; Siebert, Stefan J; Silva, Fernando A B; Simonetti, Javier A; Slade, Eleanor M; Smith, Jo; Smith-Pardo, Allan H; Sodhi, Navjot S; Somarriba, Eduardo J; Sosa, Ramón A; Soto Quiroga, Grimaldo; St-Laurent, Martin-Hugues; Starzomski, Brian M; Stefanescu, Constanti; Steffan-Dewenter, Ingolf; Stouffer, Philip C; Stout, Jane C; Strauch, Ayron M; Struebig, Matthew J; Su, Zhimin; Suarez-Rubio, Marcela; Sugiura, Shinji; Summerville, Keith S; Sung, Yik-Hei; Sutrisno, Hari; Svenning, Jens-Christian; Teder, Tiit; Threlfall, Caragh G; Tiitsaar, Anu; Todd, Jacqui H; Tonietto, Rebecca K; Torre, Ignasi; Tóthmérész, Béla; Tscharntke, Teja; Turner, Edgar C; Tylianakis, Jason M; Uehara-Prado, Marcio; Urbina-Cardona, Nicolas; Vallan, Denis; Vanbergen, Adam J; Vasconcelos, Heraldo L; Vassilev, Kiril; Verboven, Hans A F; Verdasca, Maria João; Verdú, José R; Vergara, Carlos H; Vergara, Pablo M; Verhulst, Jort; Virgilio, Massimiliano; Vu, Lien Van; Waite, Edward M; Walker, Tony R; Wang, Hua-Feng; Wang, Yanping; Watling, James I; Weller, Britta; Wells, Konstans; Westphal, Catrin; Wiafe, Edward D; Williams, Christopher D; Willig, Michael R; Woinarski, John C Z; Wolf, Jan H D; Wolters, Volkmar; Woodcock, Ben A; Wu, Jihua; Wunderle, Joseph M; Yamaura, Yuichi; Yoshikura, Satoko; Yu, Douglas W; Zaitsev, Andrey S; Zeidler, Juliane; Zou, Fasheng; Collen, Ben; Ewers, Rob M; Mace, Georgina M; Purves, Drew W; Scharlemann, Jörn P W; Purvis, Andy
2017-01-01
The PREDICTS project (Projecting Responses of Ecological Diversity In Changing Terrestrial Systems; www.predicts.org.uk) has collated from published studies a large, reasonably representative database of comparable samples of biodiversity from multiple sites that differ in the nature or intensity of human impacts relating to land use. We have used this evidence base to develop global and regional statistical models of how local biodiversity responds to these measures. We describe and make freely available this 2016 release of the database, containing more than 3.2 million records sampled at over 26,000 locations and representing over 47,000 species. We outline how the database can help in answering a range of questions in ecology and conservation biology. To our knowledge, this is the largest and most geographically and taxonomically representative database of spatial comparisons of biodiversity that has been collated to date; it will be useful to researchers and international efforts wishing to model and understand the global status of biodiversity.
Gorman Ng, Melanie; Semple, Sean; Cherrie, John W; Christopher, Yvette; Northage, Christine; Tielemans, Erik; Veroughstraete, Violaine; Van Tongeren, Martie
2012-11-01
Occupational inadvertent ingestion exposure is ingestion exposure due to contact between the mouth and contaminated hands or objects. Although individuals are typically oblivious to their exposure by this route, it is a potentially significant source of occupational exposure for some substances. Due to the continual flux of saliva through the oral cavity and the non-specificity of biological monitoring to routes of exposure, direct measurement of exposure by the inadvertent ingestion route is challenging; predictive models may be required to assess exposure. The work described in this manuscript has been carried out as part of a project to develop a predictive model for estimating inadvertent ingestion exposure in the workplace. As inadvertent ingestion exposure mainly arises from hand-to-mouth contact, it is closely linked to dermal exposure. We present a new integrated conceptual model for dermal and inadvertent ingestion exposure that should help to increase our understanding of ingestion exposure and our ability to simultaneously estimate exposure by the dermal and ingestion routes. The conceptual model consists of eight compartments (source, air, surface contaminant layer, outer clothing contaminant layer, inner clothing contaminant layer, hands and arms layer, perioral layer, and oral cavity) and nine mass transport processes (emission, deposition, resuspension or evaporation, transfer, removal, redistribution, decontamination, penetration and/or permeation, and swallowing) that describe event-based movement of substances between compartments. This conceptual model is intended to guide the development of predictive exposure models that estimate exposure from both the dermal and the inadvertent ingestion pathways. For exposure by these pathways, the efficiencies of transfer of materials between compartments (for example, from surfaces to hands or from hands to the mouth) are important determinants of exposure. A database of transfer efficiency data relevant for dermal and inadvertent ingestion exposure was developed, containing 534 empirically measured transfer efficiencies measured between 1980 and 2010 and reported in the peer-reviewed and grey literature. The majority of the reported transfer efficiencies (84%) relate to transfer between surfaces and hands, but the database also includes efficiencies for other transfer scenarios, including surface-to-glove, hand-to-mouth, and skin-to-skin. While the conceptual model can provide a framework for a predictive exposure assessment model, the database provides detailed information on transfer efficiencies between the various compartments. Together, the conceptual model and the database provide a basis for the development of a quantitative tool to estimate inadvertent ingestion exposure in the workplace.
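To make the compartment-and-transfer-process idea concrete, here is a toy sketch (not the authors' model; the compartment names echo the abstract, but the efficiencies and event counts are invented) of how event-based transfers propagate mass from a contaminated surface toward the oral cavity:

```python
# Toy compartment model: mass moves between named compartments according to
# empirically measured transfer efficiencies, as in hand-to-mouth contacts.

compartments = {"surface": 100.0, "hands": 0.0, "perioral": 0.0, "oral_cavity": 0.0}

def transfer(source, target, efficiency):
    """Move a fraction of the source compartment's mass to the target."""
    moved = compartments[source] * efficiency
    compartments[source] -= moved
    compartments[target] += moved

# Hypothetical efficiencies; real values would come from the 534-entry database.
for _ in range(10):                     # ten surface contacts per shift
    transfer("surface", "hands", 0.05)  # surface-to-hand transfer
for _ in range(3):                      # three hand-to-mouth contact events
    transfer("hands", "perioral", 0.10)
    transfer("perioral", "oral_cavity", 0.50)

print(f"Potentially ingested mass: {compartments['oral_cavity']:.2f} units")
```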
Assessing the quality of life history information in publicly available databases.
Thorson, James T; Cope, Jason M; Patrick, Wesley S
2014-01-01
Single-species life history parameters are central to ecological research and management, including the fields of macro-ecology, fisheries science, and ecosystem modeling. However, there has been little independent evaluation of the precision and accuracy of the life history values in global and publicly available databases. We therefore develop a novel method based on a Bayesian errors-in-variables model that compares database entries with estimates from local experts, and we illustrate this process by assessing the accuracy and precision of entries in FishBase, one of the largest and oldest life history databases. This model distinguishes biases among seven life history parameters, two types of information available in FishBase (i.e., published values and those estimated from other parameters), and two taxa (i.e., bony and cartilaginous fishes) relative to values from regional experts in the United States, while accounting for additional variance caused by sex- and region-specific life history traits. For published values in FishBase, the model identifies a small positive bias in natural mortality and negative bias in maximum age, perhaps caused by unacknowledged fishing mortality. For life history values calculated by FishBase, the model identified large and inconsistent biases. The model also demonstrates greatest precision for body size parameters, decreased precision for values derived from geographically distant populations, and greatest between-sex differences in age at maturity. We recommend that our bias and precision estimates be used in future errors-in-variables models as a prior on measurement errors. This approach is broadly applicable to global databases of life history traits and, if used, will encourage further development and improvements in these databases.
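The errors-in-variables idea can be illustrated with a minimal sketch (hypothetical numbers, not the paper's Bayesian model): if database entries and expert values are both noisy observations of the same log-scale truth, paired differences estimate the database bias and the combined error variance.

```python
# Minimal errors-in-variables sketch: on the log scale,
#   database = truth + bias + e_db,   expert = truth + e_expert,
# so the paired differences estimate the bias and var(e_db) + var(e_expert).
import numpy as np

rng = np.random.default_rng(0)
truth = rng.normal(np.log(0.3), 0.5, size=200)         # e.g., natural mortality M
expert = truth + rng.normal(0.0, 0.10, size=200)       # expert estimates
database = truth + 0.05 + rng.normal(0.0, 0.20, 200)   # biased, noisier entries

diff = database - expert
print(f"estimated log-bias = {diff.mean():.3f}, "
      f"combined error variance = {diff.var(ddof=1):.3f}")
```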
Review and assessment of turbulence models for hypersonic flows
NASA Astrophysics Data System (ADS)
Roy, Christopher J.; Blottner, Frederick G.
2006-10-01
Accurate aerodynamic prediction is critical for the design and optimization of hypersonic vehicles. Turbulence modeling remains a major source of uncertainty in the computational prediction of aerodynamic forces and heating for these systems. The first goal of this article is to update the previous comprehensive review of hypersonic shock/turbulent boundary-layer interaction experiments published in 1991 by Settles and Dodson (Hypersonic shock/boundary-layer interaction database. NASA CR 177577, 1991). In their review, Settles and Dodson developed a methodology for assessing experiments appropriate for turbulence model validation and critically surveyed the existing hypersonic experiments. We limit the scope of our current effort by considering only two-dimensional (2D)/axisymmetric flows in the hypersonic flow regime where calorically perfect gas models are appropriate. We extend the prior database of recommended hypersonic experiments (on four 2D and two 3D shock-interaction geometries) by adding three new geometries. The first two geometries, the flat plate/cylinder and the sharp cone, are canonical, zero-pressure gradient flows which are amenable to theory-based correlations, and these correlations are discussed in detail. The third geometry added is the 2D shock impinging on a turbulent flat plate boundary layer. The current 2D hypersonic database for shock-interaction flows thus consists of nine experiments on five different geometries. The second goal of this study is to review and assess the validation usage of various turbulence models on the existing experimental database. Here we limit the scope to one- and two-equation turbulence models where integration to the wall is used (i.e., we omit studies involving wall functions). A methodology for validating turbulence models is given, followed by an extensive evaluation of the turbulence models on the current hypersonic experimental database. A total of 18 one- and two-equation turbulence models are reviewed, and results of turbulence model assessments for the six models that have been extensively applied to the hypersonic validation database are compiled and presented in graphical form. While some of the turbulence models do provide reasonable predictions for the surface pressure, the predictions for surface heat flux are generally poor, and often in error by a factor of four or more. In the vast majority of the turbulence model validation studies we review, the authors fail to adequately address the numerical accuracy of the simulations (i.e., discretization and iterative error) and the sensitivities of the model predictions to freestream turbulence quantities or near-wall y+ mesh spacing. We recommend new hypersonic experiments be conducted which (1) measure not only surface quantities but also mean and fluctuating quantities in the interaction region and (2) provide careful estimates of both random experimental uncertainties and correlated bias errors for the measured quantities and freestream conditions. For the turbulence models, we recommend that a wide range of turbulence models (including newer models) be re-examined on the current hypersonic experimental database, including the more recent experiments. Any future turbulence model validation efforts should carefully assess the numerical accuracy and model sensitivities. In addition, model corrections (e.g., compressibility corrections) should be carefully examined for their effects on a standard, low-speed validation database.
Finally, as new experiments or direct numerical simulation data become available with information on mean and fluctuating quantities, they should be used to improve the turbulence models and thus increase their predictive capability.
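A standard way to address the numerical-accuracy criticism above is Richardson extrapolation with Roache's grid convergence index (GCI); a minimal sketch, assuming three solutions on systematically refined grids with a constant refinement ratio:

```python
# Observed order of accuracy and GCI from coarse/medium/fine grid solutions.
import math

def observed_order(f_fine, f_med, f_coarse, r):
    """Observed order p from three solutions with refinement ratio r."""
    return math.log((f_coarse - f_med) / (f_med - f_fine)) / math.log(r)

def gci_fine(f_fine, f_med, r, p, fs=1.25):
    """Roache's GCI on the fine grid, with safety factor fs."""
    rel_err = abs((f_fine - f_med) / f_fine)
    return fs * rel_err / (r**p - 1.0)

# Hypothetical wall heat-flux values from coarse/medium/fine grids:
f3, f2, f1, r = 105.0, 101.0, 100.0, 2.0
p = observed_order(f1, f2, f3, r)
print(f"observed order = {p:.2f}, GCI = {100 * gci_fine(f1, f2, r, p):.2f}%")
```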
Tritium environmental transport studies at TFTR
NASA Astrophysics Data System (ADS)
Ritter, P. D.; Dolan, T. J.; Longhurst, G. R.
1993-06-01
Environmental tritium concentrations will be measured near the Tokamak Fusion Test Reactor (TFTR) to help validate dynamic models of tritium transport in the environment. For model validation the database must contain sequential measurements of tritium concentrations in key environmental compartments. Since complete containment of tritium is an operational goal, the supplementary monitoring program should be able to glean useful data from an unscheduled acute release. Portable air samplers will be used to take samples automatically every 4 hours for a week after an acute release, thus obtaining the time resolution needed for code validation. Samples of soil, vegetation, and foodstuffs will be gathered daily at the same locations as the active air monitors. The database may help validate the plant/soil/air part of tritium transport models and enhance environmental tritium transport understanding for the International Thermonuclear Experimental Reactor (ITER).
A Community Data Model for Hydrologic Observations
NASA Astrophysics Data System (ADS)
Tarboton, D. G.; Horsburgh, J. S.; Zaslavsky, I.; Maidment, D. R.; Valentine, D.; Jennings, B.
2006-12-01
The CUAHSI Hydrologic Information System project is developing information technology infrastructure to support hydrologic science. Hydrologic information science involves the description of hydrologic environments in a consistent way, using data models for information integration. This includes a hydrologic observations data model for the storage and retrieval of hydrologic observations in a relational database designed to facilitate data retrieval for integrated analysis of information collected by multiple investigators. It is intended to provide a standard format to facilitate the effective sharing of information between investigators and to facilitate analysis of information within a single study area or hydrologic observatory, or across hydrologic observatories and regions. The observations data model is designed to store hydrologic observations and sufficient ancillary information (metadata) about the observations to allow them to be unambiguously interpreted and used and provide traceable heritage from raw measurements to usable information. The design is based on the premise that a relational database at the single observation level is most effective for providing querying capability and cross dimension data retrieval and analysis. This premise is being tested through the implementation of a prototype hydrologic observations database, and the development of web services for the retrieval of data from and ingestion of data into the database. These web services, hosted by the San Diego Supercomputer Center, make data in the database accessible both through a Hydrologic Data Access System portal and directly from applications software such as Excel, Matlab, and ArcGIS that have Simple Object Access Protocol (SOAP) capability. This paper will (1) describe the data model; (2) demonstrate the capability for representing diverse data in the same database; and (3) demonstrate the use of the database from applications software for the performance of hydrologic analysis across different observation types.
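A minimal sketch of what "a relational database at the single observation level" can look like (illustrative table and column names, not the actual CUAHSI ODM schema), using Python's built-in sqlite3:

```python
# Each DataValues row is one observation plus the metadata needed to
# interpret it unambiguously; cross-dimension queries become simple joins.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Sites     (SiteID INTEGER PRIMARY KEY, SiteName TEXT, Lat REAL, Lon REAL);
CREATE TABLE Variables (VariableID INTEGER PRIMARY KEY, Name TEXT, Units TEXT);
CREATE TABLE DataValues(
    ValueID     INTEGER PRIMARY KEY,
    SiteID      INTEGER REFERENCES Sites(SiteID),
    VariableID  INTEGER REFERENCES Variables(VariableID),
    DateTimeUTC TEXT,
    Value       REAL,
    QualityCode TEXT
);
""")
con.execute("INSERT INTO Sites VALUES (1, 'Logan River', 41.74, -111.83)")
con.execute("INSERT INTO Variables VALUES (1, 'Discharge', 'm^3/s')")
con.execute("INSERT INTO DataValues VALUES (1, 1, 1, '2006-10-01T00:00Z', 1.42, 'good')")

for row in con.execute("""SELECT s.SiteName, v.Name, d.Value, v.Units
                          FROM DataValues d
                          JOIN Sites s USING (SiteID)
                          JOIN Variables v USING (VariableID)"""):
    print(row)
```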
van Walraven, Carl
2017-04-01
Diagnostic codes used in administrative databases cause bias due to misclassification of patient disease status. It is unclear which methods minimize this bias. Serum creatinine measures were used to determine severe renal failure status in 50,074 hospitalized patients. The true prevalence of severe renal failure and its association with covariates were measured. These were compared to results for which renal failure status was determined using surrogate measures, including: (1) diagnostic codes; (2) categorization of probability estimates of renal failure determined from a previously validated model; or (3) bootstrap imputation of disease status using model-derived probability estimates. Bias in estimates of severe renal failure prevalence and its association with covariates was minimal when bootstrap methods were used to impute renal failure status from model-based probability estimates. In contrast, biases were extensive when renal failure status was determined using codes or methods in which model-based condition probability was categorized. Bias due to misclassification from inaccurate diagnostic codes can be minimized using bootstrap methods to impute condition status using multivariable model-derived probability estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
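A sketch of the bootstrap-imputation idea with hypothetical probabilities (not the paper's code): rather than thresholding each patient's model-based probability into a yes/no disease status, status is drawn repeatedly from Bernoulli(p) and the quantity of interest is averaged over replicates.

```python
# Bootstrap imputation of disease status from model-based probabilities.
import numpy as np

rng = np.random.default_rng(1)
p_hat = rng.beta(0.5, 6.0, size=50_074)    # hypothetical per-patient probabilities

prevalences = []
for _ in range(200):                       # bootstrap replicates
    status = rng.random(p_hat.size) < p_hat
    prevalences.append(status.mean())

est = np.mean(prevalences)
lo, hi = np.percentile(prevalences, [2.5, 97.5])
print(f"prevalence = {est:.4f} (95% interval {lo:.4f}-{hi:.4f})")
```

The same loop extends to association estimates by refitting the covariate model within each replicate.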
Global and regional ecosystem modeling: comparison of model outputs and field measurements
NASA Astrophysics Data System (ADS)
Olson, R. J.; Hibbard, K.
2003-04-01
The Ecosystem Model-Data Intercomparison (EMDI) Workshops provide a venue for global ecosystem modeling groups to compare model outputs against measurements of net primary productivity (NPP). The objective of EMDI Workshops is to evaluate model performance relative to observations in order to improve confidence in global model projections of terrestrial carbon cycling. The questions addressed by EMDI include: How does the simulated NPP compare with the field data across biome and environmental gradients? How sensitive are models to site-specific climate? Does additional mechanistic detail in models result in a better match with field measurements? How useful are the measures of NPP for evaluating model predictions? How well do models represent regional patterns of NPP? Initial EMDI results showed general agreement between model predictions and field measurements but with obvious differences that indicated areas for potential data and model improvement. The effort was built on the development and compilation of complete and consistent databases for model initialization and comparison. Database development improves the data as well as the models; however, there is a need to incorporate additional observations and model outputs (LAI, hydrology, etc.) for comprehensive analyses of biogeochemical processes and their relationships to ecosystem structure and function. EMDI initialization and NPP data sets are available from the Oak Ridge National Laboratory Distributed Active Archive Center, http://www.daac.ornl.gov/. Acknowledgements: This work was partially supported by the International Geosphere-Biosphere Programme - Data and Information System (IGBP-DIS); the IGBP-Global Analysis, Interpretation and Modelling Task Force (GAIM); the National Center for Ecological Analysis and Synthesis (NCEAS); and the National Aeronautics and Space Administration (NASA) Terrestrial Ecosystem Program. Oak Ridge National Laboratory is managed by UT-Battelle LLC for the U.S. Department of Energy under contract DE-AC05-00OR22725.
Nuclear Data Matters - The obvious case of a bad mixing ratio for 58Co
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, R. D.; Nesaraja, Caroline D.; Mattoon, Caleb
We present results of modeled cross sections for neutron- and proton-induced reactions leading to the final product nucleus 58Co. In each case the gamma-cascade branching ratios given in the ENSDF database circa 2014 predict modeled nuclear cross sections leading to the ground and first excited metastable state that are incompatible with measured cross sections found in the NNDC experimental cross section database EXFOR. We show that exploring the uncertainty in the mixing ratio used to calculate the gamma-cascade branching ratios for the 53.15 keV 2nd excited state leads to changes in the predicted partial cross sections by amounts that give good agreement with measured data.
Aerodynamic Tests of the Space Launch System for Database Development
NASA Technical Reports Server (NTRS)
Pritchett, Victor E.; Mayle, Melody N.; Blevins, John A.; Crosby, William A.; Purinton, David C.
2014-01-01
The Aerosciences Branch (EV33) at the George C. Marshall Space Flight Center (MSFC) has been responsible for a series of wind tunnel tests on the National Aeronautics and Space Administration's (NASA) Space Launch System (SLS) vehicles. The primary purpose of these tests was to obtain aerodynamic data during the ascent phase and establish databases that can be used by the Guidance, Navigation, and Mission Analysis Branch (EV42) for trajectory simulations. The paper describes the test particulars regarding models and measurements and the facilities used, as well as database preparations.
NASA Astrophysics Data System (ADS)
Ghiorso, M. S.
2013-12-01
Internally consistent thermodynamic databases are critical resources that facilitate the calculation of heterogeneous phase equilibria and thereby support geochemical, petrological, and geodynamical modeling. These 'databases' are actually derived data/model systems that depend on a diverse suite of physical property measurements, calorimetric data, and experimental phase equilibrium brackets. In addition, such databases are calibrated with the adoption of various models for extrapolation of heat capacities and volumetric equations of state to elevated temperature and pressure conditions. Finally, these databases require specification of thermochemical models for the mixing properties of solid, liquid, and fluid solutions, which are often rooted in physical theory and, in turn, depend on additional experimental observations. The process of 'calibrating' a thermochemical database involves considerable effort and an extensive computational infrastructure. Because of these complexities, the community tends to rely on a small number of thermochemical databases, generated by a few researchers; these databases often have limited longevity and are universally difficult to maintain. ThermoFit is a software framework and user interface whose aim is to provide a modeling environment that facilitates creation, maintenance and distribution of thermodynamic data/model collections. Underlying ThermoFit are data archives of fundamental physical property, calorimetric, crystallographic, and phase equilibrium constraints that provide the essential experimental information from which thermodynamic databases are traditionally calibrated. ThermoFit standardizes schema for accessing these data archives and provides web services for data mining these collections. Beyond simple data management and interoperability, ThermoFit provides a collection of visualization and software modeling tools that streamline the model/database generation process. Most notably, ThermoFit facilitates the rapid visualization of predicted model outcomes and permits the user to modify these outcomes using tactile- or mouse-based GUI interaction, permitting real-time updates that reflect users' choices, preferences, and priorities involving derived model results. This ability permits some resolution of the problem of correlated model parameters in the common situation where thermodynamic models must be calibrated from inadequate data resources. It also allows modeling constraints to be imposed using natural data and observations (i.e. petrologic or geochemical intuition). Once formulated, ThermoFit facilitates deployment of data/model collections by automated creation of web services. Users consume these services via web, Excel, or desktop clients. ThermoFit is currently under active development and not yet generally available; a limited-capability prototype system has been coded for Macintosh computers and utilized to construct thermochemical models for H2O-CO2 mixed fluid saturation in silicate liquids. The longer-term goal is to release ThermoFit as a web portal application client with server-based cloud computations supporting the modeling environment.
Bayesian Calibration of Thermodynamic Databases and the Role of Kinetics
NASA Astrophysics Data System (ADS)
Wolf, A. S.; Ghiorso, M. S.
2017-12-01
Self-consistent thermodynamic databases of geologically relevant materials (like Berman, 1988; Holland and Powell, 1998; Stixrude and Lithgow-Bertelloni, 2011) are crucial for simulating geological processes as well as interpreting rock samples from the field. These databases form the backbone of our understanding of how fluids and rocks interact at extreme planetary conditions. Considerable work is involved in their construction from experimental phase reaction data, as they must self-consistently describe the free energy surfaces (including relative offsets) of potentially hundreds of interacting phases. Standard database calibration methods typically utilize either linear programming or least squares regression. While both produce a viable model, they suffer from strong limitations on the training data (which must be filtered by hand), along with general ignorance of many of the sources of experimental uncertainty. We develop a new method for calibrating high P-T thermodynamic databases for use in geologic applications. The model is designed to handle pure solid endmember and free fluid phases and can be extended to include mixed solid solutions and melt phases. This new calibration effort utilizes Bayesian techniques to obtain optimal parameter values together with a full family of statistically acceptable models, summarized by the posterior. Unlike previous efforts, the Bayesian Logistic Uncertain Reaction (BLUR) model directly accounts for both measurement uncertainties and disequilibrium effects, by employing a kinetic reaction model whose parameters are empirically determined from the experiments themselves. Thus, along with the equilibrium free energy surfaces, we also provide rough estimates of the activation energies, entropies, and volumes for each reaction. As a first application, we demonstrate this new method on the three-phase aluminosilicate system, illustrating how it can produce superior estimates of the phase boundaries by incorporating constraints from all available data, while automatically handling variable data quality due to a combination of measurement errors and kinetic effects.
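As a toy illustration of a logistic reaction likelihood (the actual BLUR parameterization is more elaborate and includes kinetic parameters fit to the experiments), the probability of observing the forward reaction can be written as a logistic function of the Gibbs energy change:

```python
# Toy logistic reaction likelihood: P(forward observed) rises smoothly as the
# reaction affinity -dG grows; the width absorbs measurement uncertainty and
# kinetic sluggishness. Width value is invented for illustration.
import numpy as np

def p_forward(dG, width_kJ=2.0):
    """P(forward reaction observed) given Gibbs energy change dG (kJ/mol)."""
    return 1.0 / (1.0 + np.exp(dG / width_kJ))

for dG in (-5.0, -1.0, 0.0, 1.0, 5.0):
    print(f"dG = {dG:+.1f} kJ/mol -> P(forward) = {p_forward(dG):.3f}")
```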
1992-09-01
[Garbled table fragment: ratings at 297 K for several polymers, including polyimide (PI), polyamideimide (PAI), polyamide 6:6 (PA 6:6), perfluoroalkoxyethylene (PFA), and phenolics.] Procedures for the Measurement of Vapor Sorption Followed by Desorption and Comparisons with Polymer Cohesion Parameter and Polymer Coil Expansion Values
NASA Astrophysics Data System (ADS)
Stebel, Kerstin; Prata, Fred; Theys, Nicolas; Tampellini, Lucia; Kamstra, Martijn; Zehner, Claus
2014-05-01
Over the last few years there has been a recognition of the utility of satellite measurements to identify and track volcanic emissions that present a natural hazard to human populations. Mitigation of the volcanic hazard to life and the environment requires understanding of the properties of volcanic emissions, identifying the hazard in near real-time, and being able to provide timely and accurate forecasts to affected areas. Amongst the many ways to measure volcanic emissions, satellite remote sensing is capable of providing global quantitative retrievals of important microphysical parameters such as ash mass loading, ash particle effective radius, infrared optical depth, SO2 partial and total column abundance, plume altitude, aerosol optical depth, and aerosol absorbing index. The eruption of Eyjafjallajökull in April-May 2010 led to increased research and measurement programs to better characterize properties of volcanic ash, and the need to establish a database in which to store and access these data was confirmed. The European Space Agency (ESA) has recognized the importance of having a quality-controlled database of satellite retrievals and has funded an activity called Volcanic Ash Strategic Initiative Team VAST (vast.nilu.no) to develop novel remote sensing retrieval schemes and a database, initially focused on several recent hazardous volcanic eruptions. In addition, the database will host satellite and validation data sets provided from the ESA projects Support to Aviation Control Service SACS (sacs.aeronomie.be) and Study on an end-to-end system for volcanic ash plume monitoring and prediction SMASH. Starting with data for the eruptions of Eyjafjallajökull, Grímsvötn, and Kasatochi, satellite retrievals for Puyehue-Cordón Caulle, Nabro, Merapi, Okmok, and Sarychev Peak will eventually be ingested. Dispersion model simulations are also being included in the database. Several atmospheric dispersion models (FLEXPART, SILAM and WRF-Chem) are used in VAST to simulate the dispersion of volcanic ash and SO2 emitted during an eruption. Source terms and dispersion model results will be given. In time, data from conventional in situ sampling instruments, airborne and ground-based remote sensing platforms and other meta-data (bulk ash and gas properties, volcanic setting, volcanic eruption chronologies, potential impacts etc.) will be added. Important applications of the database are illustrated related to the ash/aviation problem and to estimating SO2 fluxes from active volcanoes, as a means to diagnose future unrest. The database has the potential to provide the natural hazards community with a dynamic atmospheric volcanic hazards map and will be a valuable tool particularly for aviation.
NASA Astrophysics Data System (ADS)
Hitomi Collaboration; Aharonian, Felix; Akamatsu, Hiroki; Akimoto, Fumie; Allen, Steven W.; Angelini, Lorella; Audard, Marc; Awaki, Hisamitsu; Axelsson, Magnus; Bamba, Aya; Bautz, Marshall W.; Blandford, Roger; Brenneman, Laura W.; Brown, Gregory V.; Bulbul, Esra; Cackett, Edward M.; Chernyakova, Maria; Chiao, Meng P.; Coppi, Paolo S.; Costantini, Elisa; de Plaa, Jelle; de Vries, Cor P.; den Herder, Jan-Willem; Done, Chris; Dotani, Tadayasu; Ebisawa, Ken; Eckart, Megan E.; Enoto, Teruaki; Ezoe, Yuichiro; Fabian, Andrew C.; Ferrigno, Carlo; Foster, Adam R.; Fujimoto, Ryuichi; Fukazawa, Yasushi; Furuzawa, Akihiro; Galeazzi, Massimiliano; Gallo, Luigi C.; Gandhi, Poshak; Giustini, Margherita; Goldwurm, Andrea; Gu, Liyi; Guainazzi, Matteo; Haba, Yoshito; Hagino, Kouichi; Hamaguchi, Kenji; Harrus, Ilana M.; Hatsukade, Isamu; Hayashi, Katsuhiro; Hayashi, Takayuki; Hayashida, Kiyoshi; Hell, Natalie; Hiraga, Junko S.; Hornschemeier, Ann; Hoshino, Akio; Hughes, John P.; Ichinohe, Yuto; Iizuka, Ryo; Inoue, Hajime; Inoue, Yoshiyuki; Ishida, Manabu; Ishikawa, Kumi; Ishisaki, Yoshitaka; Iwai, Masachika; Kaastra, Jelle; Kallman, Tim; Kamae, Tsuneyoshi; Kataoka, Jun; Katsuda, Satoru; Kawai, Nobuyuki; Kelley, Richard L.; Kilbourne, Caroline A.; Kitaguchi, Takao; Kitamoto, Shunji; Kitayama, Tetsu; Kohmura, Takayoshi; Kokubun, Motohide; Koyama, Katsuji; Koyama, Shu; Kretschmar, Peter; Krimm, Hans A.; Kubota, Aya; Kunieda, Hideyo; Laurent, Philippe; Lee, Shiu-Hang; Leutenegger, Maurice A.; Limousin, Olivier; Loewenstein, Michael; Long, Knox S.; Lumb, David; Madejski, Greg; Maeda, Yoshitomo; Maier, Daniel; Makishima, Kazuo; Markevitch, Maxim; Matsumoto, Hironori; Matsushita, Kyoko; McCammon, Dan; McNamara, Brian R.; Mehdipour, Missagh; Miller, Eric D.; Miller, Jon M.; Mineshige, Shin; Mitsuda, Kazuhisa; Mitsuishi, Ikuyuki; Miyazawa, Takuya; Mizuno, Tsunefumi; Mori, Hideyuki; Mori, Koji; Mukai, Koji; Murakami, Hiroshi; Mushotzky, Richard F.; Nakagawa, Takao; Nakajima, Hiroshi; Nakamori, Takeshi; Nakashima, Shinya; Nakazawa, Kazuhiro; Nobukawa, Kumiko K.; Nobukawa, Masayoshi; Noda, Hirofumi; Odaka, Hirokazu; Ohashi, Takaya; Ohno, Masanori; Okajima, Takashi; Ota, Naomi; Ozaki, Masanobu; Paerels, Frits; Paltani, Stéphane; Petre, Robert; Pinto, Ciro; Porter, Frederick S.; Pottschmidt, Katja; Reynolds, Christopher S.; Safi-Harb, Samar; Saito, Shinya; Sakai, Kazuhiro; Sasaki, Toru; Sato, Goro; Sato, Kosuke; Sato, Rie; Sawada, Makoto; Schartel, Norbert; Serlemtsos, Peter J.; Seta, Hiromi; Shidatsu, Megumi; Simionescu, Aurora; Smith, Randall K.; Soong, Yang; Stawarz, Łukasz; Sugawara, Yasuharu; Sugita, Satoshi; Szymkowiak, Andrew; Tajima, Hiroyasu; Takahashi, Hiromitsu; Takahashi, Tadayuki; Takeda, Shin'ichiro; Takei, Yoh; Tamagawa, Toru; Tamura, Takayuki; Tanaka, Takaaki; Tanaka, Yasuo; Tanaka, Yasuyuki T.; Tashiro, Makoto S.; Tawara, Yuzuru; Terada, Yukikatsu; Terashima, Yuichi; Tombesi, Francesco; Tomida, Hiroshi; Tsuboi, Yohko; Tsujimoto, Masahiro; Tsunemi, Hiroshi; Tsuru, Takeshi Go; Uchida, Hiroyuki; Uchiyama, Hideki; Uchiyama, Yasunobu; Ueda, Shutaro; Ueda, Yoshihiro; Uno, Shin'ichiro; Urry, C. Megan; Ursino, Eugenio; Watanabe, Shin; Werner, Norbert; Wilkins, Dan R.; Williams, Brian J.; Yamada, Shinya; Yamaguchi, Hiroya; Yamaoka, Kazutaka; Yamasaki, Noriko Y.; Yamauchi, Makoto; Yamauchi, Shigeo; Yaqoob, Tahir; Yatsu, Yoichi; Yonetoku, Daisuke; Zhuravleva, Irina; Zoghbi, Abderahmen; Raassen, A. J. J.
2018-03-01
The Hitomi Soft X-ray Spectrometer spectrum of the Perseus cluster, with ~5 eV resolution in the 2-9 keV band, offers an unprecedented benchmark of the atomic modeling and database for hot collisional plasmas. It reveals both successes and challenges of the current atomic data and models. The latest versions of AtomDB/APEC (3.0.8), SPEX (3.03.00), and CHIANTI (8.0) all provide reasonable fits to the broad-band spectrum, and are in close agreement on best-fit temperature, emission measure, and abundances of a few elements such as Ni. For the Fe abundance, the APEC and SPEX measurements differ by 16%, which is 17 times higher than the statistical uncertainty. This is mostly attributed to the differences in adopted collisional excitation and dielectronic recombination rates of the strongest emission lines. We further investigate and compare the sensitivity of the derived physical parameters to the astrophysical source modeling and instrumental effects. The Hitomi results show that accurate atomic data and models are as important as the astrophysical modeling and instrumental calibration aspects. Substantial updates of atomic databases and targeted laboratory measurements are needed to get the current data and models ready for the data from the next Hitomi-level mission.
Heng, Daniel Y C; Xie, Wanling; Regan, Meredith M; Harshman, Lauren C; Bjarnason, Georg A; Vaishampayan, Ulka N; Mackenzie, Mary; Wood, Lori; Donskov, Frede; Tan, Min-Han; Rha, Sun-Young; Agarwal, Neeraj; Kollmannsberger, Christian; Rini, Brian I; Choueiri, Toni K
2014-01-01
Background: The International Metastatic Renal-Cell Carcinoma Database Consortium model offers prognostic information for patients with metastatic renal-cell carcinoma. We tested the accuracy of the model in an external population and compared it with other prognostic models. Methods: We included patients with metastatic renal-cell carcinoma who were treated with first-line VEGF-targeted treatment at 13 international cancer centres and who were registered in the Consortium's database but had not contributed to the initial development of the Consortium Database model. The primary endpoint was overall survival. We compared the Database Consortium model with the Cleveland Clinic Foundation (CCF) model, the International Kidney Cancer Working Group (IKCWG) model, the French model, and the Memorial Sloan-Kettering Cancer Center (MSKCC) model by concordance indices and other measures of model fit. Findings: Overall, 1028 patients were included in this study, of whom 849 had complete data to assess the Database Consortium model. Median overall survival was 18·8 months (95% CI 17·6–21·4). The predefined Database Consortium risk factors (anaemia, thrombocytosis, neutrophilia, hypercalcaemia, Karnofsky performance status <80%, and <1 year from diagnosis to treatment) were independent predictors of poor overall survival in the external validation set (hazard ratios ranged between 1·27 and 2·08, concordance index 0·71, 95% CI 0·68–0·73). When patients were segregated into three risk categories, median overall survival was 43·2 months (95% CI 31·4–50·1) in the favourable risk group (no risk factors; 157 patients), 22·5 months (18·7–25·1) in the intermediate risk group (one to two risk factors; 440 patients), and 7·8 months (6·5–9·7) in the poor risk group (three or more risk factors; 252 patients; p<0·0001; concordance index 0·664, 95% CI 0·639–0·689). 672 patients had complete data to test all five models. The concordance index of the CCF model was 0·662 (95% CI 0·636–0·687), of the French model 0·640 (0·614–0·665), of the IKCWG model 0·668 (0·645–0·692), and of the MSKCC model 0·657 (0·632–0·682). The reported versus predicted number of deaths at 2 years was most similar in the Database Consortium model compared with the other models. Interpretation: The Database Consortium model is now externally validated and can be applied to stratify patients by risk in clinical trials and to counsel patients about prognosis. PMID:23312463
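The concordance index used to compare these models can be computed directly; a minimal O(n^2) implementation with made-up data (real analyses would use a survival-analysis package):

```python
# Pairwise c-index: a pair is usable when the patient with the earlier time
# had an event; it is concordant when that patient also had the higher risk.
import numpy as np

def c_index(time, event, risk):
    usable = concordant = 0.0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if event[i] == 1 and time[i] < time[j]:
                usable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5       # ties count half
    return concordant / usable

time  = np.array([7.8, 22.5, 43.2, 30.0, 5.0])   # months (hypothetical)
event = np.array([1, 1, 0, 0, 1])                # 1 = death observed
risk  = np.array([3, 1, 0, 1, 2])                # number of risk factors
print(f"c-index = {c_index(time, event, risk):.3f}")
```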
Stanford, Richard H; Nag, Arpita; Mapel, Douglas W; Lee, Todd A; Rosiello, Richard; Vekeman, Francis; Gauthier-Loiselle, Marjolaine; Duh, Mei Sheng; Merrigan, J F Philip; Schatz, Michael
2016-07-01
Current chronic obstructive pulmonary disease (COPD) exacerbation risk prediction models are based on clinical data not easily accessible to national quality-of-care organizations and payers. Models developed from data sources available to these organizations are needed. This study aimed to validate a risk measure constructed using pharmacy claims in patients with COPD. Administrative claims data were used to construct a risk model to test and validate the ratio of controller (maintenance) medications to total COPD medications (CTR) as an independent risk measure for COPD exacerbations. The ability of the CTR to predict the risk of COPD exacerbations was also assessed. This was a retrospective study using health insurance claims data from the Truven MarketScan database (2006-2011), whereby exacerbation risk factors of patients with COPD were observed over a 12-month period and exacerbations monitored in the following year. Exacerbations were defined as moderate (emergency department or outpatient treatment with oral corticosteroid dispensings within 7 d) or severe (hospital admission) on the basis of diagnosis codes. Models were developed and validated using split-sample data from the MarketScan database and further validated using the Reliant Medical Group database. The performance of prediction models was evaluated using C-statistics. A total of 258,668 patients with COPD from the MarketScan database were included. A CTR of greater than or equal to 0.3 was significantly associated with a reduced risk for any (adjusted odds ratio [OR], 0.91; 95% confidence interval [CI], 0.85-0.97); moderate (OR, 0.93; 95% CI, 0.87-1.00), or severe (OR, 0.87; 95% CI, 0.80-0.95) exacerbation. The CTR, at a ratio of greater than or equal to 0.3, was predictive in various subpopulations, including those without a history of asthma and those with or without a history of moderate/severe exacerbations. The C-statistics ranged from 0.750 to 0.761 for the development set and 0.714 to 0.761 in the validation sets, indicating the CTR performed well in predicting exacerbation risk. The ratio of controller to total medications dispensed for COPD is a measure that can easily be calculated using only pharmacy claims data. A CTR of greater than or equal to 0.3 can potentially be used as a quality-of-care measurement for prevention of exacerbations.
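Computing the CTR itself is straightforward; a sketch with hypothetical claims records and column names:

```python
# Controller-to-total ratio (CTR) from pharmacy claims: controller
# (maintenance) COPD dispensings divided by all COPD dispensings,
# flagged at the study's 0.3 threshold.
import pandas as pd

claims = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2, 3],
    "drug_class": ["controller", "reliever", "reliever",
                   "controller", "controller", "reliever"],
})

ctr = (claims.assign(is_controller=claims["drug_class"].eq("controller"))
             .groupby("patient_id")["is_controller"]
             .agg(["sum", "size"]))
ctr["CTR"] = ctr["sum"] / ctr["size"]
ctr["low_risk_flag"] = ctr["CTR"] >= 0.3
print(ctr)
```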
Veneman, Jolien B; Saetnan, Eli R; Clare, Amanda J; Newbold, Charles J
2016-12-01
The body of peer-reviewed papers on enteric methane mitigation strategies in ruminants is rapidly growing and allows for better estimation of the true effect of each strategy through the use of meta-analysis methods. Here we present the development of an online database of measured methane mitigation strategies called MitiGate, currently comprising 412 papers. The database is accessible through an online user-friendly interface that allows, on the one hand, data extraction with various levels of aggregation and, on the other, data-uploading for submission to the database, allowing for future refinement and updates of mitigation estimates as well as providing easy access to relevant data for integration into modelling efforts or policy recommendations. To demonstrate and verify the usefulness of the MitiGate database, those studies where methane emissions were expressed per unit of intake (293 papers resulting in 845 treatment comparisons) were used in a meta-analysis. The meta-analysis of the current database estimated the effect size of each of the mitigation strategies as well as the associated variance and measure of heterogeneity. Currently, under-representation of certain strategies, geographic regions and long-term studies are the main limitations in providing an accurate quantitative estimation of the mitigation potential of each strategy under varying animal production systems. We have thus implemented the facility for researchers to upload meta-data of their peer-reviewed research through a simple input form in the hope that MitiGate will grow into a fully inclusive resource for those wishing to model methane mitigation strategies in ruminants. Copyright © 2016 Elsevier B.V. All rights reserved.
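The kind of random-effects meta-analysis MitiGate is designed to feed can be sketched with the DerSimonian-Laird estimator (effect sizes and variances below are invented for illustration):

```python
# DerSimonian-Laird random-effects meta-analysis: pooled effect,
# between-study variance tau^2, and the I^2 heterogeneity measure.
import numpy as np

y = np.array([-0.20, -0.35, -0.10, -0.25])   # per-study effects (hypothetical)
v = np.array([0.010, 0.020, 0.015, 0.012])   # within-study variances

w = 1.0 / v
y_fixed = (w * y).sum() / w.sum()
Q = (w * (y - y_fixed) ** 2).sum()           # Cochran's Q
k = len(y)
C = w.sum() - (w ** 2).sum() / w.sum()
tau2 = max(0.0, (Q - (k - 1)) / C)           # between-study variance

w_star = 1.0 / (v + tau2)
y_random = (w_star * y).sum() / w_star.sum()
se = np.sqrt(1.0 / w_star.sum())
I2 = max(0.0, (Q - (k - 1)) / Q) * 100.0
print(f"pooled = {y_random:.3f} +/- {1.96 * se:.3f}, "
      f"tau^2 = {tau2:.4f}, I^2 = {I2:.0f}%")
```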
1989-02-01
satisfies these criteria and is a major reason for the enduring popularity of the Prairie Grass database. Taller or slightly less homogeneous vegetation only...by California Measurements, Inc. (Sierra Madre, CA). The cascade impactor of the PC-2 is comprised of ten aerodynamic inertial impactors arranged in
DEEP: Database of Energy Efficiency Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Tianzhen; Piette, Mary; Lee, Sang Hoon
The Database of Energy Efficiency Performance (DEEP) is a presimulated database enabling quick and accurate assessment of energy retrofits of commercial buildings. DEEP was compiled from the results of about 10 million EnergyPlus simulations. DEEP provides energy savings for screening and evaluation of retrofit measures targeting small and medium-sized office and retail buildings in California. The prototype building models were developed for a comprehensive assessment of building energy performance based on the DOE commercial reference buildings and the California DEER prototype buildings. The prototype buildings represent seven building types across six construction vintages and 16 California climate zones. DEEP uses these prototypes to evaluate the energy performance of about 100 energy conservation measures covering envelope, lighting, heating, ventilation, air conditioning, plug loads, and domestic hot water. DEEP contains the energy simulation results for individual retrofit measures as well as packages of measures, to account for interactive effects between multiple measures. The large-scale EnergyPlus simulations are being conducted on the supercomputers at the National Energy Research Scientific Computing Center (NERSC) of Lawrence Berkeley National Laboratory. The presimulated database is part of the CEC PIER project to develop a web-based retrofit toolkit for small and medium-sized commercial buildings in California, which provides real-time energy retrofit feedback by querying DEEP for recommended measures, estimated energy savings, and financial payback period based on users' decision criteria of maximizing energy savings, energy cost savings, carbon reduction, or payback of investment. The presimulated database and associated comprehensive measure analysis enhance the ability to assess retrofits that reduce energy use in small and medium buildings, whose owners typically do not have the resources to conduct costly building energy audits.
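A sketch of how a presimulated database like DEEP can be queried (the structure, measure names, and numbers are invented for illustration): savings are precomputed per building type, vintage, climate zone, and measure, so a lookup replaces a full EnergyPlus run.

```python
# Toy presimulated retrofit database keyed by (type, vintage, zone, measure).
deep = {
    ("small_office", "1980s", "CZ03", "LED_lighting"): {"kwh_savings_pct": 9.5, "payback_yr": 3.1},
    ("small_office", "1980s", "CZ03", "cool_roof"):    {"kwh_savings_pct": 2.8, "payback_yr": 7.4},
    ("retail",       "2000s", "CZ12", "economizer"):   {"kwh_savings_pct": 5.2, "payback_yr": 4.0},
}

def recommend(building_type, vintage, climate_zone, max_payback_yr):
    """Return measures under the payback threshold, best savings first."""
    hits = [(k[3], v) for k, v in deep.items()
            if k[:3] == (building_type, vintage, climate_zone)
            and v["payback_yr"] <= max_payback_yr]
    return sorted(hits, key=lambda kv: -kv[1]["kwh_savings_pct"])

print(recommend("small_office", "1980s", "CZ03", max_payback_yr=5.0))
```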
A BRDF-BPDF database for the analysis of Earth target reflectances
NASA Astrophysics Data System (ADS)
Breon, Francois-Marie; Maignan, Fabienne
2017-01-01
Land surface reflectance is not isotropic. It varies with the observation geometry, which is defined by the sun and view zenith angles and the relative azimuth. In addition, the reflectance is linearly polarized. The reflectance anisotropy is quantified by the bidirectional reflectance distribution function (BRDF), while its polarization properties are defined by the bidirectional polarization distribution function (BPDF). The POLDER radiometer that flew onboard the PARASOL microsatellite remains the only space instrument that measured numerous samples of the BRDF and BPDF of Earth targets. Here, we describe a database of representative BRDFs and BPDFs derived from the POLDER measurements. From the huge number of data acquired by the spaceborne instrument over a period of 7 years, we selected a set of targets with high-quality observations. The selection aimed for a large number of observations, free of significant cloud or aerosol contamination, acquired in diverse observation geometries with a focus on the backscatter direction that shows the specific hot spot signature. The targets are sorted according to the 16-class International Geosphere-Biosphere Programme (IGBP) land cover classification system, and the target selection aims at a spatial representativeness within the class. The database thus provides a set of high-quality BRDF and BPDF samples that can be used to assess the typical variability of natural surface reflectances or to evaluate models. It is available freely from the PANGAEA website (doi:10.1594/PANGAEA.864090). In addition to the database, we provide a visualization and analysis tool based on the Interactive Data Language (IDL). It allows an interactive analysis of the measurements and a comparison against various BRDF and BPDF analytical models. The present paper describes the input data, the selection principles, the database format, and the analysis tool.
2014-01-01
Background: Impairment in activities of daily living (ADL) is an important predictor of outcomes, although many administrative databases lack information on ADL function. We evaluated the impact of ADL function on predicting postoperative mortality among older adults with hip fractures in Ontario, Canada. Methods: Sociodemographic and medical correlates of ADL impairment were first identified in a population of older adults with hip fractures who had ADL information available prior to hip fracture. A logistic regression model was developed to predict 360-day postoperative mortality, and the predictive ability of this model was compared when ADL impairment was included or omitted from the model. Results: The study sample (N = 1,329) had a mean age of 85.2 years, was 72.8% female, and the majority resided in long-term care (78.5%). Overall, 36.4% of individuals died within 360 days of surgery. After controlling for age, sex, medical comorbidity and medical conditions correlated with ADL impairment, addition of ADL measures improved the logistic regression model for predicting 360-day mortality (AIC = 1706.9 vs. 1695.0; c-statistic = 0.65 vs. 0.67; difference in -2 log likelihoods: χ2 = 16.9, p = 0.002). Conclusions: Direct measures of ADL impairment provide additional prognostic information on mortality for older adults with hip fractures even after controlling for medical comorbidity. Observational studies using administrative databases without measures of ADLs may be prone to confounding and bias, and case-mix adjustment for hip fracture outcomes should include ADL measures where these are available. PMID:24472282
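A sketch of the model-comparison step on simulated data (all coefficients and the ADL score are hypothetical): fit logistic models with and without an ADL measure, then compare AIC and the c-statistic.

```python
# Compare logistic models with/without ADL by AIC and c-statistic (ROC AUC).
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 1329
age = rng.normal(85, 6, n)
adl = rng.uniform(0, 28, n)                 # hypothetical ADL impairment score
logit = -11.5 + 0.12 * age + 0.05 * adl     # invented true model
died = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_base = sm.add_constant(age)
X_full = sm.add_constant(np.column_stack([age, adl]))
m_base = sm.Logit(died, X_base).fit(disp=0)
m_full = sm.Logit(died, X_full).fit(disp=0)

print(f"AIC: {m_base.aic:.1f} (base) vs {m_full.aic:.1f} (with ADL)")
print(f"c-statistic: {roc_auc_score(died, m_base.predict(X_base)):.3f} vs "
      f"{roc_auc_score(died, m_full.predict(X_full)):.3f}")
```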
Correction of electronic record for weighing bucket precipitation gauge measurements
USDA-ARS?s Scientific Manuscript database
Electronic sensors generate valuable streams of forcing and validation data for hydrologic models, but are often subject to noise, which must be removed as part of model input and testing database development. We developed the Automated Precipitation Correction Program (APCP) for weighing bucket precipitation gauge measurements.
NASA Astrophysics Data System (ADS)
Sushko, Iurii; Novotarskyi, Sergii; Körner, Robert; Pandey, Anil Kumar; Rupp, Matthias; Teetz, Wolfram; Brandmaier, Stefan; Abdelaziz, Ahmed; Prokopenko, Volodymyr V.; Tanchuk, Vsevolod Y.; Todeschini, Roberto; Varnek, Alexandre; Marcou, Gilles; Ertl, Peter; Potemkin, Vladimir; Grishina, Maria; Gasteiger, Johann; Schwab, Christof; Baskin, Igor I.; Palyulin, Vladimir A.; Radchenko, Eugene V.; Welsh, William J.; Kholodovych, Vladyslav; Chekmarev, Dmitriy; Cherkasov, Artem; Aires-de-Sousa, Joao; Zhang, Qing-You; Bender, Andreas; Nigsch, Florian; Patiny, Luc; Williams, Antony; Tkachenko, Valery; Tetko, Igor V.
2011-06-01
The Online Chemical Modeling Environment (OCHEM) is a web-based platform that aims to automate and simplify the typical steps required for QSAR modeling. The platform consists of two major subsystems: the database of experimental measurements and the modeling framework. A user-contributed database contains a set of tools for easy input, search and modification of thousands of records. The OCHEM database is based on the wiki principle and focuses primarily on the quality and verifiability of the data. The database is tightly integrated with the modeling framework, which supports all the steps required to create a predictive model: data search, calculation and selection of a vast variety of molecular descriptors, application of machine learning methods, validation, analysis of the model and assessment of the applicability domain. As compared to other similar systems, OCHEM is not intended to re-implement the existing tools or models but rather to invite the original authors to contribute their results, make them publicly available, share them with other users and to become members of the growing research community. Our intention is to make OCHEM a widely used platform to perform QSPR/QSAR studies online and share them with other users on the Web. The ultimate goal of OCHEM is collecting all possible chemoinformatics tools within one simple, reliable and user-friendly resource. OCHEM is free for web users and is available online at http://www.ochem.eu.
A Methodology for Benchmarking Relational Database Machines,
1984-01-01
user benchmarks is to compare the multiple users to the best-case performance The data for each query classification coll and the performance...called a benchmark. The term benchmark originates from the markers used by surveyors in establishing common reference points for their measure...formatted databases. In order to further simplify the problem, we restrict our study to those DBMs which support the relational model. A survey
Impact of Fission Neutron Energies on Reactor Antineutrino Spectra
NASA Astrophysics Data System (ADS)
Hermanek, Keith; Littlejohn, Bryce; Gustafson, Ian
2017-09-01
Recent measurements of the reactor antineutrino spectra (Double Chooz, RENO, and Daya Bay) have shown a discrepancy in the 5-7 MeV region when compared to current theoretical models (Vogel and Huber-Mueller). There are numerous theories pertaining to this antineutrino anomaly, including theories that point to new physics beyond the standard model. In the paper "Possible Origins and Implications of the Shoulder in Reactor Neutrino Spectra" by A. Hayes et al., explanations for this anomaly are suggested. One theory is that interactions from fast and epithermal incident neutrons are significant enough to create noticeably more events in the 5-7 MeV region. In our research, we used the Oklo software toolkit created by Dan Dwyer, which generates ab initio antineutrino and beta decay spectra based on the standard fission yield databases ENDF, JENDL, and JEFF and the beta decay transition database ENSDF-6. Utilizing these databases as inputs, we show under reasonable assumptions that the contribution of fast and epithermal neutrons is less than 3% in the 5-7 MeV region. We also found that rare isotopes present in the beta decay chains are not well measured and have no corresponding database information, and we studied their effect on the spectrum.
NASA Astrophysics Data System (ADS)
Fontaine, Alain; Sauvage, Bastien; Pétetin, Hervé; Auby, Antoine; Boulanger, Damien; Thouret, Valerie
2016-04-01
Since 1994, the IAGOS program (In-Service Aircraft for a Global Observing System, http://www.iagos.org) and its predecessor MOZAIC have produced in-situ measurements of the atmospheric composition during more than 46000 commercial aircraft flights. In order to help analyze these observations and further understand the processes driving their evolution, we developed a modelling tool, SOFT-IO, that quantifies their source/receptor links. We improved the methodology used by Stohl et al. (2003), based on the FLEXPART plume dispersion model, to simulate the contributions of anthropogenic and biomass burning emissions from the ECCAD database (http://eccad.aeris-data.fr) to the measured carbon monoxide mixing ratio along each IAGOS flight. Thanks to automated processes, contributions are simulated for the last 20 days before each observation, separating individual contributions from the different source regions. The main goal is to supply added-value products to the IAGOS database showing the geographical origin and emission type of pollutants. Using this information, it may be possible to link trends in the atmospheric composition to changes in the transport pathways and to the evolution of emissions. This tool could be used for statistical validation as well as for inter-comparisons of emission inventories using large amounts of data, as Lagrangian models are able to bring the global-scale emissions down to a smaller scale, where they can be directly compared to the in-situ observations from the IAGOS database.
NASA Astrophysics Data System (ADS)
Wilcox, William Edward, Jr.
1995-01-01
A computer program (LIDAR-PC) and associated atmospheric spectral databases have been developed which accurately simulate the laser remote sensing of the atmosphere and the system performance of a direct-detection Lidar or tunable Differential Absorption Lidar (DIAL) system. This simulation program allows, for the first time, the use of several different large atmospheric spectral databases to be coupled with Lidar parameter simulations on the same computer platform to provide a real-time, interactive, and easy-to-use design tool for atmospheric Lidar simulation and modeling. LIDAR-PC has been used for a range of different Lidar simulations and compared to experimental Lidar data. In general, the simulations agreed very well with the experimental measurements. In addition, the simulation offered, for the first time, the analysis and comparison of experimental Lidar data to easily determine the range-resolved attenuation coefficient of the atmosphere and the effect of the telescope overlap factor. The software and databases operate on an IBM-PC or compatible computer platform, and thus are very useful to the research community for Lidar analysis. The complete Lidar and atmospheric spectral transmission modeling program uses the HITRAN database for high-resolution molecular absorption lines of the atmosphere, the BACKSCAT/LOWTRAN computer databases and models for the effects of aerosol and cloud backscatter and attenuation, and the range-resolved Lidar equation. The program can calculate the Lidar backscattered signal-to-noise for a slant path geometry from space and simulate the effect of high-resolution, tunable, single-frequency, and moderate-linewidth lasers on the Lidar/DIAL signal. The program was used to model and analyze the experimental Lidar data obtained from several measurements. A fixed-wavelength Ho:YSGG aerosol Lidar (Sugimoto, 1990) developed at USF and a tunable Ho:YSGG DIAL system (Cha, 1991) for measuring atmospheric water vapor at 2.1 μm were analyzed. The simulations agreed very well with the measurements, and also yielded, for the first time, the ability to easily deduce the atmospheric attenuation coefficient, alpha, from the Lidar data. Simulations and analysis of other Lidar measurements included that of a 1.57 μm OPO aerosol Lidar system developed at USF (Harrell, 1995) and of the NASA LITE (Lidar In-space Technology Experiment) Lidar recently flown on the Space Shuttle. Finally, an extensive series of laboratory experiments were made with the 1.57 μm OPO Lidar system to test calculations of the telescope/laser overlap and the effect of different telescope sizes and designs. The simulations agreed well with the experimental data for the telescope diameter and central obscuration test cases. The LIDAR-PC programs are available on the Internet from the USF Lidar Home Page Web site, http://www.cas.usf.edu/physics/lidar.html/.
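The two-wavelength DIAL retrieval the program simulates follows a standard textbook form; an illustrative implementation (not LIDAR-PC code; all numbers are hypothetical):

```python
# Standard DIAL retrieval: the mean number density between ranges r1 and r2
# follows from the ratio of on-line and off-line backscatter returns:
#   n = ln[(P_on(r1) P_off(r2)) / (P_on(r2) P_off(r1))] / (2 dsigma (r2 - r1))
import numpy as np

def dial_number_density(P_on, P_off, r1, r2, dsigma):
    """P_* are (power at r1, power at r2); dsigma is the on-off absorption
    cross-section difference in m^2; result in molecules/m^3."""
    ratio = (P_on[0] * P_off[1]) / (P_on[1] * P_off[0])
    return np.log(ratio) / (2.0 * dsigma * (r2 - r1))

# Hypothetical returns (arbitrary units) and cross-section difference:
n = dial_number_density(P_on=(1.00, 0.60), P_off=(1.00, 0.90),
                        r1=1000.0, r2=1500.0, dsigma=1.0e-24)
print(f"mean number density = {n:.3e} m^-3")
```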
NASA Astrophysics Data System (ADS)
Mehdizadeh, Saeid
2018-04-01
Evapotranspiration (ET) is considered a key factor in hydrological and climatological studies, agricultural water management, irrigation scheduling, etc. It can be directly measured using lysimeters. Moreover, other methods such as empirical equations and artificial intelligence methods can be used to model ET. In recent years, artificial intelligence methods have been widely utilized to estimate reference evapotranspiration (ETo). In the present study, the local and external performances of multivariate adaptive regression splines (MARS) and gene expression programming (GEP) were assessed for estimating daily ETo. For this aim, daily weather data of six stations with different climates in Iran, namely Urmia and Tabriz (semi-arid), Isfahan and Shiraz (arid), and Yazd and Zahedan (hyper-arid), were employed during 2000-2014. Two types of input patterns consisting of weather data-based and lagged ETo data-based scenarios were considered to develop the models. Four statistical indicators, including root mean square error (RMSE), mean absolute error (MAE), coefficient of determination (R2), and mean absolute percentage error (MAPE), were used to check the accuracy of the models. The local performance of the models revealed that the MARS and GEP approaches have the capability to estimate daily ETo using the meteorological parameters and the lagged ETo data as inputs. Nevertheless, the MARS had the best performance in the weather data-based scenarios. On the other hand, considerable differences were not observed in the models' accuracy for the lagged ETo data-based scenarios. As an innovation of this study, novel hybrid models were proposed in the lagged ETo data-based scenarios by combining the MARS and GEP models with the autoregressive conditional heteroscedasticity (ARCH) time series model. It was concluded that the proposed novel models, named MARS-ARCH and GEP-ARCH, improved the performance of ETo modeling compared to the single MARS and GEP. In addition, the external analysis of model performance at stations with similar climatic conditions demonstrated the applicability of nearby stations' data for estimating daily ETo at a target station.
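The four verification statistics are simple to compute; a compact sketch with invented values (note that R2 is computed here one common way, as one minus the ratio of error variance to total variance):

```python
# RMSE, MAE, R2, and MAPE for model estimates against observations.
import numpy as np

def verify(obs, est):
    err = est - obs
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)
    mape = 100.0 * np.mean(np.abs(err / obs))   # obs assumed nonzero
    return rmse, mae, r2, mape

obs = np.array([3.1, 4.2, 5.0, 6.3, 7.1])   # hypothetical daily ETo, mm/day
est = np.array([3.3, 4.0, 5.2, 6.0, 7.4])
print("RMSE=%.3f  MAE=%.3f  R2=%.3f  MAPE=%.1f%%" % verify(obs, est))
```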
A TEX86 surface sediment database and extended Bayesian calibration
NASA Astrophysics Data System (ADS)
Tierney, Jessica E.; Tingley, Martin P.
2015-06-01
Quantitative estimates of past temperature changes are a cornerstone of paleoclimatology. For a number of marine sediment-based proxies, the accuracy and precision of past temperature reconstructions depend on a spatial calibration of modern surface sediment measurements to overlying water temperatures. Here, we present a database of 1095 surface sediment measurements of TEX86, a temperature proxy based on the relative cyclization of marine archaeal glycerol dialkyl glycerol tetraether (GDGT) lipids. The dataset is archived in a machine-readable format with geospatial information, fractional abundances of lipids (if available), and metadata. We use this new database to update surface and subsurface temperature calibration models for TEX86 and demonstrate the applicability of the TEX86 proxy to past temperature prediction. The TEX86 database confirms that surface sediment GDGT distribution has a strong relationship to temperature, which accounts for over 70% of the variance in the data. Future efforts, made possible by the data presented here, will seek to identify variables with secondary relationships to GDGT distributions, such as archaeal community composition.
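For context, TEX86 is computed from GDGT fractional abundances and then calibrated to temperature; a minimal sketch using the standard ratio definition, with placeholder calibration coefficients (the paper instead infers them, with uncertainty, via Bayesian regression):

```python
def tex86(gdgt1, gdgt2, gdgt3, cren_prime):
    """Standard TEX86 ratio from GDGT fractional abundances."""
    return (gdgt2 + gdgt3 + cren_prime) / (gdgt1 + gdgt2 + gdgt3 + cren_prime)

def sst_from_tex86(t, slope=55.0, intercept=-10.0):
    """Linear calibration SST = intercept + slope * TEX86. The coefficients
    here are invented placeholders, not the paper's posterior estimates."""
    return intercept + slope * t

ratio = tex86(gdgt1=0.30, gdgt2=0.15, gdgt3=0.10, cren_prime=0.05)
print(f"TEX86 = {ratio:.3f}, calibrated SST ~ {sst_from_tex86(ratio):.1f} C")
```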
NASA Astrophysics Data System (ADS)
Bilitza, Dieter
2017-04-01
The International Reference Ionosphere (IRI), a joint project of the Committee on Space Research (COSPAR) and the International Union of Radio Science (URSI), is a data-based reference model for the ionosphere, and since 2014 it has also been recognized as the ISO (International Organization for Standardization) standard for the ionosphere. The model is a synthesis of most of the available and reliable observations of ionospheric parameters, combining ground and space measurements. This presentation reviews the steady progress toward an increasingly accurate representation of the ionospheric plasma parameters accomplished during the last decade of IRI model improvements. Understandably, a data-based model is only as good as the data foundation on which it is built. We will discuss areas where more data are needed to obtain a more solid and continuous data foundation in space and time. We will also take a look at still-existing discrepancies between simultaneous measurements of the same parameter with different measurement techniques and discuss the approach taken in the IRI model to deal with these conflicts. In conclusion, we will provide an outlook on development activities that may result in significant future improvements in the accuracy of the representation of the ionosphere in the IRI model.
This presentation describes EPA efforts to collect, model, and measure publicly available consumer product data for use in exposure assessment. The development of the ORD Chemicals and Products database will be described, as will machine-learning based models for predicting ch...
Austvoll-Dahlgren, Astrid; Guttersrud, Øystein; Nsangi, Allen; Semakula, Daniel; Oxman, Andrew D
2017-05-25
The Claim Evaluation Tools database contains multiple-choice items for measuring people's ability to apply the key concepts they need to know to be able to assess treatment claims. We assessed items from the database using Rasch analysis to develop an outcome measure to be used in two randomised trials in Uganda. Rasch analysis is a form of psychometric testing relying on Item Response Theory. It is a dynamic way of developing outcome measures that are valid and reliable. Our objective was to assess the validity, reliability and responsiveness of 88 items addressing 22 key concepts using Rasch analysis. We administered four sets of multiple-choice items in English to 1114 people in Uganda and Norway, of whom 685 were children and 429 were adults (including 171 health professionals). We scored all items dichotomously. We explored summary and individual fit statistics using the RUMM2030 analysis package. We used SPSS to perform distractor analysis. Most items conformed well to the Rasch model, but some items needed revision. Overall, the four item sets had satisfactory reliability. We did not identify significant response dependence between any pairs of items and, overall, the magnitude of multidimensionality in the data was acceptable. The items had a high level of difficulty. Most of the items conformed well to the Rasch model's expectations. Following revision of some items, we concluded that most of the items were suitable for use in an outcome measure for evaluating the ability of children or adults to assess treatment claims. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
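For context, the dichotomous Rasch model referenced above gives the probability of a correct response as a logistic function of the difference between person ability and item difficulty (both on the same logit scale); a minimal sketch:

```python
import math

def rasch_prob(theta, difficulty):
    """Dichotomous Rasch model: P(correct) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

# A hypothetical person of ability +0.5 logits facing items of rising difficulty;
# the finding that "items had a high level of difficulty" corresponds to many
# items sitting to the right of most respondents on this scale.
for b in (-1.0, 0.5, 2.0):
    print(f"item difficulty {b:+.1f}: P(correct) = {rasch_prob(0.5, b):.2f}")
```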
NASA Astrophysics Data System (ADS)
Miller, M. E.; Elliot, W.; Billmire, M.; Robichaud, P. R.; Banach, D. M.
2017-12-01
We have built a Rapid Response Erosion Database (RRED, http://rred.mtri.org/rred/) for the continental United States to allow land managers to access properly formatted spatial model inputs for the Water Erosion Prediction Project (WEPP). Spatially-explicit process-based models like WEPP require spatial inputs that include digital elevation models (DEMs), soil, climate and land cover. The online database delivers either a 10m or 30m USGS DEM, land cover derived from the Landfire project, and soil data derived from SSURGO and STATSGO datasets. The spatial layers are projected into UTM coordinates and pre-registered for modeling. WEPP soil parameter files are also created along with linkage files to match both spatial land cover and soils data with the appropriate WEPP parameter files. Our goal is to make process-based models more accessible by preparing spatial inputs ahead of time allowing modelers to focus on addressing scenarios of concern. The database provides comprehensive support for post-fire hydrological modeling by allowing users to upload spatial soil burn severity maps, and within moments returns spatial model inputs. Rapid response is critical following natural disasters. After moderate and high severity wildfires, flooding, erosion, and debris flows are a major threat to life, property and municipal water supplies. Mitigation measures must be rapidly implemented if they are to be effective, but they are expensive and cannot be applied everywhere. Fire, runoff, and erosion risks also are highly heterogeneous in space, creating an urgent need for rapid, spatially-explicit assessment. The database has been used to help assess and plan remediation on over a dozen wildfires in the Western US. Future plans include expanding spatial coverage, improving model input data and supporting additional models. Our goal is to facilitate the use of the best possible datasets and models to support the conservation of soil and water.
Second-Tier Database for Ecosystem Focus, 2003-2004 Annual Report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
University of Washington, Columbia Basin Research, DART Project Staff,
2004-12-01
The Second-Tier Database for Ecosystem Focus (Contract 00004124) provides direct and timely public access to Columbia Basin environmental, operational, fishery and riverine data resources for federal, state, public and private entities essential to sound operational and resource management. The database also assists with juvenile and adult mainstem passage modeling supporting federal decisions affecting the operation of the FCRPS. The Second-Tier Database, known as Data Access in Real Time (DART), integrates public data for effective access, consideration and application. DART also provides analysis tools and performance measures for evaluating the condition of Columbia Basin salmonid stocks. These services are critical to BPA's implementation of its fish and wildlife responsibilities under the Endangered Species Act (ESA).
Validation of multi-mission satellite altimetry for the Baltic Sea region
NASA Astrophysics Data System (ADS)
Kudryavtseva, Nadia; Soomere, Tarmo; Giudici, Andrea
2016-04-01
Currently, three sources of wave data are available to the research community, namely, buoys, modelling, and satellite altimetry. The buoy measurements provide high-quality time series of wave properties, but buoys are deployed in only a few locations. Wave modelling covers large domains and provides good results for open-sea conditions. However, the limitation of modelling is that the results depend on wind quality and the assumptions put into the model. Satellite altimetry on many occasions provides homogeneous data over large sea areas with an appreciable spatial and temporal resolution. The use of satellite altimetry is problematic in coastal areas and partially ice-covered water bodies. These limitations can be circumvented by careful analysis of the geometry of the basin, ice conditions, and the spatial coverage of each altimetry snapshot. In this poster, for the first time, we discuss a validation of 30 years of multi-mission altimetry covering the whole Baltic Sea. We analysed data from the RADS database (Scharroo et al. 2013) spanning 1985 to 2015. To assess the limitations of the satellite altimeter data quality, the data were cross-matched with available wave measurements from buoys of the Swedish Meteorological and Hydrological Institute and the Finnish Meteorological Institute. The altimeter-measured significant wave heights showed a very good correspondence with the wave buoys. We show that data with backscatter coefficients greater than 13.5, or with high errors in significant wave height and range, should be excluded. We also examined the effect of ice cover and distance from land on satellite altimetry measurements. The analysis of cross-matches between the satellite altimetry data and buoy measurements shows that the data are only corrupted in the nearshore domain, within 0.2 degrees of the coast. The statistical analysis showed a significant decrease in wave heights for sea areas with ice concentration greater than 30 percent. We also checked and corrected the data for biases between different missions. This analysis provides a unique, uniform database of satellite altimetry measurements over the whole Baltic Sea, which can be further used for finding biases in wave modelling and for studies of wave climatology. The database is available upon request.
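The screening thresholds reported above translate directly into a quality-control mask; a minimal sketch (array names are ours; the cutoffs are taken from the abstract):

```python
import numpy as np

def altimetry_quality_mask(sigma0, dist_to_coast_deg, ice_conc_pct):
    """Keep records with backscatter coefficient <= 13.5, farther than
    0.2 degrees from the coast, and ice concentration <= 30 percent."""
    return ((np.asarray(sigma0) <= 13.5)
            & (np.asarray(dist_to_coast_deg) > 0.2)
            & (np.asarray(ice_conc_pct) <= 30.0))

# Two invented records: the first passes, the second fails every test.
print(altimetry_quality_mask([12.0, 14.2], [0.5, 0.1], [0.0, 45.0]))
```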
Specifying the ISS Plasma Environment
NASA Technical Reports Server (NTRS)
Minow, Joseph I.; Diekmann, Anne; Neergaard, Linda; Bui, Them; Mikatarian, Ronald; Barsamian, Hagop; Koontz, Steven
2002-01-01
Quantifying the spacecraft charging risks and corresponding hazards for the International Space Station (ISS) requires a plasma environment specification describing the natural variability of ionospheric temperature (Te) and density (Ne). Empirical ionospheric specification and forecast models such as the International Reference Ionosphere (IRI) model typically provide only estimates of long-term (seasonal) mean Te and Ne values for the low Earth orbit environment. Knowledge of the Te and Ne variability, as well as the likelihood of extreme deviations from the mean values, is required to estimate both the magnitude and frequency of occurrence of potentially hazardous spacecraft charging environments for a given ISS construction stage and flight configuration. This paper describes the statistical analysis of historical ionospheric low Earth orbit plasma measurements used to estimate Ne and Te variability in the ISS flight environment. The statistical variability analysis of Ne and Te enables calculation of the expected frequency of occurrence of any particular values of Ne and Te, especially those that correspond to potentially hazardous spacecraft charging environments. The database used in the original analysis included measurements from the AE-C, AE-D, and DE-2 satellites. Recent work has added additional satellites to the database, as well as ground-based incoherent scatter radar observations. Deviations of the data values from the IRI-estimated Ne and Te parameters for each data point provide a statistical basis for modeling the deviations of the plasma environment from the IRI model output.
Evaluation of Two Numerical Wave Models with Inlet Physical Model
2005-07-01
GHOST in inlets and near structures compared slightly better with measurements. DOI: 10.1061/(ASCE)0733-950X(2005)131:4(149). CE Database subject headings...full directional spectrum. GHOST represents wave diffraction by implementing a formulation of the Eikonal equation (Rivero et al. 1997a, b), whereas
NASA Technical Reports Server (NTRS)
Reehorst, Andrew; Potapczuk, Mark; Ratvasky, Thomas; Laflin, Brenda Gile
1997-01-01
The purpose of this report is to release the data from the NASA Langley/Lewis 14 by 22 foot wind tunnel test that examined icing effects on a 1/8 scale twin-engine short-haul jet transport model. Presented in this document are summary data from the major configurations tested. The entire test database in addition to ice shape and model measurements is available as a data supplement in CD-ROM form. Data measured and presented are: wing pressure distributions, model force and moment, and wing surface flow visualization.
Gater, Adam; Kitchen, Helen; Heron, Louise; Pollard, Catherine; Håkan-Bloch, Jonas; Højbjerre, Lise; Hansen, Brian Bekker; Strandberg-Larsen, Martin
2015-01-01
The primary objective of this review is to develop a conceptual model for Crohn's disease (CD) outlining the disease burden for patients, healthcare systems and wider society, as reported in the scientific literature. A search was conducted using MEDLINE, PsycINFO, EconLit, Health Economic Evaluation Database and Centre for Reviews and Dissemination databases. Patient-reported outcome (PRO) measures widely used in CD were reviewed according to the US FDA PRO Guidance for Industry. The resulting conceptual model highlights the characterization of CD by gastrointestinal disturbances, extra-intestinal and systemic symptoms. These symptoms impact physical functioning, ability to complete daily activities, emotional wellbeing, social functioning, sexual functioning and ability to work. Gaps in conceptual coverage and evidence of reliability and validity for some PRO measures were noted. Review findings also highlight the substantial direct and indirect costs associated with CD. Evidence from the literature confirms the substantial burden of CD to patients and wider society; however, future research is still needed to further understand burden from the perspective of patients and to accurately understand the economic burden of disease. Challenges with existing PRO measures also suggest the need for future research to refine or develop new measures.
The hormesis database: the occurrence of hormetic dose responses in the toxicological literature.
Calabrese, Edward J; Blain, Robyn B
2011-10-01
In 2005 we published an assessment of dose responses that satisfied a priori evaluative criteria for inclusion within the relational retrieval hormesis database (Calabrese and Blain, 2005). The database included information on study characteristics (e.g., biological model, gender, age and other relevant aspects, number of doses, dose distribution/range, quantitative features of the dose response, temporal features/repeat measures, and physical/chemical properties of the agents). The 2005 article covered information for about 5000 dose responses; the present article has been expanded to cover approximately 9000 dose responses. This assessment extends and strengthens the conclusion of the 2005 paper that the hormesis concept is broadly generalizable, being independent of biological model, endpoint measured and chemical class/physical agent. It also confirmed the definable quantitative features of hormetic dose responses in which the strong majority of dose responses display maximum stimulation less than twice that of the control group and a stimulatory width that is within approximately 10-20-fold of the estimated toxicological or pharmacological threshold. The remarkable consistency of the quantitative features of the hormetic dose response suggests that hormesis may provide an estimate of biological plasticity that is broadly generalized across plant, microbial and animal (invertebrate and vertebrate) models. Copyright © 2011 Elsevier Inc. All rights reserved.
Models, Tools, and Databases for Land and Waste Management Research
These publicly available resources can be used for such tasks as simulating biodegradation or remediation of contaminants such as hydrocarbons, measuring sediment accumulation at Superfund sites, or assessing toxicity and risk.
2011-01-01
used in efforts to develop QSAR models. Measurement of Repellent Efficacy. Screening for Repellency of Compounds with Unknown Toxicology. In screening...CPT) were used to develop Quantitative Structure Activity Relationship (QSAR) models to predict repellency. Successful prediction of novel...acylpiperidine QSAR models employed 4 descriptors to describe the relationship between structure and repellent duration. The ANN model of the carboxamides did not
Evaluation of a vortex-based subgrid stress model using DNS databases
NASA Technical Reports Server (NTRS)
Misra, Ashish; Lund, Thomas S.
1996-01-01
The performance of a SubGrid Stress (SGS) model for Large-Eddy Simulation (LES) developed by Misra & Pullin (1996) is studied for forced and decaying isotropic turbulence on a 32^3 grid. The physical viability of the model assumptions is tested using DNS databases. The results from LES of forced turbulence at Taylor Reynolds number R_lambda ≈ 90 are compared with filtered DNS fields. Probability density functions (pdfs) of the subgrid energy transfer, total dissipation, and the stretch of the subgrid vorticity by the resolved velocity-gradient tensor show reasonable agreement with the DNS data. The model is also tested in LES of decaying isotropic turbulence, where it correctly predicts the decay rate and the energy spectra measured by Comte-Bellot & Corrsin (1971).
NASA Astrophysics Data System (ADS)
Sherwood, Owen A.; Schwietzke, Stefan; Arling, Victoria A.; Etiope, Giuseppe
2017-08-01
The concentration of atmospheric methane (CH4) has more than doubled over the industrial era. To help constrain global and regional CH4 budgets, inverse (top-down) models incorporate data on the concentration and stable carbon (δ13C) and hydrogen (δ2H) isotopic ratios of atmospheric CH4. These models depend on accurate δ13C and δ2H end-member source signatures for each of the main emissions categories. Compared with meticulous measurement and calibration of isotopic CH4 in the atmosphere, there has been relatively less effort to characterize globally representative isotopic source signatures, particularly for fossil fuel sources. Most global CH4 budget models have so far relied on outdated source signature values derived from globally nonrepresentative data. To correct this deficiency, we present a comprehensive, globally representative end-member database of the δ13C and δ2H of CH4 from fossil fuel (conventional natural gas, shale gas, and coal), modern microbial (wetlands, rice paddies, ruminants, termites, and landfills and/or waste) and biomass burning sources. Gas molecular compositional data for fossil fuel categories are also included with the database. The database comprises 10 706 samples (8734 fossil fuel, 1972 non-fossil) from 190 published references. Mean (unweighted) δ13C signatures for fossil fuel CH4 are significantly lighter than values commonly used in CH4 budget models, thus highlighting potential underestimation of fossil fuel CH4 emissions in previous CH4 budget models. This living database will be updated every 2-3 years to provide the atmospheric modeling community with the most complete CH4 source signature data possible. Database digital object identifier (DOI): https://doi.org/10.15138/G3201T.
IUEAGN: A database of ultraviolet spectra of active galactic nuclei
NASA Technical Reports Server (NTRS)
Pike, G.; Edelson, R.; Shull, J. M.; Saken, J.
1993-01-01
In 13 years of operation, IUE has gathered approximately 5000 spectra of almost 600 Active Galactic Nuclei (AGN). In order to undertake AGN studies which require large amounts of data, we are consistently reducing this entire archive and creating a homogeneous, easy-to-use database. First, the spectra are extracted using the Optimal extraction algorithm. Continuum fluxes are then measured across predefined bands, and line fluxes are measured with a multi-component fit. These results, along with source information such as redshifts and positions, are placed in the IUEAGN relational database. Analysis algorithms, statistical tests, and plotting packages run within the structure, and this flexible database can accommodate future data when they are released. This archival approach has already been used to survey line and continuum variability in six bright Seyfert 1s and rapid continuum variability in 14 blazars. Among the results that could only be obtained using a large archival study is evidence that blazars show a positive correlation between degree of variability and apparent luminosity, while Seyfert 1s show an anti-correlation. This suggests that beaming dominates the ultraviolet properties for blazars, while thermal emission from an accretion disk dominates for Seyfert 1s. Our future plans include a survey of line ratios in Seyfert 1s, to be fitted with photoionization models to test the models and determine the range of temperatures, densities and ionization parameters. We will also include data from IRAS, Einstein, EXOSAT, and ground-based telescopes to measure multi-wavelength correlations and broadband spectral energy distributions.
Using the gini coefficient to measure the chemical diversity of small-molecule libraries.
Weidlich, Iwona E; Filippov, Igor V
2016-08-15
Modern databases of small organic molecules contain tens of millions of structures. The size of theoretically available chemistry is even larger. However, despite the large amount of chemical information, the "big data" moment for chemistry has not yet provided the corresponding payoff of cheaper computer-predicted medicine or robust machine-learning models for the determination of efficacy and toxicity. Here, we present a study of the diversity of chemical datasets using a measure that is commonly used in socioeconomic studies. We demonstrate the use of this diversity measure on several datasets that were constructed to contain various congeneric subsets of molecules as well as randomly selected molecules. We also apply our method to a number of well-known databases that are frequently used for structure-activity relationship modeling. Our results show the poor diversity of the common sources of potential lead compounds compared to actual known drugs. © 2016 Wiley Periodicals, Inc.
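For context, the Gini coefficient borrowed from socioeconomic studies measures how concentrated a non-negative distribution is (0 = perfectly even, approaching 1 = fully concentrated); a minimal sketch on a generic sample (the specific chemical quantity over which the paper computes it is described there, not here):

```python
import numpy as np

def gini(x):
    """Gini coefficient of a non-negative sample (e.g., compounds per scaffold)."""
    x = np.sort(np.asarray(x, float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1.0 - 2.0 * np.sum(cum) / cum[-1]) / n

print(gini([1, 1, 1, 1]))     # 0.0  -> evenly spread, i.e. diverse
print(gini([0, 0, 0, 100]))   # 0.75 -> concentrated, i.e. poorly diverse
```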
Performance model for grid-connected photovoltaic inverters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boyson, William Earl; Galbraith, Gary M.; King, David L.
2007-09-01
This document provides an empirically based performance model for grid-connected photovoltaic inverters, used for system performance (energy) modeling and for continuous monitoring of inverter performance during system operation. The versatility and accuracy of the model were validated for a variety of both residential and commercial size inverters. Default parameters for the model can be obtained from manufacturers' specification sheets, and the accuracy of the model can be further refined using either well-instrumented field measurements in operational systems or detailed measurements from a recognized testing laboratory. An initial database of inverter performance parameters was developed based on measurements conducted at Sandia National Laboratories and at laboratories supporting the solar programs of the California Energy Commission.
EDDIX--a database of ionisation double differential cross sections.
MacGibbon, J H; Emerson, S; Liamsuwan, T; Nikjoo, H
2011-02-01
Monte Carlo track structure simulation is a method of choice in biophysical modelling and calculations. To precisely model 3D and 4D tracks, the cross section for ionisation by an incoming ion, double differential in the outgoing electron energy and angle, is required. However, the double differential cross section cannot be theoretically modelled over the full range of parameters. To address this issue, a database of all available experimental data has been constructed. Currently, the database of Experimental Double Differential Ionisation Cross sections (EDDIX) contains over 1200 digitised experimentally measured datasets from the 1960s to the present, covering all available ion species (hydrogen to uranium) and all available target species. Double differential cross sections are also presented with the aid of an eight-parameter function fitted to the cross sections. The parameters include projectile species and charge, target nuclear charge and atomic mass, projectile atomic mass and energy, and electron energy and deflection angle. It is planned to freely distribute EDDIX and make it available to the radiation research community for use in the analytical and numerical modelling of track structure.
Mathematical modeling improves EC50 estimations from classical dose-response curves.
Nyman, Elin; Lindgren, Isa; Lövfors, William; Lundengård, Karin; Cervin, Ida; Sjöström, Theresia Arbring; Altimiras, Jordi; Cedersund, Gunnar
2015-03-01
The β-adrenergic response is impaired in failing hearts. When studying β-adrenergic function in vitro, the half-maximal effective concentration (EC50) is an important measure of ligand response. We previously measured the in vitro contraction force response of chicken heart tissue to increasing concentrations of adrenaline, and observed a decreasing response at high concentrations. The classical interpretation of such data is to assume a maximal response before the decrease, and to fit a sigmoid curve to the remaining data to determine EC50. Instead, we have applied a mathematical modeling approach to interpret the full dose-response curve in a new way. The developed model predicts a non-steady state caused by a short resting time between increased concentrations of agonist, which affects the dose-response characterization. Therefore, an improved estimate of EC50 may be calculated using steady-state simulations of the model. The model-based estimation of EC50 is further refined using additional time-resolved data to decrease the uncertainty of the prediction. The resulting model-based EC50 (180-525 nM) is higher than the classically interpreted EC50 (46-191 nM). Mathematical modeling thus makes it possible to re-interpret previously obtained datasets, and to make accurate estimates of EC50 even when steady-state measurements are not experimentally feasible. The mathematical models described here have been submitted to the JWS Online Cellular Systems Modelling Database, and may be accessed at http://jjj.bio.vu.nl/database/nyman. © 2015 FEBS.
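For contrast with the paper's model-based approach, the "classical interpretation" mentioned above amounts to fitting a sigmoid to the dose-response data; a minimal sketch using a four-parameter Hill curve (all data values are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, n):
    """Four-parameter logistic (Hill) dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** n)

# Invented contraction-force responses (normalized) vs adrenaline (nM),
# truncated before the high-concentration decrease, as in the classical approach.
conc = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0, 1000.0])
resp = np.array([0.05, 0.10, 0.30, 0.55, 0.80, 0.95, 1.00])

popt, _ = curve_fit(hill, conc, resp, p0=[0.0, 1.0, 50.0, 1.0])
print(f"classical sigmoid-fit EC50 ~ {popt[2]:.0f} nM")
```

The paper's point is that this classical estimate can be biased by non-steady-state effects, which is why the authors re-estimate EC50 from steady-state simulations of their model.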
Delye, Hans; Clijmans, Tim; Mommaerts, Maurice Yves; Vander Sloten, Jos; Goffin, Jan
2015-12-01
Finite element models (FEMs) of the head are used to study the biomechanics of traumatic brain injury and depend heavily on the use of accurate material properties and head geometry. Any FEM aimed at investigating traumatic head injury in children should therefore use age-specific dimensions of the head, as well as age-specific material properties of the different tissues. In this study, the authors built a database of age-corrected skull geometry, skull thickness, and bone density of the developing skull to aid in the development of an age-specific FEM of a child's head. Such a database, containing age-corrected normative skull geometry data, can also be used for preoperative surgical planning and postoperative long-term follow-up of craniosynostosis surgery results. Computed tomography data were processed for 187 patients (age range 0-20 years old). A 3D surface model was calculated from segmented skull surfaces. Skull models, reference points, and sutures were processed into a MATLAB-supported database. This process included automatic calculation of 2D measurements as well as 3D measurements: length of the coronal suture, length of the lambdoid suture, and the 3D anterior-posterior length, defined as the sum of the metopic and sagittal suture. Skull thickness and skull bone density calculations were included. Cephalic length, cephalic width, intercoronal distance, lateral orbital distance, intertemporal distance, and 3D measurements were obtained, confirming the well-established general growth pattern of the skull. Skull thickness increases rapidly in the first year of life, slowing down during the second year of life, while skull density increases with a fast but steady pace during the first 3 years of life. Both skull thickness and density continue to increase up to adulthood. This is the first report of normative data on 2D and 3D measurements, skull bone thickness, and skull bone density for children aged 0-20 years. This database can help build an age-specific FEM of a child's head. It can also help to tailor preoperative virtual planning in craniosynostosis surgery toward patient-specific normative target values and to perform objective long-term follow-up in craniosynostosis surgery.
Measuring Research Data Uncertainty in the 2010 NRC Assessment of Geography Graduate Education
ERIC Educational Resources Information Center
Shortridge, Ashton; Goldsberry, Kirk; Weessies, Kathleen
2011-01-01
This article characterizes and measures errors in the 2010 National Research Council (NRC) assessment of research-doctorate programs in geography. This article provides a conceptual model for data-based sources of uncertainty and reports on a quantitative assessment of NRC research data uncertainty for a particular geography doctoral program.…
Collisional excitation of molecules in dense interstellar clouds
NASA Technical Reports Server (NTRS)
Green, S.
1985-01-01
State transitions which permit the identification of the molecular species in dense interstellar clouds are reviewed, along with the techniques used to calculate the transition energies, the database on known molecular transitions and the accuracy of the values. The transition energies cannot be measured directly and therefore must be modeled analytically. Scattering theory is used to determine the intermolecular forces on the basis of quantum mechanics. The nuclear motions can also be modeled with classical mechanics. Sample rate constants are provided for molecular systems known to inhabit dense interstellar clouds. The values serve as a database for interpreting microwave and RF astrophysical data on the transitions undergone by interstellar molecules.
Advancing the large-scale CCS database for metabolomics and lipidomics at the machine-learning era.
Zhou, Zhiwei; Tu, Jia; Zhu, Zheng-Jiang
2018-02-01
Metabolomics and lipidomics aim to comprehensively measure the dynamic changes of all metabolites and lipids that are present in biological systems. The use of ion mobility-mass spectrometry (IM-MS) for metabolomics and lipidomics has facilitated the separation and the identification of metabolites and lipids in complex biological samples. The collision cross-section (CCS) value derived from IM-MS is a valuable physiochemical property for the unambiguous identification of metabolites and lipids. However, CCS values obtained from experimental measurement and computational modeling remain of limited availability, which significantly restricts the application of IM-MS. In this review, we will discuss the recently developed machine-learning based prediction approach, which can efficiently generate precise CCS databases on a large scale. We will also highlight the applications of CCS databases to support metabolomics and lipidomics. Copyright © 2017 Elsevier Ltd. All rights reserved.
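A minimal sketch of the train-then-predict pattern behind machine-learning CCS generation, using toy descriptors and invented CCS values (published models use far richer molecular representations):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Invented training set: simple descriptors (m/z, charge, heavy-atom count)
# against made-up CCS values in square angstroms, for illustration only.
X = np.array([[180.1, 1, 12],
              [255.2, 1, 18],
              [760.6, 1, 54],
              [104.1, 1, 7]], dtype=float)
y = np.array([139.9, 158.3, 286.0, 125.4])

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(model.predict([[300.2, 1, 21]]))   # predicted CCS for a new metabolite
```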
A Database as a Service for the Healthcare System to Store Physiological Signal Data.
Chang, Hsien-Tsung; Lin, Tsai-Huei
2016-01-01
Wearable devices that measure physiological signals to help develop self-health management habits have become increasingly popular in recent years. These records are conducive to follow-up health and medical care. In this study, based on five characteristics of the observed physiological signal records: 1) a large number of users, 2) a large amount of data, 3) low information variability, 4) data privacy authorization, and 5) data access by designated users, we wish to resolve physiological signal record-relevant issues by utilizing the advantages of the Database as a Service (DaaS) model. Storing a large amount of data using file patterns can reduce database load, allowing users to access data efficiently; the privacy control settings allow users to store data securely. The results of the experiment show that the proposed system has better database access performance than a traditional relational database, with a small difference in database volume, thus proving that the proposed system can improve data storage performance.
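A minimal sketch of the "file pattern" idea described above: bulky waveform samples go to flat files while the relational layer keeps only metadata and an access list. The schema and field names are ours, not the paper's:

```python
import json
import pathlib
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE signal_meta (
    user_id TEXT, kind TEXT, recorded_at TEXT, path TEXT, authorized TEXT)""")

def store_signal(user_id, kind, recorded_at, samples, authorized_users):
    """Write samples to a flat file; record only metadata in the database."""
    path = pathlib.Path(f"{user_id}_{kind}.json")
    path.write_text(json.dumps(samples))
    db.execute("INSERT INTO signal_meta VALUES (?,?,?,?,?)",
               (user_id, kind, recorded_at, str(path), ",".join(authorized_users)))

store_signal("u42", "heart_rate", "2016-01-01T00:00", [61, 63, 62], ["u42", "dr7"])
print(db.execute("SELECT * FROM signal_meta").fetchall())
```

Keeping the high-volume samples out of the relational tables is what reduces the database load reported in the experiments.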
NREL: Renewable Resource Data Center - Solar Resource Models and Tools
Solar Resource Models and Tools. The Renewable Resource Data Center (RReDC) features solar resource models and tools, including hourly average measured global horizontal data. NSRDB Data Viewer: visualize, explore, and download solar resource data from the National Solar Radiation Database. PVWatts® Calculator.
ERIC Educational Resources Information Center
McConky, Katie Theresa
2013-01-01
This work covers topics in event coreference and event classification from spoken conversation. Event coreference is the process of identifying descriptions of the same event across sentences, documents, or structured databases. Existing event coreference work focuses on sentence similarity models or feature-based similarity models requiring slot…
Use of MERRA-2 in the National Solar Radiation Database and Beyond
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, Manajit; Lopez, Anthony; Habte, Aron
The National Solar Radiation Database (NSRDB) is a flagship product of NREL that provides solar radiation and ancillary meteorological information through a GIS-based portal. The data are provided at a 4 km x 4 km spatial and 30-minute temporal resolution covering the period 1998-2015. The gridded data distributed by the NSRDB are derived from satellite measurements using the Physical Solar Model (PSM), which follows a two-stage approach: first, cloud properties are retrieved from measurements made by the GOES series of satellites, and then that information is used in a radiative transfer model to estimate solar radiation at the surface. In addition to the satellite data, the model requires ancillary meteorological information that is provided mainly by output from NASA's Modern-Era Retrospective analysis for Research and Applications (MERRA-2) model. This presentation provides insight into how the NSRDB is developed using the PSM and how the various sources of data, including the MERRA-2 data, are used during the process.
Takashima, S
2001-04-05
The large dipole moments of globular proteins have long been known from detailed studies using dielectric relaxation and electro-optical methods. The search for the origin of these dipole moments, however, must be based on detailed knowledge of protein structure at atomic resolution. At present, we have two sources of information on the structure of protein molecules: (1) x-ray databases obtained in the crystalline state; and (2) NMR databases obtained in the solution state. While x-ray databases consist of only one model, NMR databases, because of the fluctuation of the protein folding in solution, consist of a number of models, thus enabling the computation of the dipole moment to be repeated for all these models. The aim of this work, using these databases, is a detailed investigation of the interdependence between the structure and dipole moment of protein molecules. The dipole moment of protein molecules has roughly two components: one dipole moment is due to surface charges, and the other, the core dipole moment, is due to polar groups such as N-H and C=O bonds. The computation of the surface-charge dipole moment consists of two steps: (A) calculation of the pK shifts of charged groups due to electrostatic interactions and (B) calculation of the dipole moment using the pKs corrected for electrostatic shifts. The dipole moments of several proteins were computed using both NMR and x-ray databases. The dipole moments from these two sets of calculations are, with a few exceptions, in good agreement with one another and also with measured dipole moments.
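For orientation, step (B) reduces to summing charge-weighted position vectors; a minimal sketch taken about the centre of charge (the per-group charges, corrected for the pK shifts of step (A), are assumed as given inputs):

```python
import numpy as np

def dipole_moment_debye(charges, coords_angstrom):
    """|mu| in debye for point charges at given coordinates, about their
    geometric centre; for a non-neutral system the value depends on the
    chosen origin. Conversion: 1 e*angstrom = 4.803 debye."""
    q = np.asarray(charges, float)
    r = np.asarray(coords_angstrom, float)
    mu = (q[:, None] * (r - r.mean(axis=0))).sum(axis=0)   # e * angstrom
    return 4.803 * np.linalg.norm(mu)

# Two opposite unit charges 3 angstroms apart: |mu| = 3 e*A ~ 14.4 D.
print(dipole_moment_debye([+1, -1], [[0, 0, 0], [3, 0, 0]]))
```

Repeating this over every model in an NMR ensemble gives the spread of dipole moments discussed above.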
NASA Astrophysics Data System (ADS)
Sung, Keeyoon; Yu, Shanshan; Pearson, John; Pirali, Olivier; Kwabia Tchana, F.; Manceron, Laurent
2016-06-01
We have analyzed multiple spectra of a high-purity (99.5%) normal ammonia sample recorded at room temperature using the FT-IR spectrometer at the AILES beamline of Synchrotron SOLEIL, France. More than 2830 line positions and intensities were measured for the inversion-rotation and rovibrational transitions in the 50-660 cm-1 region. Quantum assignments were made for 2047 transitions from eight bands, including four inversion-rotation bands (gs(a-s), ν2(a-s), 2ν2(a-s), and ν4(a-s)) and four ro-vibrational bands (ν2 - gs, 2ν2 - gs, ν4 - ν2, and 2ν2 - ν4), covering more than 300 lines of ΔK = 3 forbidden transitions. Of the eight bands, we note that 2ν2 - ν4 is not listed in the HITRAN 2012 database. The measured line positions for the assigned transitions are in excellent agreement (typically better than 0.001 cm-1) with the predictions from the empirical Hamiltonian model [S. Yu, J.C. Pearson, B.J. Drouin, et al. (2010)] over a wide range of J and K for all eight bands. The comparison with the HITRAN 2012 database is also satisfactory, although systematic offsets are seen for transitions with high J and K and for those from weak bands. However, differences of 20% or so are seen between the measured and predicted line intensities for allowed transitions, depending on the band. We have also noticed that most of the intensity outliers in the Hamiltonian model predictions belong to transitions from the gs(a-s) band. We present the final results of the FT-IR measurements of line positions and intensities, and their comparisons to the model predictions and the HITRAN 2012 database. Research described in this paper was performed at the Jet Propulsion Laboratory, California Institute of Technology, under contracts and cooperative agreements with the National Aeronautics and Space Administration.
Rapid Model Fabrication and Testing for Aerospace Vehicles
NASA Technical Reports Server (NTRS)
Buck, Gregory M.
2000-01-01
Advanced methods for rapid fabrication and instrumentation of hypersonic wind tunnel models are being developed and evaluated at NASA Langley Research Center. Rapid aeroheating model fabrication and measurement techniques using investment casting of ceramic test models and thermographic phosphors are reviewed. More accurate model casting techniques for fabrication of benchmark metal and ceramic test models are being developed using a combination of rapid prototype patterns and investment casting. White light optical scanning is used for coordinate measurements to evaluate the fabrication process and verify model accuracy to +/- 0.002 inches. Higher-temperature (<210C) luminescent coatings are also being developed for simultaneous pressure and temperature mapping, providing global pressure as well as global aeroheating measurements. Together these techniques will provide a more rapid and complete experimental aerodynamic and aerothermodynamic database for future aerospace vehicles.
Cavitation, Flow Structure and Turbulence in the Tip Region of a Rotor Blade
NASA Technical Reports Server (NTRS)
Wu, H.; Miorini, R.; Soranna, F.; Katz, J.; Michael, T.; Jessup, S.
2010-01-01
Objectives: Measure the flow structure and turbulence within a Naval, axial waterjet pump. Create a database for benchmarking and validation of parallel computational efforts. Address flow and turbulence modeling issues that are unique to this complex environment. Measure and model flow phenomena affecting cavitation within the pump and its effect on pump performance. This presentation focuses on cavitation phenomena and associated flow structure in the tip region of a rotor blade.
Global Precipitation Measurement: Methods, Datasets and Applications
NASA Technical Reports Server (NTRS)
Tapiador, Francisco; Turk, Francis J.; Petersen, Walt; Hou, Arthur Y.; Garcia-Ortega, Eduardo; Machado, Luiz, A. T.; Angelis, Carlos F.; Salio, Paola; Kidd, Chris; Huffman, George J.;
2011-01-01
This paper reviews the many aspects of precipitation measurement that are relevant to providing an accurate global assessment of this important environmental parameter. Methods discussed include ground data, satellite estimates and numerical models. First, the methods for measuring, estimating, and modeling precipitation are discussed. Then, the most relevant datasets gathering precipitation information from those three sources are presented. The third part of the paper illustrates a number of the many applications of those measurements and databases. The aim of the paper is to organize the many links and feedbacks between precipitation measurement, estimation and modeling, indicating the uncertainties and limitations of each technique in order to identify areas requiring further attention, and to show the limits within which datasets can be used.
Toward An Unstructured Mesh Database
NASA Astrophysics Data System (ADS)
Rezaei Mahdiraji, Alireza; Baumann, Peter Peter
2014-05-01
Unstructured meshes are used in several application domains, such as earth sciences (e.g., seismology), medicine, oceanography, climate modeling, and GIS, as approximate representations of physical objects. Meshes subdivide a domain into smaller geometric elements (called cells) which are glued together by incidence relationships. The subdivision of a domain allows computational manipulation of complicated physical structures. For instance, seismologists model earthquakes using elastic wave propagation solvers on hexahedral meshes. A hexahedral mesh contains several hundred million grid points and millions of hexahedral cells, and each vertex node stores a multitude of data fields. To run simulations on such meshes, one needs to iterate over all the cells, iterate over the cells incident to a given cell, retrieve coordinates of cells, assign data values to cells, etc. Although meshes are used in many application domains, to the best of our knowledge there is no database vendor that supports unstructured mesh features. Currently, the main tools for querying and manipulating unstructured meshes are mesh libraries, e.g., CGAL and GRAL. Mesh libraries are dedicated libraries which include mesh algorithms and can be run on mesh representations. The libraries do not scale with dataset size, do not have a declarative query language, and need deep C++ knowledge for query implementations. Furthermore, due to the high coupling between the implementations and the input file structure, the implementations are less reusable and costly to maintain. A dedicated mesh database offers the following advantages: 1) declarative querying, 2) ease of maintenance, 3) hiding the mesh storage structure from applications, and 4) transparent query optimization. To design a mesh database, the first challenge is to define a suitable generic data model for unstructured meshes. We proposed the ImG-Complexes data model as a generic topological mesh data model which extends the incidence graph model to multi-incidence relationships. We instrument the ImG model with sets of optional and application-specific constraints which can be used to check the validity of meshes for a specific class of objects such as manifold, pseudo-manifold, and simplicial manifold. We conducted experiments to measure the performance of the graph database solution in processing mesh queries and compared it with the GrAL mesh library and the PostgreSQL database on synthetic and real mesh datasets. The experiments show that each system performs well on specific types of mesh queries, e.g., graph databases perform well on global path-intensive queries. In the future, we will investigate database operations for the ImG model and design a mesh query language.
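A toy incidence structure in the spirit described above (not the authors' ImG implementation): cells reference their vertices, and a reverse map supports the incidence queries that mesh simulations iterate over:

```python
from collections import defaultdict

class Mesh:
    """Minimal incidence-based mesh: vertices, cells, and co-face lookups."""
    def __init__(self):
        self.vertices = {}               # vertex id -> (x, y, z)
        self.cells = {}                  # cell id -> tuple of vertex ids
        self.cofaces = defaultdict(set)  # vertex id -> cells incident to it

    def add_vertex(self, vid, xyz):
        self.vertices[vid] = xyz

    def add_cell(self, cid, vertex_ids):
        self.cells[cid] = tuple(vertex_ids)
        for v in vertex_ids:
            self.cofaces[v].add(cid)

    def neighbours(self, cid):
        """Cells sharing at least one vertex with cell cid."""
        return {c for v in self.cells[cid] for c in self.cofaces[v]} - {cid}

m = Mesh()
for vid, xyz in enumerate([(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]):
    m.add_vertex(vid, xyz)
m.add_cell("t0", (0, 1, 2))
m.add_cell("t1", (1, 2, 3))
print(m.neighbours("t0"))   # {'t1'}
```

A mesh database would expose exactly these traversals declaratively instead of through hand-written C++ loops.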
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoon Lee, Sang; Hong, Tianzhen; Sawaya, Geof
The paper presents a method and process to establish a database of energy efficiency performance (DEEP) to enable quick and accurate assessment of energy retrofits of commercial buildings. DEEP was compiled from the results of about 35 million EnergyPlus simulations. DEEP provides energy savings for screening and evaluation of retrofit measures targeting small and medium-sized office and retail buildings in California. The prototype building models were developed for a comprehensive assessment of building energy performance based on the DOE commercial reference buildings and the California DEER prototype buildings. The prototype buildings represent seven building types across six vintages of construction and 16 California climate zones. DEEP uses these prototypes to evaluate the energy performance of about 100 energy conservation measures covering envelope, lighting, heating, ventilation, air-conditioning, plug loads, and domestic hot water. DEEP consists of the energy simulation results for individual retrofit measures as well as packages of measures, to consider interactive effects between multiple measures. The large-scale EnergyPlus simulations were conducted on the supercomputers at the National Energy Research Scientific Computing Center of Lawrence Berkeley National Laboratory. The pre-simulation database is part of an ongoing project to develop a web-based retrofit toolkit for small and medium-sized commercial buildings in California, which provides real-time energy retrofit feedback by querying DEEP with recommended measures, estimated energy savings, and financial payback period based on users' decision criteria of maximizing energy savings, energy cost savings, carbon reduction, or payback of investment. The pre-simulated database and associated comprehensive measure analysis enhance the ability to perform assessments of retrofits that reduce energy use in small and medium buildings, whose owners typically do not have the resources to conduct a costly building energy audit. DEEP will be migrated into DEnCity (DOE's Energy City), which integrates large-scale energy data into a multi-purpose, open, and dynamic database leveraging diverse sources of existing simulation data.
A solar radiation database for Chile.
Molina, Alejandra; Falvey, Mark; Rondanelli, Roberto
2017-11-01
Chile hosts some of the sunniest places on earth, which has led to a growing solar energy industry in recent years. However, the lack of high resolution measurements of solar irradiance becomes a critical obstacle for both financing and design of solar installations. Besides the Atacama Desert, Chile displays a large array of "solar climates" due to large latitude and altitude variations, and so provides a useful testbed for the development of solar irradiance maps. Here a new public database for surface solar irradiance over Chile is presented. This database includes hourly irradiance from 2004 to 2016 at 90 m horizontal resolution over continental Chile. Our results are based on global reanalysis data to force a radiative transfer model for clear sky solar irradiance and an empirical model based on geostationary satellite data for cloudy conditions. The results have been validated using 140 surface solar irradiance stations throughout the country. Model mean percentage error in hourly time series of global horizontal irradiance is only 0.73%, considering both clear and cloudy days. The simplicity and accuracy of the model over a wide range of solar conditions provides confidence that the model can be easily generalized to other regions of the world.
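A minimal sketch of the decomposition described above, where a clear-sky irradiance is scaled down under clouds; the linear attenuation used here is a crude placeholder for the paper's satellite-based empirical model:

```python
import numpy as np

def ghi_estimate(ghi_clear, cloud_index):
    """Global horizontal irradiance as clear-sky GHI times a clear-sky
    index; cloud_index in [0, 1] would come from geostationary imagery."""
    kc = np.clip(1.0 - np.asarray(cloud_index), 0.05, 1.0)
    return np.asarray(ghi_clear) * kc

# Invented values: a clear hour vs a mostly cloudy hour (W/m2).
print(ghi_estimate([900.0, 900.0], [0.0, 0.6]))
```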
NASA Astrophysics Data System (ADS)
Mangosing, D. C.; Chen, G.; Kusterer, J.; Rinsland, P.; Perez, J.; Sorlie, S.; Parker, L.
2011-12-01
One of the objectives of the NASA Langley Research Center's MEaSURES project, "Creating a Unified Airborne Database for Model Assessment", is the development of airborne Earth System Data Records (ESDR) for the regional and global model assessment and validation activities performed by the tropospheric chemistry and climate modeling communities. The ongoing development of ADAM, a web site designed to access a unified, standardized and relational ESDR database, meets this objective. The ESDR database is derived from publicly available data sets, from NASA airborne field studies to airborne and in-situ studies sponsored by NOAA, NSF, and numerous international partners. The ADAM web development activities provide an opportunity to highlight a growing synergy between the Airborne Science Data for Atmospheric Composition (ASD-AC) group at NASA Langley and NASA Langley's Atmospheric Sciences Data Center (ASDC). These teams will collaborate on the ADAM web application by leveraging the state-of-the-art service and message-oriented data distribution architecture developed and implemented by the ASDC and using a web-based tool provided by the ASD-AC group whose user interface accommodates the nuanced perspective of science users in the atmospheric chemistry and composition and climate modeling communities.
NASA Astrophysics Data System (ADS)
Hidayat, Taufiq; Shishin, Denis; Decterov, Sergei A.; Hayes, Peter C.; Jak, Evgueni
2017-01-01
Uncertainty in the metal price and competition between producers mean that the daily operation of a smelter needs to target high recovery of valuable elements at low operating cost. Options for the improvement of plant operation can be examined, and decision making can be informed, based on accurate information from laboratory experimentation coupled with predictions using advanced thermodynamic models. Integrated high-temperature experimental and thermodynamic modelling research on the phase equilibria and thermodynamics of copper-containing systems has been undertaken at the Pyrometallurgy Innovation Centre (PYROSEARCH). The experimental phase equilibria studies involve high-temperature equilibration, rapid quenching, and direct measurement of phase compositions using electron probe X-ray microanalysis (EPMA). The thermodynamic modelling deals with the development of an accurate thermodynamic database built through critical evaluation of experimental data, selection of solution models, and optimization of model parameters. The database covers the Al-Ca-Cu-Fe-Mg-O-S-Si chemical system. The gas, slag, matte, liquid and solid metal phases, spinel solid solution, as well as numerous solid oxide and sulphide phases are included. The database works within the FactSage software environment. Examples of phase equilibria data and thermodynamic models of selected systems, as well as possible implementations of the research outcomes in selected copper-making processes, are presented.
RATE Exposure Assessment Modules - EXA 408, EXA 409
EXA 408 – Interpreting Biomonitoring Data and Using Pharmacokinetic Modeling in Exposure Assessment Widespread acceptance and use of the CDC's National Health and Nutritional Examination Survey (NHANES) database, which, among other things, reports measured concentrations of...
Minnesota's Tech Prep Outcome Evaluation Model.
ERIC Educational Resources Information Center
Brown, James M.; Pucel, David; Twohig, Cathy; Semler, Steve; Kuchinke, K. Peter
1998-01-01
Describes the Minnesota Tech Prep Consortia Evaluation System, which collects outcomes data on enrollment, retention, related job placement, higher education, dropouts, and diplomas/degrees awarded. Explains outcome measures, database development, data collection and analysis methods, and remaining challenges. (SK)
Element Distribution in Silicon Refining: Thermodynamic Model and Industrial Measurements
NASA Astrophysics Data System (ADS)
Næss, Mari K.; Kero, Ida; Tranell, Gabriella; Tang, Kai; Tveit, Halvard
2014-11-01
To establish an overview of impurity elemental distribution among silicon, slag, and gas/fume in the refining process of metallurgical grade silicon (MG-Si), an industrial measurement campaign was performed at the Elkem Salten MG-Si plant in Norway. Samples of in- and outgoing mass streams, i.e., tapped Si, flux and cooling materials, refined Si, slag, and fume, were analyzed by high-resolution inductively coupled plasma mass spectrometry (HR-ICP-MS), with respect to 62 elements. The elemental distributions were calculated and the experimental data compared with equilibrium estimations based on commercial and proprietary, published databases and carried out using the ChemSheet software. The results are discussed in terms of boiling temperatures, vapor pressures, redox potentials, and activities of the elements. These model calculations indicate a need for expanded databases with more and reliable thermodynamic data for trace elements in general and fume constituents in particular.
NASA Astrophysics Data System (ADS)
Mínguez, Román; Montero, José-María; Fernández-Avilés, Gema
2013-04-01
Much work has been done in the context of hedonic price theory to estimate the impact of air quality on housing prices. Research has employed objective measures of air quality but, even in the best of cases, only weakly confirms the hedonic theory: the implicit price function relating housing prices to air pollution should, ceteris paribus, be negatively sloped. This paper compares the performance of a spatial Durbin model when using both objective and subjective measures of pollution. On the one hand, we design an Air Pollution Indicator based on measured pollution as the objective measure of pollution. On the other hand, the subjective measure of pollution employed to characterize neighborhoods is the percentage of residents who declare that the neighborhood has serious pollution problems, referred to as residents' perception of pollution. For comparison purposes, the empirical part of this research focuses on Madrid (Spain). The study employs a proprietary database containing information about the price and 27 characteristics of 11,796 owner-occupied single-family homes. As far as the authors are aware, it is the largest database ever used to analyze the Madrid housing market. The results of the study clearly favor the use of subjective air quality measures.
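For reference, a spatial Durbin model augments the hedonic regression with spatial lags of both the dependent variable and the covariates, y = rho*W y + X beta + W X theta + eps; a minimal sketch of the systematic part (weights, data, and coefficients are all invented):

```python
import numpy as np

def sdm_systematic(y, X, W, rho, beta, theta):
    """Systematic part of a spatial Durbin model with row-standardized W."""
    return rho * (W @ y) + X @ beta + (W @ X) @ theta

# Three hypothetical houses: characteristics are [size m2, pollution index].
W = np.array([[0.0, 0.5, 0.5], [0.5, 0.0, 0.5], [0.5, 0.5, 0.0]])
X = np.array([[120.0, 0.2], [90.0, 0.6], [150.0, 0.1]])
y = np.array([12.1, 11.5, 12.6])                     # log prices
print(sdm_systematic(y, X, W, rho=0.3,
                     beta=np.array([0.01, -0.8]),
                     theta=np.array([0.002, -0.3])))
```

The negative pollution coefficients encode the hedonic prediction that, ceteris paribus, the price function slopes downward in pollution.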
Goverman, Jeremy; Mathews, Katie; Holavanahalli, Radha K; Vardanian, Andrew; Herndon, David N; Meyer, Walter J; Kowalske, Karen; Fauerbach, Jim; Gibran, Nicole S; Carrougher, Gretchen J; Amtmann, Dagmar; Schneider, Jeffrey C; Ryan, Colleen M
The National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR) established the Burn Model System (BMS) in 1993 to improve the lives of burn survivors. The BMS program includes 1) a multicenter longitudinal database describing the functional and psychosocial recovery of burn survivors; 2) site-specific burn-related research; and 3) a knowledge dissemination component directed toward patients and providers. Output from each BMS component was analyzed. Database structure, content, and access procedures are described. Publications using the database were identified and categorized to illustrate the content area of the work. Unused areas of the database were identified for future study. Publications related to site-specific projects were cataloged. The most frequently cited articles are summarized to illustrate the scope of these projects. The effectiveness of dissemination activities was measured by quantifying website hits and information downloads. There were 25 NIDILRR-supported publications that utilized the database. These articles covered topics related to psychological outcomes, functional outcomes, community reintegration, and burn demographics. There were 172 site-specific publications; highly cited articles demonstrate a wide scope of study. For information dissemination, visits to the BMS website quadrupled between 2013 and 2014, with 124,063 downloads of educational material in 2014. The NIDILRR BMS program has played a major role in defining the course of burn recovery, and making that information accessible to the general public. The accumulating information in the database serves as a rich resource to the burn community for future study. The BMS is a model for collaborative research that is multidisciplinary and outcome focused.
The Master Lens Database and The Orphan Lenses Project
NASA Astrophysics Data System (ADS)
Moustakas, Leonidas
2012-10-01
Strong gravitational lenses are uniquely suited for the study of dark matter structure and substructure within massive halos of many scales, act as gravitational telescopes for distant faint objects, and can give powerful and competitive cosmological constraints. While hundreds of strong lenses are known to date, spanning five orders of magnitude in mass scale, thousands will be identified this decade. To fully exploit the power of these objects, both presently and in the near future, we are creating the Master Lens Database. This is a clearinghouse of all known strong lens systems, with a sophisticated and modern database of uniformly measured observational quantities and lens-model-derived quantities, using archival Hubble data across several instruments. This Database enables new science that can be done with a comprehensive sample of strong lenses. The operational goal of this proposal is to develop the process and the code to semi-automatically stage Hubble data of each system, create appropriate masks of the lensing objects and lensing features, and derive gravitational lens models, to provide a uniform and fairly comprehensive information set that is ingested into the Database. The scientific goal for this team is to use the properties of the ensemble of lenses to make a new study of the internal structure of lensing galaxies, and to identify new objects that show evidence of strong substructure lensing, for follow-up study. All data, scripts, masks, model setup files, and derived parameters will be public and free. The Database will be accessible online and through a sophisticated smartphone application, which will also be free.
A survey of commercial object-oriented database management systems
NASA Technical Reports Server (NTRS)
Atkins, John
1992-01-01
The object-oriented data model is the culmination of over thirty years of database research. Initially, database research focused on the need to provide information in a consistent and efficient manner to the business community. Early data models such as the hierarchical model and the network model met the goal of consistent and efficient access to data and were substantial improvements over simple file mechanisms for storing and accessing data. However, these models required highly skilled programmers to provide access to the data. Consequently, in the early 1970s E.F. Codd, an IBM research computer scientist, proposed a new data model based on the simple mathematical notion of the relation. This model is known as the Relational Model. In the relational model, data is represented in flat tables (or relations) which have no physical or internal links between them. The simplicity of this model fostered the development of powerful but relatively simple query languages that made data directly accessible to the general database user. Except for large, multi-user database systems, a database professional was in general no longer necessary. Database professionals found that traditional data in the form of character data, dates, and numeric data were easily represented and managed via the relational model. Commercial relational database management systems proliferated and performance of relational databases improved dramatically. However, there was a growing community of potential database users whose needs were not met by the relational model. These users needed to store data with data types not available in the relational model and required a far richer modelling environment than that provided by the relational model. Indeed, the complexity of the objects to be represented in the model mandated a new approach to database technology. The Object-Oriented Model was the result.
International Shock-Wave Database: Current Status
NASA Astrophysics Data System (ADS)
Levashov, Pavel
2013-06-01
Shock-wave and related dynamic material response data serve for calibrating, validating, and improving material models over very broad regions of the pressure-temperature-density phase space. Since the middle of the 20th century, a vast amount of shock-wave experimental information has been obtained. To systematize it, a number of compendiums of shock-wave data have been issued by LLNL, LANL (USA), CEA (France), IPCP and VNIIEF (Russia). In the mid-1990s the drawbacks of the paper handbooks became obvious, so the first version of the online shock-wave database appeared in 1997 (http://www.ficp.ac.ru/rusbank). It includes approximately 20,000 experimental points on shock compression, adiabatic expansion, measurements of sound velocity behind the shock front, and free-surface velocity for more than 650 substances. This is still a useful tool for the shock-wave community, but it has a number of serious disadvantages which cannot be easily eliminated: (i) a very simple data format for points and references; (ii) a minimalistic user interface for data addition; (iii) absence of a history of changes; (iv) poor feedback from users. The new International Shock-Wave database (ISWdb) is intended to solve these and some other problems. The ISWdb project objectives are: (i) to develop a database on thermodynamic and mechanical properties of materials under conditions of shock-wave and other dynamic loadings, selected related quantities of interest, and the meta-data that describes the provenance of the measurements and material models; and (ii) to make this database available internationally through the Internet, in an interactive form. The development and operation of the ISWdb is guided by an advisory committee. The database will be installed on two mirrored web-servers, one in Russia and the other in the USA (currently only one server is available). The database provides access to original experimental data on shock compression, non-shock dynamic loadings, isentropic expansion, measurements of sound speed in the Hugoniot state, and time-dependent free-surface or window-interface velocity profiles. Users are able to search the information in the database and obtain the experimental points in tabular or plain-text formats directly via the Internet using common browsers. It is also possible to plot the experimental points for comparison with different approximations and results of equation-of-state calculations. The user can present the results of calculations in text or graphical forms and compare them with any experimental data available in the database. A short history of the shock-wave database will be presented and the current possibilities of ISWdb will be demonstrated. Web-site of the project: http://iswdb.info. This work is supported by SNL contracts # 1143875, 1196352.
NASA Astrophysics Data System (ADS)
Allen, G. H.; Pavelsky, T.
2015-12-01
The width of a river reflects complex interactions between river water hydraulics and other physical factors like bank erosional resistance, sediment supply, and human-made structures. A broad range of fluvial process studies use spatially distributed river width data to understand and quantify flood hazards, river water flux, or fluvial greenhouse gas efflux. Ongoing technological advances in remote sensing, computing power, and model sophistication are moving river system science towards global-scale studies that aim to understand the Earth's fluvial system as a whole. As such, a global spatially distributed database of river location and width is necessary to better constrain these studies. Here we present the Global River Width from Landsat (GRWL) Database, the first global-scale database of river planform at mean discharge. With a resolution of 30 m, GRWL consists of 58 million measurements of river centerline location, width, and braiding index. In total, GRWL measures 2.1 million km of rivers wider than 30 m, corresponding to 602 thousand km² of river water surface area, a metric used to calculate global greenhouse gas emissions from rivers to the atmosphere. Using data from GRWL, we find that ~20% of the world's rivers are located above 60°N, where little high-quality information exists about rivers of any kind. Further, we find that ~10% of the world's large rivers are multichannel, which may impact the development of the new generation of regional and global hydrodynamic models. We also investigate the spatial controls of global fluvial geomorphology and river hydrology by comparing climate, topography, geology, and human population density to GRWL measurements. The GRWL Database will be made publicly available upon publication to facilitate improved understanding of Earth's fluvial system. Finally, GRWL will be used as a priori data for the joint NASA/CNES Surface Water and Ocean Topography (SWOT) Satellite Mission, planned for launch in 2020.
Wind Speed Dependence of Acoustic Ambient Vertical Directional Spectra at High Frequency
1989-05-26
…the measurements, which is 8 to 32 kHz, is sufficiently high that the propagation is adequately modeled using the Eikonal equation approximation. …level spectra were calculated from the resulting time series. Spectral levels at 8, 16, and 32 kHz were recorded in a database along with the wind… indications of biological or industrial contamination were removed. The resulting database contained 215 samples.
Flight Data Reduction of Wake Velocity Measurements Using an Instrumented OV-10 Airplane
NASA Technical Reports Server (NTRS)
Vicroy, Dan D.; Stuever, Robert A.; Stewart, Eric C.; Rivers, Robert A.
1999-01-01
A series of flight tests to measure the wake of a Lockheed C-130 airplane and the accompanying atmospheric state has been conducted. A specially instrumented North American Rockwell OV-10 airplane was used to measure the wake and atmospheric conditions. An integrated database has been compiled for wake characterization and validation of wake vortex computational models. This paper describes the wake-measurement flight-data reduction process.
van Walraven, Carl; Austin, Peter C; Manuel, Douglas; Knoll, Greg; Jennings, Allison; Forster, Alan J
2010-12-01
Administrative databases commonly use codes to indicate diagnoses. These codes alone are often inadequate to accurately identify patients with particular conditions. In this study, we determined whether we could quantify the probability that a person has a particular disease (in this case, renal failure) using other routinely collected information available in an administrative data set. This would allow the accurate identification of a disease cohort in an administrative database. We determined whether patients in a randomly selected 100,000 hospitalizations had kidney disease (defined as two or more sequential serum creatinines, or the single admission creatinine, indicating a calculated glomerular filtration rate less than 60 mL/min/1.73 m²). The independent association of patient- and hospitalization-level variables with renal failure was measured using a multivariate logistic regression model in a random 50% sample of the patients. The model was validated in the remaining patients. Twenty thousand seven hundred thirteen patients had kidney disease (20.7%). A diagnostic code of kidney disease was strongly associated with kidney disease (relative risk: 34.4), but the accuracy of the code was poor (sensitivity: 37.9%; specificity: 98.9%). Twenty-nine patient- and hospitalization-level variables entered the kidney disease model. This model had excellent discrimination (c-statistic: 90.1%) and accurately predicted the probability of true renal failure. The probability threshold that maximized sensitivity and specificity for the identification of true kidney disease was 21.3% (sensitivity: 80.0%; specificity: 82.2%). Multiple variables available in administrative databases can be combined to quantify the probability that a person has a particular disease. This process permits accurate identification of a disease cohort in an administrative database. These methods may be extended to other diagnoses or procedures and could both facilitate and clarify the use of administrative databases for research and quality improvement. Copyright © 2010 Elsevier Inc. All rights reserved.
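The general strategy, fitting a multivariable logistic model and then choosing the probability cut-off that maximizes sensitivity plus specificity (Youden's J), can be sketched as follows; the data and the two predictors are synthetic stand-ins for the study's 29 variables.

```python
# Hedged sketch: combine administrative variables in a logistic model and pick
# the threshold maximizing sensitivity + specificity. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
n = 5000
code = rng.binomial(1, 0.1, n)          # diagnostic code present (0/1)
age = rng.normal(65, 15, n)             # hypothetical second predictor
true_p = 1 / (1 + np.exp(-(-4 + 3 * code + 0.03 * age)))
disease = rng.binomial(1, true_p)

X = np.column_stack([code, age])
prob = LogisticRegression().fit(X, disease).predict_proba(X)[:, 1]

fpr, tpr, thr = roc_curve(disease, prob)
j = tpr - fpr                            # Youden's J = sensitivity + specificity - 1
k = np.argmax(j)
print(f"threshold={thr[k]:.3f}, sens={tpr[k]:.2f}, spec={1 - fpr[k]:.2f}")
```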
NASA Astrophysics Data System (ADS)
Henderson, B. H.; Akhtar, F.; Pye, H. O. T.; Napelenok, S. L.; Hutzell, W. T.
2014-02-01
Transported air pollutants receive increasing attention as regulations tighten and global concentrations increase. The need to represent international transport in regional air quality assessments requires improved representation of boundary concentrations. Currently available observations are too sparse vertically to provide boundary information, particularly for ozone precursors, but global simulations can be used to generate spatially and temporally varying lateral boundary conditions (LBC). This study presents a public database of global simulations designed and evaluated for use as LBC for air quality models (AQMs). The database covers the contiguous United States (CONUS) for the years 2001-2010 and contains hourly varying concentrations of ozone, aerosols, and their precursors. The database is complemented by a tool for configuring the global results as inputs to regional scale models (e.g., Community Multiscale Air Quality or Comprehensive Air quality Model with extensions). This study also presents an example application based on the CONUS domain, which is evaluated against satellite retrieved ozone and carbon monoxide vertical profiles. The results show performance is largely within uncertainty estimates for ozone from the Ozone Monitoring Instrument and carbon monoxide from the Measurements Of Pollution In The Troposphere (MOPITT), but there were some notable biases compared with Tropospheric Emission Spectrometer (TES) ozone. Compared with TES, our ozone predictions are high-biased in the upper troposphere, particularly in the south during January. This publication documents the global simulation database, the tool for conversion to LBC, and the evaluation of concentrations on the boundaries. This documentation is intended to support applications that require representation of long-range transport of air pollutants.
NASA Astrophysics Data System (ADS)
Larrañeta, M.; Moreno-Tejera, S.; Lillo-Bravo, I.; Silva-Pérez, M. A.
2018-02-01
Many of the available solar radiation databases only provide global horizontal irradiance (GHI), while there is a growing need for extensive databases of direct normal irradiance (DNI), mainly for the development of concentrated solar power and concentrated photovoltaic technologies. In the present work, we propose a methodology for the generation of synthetic DNI hourly data from hourly average GHI values by dividing the irradiance into a deterministic and a stochastic component, intending to emulate the dynamics of solar radiation. The deterministic component is modeled through a simple classical model. The stochastic component is fitted to measured data in order to maintain the consistency of the synthetic data with the state of the sky, generating statistically significant DNI data with a cumulative frequency distribution very similar to that of the measured data. The adaptation and application of the model to the location of Seville shows significant improvements in terms of frequency distribution over the classical models. The proposed methodology, applied to other locations with different climatological characteristics, yields better results than the classical models in terms of frequency distribution, reaching a reduction of 50% in the Finkelstein-Schafer (FS) and Kolmogorov-Smirnov test integral (KSI) statistics.
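A toy sketch of the deterministic-plus-stochastic decomposition might look like the following; the clearness-index fit and the residual distribution are placeholders, not the authors' model.

```python
# Toy split of synthetic DNI into a deterministic clearness-index part and a
# stochastic residual part resampled from measured data (all coefficients are
# placeholders, not the paper's fitted model).
import numpy as np

def synthesize_dni(ghi, dni_extra, cos_zenith, residuals, rng):
    kt = ghi / np.clip(dni_extra * cos_zenith, 1e-6, None)   # clearness index
    kb = np.clip(0.1 + 0.9 * np.clip(kt - 0.2, 0.0, None), 0.0, 1.0)  # placeholder fit
    det = dni_extra * kb                                     # deterministic DNI
    # Stochastic component: residuals fitted to measured DNI are resampled so
    # the synthetic series keeps a realistic cumulative frequency distribution.
    return np.clip(det + rng.choice(residuals, size=ghi.size), 0.0, None)

rng = np.random.default_rng(2)
residuals = rng.normal(0.0, 30.0, 1000)       # stand-in for fitted residuals
ghi = np.array([450.0, 600.0, 120.0])         # hourly GHI (W/m^2)
cosz = np.array([0.7, 0.8, 0.3])
print(synthesize_dni(ghi, 1361.0, cosz, residuals, rng))
```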
Interactive Database of Pulsar Flux Density Measurements
NASA Astrophysics Data System (ADS)
Koralewska, O.; Krzeszowski, K.; Kijak, J.; Lewandowski, W.
2012-12-01
The number of astronomical observations is steadily growing, giving rise to the need for cataloguing the obtained results. There are a lot of databases, created to store different types of data and serve a variety of purposes, e.g., databases providing basic data for astronomical objects (SIMBAD Astronomical Database), databases devoted to one type of astronomical object (ATNF Pulsar Database), or to a set of values of a specific parameter (Lorimer 1995, a database of flux density measurements for 280 pulsars at frequencies up to 1606 MHz), etc. We found that creating an online database of pulsar flux measurements, provided with facilities for plotting diagrams and histograms, calculating mean values for a chosen set of data, filtering parameter values, and adding new measurements by registered users, could be useful in further studies on pulsar spectra.
Development of New Jersey rates for the NJCMS incident delay model.
DOT National Transportation Integrated Search
2012-09-01
This study developed a working database for calculating incident rates and related delay measures, which contains incident related data collected from various data sources, such as the New Jersey Department of Transportation (NJDOT) Crash Records, Tr...
INNOVATIVE METHODS FOR EMISSION INVENTORY DEVELOPMENT AND EVALUATION: WORKSHOP SYNTHESIS
Emission inventories are key databases for evaluating, managing, and regulating air pollutants. Refinements and innovations in instruments that measure air pollutants, models that calculate emissions, and techniques for data management and uncertainty assessment are critical to ...
The IT in Secondary Science Book. A Compendium of Ideas for Using Computers and Teaching Science.
ERIC Educational Resources Information Center
Frost, Roger
Scientists need to measure and communicate, to handle information, and to model ideas. In essence, they need to process information. Young scientists have the same needs. Computers have become a tremendously important addition to the processing of information through database use, graphing, and modeling, and also in the collection of information…
Computational assessment of model-based wave separation using a database of virtual subjects.
Hametner, Bernhard; Schneider, Magdalena; Parragh, Stephanie; Wassertheurer, Siegfried
2017-11-07
The quantification of arterial wave reflection is an important area of interest in arterial pulse wave analysis. It can be achieved by wave separation analysis (WSA) if both the aortic pressure waveform and the aortic flow waveform are known. For better applicability, several mathematical models have been established to estimate aortic flow solely based on pressure waveforms. The aim of this study is to investigate and verify the model-based wave separation of the ARCSolver method on virtual pulse wave measurements. The study is based on an open-access virtual database generated via simulations. Seven cardiac and arterial parameters were varied within physiological healthy ranges, leading to a total of 3325 virtual healthy subjects. For assessing the model-based ARCSolver method computationally, this method was used to perform WSA based on the aortic root pressure waveforms of the virtual patients. As a reference, the values of WSA using both the pressure and flow waveforms provided by the virtual database were taken. The investigated parameters showed a good overall agreement between the model-based method and the reference. Mean differences and standard deviations were -0.05 ± 0.02 AU for characteristic impedance, -3.93 ± 1.79 mmHg for forward pressure amplitude, 1.37 ± 1.56 mmHg for backward pressure amplitude and 12.42 ± 4.88% for reflection magnitude. The results indicate that the mathematical blood flow model of the ARCSolver method is a feasible surrogate for a measured flow waveform and provides a reasonable way to assess arterial wave reflection non-invasively in healthy subjects. Copyright © 2017 Elsevier Ltd. All rights reserved.
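Classical wave separation itself reduces to Pf = (P + Zc·Q)/2 and Pb = (P − Zc·Q)/2 once the characteristic impedance Zc is estimated. A minimal numpy sketch, with toy waveforms and a crude early-systolic Zc estimate, is given below; this is not the ARCSolver implementation.

```python
# Minimal sketch of classical wave separation analysis (WSA) given pressure and
# flow waveforms; Zc is approximated by a least-squares slope of dP vs dQ over a
# crude early-systolic window.
import numpy as np

def wave_separation(p, q):
    """Split pressure into forward/backward components: Pf,b = (P ± Zc*Q)/2."""
    dp, dq = np.diff(p), np.diff(q)
    early = slice(0, len(dp) // 5)                  # crude early-systole window
    zc = np.sum(dp[early] * dq[early]) / np.sum(dq[early] ** 2)
    return zc, (p + zc * q) / 2, (p - zc * q) / 2

t = np.linspace(0, 1, 200)
q = np.clip(np.sin(2 * np.pi * t), 0, None) * 400    # toy aortic flow (ml/s)
p = 80 + 0.05 * q + 10 * np.sin(2 * np.pi * t - 0.5) # toy pressure (mmHg)
zc, p_f, p_b = wave_separation(p, q)
print(zc, p_b.max() - p_b.min())                     # Zc and backward amplitude
```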
The Magnetics Information Consortium (MagIC)
NASA Astrophysics Data System (ADS)
Johnson, C.; Constable, C.; Tauxe, L.; Koppers, A.; Banerjee, S.; Jackson, M.; Solheid, P.
2003-12-01
The Magnetics Information Consortium (MagIC) is a multi-user facility to establish and maintain a state-of-the-art relational database and digital archive for rock and paleomagnetic data. The goal of MagIC is to make such data generally available and to provide an information technology infrastructure for these and other research-oriented databases run by the international community. As its name implies, MagIC will not be restricted to paleomagnetic or rock magnetic data only, although MagIC will focus on these kinds of information during its setup phase. MagIC will be hosted under EarthRef.org at http://earthref.org/MAGIC/ where two "integrated" web portals will be developed, one for paleomagnetism (currently functional as a prototype that can be explored via the http://earthref.org/databases/PMAG/ link) and one for rock magnetism. The MagIC database will store all measurements and their derived properties for studies of paleomagnetic directions (inclination, declination) and their intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). Ultimately, this database will allow researchers to study "on the internet" and to download important data sets that display paleo-secular variations in the intensity of the Earth's magnetic field over geological time, or that display magnetic data in typical Zijderveld, hysteresis/FORC and various magnetization/remanence diagrams. The MagIC database is completely integrated in the EarthRef.org relational database structure and thus benefits significantly from already-existing common database components, such as the EarthRef Reference Database (ERR) and Address Book (ERAB). The ERR allows researchers to find complete sets of literature resources as used in GERM (Geochemical Earth Reference Model), REM (Reference Earth Model) and MagIC. The ERAB contains addresses for all contributors to the EarthRef.org databases, and also for those who participated in data collection, archiving and analysis in the magnetic studies. Integration with these existing components will guarantee direct traceability to the original sources of the MagIC data and metadata. The MagIC database design focuses around the general workflow that results in the determination of typical paleomagnetic and rock magnetic analyses. This ensures that individual data points can be traced between the actual measurements and their associated specimen, sample, site, rock formation and locality. This permits a distinction between original and derived data, where the actual measurements are performed at the specimen level, and data at the sample level and higher are then derived products in the database. These relations will also allow recalculation of derived properties, such as site means, when new data becomes available for a specific locality. Data contribution to the MagIC database is critical in achieving a useful research tool. We have developed a standard data and metadata template that can be used to provide all data at the same time as publication. Software tools are provided to facilitate easy population of these templates. The tools allow for the import/export of data files in a delimited text format, and they provide some advanced functionality to validate data and to check internal coherence of the data in the template. During and after publication these standardized MagIC templates will be stored in the ERR database of EarthRef.org from where they can be downloaded at all times. 
Finally, the contents of these template files will be automatically parsed into the online relational database.
Lansdale, Mark W; Oliff, Lynda; Baguley, Thom S
2005-06-01
The authors investigated whether memory for object locations in pictures could be exploited to address known difficulties of designing query languages for picture databases. M. W. Lansdale's (1998) model of location memory was adapted to 4 experiments observing memory for everyday pictures. These experiments showed that location memory is quantified by 2 parameters: a probability that memory is available and a measure of its precision. Availability is determined by controlled attentional processes, whereas precision is mostly governed by picture composition beyond the viewer's control. Additionally, participants' confidence judgments were good predictors of availability but were insensitive to precision. This research suggests that databases using location memory are feasible. The implications of these findings for database design and for further research and development are discussed. (c) 2005 APA
Java Web Simulation (JWS); a web based database of kinetic models.
Snoep, J L; Olivier, B G
2002-01-01
Software to make a database of kinetic models accessible via the internet has been developed, and a core database has been set up at http://jjj.biochem.sun.ac.za/. This repository of models, available to everyone with internet access, opens a whole new way in which we can make our models public. Via the database, a user can change enzyme parameters and run time simulations or steady-state analyses. The interface is user friendly and no additional software is necessary. The database currently contains 10 models, but since the generation of the program code to include new models has largely been automated, the addition of new models is straightforward, and people are invited to submit their models for inclusion in the database.
Voss, Frank; Maule, Alec
2013-01-01
A model for simulating daily maximum and mean water temperatures was developed by linking two existing models: one developed by the U.S. Geological Survey and one developed by the Bureau of Reclamation. The study area included the lower Yakima River main stem between the Roza Dam and West Richland, Washington. To automate execution of the labor-intensive models, a database-driven model automation program was developed to decrease operation costs, to reduce user error, and to provide the capability to perform simulations quickly for multiple management and climate change scenarios. Microsoft® SQL Server 2008 R2 Integration Services packages were developed to (1) integrate climate, flow, and stream geometry data from diverse sources (such as weather stations, a hydrologic model, and field measurements) into a single relational database; (2) programmatically generate heavily formatted model input files; (3) iteratively run water temperature simulations; (4) process simulation results for export to other models; and (5) create a database-driven infrastructure that facilitated experimentation with a variety of scenarios, node permutations, weather data, and hydrologic conditions while minimizing costs of running the model with various model configurations. As a proof-of-concept exercise, water temperatures were simulated for a "Current Conditions" scenario, where local weather data from 1980 through 2005 were used as input, and for "Plus 1" and "Plus 2" climate warming scenarios, where the average annual air temperatures used in the Current Conditions scenario were increased by 1 degree Celsius (°C) and by 2°C, respectively. Average monthly mean daily water temperatures simulated for the Current Conditions scenario were compared to measured values at the Bureau of Reclamation Hydromet gage at Kiona, Washington, for 2002-05. Differences ranged between 1.1° and 1.9°C for February, March, May, and June, and were less than 0.8°C for the remaining months of the year. The differences between current conditions and measured monthly values for the two warmest months (July and August) were 0.5°C and 0.2°C, respectively. The model predicted that water temperature generally becomes less sensitive to air temperature increases as the distance from the mouth of the river decreases. As a consequence, the difference between climate warming scenarios also decreased. The pattern of decreasing sensitivity is most pronounced from August to October. Interactive graphing tools were developed to explore the relative sensitivity of average monthly and mean daily water temperature to increases in air temperature for model output locations along the lower Yakima River main stem.
NASA Galactic Cosmic Radiation Environment Model: Badhwar-O'Neill (2014)
NASA Technical Reports Server (NTRS)
Golge, S.; O'Neill, P. M.; Slaba, T. C.
2015-01-01
The Badhwar-O'Neill (BON) Galactic Cosmic Ray (GCR) flux model has been used by NASA to certify microelectronic systems and in the analysis of radiation health risks for human space flight missions. Of special interest to NASA is the kinetic energy region below 4.0 GeV/n due to the fact that exposure from GCR behind shielding (e.g., inside a space vehicle) is heavily influenced by the GCR particles from this energy domain. The BON model numerically solves the Fokker-Planck differential equation to account for particle transport in the heliosphere due to diffusion, convection, and adiabatic deceleration under the assumption of a spherically symmetric heliosphere. The model utilizes a comprehensive database of GCR measurements from various particle detectors to determine boundary conditions. By using an updated GCR database and improved model fit parameters, the new BON model (BON14) is significantly improved over the previous BON models for describing the GCR radiation environment of interest to human space flight.
Monitoring of services with non-relational databases and map-reduce framework
NASA Astrophysics Data System (ADS)
Babik, M.; Souto, F.
2012-12-01
Service Availability Monitoring (SAM) is a well-established monitoring framework that performs regular measurements of the core site services and reports the corresponding availability and reliability of the Worldwide LHC Computing Grid (WLCG) infrastructure. One of the existing extensions of SAM is Site Wide Area Testing (SWAT), which gathers monitoring information from the worker nodes via instrumented jobs. This generates quite a lot of monitoring data to process, as there are several data points for every job and several million jobs are executed every day. The recent uptake of non-relational databases opens a new paradigm in the large-scale storage and distributed processing of systems with heavy read-write workloads. For SAM this brings new possibilities to improve its model, from performing aggregation of measurements to storing raw data and subsequent re-processing. Both SAM and SWAT are currently tuned to run at top performance, reaching some of the limits in storage and processing power of their existing Oracle relational database. We investigated the usability and performance of non-relational storage together with its distributed data processing capabilities. For this, several popular systems have been compared. In this contribution we describe our investigation of the existing non-relational databases suited for monitoring systems covering Cassandra, HBase and MongoDB. Further, we present our experiences in data modeling and prototyping map-reduce algorithms focusing on the extension of the already existing availability and reliability computations. Finally, possible future directions in this area are discussed, analyzing the current deficiencies of the existing Grid monitoring systems and proposing solutions to leverage the benefits of the non-relational databases to get more scalable and flexible frameworks.
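The map-reduce style of aggregation discussed above can be illustrated in plain Python: map each test record to a (site, service) key, then reduce to an availability fraction. The record fields below are hypothetical, not the actual SAM/SWAT schema.

```python
# Plain-Python illustration of a map-reduce availability aggregation (the
# systems evaluated in the paper were Cassandra, HBase and MongoDB).
from collections import defaultdict

records = [
    {"site": "CERN", "service": "CE", "status": "OK"},
    {"site": "CERN", "service": "CE", "status": "CRITICAL"},
    {"site": "FNAL", "service": "SRM", "status": "OK"},
]

def map_phase(rec):
    # key by (site, service); value is (passed, total)
    return (rec["site"], rec["service"]), (1 if rec["status"] == "OK" else 0, 1)

def reduce_phase(pairs):
    acc = defaultdict(lambda: [0, 0])
    for key, (ok, total) in pairs:
        acc[key][0] += ok
        acc[key][1] += total
    return {key: ok / total for key, (ok, total) in acc.items()}

print(reduce_phase(map(map_phase, records)))  # {('CERN', 'CE'): 0.5, ...}
```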
Friesen, Melissa C.; Locke, Sarah J.; Chen, Yu-Cheng; Coble, Joseph B.; Stewart, Patricia A.; Ji, Bu-Tian; Bassig, Bryan; Lu, Wei; Xue, Shouzheng; Chow, Wong-Ho; Lan, Qing; Purdue, Mark P.; Rothman, Nathaniel; Vermeulen, Roel
2015-01-01
Purpose: Trichloroethylene (TCE) is a carcinogen that has been linked to kidney cancer and possibly other cancer sites including non-Hodgkin lymphoma. Its use in China has increased since the early 1990s with China’s growing metal, electronic, and telecommunications industries. We examined historical occupational TCE air concentration patterns in a database of TCE inspection measurements collected in Shanghai, China to identify temporal trends and broad contrasts among occupations and industries. Methods: Using a database of 932 short-term, area TCE air inspection measurements collected in Shanghai worksites from 1968 through 2000 (median year 1986), we developed mixed-effects models to evaluate job-, industry-, and time-specific TCE air concentrations. Results: Models of TCE air concentrations from Shanghai work sites predicted that exposures decreased 5–10% per year between 1968 and 2000. Measurements collected near launderers and dry cleaners had the highest predicted geometric means (GM for 1986 = 150–190 mg m−3). The majority (53%) of the measurements were collected in metal treatment jobs. In a model restricted to measurements in metal treatment jobs, predicted GMs for 1986 varied 35-fold across industries, from 11 mg m−3 in ‘other metal products/repair’ industries to 390 mg m−3 in ‘ships/aircrafts’ industries. Conclusions: TCE workplace air concentrations appeared to have dropped over time in Shanghai, China between 1968 and 2000. Understanding differences in TCE concentrations across time, occupations, and industries may assist future epidemiologic studies in China. PMID:25180291
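A hedged sketch of such a mixed-effects model on log-transformed concentrations, using synthetic data and hypothetical column names, is shown below; a fitted year coefficient b corresponds to an exposure change of exp(b) − 1 per year.

```python
# Sketch of a mixed-effects trend model for log TCE concentrations in the
# spirit of the analysis above (synthetic data; not the study's variables).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 300
df = pd.DataFrame({
    "year": rng.integers(1968, 2001, n),
    "industry": rng.choice(["metal", "laundry", "electronics"], n),
    "worksite": rng.integers(0, 40, n),           # random-effect grouping
})
site_effect = rng.normal(0, 0.5, 40)[df["worksite"]]
df["log_conc"] = 5 - 0.07 * (df["year"] - 1986) + site_effect + rng.normal(0, 1, n)

fit = smf.mixedlm("log_conc ~ year + C(industry)", df,
                  groups=df["worksite"]).fit()
b = fit.params["year"]
print(f"estimated change per year: {100 * (np.exp(b) - 1):.1f}%")
```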
Monitoring groundwater and river interaction along the Hanford reach of the Columbia River
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, M.D.
1994-04-01
As an adjunct to efficient Hanford Site characterization and remediation of groundwater contamination, an automatic monitor network has been used to measure Columbia River and adjacent groundwater levels in several areas of the Hanford Site since 1991. Water levels, temperatures, and electrical conductivity measured by the automatic monitor network provided an initial database with which to calibrate models and from which to infer ground and river water interactions for site characterization and remediation activities. Measurements of the dynamic river/aquifer system have been simultaneous at 1-hr intervals, with a quality suitable for hydrologic modeling and for computer model calibration and testing. This report describes the equipment, procedures, and results from measurements done in 1993.
Trends in measurement models and methods in understanding occupational health psychology.
Tetrick, Lois E
2017-07-01
Measurement of occupational health psychology constructs is the cornerstone to developing our understanding of occupational health and safety. It is also critical in the design, evaluation, and implementation of interventions to improve employees' and organizations' well-being. The purpose of this article is to provide a brief review of the current state of measurement theory and practice in occupational health psychology. Also included is a discussion of the development of newer measurement models and methods that are in use in other disciplines of psychology but have not yet been incorporated into occupational health psychology. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Vehicle noise source heights & sub-source spectra
DOT National Transportation Integrated Search
1996-12-01
This report describes a turn-key system that was developed and implemented to collect the vehicle source height database for incorporation into the new Traffic Noise Model (TNM). A total of 2500 individual vehicle pass-bys were measured with this sys...
Hackstadt, Amber J; Peng, Roger D
2014-11-01
Time series studies have suggested that air pollution can negatively impact health. These studies have typically focused on the total mass of fine particulate matter air pollution or the individual chemical constituents that contribute to it, and not source-specific contributions to air pollution. Source-specific contribution estimates are useful from a regulatory standpoint by allowing regulators to focus limited resources on reducing emissions from sources that are major contributors to air pollution and are also desired when estimating source-specific health effects. However, researchers often lack direct observations of the emissions at the source level. We propose a Bayesian multivariate receptor model to infer information about source contributions from ambient air pollution measurements. The proposed model incorporates information from national databases containing data on both the composition of source emissions and the amount of emissions from known sources of air pollution. The proposed model is used to perform source apportionment analyses for two distinct locations in the United States (Boston, Massachusetts and Phoenix, Arizona). Our results mirror previous source apportionment analyses that did not utilize the information from national databases and provide additional information about uncertainty that is relevant to the estimation of health effects.
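One way to sketch a Bayesian multivariate receptor model is shown below, assuming the PyMC library: ambient constituent concentrations are modeled as non-negative source contributions times source profiles, with informative priors on the profiles centered on emissions-database values. All numbers are placeholders, and this is a simplified stand-in for the authors' model.

```python
# Hedged sketch of a Bayesian receptor model: X ~ contributions @ profiles,
# with database-informed priors on the profiles (placeholder values).
import numpy as np
import pymc as pm

rng = np.random.default_rng(4)
profiles_db = np.array([[0.6, 0.3, 0.1],      # e.g., traffic-like profile
                        [0.1, 0.2, 0.7]])     # e.g., soil/dust-like profile
true_contrib = rng.gamma(2.0, 2.0, size=(60, 2))
X = true_contrib @ profiles_db + rng.normal(0, 0.1, size=(60, 3))

with pm.Model():
    contrib = pm.HalfNormal("contrib", sigma=10, shape=(60, 2))   # non-negative
    profiles = pm.TruncatedNormal("profiles", mu=profiles_db, sigma=0.05,
                                  lower=0, shape=(2, 3))          # informative prior
    mu = pm.math.dot(contrib, profiles)
    pm.Normal("X", mu=mu, sigma=0.1, observed=X)
    idata = pm.sample(500, tune=500, chains=2, progressbar=False)

print(idata.posterior["contrib"].mean(("chain", "draw")).shape)   # (60, 2)
```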
NASA Astrophysics Data System (ADS)
Penenko, Alexey; Penenko, Vladimir; Nuterman, Roman; Baklanov, Alexander; Mahura, Alexander
2015-11-01
Atmospheric chemistry dynamics is studied with a convection-diffusion-reaction model. The numerical data assimilation algorithm presented is based on additive-averaged splitting schemes. It carries out "fine-grained" variational data assimilation on the separate splitting stages with respect to spatial dimensions and processes, i.e., the same measurement data are assimilated into different parts of the split model. This design has an efficient implementation due to the direct data assimilation algorithms of the transport process along coordinate lines. Results of numerical experiments with the chemical data assimilation algorithm, applied to in situ concentration measurements in a real-data scenario, are presented. To construct the scenario, meteorological data were taken from EnviroHIRLAM model output, initial conditions from MOZART model output, and measurements from the AirBase database.
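The "fine-grained" idea, blending the same observations into the state after a transport (splitting) stage, can be illustrated with the standard variational update x_a = x_b + BHᵀ(HBHᵀ + R)⁻¹(y − Hx_b); the grid, covariances, and observations below are toy values, not the paper's configuration.

```python
# Toy sketch: a 1-D transport (splitting) stage followed by a variational
# analysis update blending two in situ concentration observations.
import numpy as np

def assimilate(xb, B, H, R, y):
    """Analysis minimizing the usual quadratic 3D-Var cost function."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    return xb + K @ (y - H @ xb)

n = 20
xb = np.roll(np.sin(np.linspace(0, np.pi, n)), 1)  # state after crude advection
idx = np.arange(n)
B = 0.1 * np.exp(-np.abs(idx[:, None] - idx[None, :]) / 3)  # background cov.
H = np.zeros((2, n)); H[0, 5] = H[1, 15] = 1       # two observation stations
R = 0.01 * np.eye(2)                               # observation error cov.
y = np.array([0.9, 0.4])                           # observed concentrations
print(assimilate(xb, B, H, R, y)[[5, 15]])
```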
Received Signal Strength Database Interpolation by Kriging for a Wi-Fi Indoor Positioning System
Jan, Shau-Shiun; Yeh, Shuo-Ju; Liu, Ya-Wen
2015-01-01
The main approach for a Wi-Fi indoor positioning system is based on the received signal strength (RSS) measurements, and the fingerprinting method is utilized to determine the user position by matching the RSS values with the pre-surveyed RSS database. To build a RSS fingerprint database is essential for an RSS based indoor positioning system, and building such a RSS fingerprint database requires lots of time and effort. As the range of the indoor environment becomes larger, labor is increased. To provide better indoor positioning services and to reduce the labor required for the establishment of the positioning system at the same time, an indoor positioning system with an appropriate spatial interpolation method is needed. In addition, the advantage of the RSS approach is that the signal strength decays as the transmission distance increases, and this signal propagation characteristic is applied to an interpolated database with the Kriging algorithm in this paper. Using the distribution of reference points (RPs) at measured points, the signal propagation model of the Wi-Fi access point (AP) in the building can be built and expressed as a function. The function, as the spatial structure of the environment, can create the RSS database quickly in different indoor environments. Thus, in this paper, a Wi-Fi indoor positioning system based on the Kriging fingerprinting method is developed. As shown in the experimental results, with a 72.2% probability, the error of the extended RSS database with Kriging is less than 3 dBm compared to the surveyed RSS database. Importantly, the positioning error of the developed Wi-Fi indoor positioning system with Kriging is reduced by 17.9% on average compared to that without Kriging. PMID:26343673
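A sketch of the kriging step, assuming the third-party pykrige package and made-up reference-point coordinates and RSS values, could look like this:

```python
# Sketch of extending an RSS fingerprint database by ordinary kriging
# (illustrative data; the exponential variogram loosely reflects the
# log-distance path-loss decay mentioned above).
import numpy as np
from pykrige.ok import OrdinaryKriging

x = np.array([0.0, 5.0, 10.0, 15.0, 20.0])           # RP x-coordinates (m)
y = np.array([0.0, 4.0, 8.0, 2.0, 6.0])              # RP y-coordinates (m)
rss = np.array([-40.0, -55.0, -63.0, -58.0, -70.0])  # dBm at one AP

ok = OrdinaryKriging(x, y, rss, variogram_model="exponential")
gridx = np.arange(0.0, 20.5, 0.5)
gridy = np.arange(0.0, 8.5, 0.5)
z, ss = ok.execute("grid", gridx, gridy)             # interpolated RSS map
print(z.shape)                                       # (len(gridy), len(gridx))
```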
Modelling Precipitation Kinetics During Aging of Al-Mg-Si Alloys
NASA Astrophysics Data System (ADS)
Du, Qiang; Friis, Jesper
A classical Kampmann-Wagner numerical model is employed to predict the evolution of the precipitate size distribution during the aging treatment of Al-Mg-Si alloys. One feature of the model is its full coupling with a CALPHAD database; with the input of interfacial energy from ab initio calculations, it is able to capture the morphological change of the precipitates. The simulation results are compared with the experimental measurements.
NASA Astrophysics Data System (ADS)
Harden, Jennifer W.; Hugelius, Gustaf; Koven, Charlie; Sulman, Ben; O'Donnell, Jon; He, Yujie
2016-04-01
Soils are capacitors for carbon and water entering and exiting through land-atmosphere exchange. Capturing the spatiotemporal variations in soil C exchange through monitoring and modeling is difficult in part because data are reported unevenly across spatial, temporal, and management scales and in part because the unit of measure generally involves destructive harvest or non-recurrent measurements. In order to improve our fundamental basis for understanding soil C exchange, a multi-user, open source, searchable database and network of scientists has been formed. The International Soil Carbon Network (ISCN) is a self-chartered, member-based and member-owned network of scientists dedicated to soil carbon science. Attributes of the ISCN include 1) targeted ISCN Action Groups, teams of motivated researchers that propose and pursue specific soil C research questions with the aim of synthesizing seminal articles regarding soil C fate; 2) datasets contributed to date by institutions and individuals to a comprehensive, searchable, open-access database that currently includes over 70,000 geolocated profiles reporting soil C and other soil properties; and 3) derivative products resulting from the database, including depth-attenuation attributes for C concentration and storage, C storage maps, and model-based assessments of emission/sequestration for future climate scenarios. Several examples illustrate the power of such a database and its engagement with the science community. First, a simplified, data-constrained global ecosystem model estimated a global sensitivity of permafrost soil carbon to climate change (γ sensitivity) of -14 to -19 Pg C °C−1 of warming on a 100-year time scale. Second, using mathematical characterizations of depth profiles for organic carbon storage, C at the soil surface reflects Net Primary Production (NPP) and its allotment as moss or litter, while e-folding depths are correlated to rooting depth. Third, storage of deep C is highly correlated with bulk density and porosity of the rock/sediment matrix. Thus C storage is most stable at depth, yet is susceptible to changes in tillage, rooting depths, and erosion/sedimentation. Fourth, current ESMs likely overestimate the turnover time of soil organic carbon and subsequently overestimate soil carbon sequestration; thus datasets combined with other soil properties will help constrain ESM predictions. Last, analysis of soil horizon and carbon data showed that soils with a history of tillage had significantly lower carbon concentrations in both near-surface and deep layers, and that the effect persisted even in reforested areas. In addition to the opportunities for empirical science using a large database, the database has great promise for evaluation of biogeochemical and earth system models. The preservation of individual soil core measurements avoids issues with spatial averaging while facilitating evaluation of advanced model processes such as depth distributions of soil carbon, land use impacts, and spatial heterogeneity.
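The depth-attenuation characterization mentioned above can be illustrated by fitting an exponential profile C(z) = C0·exp(−z/z_e), where z_e is the e-folding depth; the profile data below are synthetic, not ISCN values.

```python
# Small sketch: fit an e-folding depth profile to synthetic soil C data.
import numpy as np
from scipy.optimize import curve_fit

def c_profile(z, c0, z_e):
    return c0 * np.exp(-z / z_e)       # C(z) = C0 * exp(-z / z_e)

z = np.array([5.0, 15.0, 30.0, 50.0, 80.0])   # depth (cm)
c = np.array([4.1, 3.0, 1.9, 1.0, 0.45])      # C concentration (%)
(c0, z_e), _ = curve_fit(c_profile, z, c, p0=(5.0, 30.0))
print(f"surface C ~ {c0:.1f}%, e-folding depth ~ {z_e:.0f} cm")
```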
Recent Developments of the GLIMS Glacier Database
NASA Astrophysics Data System (ADS)
Raup, B. H.; Berthier, E.; Bolch, T.; Kargel, J. S.; Paul, F.; Racoviteanu, A.
2017-12-01
Earth's glaciers are shrinking almost without exception, leading to changes in water resources, timing of runoff, sea level, and hazard potential. Repeat mapping of glacier outlines, lakes, and glacier topography, along with glacial processes, is critically needed to understand how glaciers will react to a changing climate, and how those changes will impact humans. To understand the impacts and processes behind the observed changes, it is crucial to monitor glaciers through time by mapping their areal extent, snow lines, ice flow velocities, associated water bodies, and thickness changes. The glacier database of the Global Land Ice Measurements from Space (GLIMS) initiative is the only multi-temporal glacier database capable of tracking all these glacier measurements and providing them to the scientific community and broader public.Recent developments in GLIMS include improvements in the database and web applications and new activities in the international GLIMS community. The coverage of the GLIMS database has recently grown geographically and temporally by drawing on the Randolph Glacier Inventory (RGI) and other new data sets. The GLIMS database is globally complete, and approximately one third of glaciers have outlines from more than one time. New tools for visualizing and downloading GLIMS data in a choice of formats and data models have been developed, and a new data model for handling multiple glacier records through time while avoiding double-counting of glacier number or area is nearing completion. A GLIMS workshop was held in Boulder, Colorado this year to facilitate two-way communication with the greater community on future needs.The result of this work is a more complete and accurate glacier data repository that shows both the current state of glaciers on Earth and how they have changed in recent decades. Needs for future scientific and technical developments were identified and prioritized at the GLIMS Workshop, and are reported here.
NASA Technical Reports Server (NTRS)
Gatebe, Charles K.; King, Michael D.
2016-01-01
In this paper we describe measurements of the bidirectional reflectance-distribution function (BRDF) acquired over a 30-year period (1984-2014) by the National Aeronautics and Space Administration's (NASA's) Cloud Absorption Radiometer (CAR). Our BRDF database encompasses various natural surfaces that are representative of many land cover or ecosystem types found throughout the world. CAR's unique measurement geometry allows a comparison of measurements acquired from different satellite instruments with various geometrical configurations, none of which are capable of obtaining such a complete and nearly instantaneous BRDF. This database is therefore of great value in validating many satellite sensors and assessing corrections of reflectances for angular effects. These data can also be used to evaluate the ability of analytical models to reproduce the observed directional signatures, to develop BRDF models that are suitable for sub-kilometer-scale satellite observations over both homogeneous and heterogeneous landscape types, and to test future spaceborne sensors. All of these BRDF data are publicly available and accessible in hierarchical data format (http://car.gsfc.nasa.gov/).
NASA Technical Reports Server (NTRS)
Young, Steve; UijtdeHaag, Maarten; Sayre, Jonathon
2003-01-01
Synthetic Vision Systems (SVS) provide pilots with displays of stored geo-spatial data representing terrain, obstacles, and cultural features. As comprehensive validation is impractical, these databases typically have no quantifiable level of integrity. Further, updates to the databases may not be provided as changes occur. These issues limit the certification level and constrain the operational context of SVS for civil aviation. Previous work demonstrated the feasibility of using a real-time monitor to bound the integrity of Digital Elevation Models (DEMs) by using radar altimeter measurements during flight. This paper describes an extension of this concept to include X-band Weather Radar (WxR) measurements. This enables the monitor to detect additional classes of DEM errors and to reduce the exposure time associated with integrity threats. Feature extraction techniques are used along with a statistical assessment of similarity measures between the sensed and stored features that are detected. Recent flight-testing in the area around the Juneau, Alaska Airport (JNU) has resulted in a comprehensive set of sensor data that is being used to assess the feasibility of the proposed monitor technology. Initial results of this assessment are presented.
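The monitor concept, comparing sensed terrain height with DEM-derived height and flagging large disparities, can be sketched as follows; the threshold, names, and disparity statistic are illustrative and not the certified algorithm.

```python
# Conceptual sketch of a DEM integrity monitor: radar-altimeter height above
# terrain vs. the height implied by the stored DEM (illustrative values).
import numpy as np

def dem_disparity(gps_alt, radar_agl, dem_elev):
    """Per-sample disparity between sensed and DEM-derived terrain height."""
    return (gps_alt - radar_agl) - dem_elev

gps_alt = np.array([1200.0, 1210.0, 1225.0])   # aircraft altitude (m, MSL)
radar_agl = np.array([400.0, 395.0, 380.0])    # radar altimeter (m, AGL)
dem_elev = np.array([798.0, 812.0, 846.0])     # DEM at aircraft position (m)

d = dem_disparity(gps_alt, radar_agl, dem_elev)
stat = np.sqrt(np.mean(d ** 2))                # simple RMS disparity statistic
print("DEM integrity alert" if stat > 10.0 else f"consistent (rms={stat:.1f} m)")
```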
A Framework for Cloudy Model Optimization and Database Storage
NASA Astrophysics Data System (ADS)
Calvén, Emilia; Helton, Andrew; Sankrit, Ravi
2018-01-01
We present a framework for producing Cloudy photoionization models of the nebular emission from novae ejecta and storing a subset of the results in SQL database format for later usage. The database can be searched for the models best fitting observed spectral line ratios. Additionally, the framework includes an optimization feature that can be used in tandem with the database to search for and improve on models by creating new Cloudy models while varying the parameters. The database search and optimization can be used to explore the structures of nebulae by deriving their properties from the best-fit models. The goal is to provide the community with a large database of Cloudy photoionization models, generated from parameters reflecting conditions within novae ejecta, that can be easily fitted to observed spectral lines, either by directly accessing the database using the framework code or through a website specifically made for this purpose.
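The database-search idea can be illustrated with a small SQL query, wrapped here in Python's sqlite3; the schema and line-ratio columns are hypothetical, not the framework's actual tables.

```python
# Sketch: store model line ratios in SQL and rank models by squared distance
# to observed ratios (hypothetical schema and values).
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE models (id INTEGER, ratio_oiii_hb REAL, ratio_nii_ha REAL)")
con.executemany("INSERT INTO models VALUES (?, ?, ?)",
                [(1, 3.2, 0.4), (2, 5.1, 0.9), (3, 2.8, 0.35)])

obs_oiii_hb, obs_nii_ha = 3.0, 0.38
best = con.execute(
    """SELECT id,
              (ratio_oiii_hb - ?) * (ratio_oiii_hb - ?)
            + (ratio_nii_ha  - ?) * (ratio_nii_ha  - ?) AS dist2
       FROM models ORDER BY dist2 LIMIT 1""",
    (obs_oiii_hb, obs_oiii_hb, obs_nii_ha, obs_nii_ha)).fetchone()
print(best)   # closest model id and its squared distance
```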
Conceptual and logical level of database modeling
NASA Astrophysics Data System (ADS)
Hunka, Frantisek; Matula, Jiri
2016-06-01
Conceptual and logical levels form the topmost levels of database modeling. Usually, ORM (Object Role Modeling) and ER diagrams are utilized to capture the corresponding schema. The final aim of business process modeling is to store its results in the form of a database solution. For this reason, value-oriented business process modeling, which utilizes ER diagrams to express the modeled entities and the relationships between them, is used. However, ER diagrams form the logical level of the database schema. To extend the possibilities of different business process modeling methodologies, the conceptual level of database modeling is needed. The paper deals with the REA value modeling approach to business process modeling using ER diagrams and derives a conceptual model utilizing the ORM modeling approach. The conceptual model extends the possibilities of value modeling to other business modeling approaches.
Benchmarking Using Basic DBMS Operations
NASA Astrophysics Data System (ADS)
Crolotte, Alain; Ghazal, Ahmad
The TPC-H benchmark proved to be successful in the decision support area. Many commercial database vendors and their related hardware vendors used these benchmarks to show the superiority and competitive edge of their products. However, over time, TPC-H became less representative of industry trends as vendors kept tuning their databases to this benchmark-specific workload. In this paper, we present XMarq, a simple benchmark framework that can be used to compare various software/hardware combinations. Our benchmark model is currently composed of 25 queries that measure the performance of basic operations such as scans, aggregations, joins and index access. This benchmark model is based on the TPC-H data model due to its maturity and well-understood data generation capability. We also propose metrics to evaluate single-system performance and compare two systems. Finally we illustrate the effectiveness of this model by showing experimental results comparing two systems under different conditions.
Wills, Chris J.; Weldon, Ray J.; Bryant, W.A.
2008-01-01
This report describes development of fault parameters for the 2007 update of the National Seismic Hazard Maps and the Working Group on California Earthquake Probabilities (WGCEP, 2007). These reference parameters are contained within a database intended to be a source of values for use by scientists interested in producing either seismic hazard or deformation models to better understand the current seismic hazards in California. These parameters include descriptions of the geometry and rates of movements of faults throughout the state. These values are intended to provide a starting point for development of more sophisticated deformation models which include known rates of movement on faults as well as geodetic measurements of crustal movement and the rates of movements of the tectonic plates. The values will be used in developing the next generation of the time-independent National Seismic Hazard Maps, and the time-dependent seismic hazard calculations being developed for the WGCEP. Due to the multiple uses of this information, development of these parameters has been coordinated between USGS, CGS and SCEC. SCEC provided the database development and editing tools, in consultation with USGS, Golden. This database has been implemented in Oracle and supports electronic access (e.g., for on-the-fly access). A GUI-based application has also been developed to aid in populating the database. Both the continually updated 'living' version of this database, as well as any locked-down official releases (e.g., used in a published model for calculating earthquake probabilities or seismic shaking hazards), are part of the USGS Quaternary Fault and Fold Database http://earthquake.usgs.gov/regional/qfaults/. CGS has been primarily responsible for updating and editing of the fault parameters, with extensive input from USGS and SCEC scientists.
Validation of the kinetic-turbulent-neoclassical theory for edge intrinsic rotation in DIII-D
NASA Astrophysics Data System (ADS)
Ashourvan, Arash; Grierson, B. A.; Battaglia, D. J.; Haskey, S. R.; Stoltzfus-Dueck, T.
2018-05-01
In a recent kinetic model of edge main-ion (deuterium) toroidal velocity, intrinsic rotation results from neoclassical orbits in an inhomogeneous turbulent field [T. Stoltzfus-Dueck, Phys. Rev. Lett. 108, 065002 (2012)]. This model predicts a value for the toroidal velocity that is co-current for a typical inboard X-point plasma at the core-edge boundary (ρ ˜ 0.9). Using this model, the velocity prediction is tested on the DIII-D tokamak for a database of L-mode and H-mode plasmas with nominally low neutral beam torque, including both signs of plasma current. Values for the flux-surface-averaged main-ion rotation velocity in the database are obtained from the impurity carbon rotation by analytically calculating the main-ion/impurity neoclassical offset. The deuterium rotation obtained in this manner has been validated by direct main-ion measurements for a limited number of cases. Key theoretical parameters of ion temperature and turbulent scale length are varied across a wide range in an experimental database of discharges. Using a characteristic electron temperature scale length as a proxy for a turbulent scale length, the predicted main-ion rotation velocity shows general agreement with the experimental measurements for neutral beam injection (NBI) powers in the range PNBI < 4 MW. At higher NBI power, the experimental rotation is observed to saturate and even degrade compared to theory. TRANSP-NUBEAM simulations performed for the database show that for discharges with nominally balanced but high-powered NBI, the net injected torque through the edge can exceed 1 Nm in the counter-current direction. The theory model has been extended to compute the rotation degradation from this counter-current NBI torque by solving a reduced momentum evolution equation for the edge; the revised velocity prediction is found to be in agreement with experiment. Using this modeled, and now tested, velocity to predict the bulk plasma rotation opens up a path to more confidently projecting the confinement and stability in ITER.
NASA Astrophysics Data System (ADS)
Powell, C. J.; Jablonski, A.; Werner, W. S. M.; Smekal, W.
2005-01-01
We describe two NIST databases that can be used to characterize thin films from Auger electron spectroscopy (AES) and X-ray photoelectron spectroscopy (XPS) measurements. First, the NIST Electron Effective-Attenuation-Length Database provides values of effective attenuation lengths (EALs) for user-specified materials and measurement conditions. The EALs differ from the corresponding inelastic mean free paths on account of elastic-scattering of the signal electrons. The database supplies "practical" EALs that can be used to determine overlayer-film thicknesses. Practical EALs are plotted as a function of film thickness, and an average value is shown for a user-selected thickness. The average practical EAL can be utilized as the "lambda parameter" to obtain film thicknesses from simple equations in which the effects of elastic-scattering are neglected. A single average practical EAL can generally be employed for a useful range of film thicknesses and for electron emission angles of up to about 60°. For larger emission angles, the practical EAL should be found for the particular conditions. Second, we describe a new NIST database for the Simulation of Electron Spectra for Surface Analysis (SESSA) to be released in 2004. This database provides data for many parameters needed in quantitative AES and XPS (e.g., excitation cross-sections, electron-scattering cross-sections, lineshapes, fluorescence yields, and backscattering factors). Relevant data for a user-specified experiment are automatically retrieved by a small expert system. In addition, Auger electron and photoelectron spectra can be simulated for layered samples. The simulated spectra, for layer compositions and thicknesses specified by the user, can be compared with measured spectra. The layer compositions and thicknesses can then be adjusted to find maximum consistency between simulated and measured spectra, and thus, provide more detailed characterizations of multilayer thin-film materials. SESSA can also provide practical EALs, and we compare values provided by the NIST EAL database and SESSA for hafnium dioxide. Differences of up to 10% were found for film thicknesses less than 20 Å due to the use of different physical models in each database.
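The "simple equations" mentioned above are the standard exponential-attenuation relation: with the average practical EAL L playing the role of the lambda parameter, a substrate signal attenuated from I0 to I by an overlayer at emission angle theta gives a thickness t = L cos(theta) ln(I0/I). A minimal sketch with illustrative numbers (not values from the NIST databases):

```python
import math

def film_thickness(L_nm, theta_deg, I0, I):
    """Overlayer thickness t = L * cos(theta) * ln(I0 / I), neglecting
    elastic scattering; L is the average practical EAL in nm."""
    return L_nm * math.cos(math.radians(theta_deg)) * math.log(I0 / I)

print(film_thickness(L_nm=2.0, theta_deg=45.0, I0=1.0, I=0.35))  # ~1.49 nm
```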
Accelerating Information Retrieval from Profile Hidden Markov Model Databases.
Tamimi, Ahmad; Ashhab, Yaqoub; Tamimi, Hashem
2016-01-01
Profile Hidden Markov Model (Profile-HMM) is an efficient statistical approach to represent protein families. Currently, several databases maintain valuable protein sequence information as profile-HMMs. There is an increasing interest to improve the efficiency of searching Profile-HMM databases to detect sequence-profile or profile-profile homology. However, most efforts to enhance searching efficiency have focused on improving the alignment algorithms. Although the performance of these algorithms is fairly acceptable, the growing size of these databases, as well as the increasing demand for using batch query searching approaches, are strong motivations that call for further enhancement of information retrieval from profile-HMM databases. This work presents a heuristic method to accelerate the current profile-HMM homology searching approaches. The method works by cluster-based remodeling of the database to reduce the search space, rather than focusing on the alignment algorithms. Using different clustering techniques, 4284 TIGRFAMs profiles were clustered based on their similarities. A representative for each cluster was assigned. To enhance sensitivity, we proposed an extended step that allows overlapping among clusters. A validation benchmark of 6000 randomly selected protein sequences was used to query the clustered profiles. To evaluate the efficiency of our approach, speed and recall values were measured and compared with the sequential search approach. Using hierarchical, k-means, and connected component clustering techniques followed by the extended overlapping step, we obtained an average reduction in time of 41% and an average recall of 96%. Our results demonstrate that representation of profile-HMMs using a clustering-based approach can significantly accelerate data retrieval from profile-HMM databases.
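The two-stage idea, scoring the query against cluster representatives first and then only against members of the best clusters, can be sketched independently of any particular scorer. The toy k-mer overlap below stands in for a real profile alignment score (e.g., an hmmscan bit score), and all data are invented:

```python
def kmer_set(s, k=3):
    return {s[i:i + k] for i in range(len(s) - k + 1)}

def score(query, profile, k=3):
    # Toy stand-in for a real profile-HMM alignment score
    return len(kmer_set(query, k) & kmer_set(profile, k))

def cluster_search(query, clusters, representatives, top_k=2):
    """Two-stage search: rank clusters by their representative's score,
    then score only the members of the top_k clusters."""
    ranked = sorted(representatives,
                    key=lambda c: score(query, representatives[c]),
                    reverse=True)
    hits = [(p, score(query, p)) for c in ranked[:top_k] for p in clusters[c]]
    return sorted(hits, key=lambda h: h[1], reverse=True)

clusters = {"A": ["MKVLATTLLG", "MKVLATSLLG"], "B": ["GGSHHHHHHG"]}
reps = {"A": "MKVLATTLLG", "B": "GGSHHHHHHG"}
print(cluster_search("MKVLATTLAG", clusters, reps, top_k=1))
```

Allowing a profile to belong to several clusters (the overlap step) trades some of the time savings for recall, which is the balance the 41%/96% figures describe.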
SORTEZ: a relational translator for NCBI's ASN.1 database.
Hart, K W; Searls, D B; Overton, G C
1994-07-01
The National Center for Biotechnology Information (NCBI) has created a database collection that includes several protein and nucleic acid sequence databases, a biosequence-specific subset of MEDLINE, as well as value-added information such as links between similar sequences. Information in the NCBI database is modeled in Abstract Syntax Notation 1 (ASN.1), an Open Systems Interconnection protocol designed for the purpose of exchanging structured data between software applications rather than as a data model for database systems. While the NCBI database is distributed with an easy-to-use information retrieval system, ENTREZ, the ASN.1 data model currently lacks an ad hoc query language for general-purpose data access. For that reason, we have developed a software package, SORTEZ, that transforms the ASN.1 database (or other databases with nested data structures) to a relational data model and subsequently to a relational database management system (Sybase) where information can be accessed through the relational query language, SQL. Because the need to transform data from one data model and schema to another arises naturally in several important contexts, including efficient execution of specific applications, access to multiple databases, and adaptation to database evolution, this work also serves as a practical study of the issues involved in the various stages of database transformation. We show that transformation from the ASN.1 data model to a relational data model can be largely automated, but that schema transformation and data conversion require considerable domain expertise and would greatly benefit from additional support tools.
Extraterrestrial cold chemistry. A need for a specific database.
NASA Astrophysics Data System (ADS)
Pernot, P.; Carrasco, N.; Dobrijevic, M.; Hébrard, E.; Plessis, S.; Wakelam, V.
2008-09-01
The major resource databases for building chemical models for photochemistry in cold environments are mainly based on those designed for Earth atmospheric chemistry or combustion, in which reaction rates are reported for temperatures typically above 300 K [1,2]. Kinetic data measured at low temperatures are very sparse; for instance, in state-of-the-art photochemical models of Titan's atmosphere, less than 10% of the rates have been measured in the relevant temperature range (100-200 K) [3-5]. In consequence, photochemical models rely mostly on low-temperature extrapolations by Arrhenius-type laws. There is more and more evidence that this is often inappropriate [6], and low-temperature extrapolations are hindered by very high uncertainty [3] (Fig. 1). The predictions of models based on those extrapolations are expected to be very inaccurate [4,7]. We argue that there is not much sense in increasing the complexity of the present models as long as this predictivity issue has not been resolved. Fig. 1: Uncertainty of the low-temperature extrapolation of the N(2D) + C2H4 reaction rate, from measurements in the range 225-292 K [10], assuming an Arrhenius law (blue line). The sample of rate laws is generated by Monte Carlo uncertainty propagation after a Bayesian Data reAnalysis (BDA) of the experimental data. A dialogue between modellers and experimentalists is necessary to improve this situation. Considering the heavy costs of low-temperature reaction kinetics experiments, the identification of key reactions has to be based on an optimal strategy to improve the predictivity of photochemical models. This can be achieved by global sensitivity analysis, as illustrated on Titan atmospheric chemistry [8]. The main difficulty of this scheme is that it requires a lot of inputs, mainly the evaluation of uncertainty for extrapolated reaction rates. Although a large part has already been achieved by Hébrard et al. [3], extension and validation require a group of experts. A new generation of collaborative kinetic databases is needed to implement this scheme efficiently. The KIDA project [9], initiated by V. Wakelam for astrochemistry, has been joined by planetologists with similar prospects. EuroPlaNet will contribute to this effort through the organization of committees of experts on specific processes in atmospheric photochemistry.
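To see why Arrhenius extrapolation far below the measured range carries such large uncertainty, one can refit perturbed data many times and propagate the scatter, which is roughly the Monte Carlo scheme the figure caption describes. The rates, uncertainty, and target temperature below are invented for illustration and are not the N(2D) + C2H4 data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical rate coefficients (cm^3 s^-1) measured at 225-292 K
T = np.array([225.0, 250.0, 270.0, 292.0])
k = np.array([4.2e-11, 3.6e-11, 3.2e-11, 2.9e-11])
sigma_lnk = 0.15  # assumed ~15% measurement uncertainty

# Monte Carlo: refit ln k = ln A - Ea/(R*T) to perturbed data,
# then extrapolate each fitted law down to 150 K
k150 = []
for _ in range(2000):
    lnk = np.log(k) + rng.normal(0.0, sigma_lnk, size=k.size)
    slope, intercept = np.polyfit(1.0 / T, lnk, 1)  # slope = -Ea/R
    k150.append(np.exp(intercept + slope / 150.0))

lo, med, hi = np.percentile(k150, [2.5, 50.0, 97.5])
print(f"k(150 K) ~ {med:.2e} [{lo:.2e}, {hi:.2e}]")  # interval spans a wide range
```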
A Model Based Mars Climate Database for the Mission Design
NASA Technical Reports Server (NTRS)
2005-01-01
A viewgraph presentation on a model based climate database is shown. The topics include: 1) Why a model based climate database?; 2) Mars Climate Database v3.1: Who uses it? (approx. 60 users!); 3) The new Mars Climate Database MCD v4.0; 4) MCD v4.0: what's new?; 5) Simulation of water ice clouds; 6) Simulation of the water ice cycle; 7) A new tool for surface pressure prediction; 8) Access to the database MCD 4.0; 9) How to access the database; and 10) New web access.
Model-Based, Noninvasive Monitoring of Intracranial Pressure
2012-10-01
…(nICP) estimate requires simultaneous measurement of the waveforms of arterial blood pressure (ABP), obtained via radial artery catheter or finger… initial database comprises subarachnoid hemorrhage patients in neuro-intensive care at our partner hospital, for whom ICP, ABP and CBFV are currently…
Managing hydrological measurements for small and intermediate projects: RObsDat
NASA Astrophysics Data System (ADS)
Reusser, Dominik E.
2014-05-01
Hydrological measurements need good management for the data not to be lost. Multiple, often overlapping files from various loggers with heterogeneous formats need to be merged. Data need to be validated and cleaned and subsequently converted to the format of the hydrological target application. Preferably, all these steps should be easily traceable. RObsDat is an R package designed to support such data management. It comes with a command line user interface to help hydrologists enter and adjust their data in a database following the Observations Data Model (ODM) standard by CUAHSI. RObsDat helps in the setup of the database within one of the free database engines MySQL, PostgreSQL or SQLite. It imports the controlled water vocabulary from the CUAHSI web service and provides a smart interface between the hydrologist and the database: already existing data entries are detected and duplicates avoided. The data import function converts different data table designs to make import simple. Cleaning and modifications of data are handled with a simple version control system. Variable and location names are treated in a user-friendly way, accepting and processing multiple versions. A new development is the use of spacetime objects for subsequent processing.
New database for improving virtual system “body-dress”
NASA Astrophysics Data System (ADS)
Yan, J. Q.; Zhang, S. C.; Kuzmichev, V. E.; Adolphe, D. C.
2017-10-01
The aim of this exploration is to develop a new database of solid algorithms and relations between dress fit, fabric mechanical properties, and pattern block construction, for improving the realism of the virtual system “body-dress”. In virtual simulation, the system “body-clothing” sometimes shows results distinct from reality, especially when important changes in pattern block and fabrics are involved. In this research, to enhance the simulation process, diverse fit parameters were proposed: bottom height of dress, angle of front center contours, and air volume and its distribution between dress and dummy. Measurements were done and optimized by ruler, camera, 3D body scanner image processing software and 3D modeling software. In the meantime, pattern block indexes were measured and fabric properties were tested by KES. Finally, the correlation and linear regression equations between indexes of fabric properties, pattern blocks and fit parameters were investigated. In this manner, the new database could be integrated into programming modules of virtual design for more realistic results.
Databases for multilevel biophysiology research available at Physiome.jp.
Asai, Yoshiyuki; Abe, Takeshi; Li, Li; Oka, Hideki; Nomura, Taishin; Kitano, Hiroaki
2015-01-01
Physiome.jp (http://physiome.jp) is a portal site inaugurated in 2007 to support model-based research in physiome and systems biology. At Physiome.jp, several tools and databases are available to support construction of physiological, multi-hierarchical, large-scale models. There are three databases in Physiome.jp, housing mathematical models, morphological data, and time-series data. In late 2013, the site was fully renovated, and in May 2015, new functions were implemented to provide information infrastructure to support collaborative activities for developing models and performing simulations within the database framework. This article describes updates to the databases implemented since 2013, including cooperation among the three databases, interactive model browsing, user management, version management of models, management of parameter sets, and interoperability with applications.
Specification of the ISS Plasma Environment Variability
NASA Technical Reports Server (NTRS)
Minow, Joseph I.; Neergaard, Linda F.; Bui, Them H.; Mikatarian, Ronald R.; Barsamian, H.; Koontz, Steven L.
2002-01-01
Quantifying the spacecraft charging risks and corresponding hazards for the International Space Station (ISS) requires a plasma environment specification describing the natural variability of ionospheric temperature (Te) and density (Ne). Empirical ionospheric specification and forecast models such as the International Reference Ionosphere (IRI) model typically only provide estimates of long term (seasonal) mean Te and Ne values for the low Earth orbit environment. Knowledge of the Te and Ne variability as well as the likelihood of extreme deviations from the mean values is required to estimate both the magnitude and frequency of occurrence of potentially hazardous spacecraft charging environments for a given ISS construction stage and flight configuration. This paper describes the statistical analysis of historical ionospheric low Earth orbit plasma measurements used to estimate Ne, Te variability in the ISS flight environment. The statistical variability analysis of Ne and Te enables calculation of the expected frequency of occurrence of any particular values of Ne and Te, especially those that correspond to possibly hazardous spacecraft charging environments. The database used in the original analysis included measurements from the AE-C, AE-D, and DE-2 satellites. Recent work on the database has added additional satellites to the database and ground based incoherent scatter radar observations as well. Deviations of the data values from the IRI estimated Ne, Te parameters for each data point provide a statistical basis for modeling the deviations of the plasma environment from the IRI model output. This technique, while developed specifically for the Space Station analysis, can also be generalized to provide ionospheric plasma environment risk specification models for low Earth orbit over an altitude range of 200 km through approximately 1000 km.
NASA Astrophysics Data System (ADS)
Gusman, A. R.; Setiyono, U.; Satake, K.; Fujii, Y.
2017-12-01
We built a pre-computed tsunami inundation database for Pelabuhan Ratu, one of the tsunami-prone areas on the southern coast of Java, Indonesia. The tsunami database can be employed for a rapid estimation of tsunami inundation during an event. The pre-computed tsunami waveforms and inundations are from a total of 340 scenarios ranging from 7.5 to 9.2 in moment magnitude (Mw), including simple fault models of 208 thrust faults and 44 tsunami earthquakes on the plate interface, as well as 44 normal faults and 44 reverse faults in the outer-rise region. Using our tsunami inundation forecasting algorithm (NearTIF), we could rapidly estimate the tsunami inundation in Pelabuhan Ratu for three different hypothetical earthquakes. The first hypothetical earthquake is a megathrust earthquake type (Mw 9.0) offshore Sumatra, about 600 km from Pelabuhan Ratu, representing a worst-case event in the far field. The second hypothetical earthquake (Mw 8.5) is based on a slip deficit rate estimated from geodetic measurements and represents the most likely large event near Pelabuhan Ratu. The third hypothetical earthquake is a tsunami earthquake type (Mw 8.1), which often occurs south of Java. We compared the tsunami inundation maps produced by the NearTIF algorithm with results of direct forward inundation modeling for the hypothetical earthquakes. The tsunami inundation maps produced from both methods are similar for the three cases. However, the tsunami inundation map from the inundation database can be obtained in a much shorter time (1 min) than one from forward inundation modeling (40 min). These results indicate that the NearTIF algorithm based on a pre-computed inundation database is reliable and useful for tsunami warning purposes. This study also demonstrates that the NearTIF algorithm can work well even when the earthquake source is located outside the area of the fault model database, because it uses a time-shifting procedure in the search for the best-fit scenario.
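The time-shifting search can be pictured as a brute-force scan over stored scenarios and lags, keeping whichever pre-computed waveform best matches the observation and serving its stored inundation map. The function below is a schematic reading of that idea (np.roll wraps around; a real implementation would pad instead), with all names and data invented:

```python
import numpy as np

def best_scenario(observed, database, max_shift=30):
    """Scan pre-computed scenarios and time shifts for the best fit.

    database: {scenario_id: (waveform, inundation_map)} with waveforms
    on the same time grid as 'observed'. Returns (scenario_id, shift, rms).
    """
    best = (None, 0, float("inf"))
    for sid, (wave, _) in database.items():
        for s in range(-max_shift, max_shift + 1):
            rms = np.sqrt(np.mean((observed - np.roll(wave, s)) ** 2))
            if rms < best[2]:
                best = (sid, s, rms)
    return best

t = np.linspace(0, 20, 400)
db = {"thrust_042": (np.sin(t), "map42"),
      "outer_rise_007": (0.4 * np.sin(2 * t), "map7")}
obs = np.roll(np.sin(t), 12) + 0.05 * np.random.default_rng(0).normal(size=t.size)
print(best_scenario(obs, db))  # expect thrust_042 at a shift near 12
```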
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michel-Sendis, F.; Gauld, I.; Martinez, J. S.; ...
2017-08-02
SFCOMPO-2.0 is the new release of the Organisation for Economic Co-operation and Development (OECD) Nuclear Energy Agency (NEA) database of experimental assay measurements. These measurements are isotopic concentrations from destructive radiochemical analyses of spent nuclear fuel (SNF) samples. We supplement the measurements with design information for the fuel assembly and fuel rod from which each sample was taken, as well as with relevant information on operating conditions and characteristics of the host reactors. These data are necessary for modeling and simulation of the isotopic evolution of the fuel during irradiation. SFCOMPO-2.0 has been developed and is maintained by the OECD NEA under the guidance of the Expert Group on Assay Data of Spent Nuclear Fuel (EGADSNF), which is part of the NEA Working Party on Nuclear Criticality Safety (WPNCS). Significant efforts aimed at establishing a thorough, reliable, publicly available resource for code validation and safety applications have led to the capture and standardization of experimental data from 750 SNF samples from more than 40 reactors. These efforts have resulted in the creation of the SFCOMPO-2.0 database, which is publicly available from the NEA Data Bank. Our paper describes the new database, and applications of SFCOMPO-2.0 for computer code validation, integral nuclear data benchmarking, and uncertainty analysis in nuclear waste package analysis are briefly illustrated.
A Novel Database to Rank and Display Archeomagnetic Intensity Data
NASA Astrophysics Data System (ADS)
Donadini, F.; Korhonen, K.; Riisager, P.; Pesonen, L. J.; Kahma, K.
2005-12-01
To understand the content and the causes of the changes in the Earth's magnetic field beyond the observatory records, one has to rely on archeomagnetic and lake sediment paleomagnetic data. The regional archeointensity curves are often of different quality and temporally variable, which hampers the global analysis of the data in terms of dipole vs non-dipole field. We have developed a novel archeointensity database application utilizing MySQL, PHP (PHP Hypertext Preprocessor), and the Generic Mapping Tools (GMT) for ranking and displaying geomagnetic intensity data from the last 12,000 years. Our application has the advantage that no specific software is required to query the database and view the results. Querying the database is performed using any Web browser; a fill-out form is used to enter the site location and a minimum ranking value to select the data points to be displayed. The form also allows plotting the data as an archeointensity curve with error bars, or as a Virtual Axial Dipole Moment (VADM) or ancient field value (Ba) curve calculated using the CALS7K model (Continuous Archaeomagnetic and Lake Sediment geomagnetic model) of Korte and Constable (2005). The results of a query are displayed on a Web page containing a table summarizing the query parameters, a table showing the archeointensity values satisfying the query parameters, and a plot of VADM or Ba as a function of sample age. The database consists of eight related tables. The main one, INTENSITIES, stores the 3704 archeointensity measurements collected from 159 publications as VADM (and VDM when available) and Ba values, including their standard deviations and sampling locations. It also contains the number of samples and specimens measured from each site. The REFS table stores the references to a particular study. The names, latitudes, and longitudes of the regions where the samples were collected are stored in the SITES table. The MATERIALS, METHODS, SPECIMEN_TYPES and DATING_METHODS tables store information about the sample materials, intensity determination methods, specimen types and age determination methods. The SIGMA_COUNT table is used indirectly for ranking data according to the number of samples measured and their standard deviations. Each intensity measurement is assigned a score (0-2) depending on the number of specimens measured and their standard deviations, the intensity determination method, the type of specimens measured, and the materials. The ranking of each data point is calculated as the sum of the four scores and varies between 0 and 8. Additionally, users can select the parameters that will be included in the ranking.
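The 0-8 rank is simply the sum of four 0-2 scores. A toy scoring function makes the mechanics concrete; the thresholds and category scores below are invented, since the abstract does not give the actual rules stored in SIGMA_COUNT and the related tables:

```python
def rank_measurement(n_specimens, rel_std, method, material):
    """Rank an archeointensity measurement 0-8 as the sum of four
    0-2 scores (illustrative thresholds, not the database's)."""
    s_n = 2 if n_specimens >= 5 else 1 if n_specimens >= 2 else 0
    s_sd = 2 if rel_std <= 0.05 else 1 if rel_std <= 0.15 else 0
    s_method = {"thellier_pTRM_checks": 2, "thellier": 1}.get(method, 0)
    s_material = {"pottery": 2, "baked_clay": 2, "lava": 1}.get(material, 0)
    return s_n + s_sd + s_method + s_material

print(rank_measurement(6, 0.04, "thellier_pTRM_checks", "pottery"))  # 8
print(rank_measurement(2, 0.20, "other", "sediment"))                # 1
```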
Data model and relational database design for the New England Water-Use Data System (NEWUDS)
Tessler, Steven
2001-01-01
The New England Water-Use Data System (NEWUDS) is a database for the storage and retrieval of water-use data. NEWUDS can handle data covering many facets of water use, including (1) tracking various types of water-use activities (withdrawals, returns, transfers, distributions, consumptive-use, wastewater collection, and treatment); (2) the description, classification and location of places and organizations involved in water-use activities; (3) details about measured or estimated volumes of water associated with water-use activities; and (4) information about data sources and water resources associated with water use. In NEWUDS, each water transaction occurs unidirectionally between two site objects, and the sites and conveyances form a water network. The core entities in the NEWUDS model are site, conveyance, transaction/rate, location, and owner. Other important entities include water resources (used for withdrawals and returns), data sources, and aliases. Multiple water-exchange estimates can be stored for individual transactions based on different methods or data sources. Storage of user-defined details is accommodated for several of the main entities. Numerous tables containing classification terms facilitate detailed descriptions of data items and can be used for routine or custom data summarization. NEWUDS handles single-user and aggregate-user water-use data, can be used for large or small water-network projects, and is available as a stand-alone Microsoft Access database structure. Users can customize and extend the database, link it to other databases, or implement the design in other relational database applications.
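The site-conveyance-transaction core translates directly into relational DDL. A deliberately minimal SQLite sketch of that pattern follows (table and column names are mine, not NEWUDS's; the real design adds location, owner, alias, and many classification tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE site (site_id INTEGER PRIMARY KEY, name TEXT, kind TEXT);
CREATE TABLE conveyance (
    conveyance_id INTEGER PRIMARY KEY,
    from_site INTEGER REFERENCES site(site_id),
    to_site   INTEGER REFERENCES site(site_id));
CREATE TABLE txn (
    txn_id INTEGER PRIMARY KEY,
    conveyance_id INTEGER REFERENCES conveyance(conveyance_id),
    kind TEXT,        -- withdrawal, return, transfer, ...
    volume_mgd REAL,  -- volume, million gallons per day
    method TEXT);     -- measured vs. estimated, per data source
""")
conn.execute("INSERT INTO site VALUES (1, 'Well A', 'groundwater')")
conn.execute("INSERT INTO site VALUES (2, 'Plant B', 'treatment')")
conn.execute("INSERT INTO conveyance VALUES (1, 1, 2)")
conn.execute("INSERT INTO txn VALUES (1, 1, 'withdrawal', 0.75, 'measured')")
print(conn.execute("SELECT kind, volume_mgd FROM txn").fetchall())
```

Each transaction hangs off a directed conveyance between exactly two sites, which is what lets the sites and conveyances form a water network.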
Interactive Exploration for Continuously Expanding Neuron Databases.
Li, Zhongyu; Metaxas, Dimitris N; Lu, Aidong; Zhang, Shaoting
2017-02-15
This paper proposes a novel framework to help biologists explore and analyze neurons based on retrieval of data from neuron morphological databases. In recent years, the continuously expanding neuron databases provide a rich source of information to associate neuronal morphologies with their functional properties. We design a coarse-to-fine framework for efficient and effective data retrieval from large-scale neuron databases. At the coarse level, for efficiency at large scale, we employ a binary coding method to compress morphological features into binary codes of tens of bits. Short binary codes allow for real-time similarity searching in Hamming space. Because the neuron databases are continuously expanding, it is inefficient to re-train the binary coding model from scratch when adding new neurons. To solve this problem, we extend binary coding with online updating schemes, which only consider the newly added neurons and update the model on the fly, without accessing the whole neuron databases. At the fine-grained level, we introduce domain experts/users into the framework, who can give relevance feedback on the binary-coding-based retrieval results. This interactive strategy can improve the retrieval performance by re-ranking the above coarse results, where we design a new similarity measure and take the feedback into account. Our framework is validated on more than 17,000 neuron cells, showing promising retrieval accuracy and efficiency. Moreover, we demonstrate its use case in assisting biologists to identify and explore unknown neurons.
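Searching tens-of-bits binary codes in Hamming space needs only XOR and popcount, and "online updating" of the index itself reduces, at its simplest, to appending rows of codes. A sketch follows; the binary coding model (how morphological features become bits) is the paper's contribution and is not reproduced here:

```python
import numpy as np

def hamming_knn(query_code, codes, k=5):
    """k-NN over packed uint8 binary codes in Hamming space.
    codes: (n, n_bytes) uint8; query_code: (n_bytes,) uint8."""
    dists = np.unpackbits(codes ^ query_code, axis=1).sum(axis=1)
    nearest = np.argsort(dists)[:k]
    return nearest, dists[nearest]

rng = np.random.default_rng(1)
codes = rng.integers(0, 256, size=(10000, 8), dtype=np.uint8)  # 64-bit codes
idx, d = hamming_knn(codes[42], codes)
print(idx[0], d[0])  # 42, 0: the query matches itself exactly
```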
Applications of spatial statistical network models to stream data
Isaak, Daniel J.; Peterson, Erin E.; Ver Hoef, Jay M.; Wenger, Seth J.; Falke, Jeffrey A.; Torgersen, Christian E.; Sowder, Colin; Steel, E. Ashley; Fortin, Marie-Josée; Jordan, Chris E.; Ruesch, Aaron S.; Som, Nicholas; Monestiez, Pascal
2014-01-01
Streams and rivers host a significant portion of Earth's biodiversity and provide important ecosystem services for human populations. Accurate information regarding the status and trends of stream resources is vital for their effective conservation and management. Most statistical techniques applied to data measured on stream networks were developed for terrestrial applications and are not optimized for streams. A new class of spatial statistical model, based on valid covariance structures for stream networks, can be used with many common types of stream data (e.g., water quality attributes, habitat conditions, biological surveys) through application of appropriate distributions (e.g., Gaussian, binomial, Poisson). The spatial statistical network models account for spatial autocorrelation (i.e., nonindependence) among measurements, which allows their application to databases with clustered measurement locations. Large amounts of stream data exist in many areas where spatial statistical analyses could be used to develop novel insights, improve predictions at unsampled sites, and aid in the design of efficient monitoring strategies at relatively low cost. We review the topic of spatial autocorrelation and its effects on statistical inference, demonstrate the use of spatial statistics with stream datasets relevant to common research and management questions, and discuss additional applications and development potential for spatial statistics on stream networks. Free software for implementing the spatial statistical network models has been developed that enables custom applications with many stream databases.
NASA Astrophysics Data System (ADS)
Prata, F.; Stebel, K.
2013-12-01
Over the last few years there has been a recognition of the utility of satellite measurements to identify and track volcanic emissions that present a natural hazard to human populations. Mitigation of the volcanic hazard to life and the environment requires understanding of the properties of volcanic emissions, identifying the hazard in near real-time and being able to provide timely and accurate forecasts to affected areas. Amongst the many ways to measure volcanic emissions, satellite remote sensing is capable of providing global quantitative retrievals of important microphysical parameters such as ash mass loading, ash particle effective radius, infrared optical depth, SO2 partial and total column abundance, plume altitude, aerosol optical depth and aerosol absorbing index. The eruption of Eyjafjallajokull in April-May 2010 led to increased research and measurement programs to better characterize properties of volcanic ash, and the need to establish a data-base in which to store and access these data was confirmed. The European Space Agency (ESA) has recognized the importance of having a quality-controlled data-base of satellite retrievals and has funded an activity (VAST) to develop novel remote sensing retrieval schemes and a data-base, initially focused on several recent hazardous volcanic eruptions. As a first step, satellite retrievals for the eruptions of Eyjafjallajokull, Grimsvotn, Puyehue-Cordón Caulle, Nabro, Merapi, Okmok, Kasatochi and Sarychev Peak are being considered. Here we describe the data, retrievals and methods being developed for the data-base. Three important applications of the data-base are illustrated, related to the ash/aviation problem, to the impact of the Merapi volcanic eruption on the local population, and to estimating SO2 fluxes from active volcanoes as a means to diagnose future unrest. Dispersion model simulations are also being included in the data-base. In time, data from conventional in situ sampling instruments, airborne and ground-based remote sensing platforms and other meta-data (bulk ash and gas properties, volcanic setting, volcanic eruption chronologies, hazards and impacts, etc.) will be added. The data-base has the potential to provide the natural hazards community with the first dynamic atmospheric volcanic hazards map and will be a valuable tool, particularly for global transport.
Exploring Human Cognition Using Large Image Databases.
Griffiths, Thomas L; Abbott, Joshua T; Hsu, Anne S
2016-07-01
Most cognitive psychology experiments evaluate models of human cognition using a relatively small, well-controlled set of stimuli. This approach stands in contrast to current work in neuroscience, perception, and computer vision, which have begun to focus on using large databases of natural images. We argue that natural images provide a powerful tool for characterizing the statistical environment in which people operate, for better evaluating psychological theories, and for bringing the insights of cognitive science closer to real applications. We discuss how some of the challenges of using natural images as stimuli in experiments can be addressed through increased sample sizes, using representations from computer vision, and developing new experimental methods. Finally, we illustrate these points by summarizing recent work using large image databases to explore questions about human cognition in four different domains: modeling subjective randomness, defining a quantitative measure of representativeness, identifying prior knowledge used in word learning, and determining the structure of natural categories.
The BaMM web server for de-novo motif discovery and regulatory sequence analysis.
Kiesel, Anja; Roth, Christian; Ge, Wanwan; Wess, Maximilian; Meier, Markus; Söding, Johannes
2018-05-28
The BaMM web server offers four tools: (i) de-novo discovery of enriched motifs in a set of nucleotide sequences, (ii) scanning a set of nucleotide sequences with motifs to find motif occurrences, (iii) searching with an input motif for similar motifs in our BaMM database with motifs for >1000 transcription factors, trained from the GTRD ChIP-seq database and (iv) browsing and keyword searching the motif database. In contrast to most other servers, we represent sequence motifs not by position weight matrices (PWMs) but by Bayesian Markov Models (BaMMs) of order 4, which we showed previously to perform substantially better in ROC analyses than PWMs or first order models. To address the inadequacy of P- and E-values as measures of motif quality, we introduce the AvRec score, the average recall over the TP-to-FP ratio between 1 and 100. The BaMM server is freely accessible without registration at https://bammmotif.mpibpc.mpg.de.
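The AvRec score admits a compact reference computation: rank all motif scores, and for each target TP-to-FP ratio record the highest recall at which the ranking still achieves that ratio, averaging over log-spaced ratios from 1 to 100. The sketch below is one plausible reading of that definition, not the server's code, and its inputs are synthetic:

```python
import numpy as np

def avrec(scores_pos, scores_neg, ratios=None):
    """Average recall over TP-to-FP ratios from 1 to 100 (sketch).

    For each target ratio r, take the deepest cutoff in the ranked
    list that still satisfies TP/FP >= r and record its recall; AvRec
    is the mean of these recalls over log-spaced ratios."""
    if ratios is None:
        ratios = np.logspace(0.0, 2.0, 50)
    scores = np.concatenate([scores_pos, scores_neg])
    is_pos = np.concatenate([np.ones(len(scores_pos), bool),
                             np.zeros(len(scores_neg), bool)])
    order = np.argsort(-scores)
    tp = np.cumsum(is_pos[order])
    fp = np.cumsum(~is_pos[order])
    recalls = []
    for r in ratios:
        ok = np.nonzero(tp >= r * fp)[0]
        recalls.append(tp[ok[-1]] / len(scores_pos) if ok.size else 0.0)
    return float(np.mean(recalls))

rng = np.random.default_rng(0)
print(avrec(rng.normal(2.0, 1.0, 500), rng.normal(0.0, 1.0, 5000)))
```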
Zhou, Hufeng; Jin, Jingjing; Zhang, Haojun; Yi, Bo; Wozniak, Michal; Wong, Limsoon
2012-01-01
Pathway data are important for understanding the relationship between genes, proteins and many other molecules in living organisms. Pathway gene relationships are crucial information for guidance, prediction, reference and assessment in biochemistry, computational biology, and medicine. Many well-established databases (e.g., KEGG, WikiPathways, and BioCyc) are dedicated to collecting pathway data for public access. However, the effectiveness of these databases is hindered by issues such as incompatible data formats, inconsistent molecular representations, inconsistent molecular relationship representations, inconsistent referrals to pathway names, and incomplete coverage in any single database. In this paper, we overcome these issues through extraction, normalization and integration of pathway data from several major public databases (KEGG, WikiPathways, BioCyc, etc.). We build a database that not only hosts our integrated pathway gene relationship data for public access but also maintains the necessary updates in the long run. This public repository is named IntPath (Integrated Pathway gene relationship database for model organisms and important pathogens). Four organisms (S. cerevisiae, M. tuberculosis H37Rv, H. sapiens and M. musculus) are included in this version (V2.0) of IntPath. IntPath uses the "full unification" approach to ensure no deletion and no introduced noise in this process. Therefore, IntPath contains much richer pathway-gene and pathway-gene pair relationships and a much larger number of non-redundant genes and gene pairs than any of the single-source databases. The gene relationships of each gene (measured by average node degree) per pathway are significantly richer. The gene relationships in each pathway (measured by average number of gene pairs per pathway) are also considerably richer in the integrated pathways. Moderate manual curation is involved to remove errors and noise from the source data (e.g., the gene ID errors in WikiPathways and relationship errors in KEGG). We turn complicated and incompatible XML data formats and inconsistent gene and gene relationship representations from different source databases into normalized and unified pathway-gene and pathway-gene pair relationships neatly recorded in simple tab-delimited text format and MySQL tables, which facilitates convenient automatic computation and large-scale referencing in many related studies. IntPath data can be downloaded in text format or as a MySQL dump. IntPath data can also be retrieved and analyzed conveniently through web service by local programs or through the web interface by mouse clicks. Several useful analysis tools are also provided in IntPath. We have overcome in IntPath the issues of compatibility, consistency, and comprehensiveness that often hamper effective use of pathway databases. We have included four organisms in the current release of IntPath. Our methodology and programs described in this work can be easily applied to other organisms, and we will include more model organisms and important pathogens in future releases of IntPath. IntPath maintains regular updates and is freely available at http://compbio.ddns.comp.nus.edu.sg:8080/IntPath.
Caron, Alexandre; Clement, Guillaume; Heyman, Christophe; Aernout, Eva; Chazard, Emmanuel; Le Tertre, Alain
2015-01-01
Incompleteness of epidemiological databases is a major drawback when it comes to analyzing data. We conceived an epidemiological study to assess the association between newborn thyroid function and exposure to perchlorates found in the tap water of the mother's home. Perchlorate exposure was known for only 9% of newborns. The aim of our study was to design, test and evaluate an original method for imputing the perchlorate exposure of newborns based on their maternity of birth. A first database exhaustively collected newborns' thyroid function values measured during systematic neonatal screening. In this database the municipality of residence of the newborn's mother was only available for 2012. Between 2004 and 2011, the closest data available was the municipality of the maternity of birth. Exposure was assessed using a second database which contained the perchlorate levels for each municipality. We computed the catchment area of every maternity ward based on the French nationwide exhaustive database of inpatient stays. The municipality, and consequently the perchlorate exposure, was imputed by a weighted draw in the catchment area. Missing values for the remaining covariates were imputed by chained equations. A linear mixture model was computed on each imputed dataset. We compared odds ratios (ORs) and 95% confidence intervals (95% CI) estimated on real versus imputed 2012 data. The same model was then carried out for the whole imputed database. The ORs estimated on 36,695 observations by our multiple imputation method are comparable to those from the real 2012 data. On the 394,979 observations of the whole database, the ORs remain stable but the 95% CIs tighten considerably. The model estimates computed on imputed data are similar to those calculated on real data. The main advantage of multiple imputation is to provide unbiased estimates of the ORs while maintaining their variances. Thus, our method will be used to increase the statistical power of future studies by including all 394,979 newborns.
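The imputation step itself is a weighted categorical draw: the maternity's catchment area supplies candidate municipalities and weights (e.g., shares of deliveries), and each draw assigns a municipality, hence a tap-water perchlorate level, to a newborn. A minimal sketch with invented names and numbers; in multiple imputation the draw is repeated to produce several completed datasets:

```python
import random

def impute_exposure(maternity, catchment, perchlorate, rng):
    """Draw a municipality within the maternity's catchment area and
    return it with its tap-water perchlorate level."""
    towns, weights = zip(*catchment[maternity])
    town = rng.choices(towns, weights=weights, k=1)[0]
    return town, perchlorate[town]

catchment = {"Maternity X": [("Town A", 0.6), ("Town B", 0.3), ("Town C", 0.1)]}
perchlorate = {"Town A": 4.0, "Town B": 15.0, "Town C": 0.5}  # ug/L, invented

rng = random.Random(0)
draws = [impute_exposure("Maternity X", catchment, perchlorate, rng)
         for _ in range(5)]  # one draw per imputed dataset
print(draws)
```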
ERIC Educational Resources Information Center
Castillo, Jose M.; March, Amanda L.; Stockslager, Kevin M.; Hines, Constance V.
2016-01-01
The "Perceptions of RtI Skills Survey" is a self-report measure that assesses educators' perceptions of their data-based problem-solving skills--a critical element of many Response-to-Intervention (RtI) models. Confirmatory factor analysis (CFA) was used to evaluate the underlying factor structure of this tool. Educators from 68 (n =…
G-Hash: Towards Fast Kernel-based Similarity Search in Large Graph Databases.
Wang, Xiaohong; Smalter, Aaron; Huan, Jun; Lushington, Gerald H
2009-01-01
Structured data including sets, sequences, trees and graphs pose significant challenges to fundamental aspects of data management such as efficient storage, indexing, and similarity search. With the fast accumulation of graph databases, similarity search in graph databases has emerged as an important research topic. Graph similarity search has applications in a wide range of domains including cheminformatics, bioinformatics, sensor network management, social network management, and XML documents, among others. Most of the current graph indexing methods focus on subgraph query processing, i.e. determining the set of database graphs that contains the query graph, and hence do not directly support similarity search. In data mining and machine learning, various graph kernel functions have been designed to capture the intrinsic similarity of graphs. Though successful in constructing accurate predictive and classification models for supervised learning, graph kernel functions have (i) high computational complexity and (ii) non-trivial difficulty being indexed in a graph database. Our objective is to bridge graph kernel functions and similarity search in graph databases by proposing (i) a novel kernel-based similarity measurement and (ii) an efficient indexing structure for graph data management. Our method of similarity measurement builds upon local features extracted from each node and their neighboring nodes in graphs. A hash table is utilized to support efficient storage and fast search of the extracted local features. Using the hash table, a graph kernel function is defined to capture the intrinsic similarity of graphs and to support fast similarity query processing. We have implemented our method, which we have named G-hash, and have demonstrated its utility on large chemical graph databases. Our results show that the G-hash method achieves state-of-the-art performance for k-nearest neighbor (k-NN) classification. Most importantly, the new similarity measurement and the index structure are scalable to large databases, with smaller indexing size, faster index construction time, and faster query processing time as compared to state-of-the-art indexing methods such as C-tree, gIndex, and GraphGrep.
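A simplified stand-in for the G-hash idea: describe every node by a local feature (its label plus the sorted labels of its neighbors), store feature counts in a hash table, and let the kernel count matching features between two graphs. The real method's node descriptors and kernel are richer; this only shows the hashing-of-local-features pattern:

```python
from collections import Counter

def node_features(graph):
    """graph: {node: (label, [neighbors])} -> Counter of local features."""
    feats = Counter()
    for node, (label, nbrs) in graph.items():
        feats[(label, tuple(sorted(graph[n][0] for n in nbrs)))] += 1
    return feats

def kernel(g1, g2):
    """Similarity = number of matching local features (histogram overlap)."""
    f1, f2 = node_features(g1), node_features(g2)
    return sum(min(f1[k], f2[k]) for k in f1.keys() & f2.keys())

# Toy 'molecules': C-O-C versus C-C-O
g1 = {1: ("C", [2]), 2: ("O", [1, 3]), 3: ("C", [2])}
g2 = {1: ("C", [2]), 2: ("C", [1, 3]), 3: ("O", [2])}
print(kernel(g1, g1), kernel(g1, g2))  # 3 0
```

Because features are hashable keys, a database-wide table from feature to graph IDs supports fast candidate lookup before any kernel evaluation.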
ForC: a global database of forest carbon stocks and fluxes.
Anderson-Teixeira, Kristina J; Wang, Maria M H; McGarvey, Jennifer C; Herrmann, Valentine; Tepley, Alan J; Bond-Lamberty, Ben; LeBauer, David S
2018-06-01
Forests play an influential role in the global carbon (C) cycle, storing roughly half of terrestrial C and annually exchanging with the atmosphere more than five times the carbon dioxide (CO2) emitted by anthropogenic activities. Yet, scaling up from field-based measurements of forest C stocks and fluxes to understand global scale C cycling and its climate sensitivity remains an important challenge. Tens of thousands of forest C measurements have been made, but these data have yet to be integrated into a single database that makes them accessible for integrated analyses. Here we present an open-access global Forest Carbon database (ForC) containing previously published records of field-based measurements of ecosystem-level C stocks and annual fluxes, along with disturbance history and methodological information. ForC expands upon the previously published tropical portion of this database, TropForC (https://doi.org/10.5061/dryad.t516f), now including 17,367 records (previously 3,568) representing 2,731 plots (previously 845) in 826 geographically distinct areas. The database covers all forested biogeographic and climate zones, represents forest stands of all ages, and currently includes data collected between 1934 and 2015. We expect that ForC will prove useful for macroecological analyses of forest C cycling, for evaluation of model predictions or remote sensing products, for quantifying the contribution of forests to the global C cycle, and for supporting international efforts to inventory forest carbon and greenhouse gas exchange. A dynamic version of ForC is maintained on GitHub (https://GitHub.com/forc-db), and we encourage the research community to collaborate in updating, correcting, expanding, and utilizing this database. ForC is an open access database, and we encourage use of the data for scientific research and education purposes. Data may not be used for commercial purposes without written permission of the database PI. Any publications using ForC data should cite this publication and Anderson-Teixeira et al. (2016a) (see Metadata S1). No other copyright or cost restrictions are associated with the use of this data set.
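Since ForC lives on GitHub as plain tables, a typical analysis starts by reading one of its CSV files. The file path and column names below are assumptions and should be checked against the repository layout at https://GitHub.com/forc-db before use:

```python
import pandas as pd

# Hypothetical path within the forc-db repository (verify before relying on it)
url = ("https://raw.githubusercontent.com/forc-db/ForC/master/"
       "data/ForC_measurements.csv")
df = pd.read_csv(url, low_memory=False)

# Example: mean recorded value of one variable per site
# ('variable.name', 'sites.sitename', and 'mean' are assumed column names)
biomass = df[df["variable.name"] == "biomass_ag"]
print(biomass.groupby("sites.sitename")["mean"].mean().head())
```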
The PMDB Protein Model Database
Castrignanò, Tiziana; De Meo, Paolo D'Onorio; Cozzetto, Domenico; Talamo, Ivano Giuseppe; Tramontano, Anna
2006-01-01
The Protein Model Database (PMDB) is a public resource aimed at storing manually built 3D models of proteins. The database is designed to provide access to models published in the scientific literature, together with validating experimental data. It is a relational database and it currently contains >74 000 models for ∼240 proteins. The system is accessible online and allows predictors to submit models along with related supporting evidence, and users to download them through a simple and intuitive interface. Users can navigate in the database and retrieve models referring to the same target protein or to different regions of the same protein. Each model is assigned a unique identifier that allows interested users to directly access the data. PMID:16381873
CADASTER QSPR Models for Predictions of Melting and Boiling Points of Perfluorinated Chemicals.
Bhhatarai, Barun; Teetz, Wolfram; Liu, Tao; Öberg, Tomas; Jeliazkova, Nina; Kochev, Nikolay; Pukalov, Ognyan; Tetko, Igor V; Kovarich, Simona; Papa, Ester; Gramatica, Paola
2011-03-14
Quantitative structure property relationship (QSPR) studies of the melting point (MP) and boiling point (BP) of per- and polyfluorinated chemicals (PFCs) are presented. The training and prediction chemicals used for developing and validating the models were selected from the Syracuse PhysProp database and the literature. The available experimental data sets were split in two different ways: (a) random selection on response value, and (b) structural similarity verified by self-organizing map (SOM), in order to propose reliable predictive models, developed only on the training sets and externally verified on the prediction sets. Individual models based on linear and non-linear approaches, developed by different CADASTER partners on 0D-2D Dragon descriptors, E-state descriptors and fragment-based descriptors, as well as a consensus model and their predictions, are presented. In addition, the predictive performance of the developed models was verified on a blind external validation set (EV-set) prepared using the PERFORCE database, comprising 15 MP and 25 BP values, respectively. This database contains only long-chain perfluoroalkylated chemicals, particularly monitored by regulatory agencies like US-EPA and EU-REACH. QSPR models with internal and external validation on two different external prediction/validation sets, and a study of the applicability domain highlighting the robustness and high accuracy of the models, are discussed. Finally, MPs for an additional 303 PFCs and BPs for 271 PFCs, for which experimental measurements are unknown, were predicted.
Black Carbon Concentration from Worldwide Aerosol Robotic Network (AERONET)
NASA Technical Reports Server (NTRS)
Schuster, Greg; Dubovik, Oleg; Holben, Brent; Clothiaux, Eugene
2008-01-01
Worldwide black carbon concentration measurements are needed to assess the efficacy of the carbon emissions inventory and transport model output. This requires long-term measurements in many regions, as model success in one region or season does not apply to all regions and seasons. AERONET is an automated network of more than 180 surface radiometers located throughout the world. The sky radiance measurements obtained by AERONET are inverted to provide column-averaged aerosol refractive indices and size distributions for the AERONET database, which we use to derive column-averaged black carbon concentrations and specific absorptions that are constrained by the measured radiation field. This provides a link between AERONET sky radiance measurements and the elemental carbon concentration of transport models without the need for an optics module in the transport model. Knowledge of both the black carbon concentration and aerosol absorption optical depth (i.e., input and output of the optics module) will enable improvements to the transport model optics module.
Urban Soil Hydrology: bridging the data gap with a nationwide field study
NASA Astrophysics Data System (ADS)
Schifman, L. A.; Shuster, W.
2016-12-01
Urban communities generally rely on hydrologic models or tools for assessing suitable sites for green infrastructure. These rainfall-runoff models, e.g. the National Stormwater Calculator (NSWC), query soil hydrologic information from national databases, e.g. the Soil Survey Geographic Database (SSURGO), or estimate it via pedotransfer-based algorithms like USDA Rosetta. As part of urban soil hydrologic assessments we have collected soil textural and hydrologic data in 12 cities throughout the United States and compared these measurements to NSWC- and SSURGO-queried infiltration rates (Kunsat) and Rosetta-estimated drainage rates (Ksat and Kunsat). We found that soil hydrologic parameters obtained through pedotransfer functions and queries to soil databases are not representative of field-measured values (RMSE ranges from 6.2 to 15.2 for infiltration and from 13.2 to 16.3 for drainage). Although the NSWC queries SSURGO, we found that SSURGO overestimates infiltration and the NSWC underestimates it, with MEs of 4.9 and -1.4, respectively. In Rosetta, we found that pedotransfer functions overestimated drainage rates (MEs 1.8 to 3.8). In an attempt to improve drainage estimates using Rosetta, the soil texture was adjusted in soils with an apparent portion of finer sands. Here, sand included very coarse, coarse, and medium sand, whereas silt included fine sand, very fine sand and silt, with the justification that fine sands behave similarly to silt. These adjusted estimates generally underestimated drainage and were still not suitable for use in planning for stormwater detention (e.g., infiltrative green infrastructure). With this work we highlight the importance of obtaining field-measured values when assessing sites for green infrastructure, instead of relying on estimates, given the discrepancies in sensitive parameters such as Kunsat and Ksat, the implications for parameter selection and error propagation in rainfall-runoff models, and the consequences of over- or under-designing stormwater control measures for detention.
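The two error statistics used above are easy to make precise: RMSE for scatter and mean error (ME) for bias, each computed as estimate minus field measurement. The sketch below uses invented infiltration rates, not the study's data:

```python
import numpy as np

def rmse_me(measured, estimated):
    """RMSE and mean error (bias) of estimates against field values."""
    d = np.asarray(estimated, float) - np.asarray(measured, float)
    return float(np.sqrt(np.mean(d ** 2))), float(np.mean(d))

field  = [1.2, 0.4, 3.1, 0.9, 2.2]   # field-measured rates (cm/h, invented)
ssurgo = [5.0, 2.3, 9.1, 4.0, 8.2]   # database query, overestimating
nswc   = [0.2, 0.1, 1.0, 0.3, 0.8]   # tool output, underestimating
print("SSURGO rmse/me:", rmse_me(field, ssurgo))  # positive ME = overestimate
print("NSWC   rmse/me:", rmse_me(field, nswc))    # negative ME = underestimate
```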
Source attribution using FLEXPART and carbon monoxide emission inventories: SOFT-IO version 1.0
NASA Astrophysics Data System (ADS)
Sauvage, Bastien; Fontaine, Alain; Eckhardt, Sabine; Auby, Antoine; Boulanger, Damien; Petetin, Hervé; Paugam, Ronan; Athier, Gilles; Cousin, Jean-Marc; Darras, Sabine; Nédélec, Philippe; Stohl, Andreas; Turquety, Solène; Cammas, Jean-Pierre; Thouret, Valérie
2017-12-01
Since 1994, the In-service Aircraft for a Global Observing System (IAGOS) program has produced in situ measurements of the atmospheric composition during more than 51 000 commercial flights. In order to help analyze these observations and understand the processes driving the observed concentration distribution and variability, we developed the SOFT-IO tool to quantify source-receptor links for all measured data. Based on the FLEXPART particle dispersion model (Stohl et al., 2005), SOFT-IO simulates the contributions of anthropogenic and biomass burning emissions from the ECCAD emission inventory database for all locations and times corresponding to the measured carbon monoxide mixing ratios along each IAGOS flight. Contributions are simulated from emissions occurring during the last 20 days before an observation, separating individual contributions from the different source regions. The main goal is to supply added-value products to the IAGOS database by evincing the geographical origin and emission sources driving the CO enhancements observed in the troposphere and lower stratosphere. This requires a good match between observed and modeled CO enhancements. Indeed, SOFT-IO detects more than 95 % of the observed CO anomalies over most of the regions sampled by IAGOS in the troposphere. In the majority of cases, SOFT-IO simulates CO pollution plumes with biases lower than 10-15 ppbv. Differences between the model and observations are larger for very low or very high observed CO values. The added-value products will help in the understanding of the trace-gas distribution and seasonal variability. They are available in the IAGOS database via http://www.iagos.org. The SOFT-IO tool could also be applied to similar data sets of CO observations (e.g., ground-based measurements, satellite observations). SOFT-IO could also be used for statistical validation as well as for intercomparisons of emission inventories using large amounts of data.
System Dynamics Aviation Readiness Modeling Demonstration
2005-08-31
requirements. It is recommended that the Naval Aviation Enterprise take a close look at the requirements, i.e., performance measures, methodology ... unit's capability to perform specific Joint Mission Essential Task List (JMETL) requirements now and in the future. This assessment methodology must ... the time-associated costs. The new methodology must base decisions on currently available data and databases. A "useful" readiness model should be ...
ERIC Educational Resources Information Center
Pae, Hye K.
2012-01-01
The aim of this study was to apply Rasch modeling to an examination of the psychometric properties of the "Pearson Test of English Academic" (PTE Academic). Analyzed were 140 test-takers' scores derived from the PTE Academic database. The mean age of the participants was 26.45 (SD = 5.82), ranging from 17 to 46. Conformity of the participants'…
An editor for pathway drawing and data visualization in the Biopathways Workbench.
Byrnes, Robert W; Cotter, Dawn; Maer, Andreia; Li, Joshua; Nadeau, David; Subramaniam, Shankar
2009-10-02
Pathway models serve as the basis for much of systems biology. They are often built using programs designed for the purpose. Constructing new models generally requires simultaneous access to experimental data of diverse types, to databases of well-characterized biological compounds and molecular intermediates, and to reference model pathways. However, few if any software applications provide all such capabilities within a single user interface. The Pathway Editor is a program written in the Java programming language that allows de-novo pathway creation and downloading of LIPID MAPS (Lipid Metabolites and Pathways Strategy) and KEGG lipid metabolic pathways, and of measured time-dependent changes to lipid components of metabolism. Accessed through Java Web Start, the program downloads pathways from the LIPID MAPS Pathway database (Pathway) as well as from the LIPID MAPS web server http://www.lipidmaps.org. Data arises from metabolomic (lipidomic), microarray, and protein array experiments performed by the LIPID MAPS consortium of laboratories and is arranged by experiment. Facility is provided to create, connect, and annotate nodes and processes on a drawing panel with reference to database objects and time course data. Node and interaction layout as well as data display may be configured in pathway diagrams as desired. Users may extend diagrams, and may also read and write data and non-lipidomic KEGG pathways to and from files. Pathway diagrams in XML format, containing database identifiers referencing specific compounds and experiments, can be saved to a local file for subsequent use. The program is built upon a library of classes, referred to as the Biopathways Workbench, that convert between different file formats and database objects. An example of this feature is provided in the form of read/construct/write access to models in SBML (Systems Biology Markup Language) contained in the local file system. Inclusion of access to multiple experimental data types and of pathway diagrams within a single interface, automatic updating through connectivity to an online database, and a focus on annotation, including reference to standardized lipid nomenclature as well as common lipid names, supports the view that the Pathway Editor represents a significant, practicable contribution to current pathway modeling tools.
NASA Astrophysics Data System (ADS)
Pecoraro, Gaetano; Calvello, Michele
2017-04-01
In Italy rainfall-induced landslides pose a significant and widespread hazard, resulting in a large number of casualties and enormous economic damage. Mitigation of such a diffuse risk cannot be attained with structural measures only. With respect to the risk to life, early warning systems represent a viable and useful tool for landslide risk mitigation over wide areas. Inventories of rainfall-induced landslides are critical for investigating where and when landslides have happened and may occur in the future, i.e. for establishing reliable correlations between rainfall characteristics and landslide occurrences. In this work a parametric study has been conducted to evaluate the performance of correlation models between rainfall and landslides over the Italian territory using the "FraneItalia" database, an inventory of landslides retrieved from online Italian journalistic news. The information reported for each record of this database always includes the site of occurrence of the landslides, the date of occurrence, and the source of the news. Multiple landslides occurring on the same date, within the same province or region, are inventoried together in a single record of the database, which in this case also reports the number of landslides of the event. Each record of the database may also include, if the related information is available: hour of occurrence; typology, volume and material of the landslide; activity phase; and effects on people, structures, infrastructure, cars or other elements. The database currently contains six complete years of data (2010-2015), including more than 4000 landslide reports, most of them triggered by rainfall. For the aim of this study, different rainfall-landslide correlation models have been tested by analysing the reported landslides, within all 144 zones identified by the national civil protection for weather-related warnings in Italy, in relation to satellite-based precipitation estimates from the Global Precipitation Measurement (GPM) NASA mission. This remote sensing database contains gridded precipitation and precipitation-error estimates, with a half-hour temporal resolution and a 0.10-degree spatial resolution, covering most of the Earth starting from 2014. It is well known that satellite estimates of rainfall have some limitations in resolving specific rainfall features (e.g., shallow orographic events and short-duration, high-intensity events), yet the temporal and spatial accuracy of the GPM data may be considered adequate in relation to the scale of the analysis and the size of the warning zones used for this study. The results of the parametric analysis conducted herein, although providing some indications on the most relevant rainfall conditions leading to widespread landsliding over a warning zone, must be considered preliminary, as they show a very heterogeneous behaviour of the employed rainfall-based warning models over the Italian territory. Nevertheless, they clearly show the strong potential of the continuous multi-year landslide records available from the "FraneItalia" database as an important source of information for evaluating the performance of warning models at regional scale throughout Italy.
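As an illustration of the kind of matching such a study involves, here is a minimal sketch that joins landslide records to zone-aggregated daily rainfall and scores a fixed-threshold warning model; the column names, zone codes, numbers and threshold are all hypothetical, not the actual FraneItalia or GPM schemas:

```python
import pandas as pd

# Hypothetical records (illustrative only).
landslides = pd.DataFrame({
    "date": pd.to_datetime(["2015-03-02", "2015-03-05"]),
    "zone": ["Lomb-01", "Tosc-03"],
    "n_landslides": [3, 1],
})
dates = pd.date_range("2015-03-01", periods=7)
rainfall = pd.concat(
    pd.DataFrame({"date": dates, "zone": z, "rain_mm": [5, 40, 40, 2, 70, 10, 0]})
    for z in ["Lomb-01", "Tosc-03"]
)

# Cumulative rainfall over the preceding 3 days, per warning zone.
rainfall = rainfall.sort_values(["zone", "date"])
rainfall["rain_3d"] = rainfall.groupby("zone")["rain_mm"].transform(
    lambda s: s.rolling(3, min_periods=1).sum()
)

# Label each zone-day with whether landslides were reported, then apply a
# fixed-threshold warning model and measure its hit rate.
merged = rainfall.merge(landslides, on=["zone", "date"], how="left")
merged["landslide"] = merged["n_landslides"].fillna(0) > 0
merged["warning"] = merged["rain_3d"] >= 60.0  # mm in 3 days; hypothetical
hit_rate = (merged["warning"] & merged["landslide"]).sum() / merged["landslide"].sum()
print(f"hit rate = {hit_rate:.2f}")
```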
NASA Technical Reports Server (NTRS)
Selle, L. C.; Bellan, Josette
2006-01-01
Transitional databases from Direct Numerical Simulation (DNS) of three-dimensional mixing layers for single-phase flows and two-phase flows with evaporation are analyzed and used to examine the typical hypothesis that the scalar dissipation Probability Distribution Function (PDF) may be modeled as a Gaussian. The databases encompass a single-component fuel and four multicomponent fuels, two initial Reynolds numbers (Re), two mass loadings for two-phase flows and two free-stream gas temperatures. Using the DNS-calculated moments of the scalar-dissipation PDF, it is shown, consistent with existing experimental information on single-phase flows, that the Gaussian is a modest approximation of the DNS-extracted PDF, particularly poor in the range of the high scalar-dissipation values, which are significant for turbulent reaction rate modeling in non-premixed flows using flamelet models. With the same DNS-calculated moments of the scalar-dissipation PDF and making a change of variables, a model of this PDF is proposed in the form of the β-PDF, which is shown to approximate the DNS-extracted PDF much better, particularly in the regime of the high scalar-dissipation values. Several types of statistical measures are calculated over the ensemble of the fourteen databases. For each statistical measure, the proposed β-PDF model is shown to be much superior to the Gaussian in approximating the DNS-extracted PDF. Additionally, the agreement between the DNS-extracted PDF and the β-PDF improves further when the comparison is performed for higher initial Re layers, whereas the comparison with the Gaussian is independent of the initial Re values. For two-phase flows, the comparison between the DNS-extracted PDF and the β-PDF also improves with increasing free-stream gas temperature and mass loading. The higher-fidelity approximation of the DNS-extracted PDF by the β-PDF with increasing Re, gas temperature and mass loading bodes well for turbulent reaction rate modeling.
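A minimal sketch of the moment-matching step implied above: rescale a positive variable (hypothetical samples standing in for a DNS scalar-dissipation field) onto [0, 1] by a change of variables, then fit the β-PDF shape parameters from the mean and variance:

```python
import numpy as np
from scipy import stats

# Hypothetical scalar-dissipation samples standing in for a DNS database.
chi = np.random.lognormal(mean=0.0, sigma=1.0, size=100_000)

# Change of variables onto [0, 1] using the sample range.
x = (chi - chi.min()) / (chi.max() - chi.min())

# Method-of-moments beta parameters from the mean m and variance v:
# for Beta(a, b), m = a/(a+b) and v = m(1-m)/(a+b+1).
m, v = x.mean(), x.var()
common = m * (1.0 - m) / v - 1.0
a, b = m * common, (1.0 - m) * common
print(f"a = {a:.3f}, b = {b:.3f}")

# The fitted model PDF, for comparison against a histogram of x.
grid = np.linspace(1e-4, 1.0 - 1e-4, 200)
model_pdf = stats.beta.pdf(grid, a, b)
```

Because the beta family can be strongly skewed with a heavy shoulder, it tracks the high-dissipation tail in a way a moment-matched Gaussian cannot.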
Methodology of the determination of the uncertainties by using the biometric device the broadway 3D
NASA Astrophysics Data System (ADS)
Jasek, Roman; Talandova, Hana; Adamek, Milan
2016-06-01
Facial recognition is among the most widely used methods of biometric identification. Because it provides fast and accurate identification, it has been widely implemented in the security field. A 3D face reader manufactured by Broadway was used for the measurements. It is equipped with a 3D camera system that uses structured-light scanning and saves the template as a 3D model of the face. The obtained data were evaluated with the Turnstile Enrolment Application (TEA) software. The measurements used the Broadway 3D face reader: first, the person was scanned and stored in the database; thereafter, the person was compared with the stored template in the database for each method. Finally, a measure of reliability was evaluated for the Broadway 3D face reader.
Increasing situation awareness of the CBRNE robot operators
NASA Astrophysics Data System (ADS)
Jasiobedzki, Piotr; Ng, Ho-Kong; Bondy, Michel; McDiarmid, Carl H.
2010-04-01
Situational awareness of CBRN robot operators is quite limited, as they rely on images and measurements from on-board detectors. This paper describes a novel framework that enables a uniform and intuitive access to live and recent data via 2D and 3D representations of visited sites. These representations are created automatically and augmented with images, models and CBRNE measurements. This framework has been developed for CBRNE Crime Scene Modeler (C2SM), a mobile CBRNE mapping system. The system creates representations (2D floor plans and 3D photorealistic models) of the visited sites, which are then automatically augmented with CBRNE detector measurements. The data stored in a database is accessed using a variety of user interfaces providing different perspectives and increasing operators' situational awareness.
NASA Astrophysics Data System (ADS)
Othmanli, Hussein; Zhao, Chengyi; Stahr, Karl
2017-04-01
The Tarim River Basin is the largest continental basin in China. The region has an extremely continental desert climate characterized by low rainfall (<50 mm/a) and high potential evaporation (>3000 mm/a). Climate change is severely affecting the basin, causing soil salinization, water shortage, and declining crop production. Therefore, a Soil and Land Resources Information System (SLISYS-Tarim) for the regional simulation of crop production in the basin was developed. SLISYS-Tarim consists of a database and an agro-ecological simulation model, EPIC (Environmental Policy Integrated Climate). The database comprises relational tables including information about soils, terrain conditions, land use, and climate. The soil data comprise information from 50 soil profiles that were dug, analyzed, described and classified in order to characterize the soils of the region. DEM data were integrated with geological maps to build a digital terrain structure. Remote sensing data from Landsat images were applied for soil mapping and for land use and land cover classification. An additional database for climate data, land management and crop information was also linked to the system. Construction of the SLISYS-Tarim database was accomplished by integrating and overlaying the recommended thematic maps within a geographic information system (GIS) environment to meet the data standard of the global and national SOTER digital database. This database provides appropriate input and output data for crop modelling with the EPIC model at various scales in the Tarim Basin. The EPIC model was run to simulate cotton production under a constructed scenario characterizing the current management practices, soil properties and climate conditions. For the EPIC model calibration, some parameters were adjusted so that the modeled cotton yield fits the measured yield at the field scale. The validation of the modeling results was achieved in a later step based on remote sensing data. The simulated cotton yield varied according to field management, soil type and salinity level, with soil salinity being the main limiting factor. Furthermore, the calibrated and validated EPIC model was run under several scenarios of climate conditions and land management practices to estimate the effect of climate change on cotton production and the sustainability of agricultural systems in the basin. The application of SLISYS-Tarim showed that this database can be a suitable framework for the storage and retrieval of soil and terrain data at various scales. Simulation with the EPIC model can assess the impact of climate change and management strategies. Therefore, SLISYS-Tarim can be a good tool for regional planning and can serve decision support at regional and national scales.
Data-driven modelling of vertical dynamic excitation of bridges induced by people running
NASA Astrophysics Data System (ADS)
Racic, Vitomir; Morin, Jean Benoit
2014-02-01
With increasingly popular marathon events in urban environments, structural designers face a great deal of uncertainty when assessing the dynamic performance of bridges occupied and dynamically excited by people running. While the dynamic loads induced by pedestrians walking have been studied intensively since the infamous lateral sway of the London Millennium Bridge in 2000, reliable and practical descriptions of running excitation are still very rare and limited. This interdisciplinary study has addressed the issue by bringing together a database of individual running force signals recorded by two state-of-the-art instrumented treadmills and two attempts to describe the measurements mathematically. The first modelling strategy is adopted from the available design guidelines for human walking excitation of structures, featuring a perfectly periodic and deterministic characterisation of pedestrian forces representable via Fourier series. This modelling approach proved inadequate for running loads due to the inherently near-periodic nature of the measured signals, the great inter-personal randomness of the dominant Fourier amplitudes and the lack of strong correlation between the amplitudes and the running footfall rate. Hence, utilising the database established, and motivated by existing models of wind and earthquake loading, speech recognition techniques and a method of replicating electrocardiogram signals, this paper finally presents a numerical generator of random near-periodic running force signals which can reliably simulate the measurements. Such a model is an essential prerequisite for future quality models of dynamic loading induced by individuals, groups and crowds running under a wide range of conditions, such as perceptibly vibrating bridges and different combinations of visual, auditory and tactile cues.
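A minimal sketch of the near-periodic idea (purely illustrative parameters, not the paper's fitted generator): a Fourier-type sum of harmonics whose cycle period and amplitudes are randomly perturbed per footfall, so the signal is almost, but never exactly, periodic:

```python
import numpy as np

rng = np.random.default_rng(42)

def running_force(n_steps=50, rate_hz=2.8, fs=1000.0, weight_n=700.0):
    """Generate a synthetic near-periodic vertical running force signal."""
    signal = []
    for _ in range(n_steps):
        period = (1.0 / rate_hz) * rng.normal(1.0, 0.03)    # jittered cycle length
        t = np.arange(0.0, period, 1.0 / fs)
        force = np.zeros_like(t)
        for k, amp in enumerate([1.6, 0.7, 0.2], start=1):  # dominant harmonics
            a = amp * rng.normal(1.0, 0.10)                 # jittered amplitude
            force += a * weight_n * np.sin(np.pi * k * t / period)
        signal.append(np.clip(force, 0.0, None))            # no tensile contact force
    return np.concatenate(signal)

f = running_force()
print(f.shape, f"peak force = {f.max():.0f} N")
```

A deterministic Fourier model would reuse identical cycles; the per-step jitter above is what smears the spectral peaks in the way the measured treadmill signals do.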
NASA Astrophysics Data System (ADS)
Ren, Tao; Modest, Michael F.; Fateev, Alexander; Clausen, Sønnik
2015-01-01
In this study, we present an inverse calculation model based on the Levenberg-Marquardt optimization method to reconstruct temperature and species concentration from measured line-of-sight spectral transmissivity data for homogeneous gaseous media. The high-temperature gas property database HITEMP 2010 (Rothman et al. (2010) [1]), which contains line-by-line (LBL) information for several combustion gas species, such as CO2 and H2O, was used to predict gas spectral transmissivities. The model was validated by retrieving temperatures and species concentrations from experimental CO2 and H2O transmissivity measurements. Optimal wavenumber ranges for CO2 and H2O transmissivities measured across a wide range of temperatures and concentrations were determined according to the performance of the inverse calculations. The results indicate that the inverse radiation model is feasible for measurements of temperature and gas concentration.
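A minimal sketch of that style of inversion, with a toy two-parameter transmissivity forward model standing in for LBL calculations from HITEMP; scipy's `least_squares` with `method="lm"` performs the Levenberg-Marquardt fit:

```python
import numpy as np
from scipy.optimize import least_squares

wn = np.linspace(2200.0, 2400.0, 200)  # wavenumber grid, cm^-1

def transmissivity(params, wn):
    """Toy forward model: tau = exp(-x * k(T, wn) * L); not HITEMP/LBL."""
    T, x = params
    k = 1e-3 * (300.0 / T) * np.exp(-((wn - 2350.0) / 40.0) ** 2)  # fake absorption
    return np.exp(-x * k * 100.0)

# Synthetic "measurement" at T = 1200 K, mole fraction 0.1, with noise.
true = (1200.0, 0.10)
meas = transmissivity(true, wn) + np.random.normal(0.0, 0.002, wn.size)

def residuals(params):
    return transmissivity(params, wn) - meas

fit = least_squares(residuals, x0=(800.0, 0.05), method="lm")
print(f"retrieved T = {fit.x[0]:.0f} K, x = {fit.x[1]:.3f}")
```

In the real problem the residual evaluation is dominated by the LBL spectral computation, which is why the choice of wavenumber window matters for both accuracy and cost.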
Methods for evaluating the predictive accuracy of structural dynamic models
NASA Technical Reports Server (NTRS)
Hasselman, Timothy K.; Chrostowski, Jon D.
1991-01-01
Modeling uncertainty is defined in terms of the difference between predicted and measured eigenvalues and eigenvectors. Data compiled from 22 sets of analysis/test results were used to create statistical databases for large truss-type space structures and for both pretest and posttest models of conventional satellite-type space structures. Modeling uncertainty is propagated through the model to produce intervals of uncertainty on frequency response functions, in both amplitude and phase. This methodology was used successfully to evaluate the predictive accuracy of several structures, including the NASA CSI Evolutionary Structure tested at Langley Research Center. Test measurements for this structure were, for the most part, within the ± one-sigma intervals of predicted accuracy, demonstrating the validity of the methodology and computer code.
Learning Computational Models of Video Memorability from fMRI Brain Imaging.
Han, Junwei; Chen, Changyuan; Shao, Ling; Hu, Xintao; Han, Jungong; Liu, Tianming
2015-08-01
Generally, various visual media are unequally memorable by the human brain. This paper looks into a new direction of modeling the memorability of video clips and automatically predicting how memorable they are by learning from brain functional magnetic resonance imaging (fMRI). We propose a novel computational framework by integrating the power of low-level audiovisual features and brain activity decoding via fMRI. Initially, a user study experiment is performed to create a ground truth database for measuring video memorability, and a set of effective low-level audiovisual features is examined in this database. Then, human subjects' brain fMRI data are obtained while they are watching the video clips. The fMRI-derived features that convey the brain activity of memorizing videos are extracted using a universal brain reference system. Finally, because fMRI scanning is expensive and time-consuming, a computational model is learned on our benchmark dataset with the objective of maximizing the correlation between the low-level audiovisual features and the fMRI-derived features using joint subspace learning. The learned model can then automatically predict the memorability of videos without fMRI scans. Evaluations on publicly available image and video databases demonstrate the effectiveness of the proposed framework.
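The joint subspace learning described above maximizes correlation between two feature views; canonical correlation analysis (CCA) is the textbook instance of that objective, used here only as a stand-in for the paper's method. A minimal sketch with scikit-learn, on random stand-in features rather than real audiovisual or fMRI descriptors:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_clips = 200
av_features = rng.normal(size=(n_clips, 64))    # low-level audiovisual features
fmri_features = rng.normal(size=(n_clips, 32))  # fMRI-derived features

# Learn paired projections that maximize correlation between the two views.
cca = CCA(n_components=4)
av_proj, fmri_proj = cca.fit_transform(av_features, fmri_features)

# At test time only the audiovisual view is needed: project new clips into
# the shared subspace and score them there (e.g., with a regressor trained
# on the projected training clips).
new_clips = rng.normal(size=(10, 64))
print(cca.transform(new_clips).shape)  # (10, 4)
```

The practical point mirrors the abstract: the expensive view (fMRI) is needed only at training time, and prediction afterwards uses the cheap view alone.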
Validation of the Dynamic Wake Meander model with focus on tower loads
NASA Astrophysics Data System (ADS)
Larsen, T. J.; Larsen, G. C.; Pedersen, M. M.; Enevoldsen, K.; Madsen, H. A.
2017-05-01
This paper presents a comparison between measured and simulated tower loads for the Danish offshore wind farm Nysted 2. Previously, only limited full-scale experimental data containing tower load measurements have been published, and in many cases the measurements include only a limited range of wind speeds. In general, tower loads in wake conditions are very challenging to predict correctly in simulations. The Nysted project offers improved insight into this field, as six wind turbines located in the Nysted II wind farm have been instrumented to measure tower top and tower bottom moments. All recorded structural data have been organized in a database, which in addition contains relevant wind turbine SCADA data as well as relevant meteorological data - e.g. wind speed and wind direction - from an offshore mast located in the immediate vicinity of the wind farm. The database contains data from a period extending over a time span of more than 3 years. Based on the recorded data, the basic mechanisms driving the increased loading experienced by wind turbines operating in offshore wind farm conditions have been identified, characterized and modeled. The modeling is based on the Dynamic Wake Meandering (DWM) approach in combination with the state-of-the-art aeroelastic model HAWC2, and has previously, as well as in this study, shown good agreement with the measurements. The conclusions from the study have several parts. In general, the tower bending and yaw loads show good agreement between measurements and simulations. However, there are situations that are still difficult to match. One is tower loads in single-wake operation near rated ambient wind speed at spacings of around 7-8D. A specific target of the study was to investigate whether the largest tower fatigue loads are associated with a certain downstream distance. Such a distance has been identified in both simulations and measurements, though a rather flat optimum is seen in the measurements.
Developing an A Priori Database for Passive Microwave Snow Water Retrievals Over Ocean
NASA Astrophysics Data System (ADS)
Yin, Mengtao; Liu, Guosheng
2017-12-01
A physically optimized a priori database is developed for Global Precipitation Measurement Microwave Imager (GMI) snow water retrievals over ocean. The initial snow water content profiles are derived from CloudSat Cloud Profiling Radar (CPR) measurements. A radiative transfer model in which the single-scattering properties of nonspherical snowflakes are based on discrete dipole approximation results is employed to simulate brightness temperatures and their gradients. Snow water content profiles are then optimized through a one-dimensional variational (1D-Var) method. After the 1D-Var optimization, the standard deviations of the difference between observed and simulated brightness temperatures are of a similar magnitude to the observation errors defined for the observation error covariance matrix, indicating that this variational method is successful. This optimized database is applied in a Bayesian snow water retrieval algorithm. The retrieval results indicated that the 1D-Var approach has a positive impact on the GMI-retrieved snow water content profiles by improving the physical consistency between snow water content profiles and observed brightness temperatures. The global distribution of snow water contents retrieved from the a priori database is compared with CloudSat CPR estimates. Results showed that the two estimates have a similar pattern of global distribution, and the difference between their global means is small. In addition, we investigate the impact of using physical parameters to subset the database on snow water retrievals. It is shown that using total precipitable water to subset the database with 1D-Var optimization is beneficial for snow water retrievals.
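A minimal sketch of the 1D-Var optimization step, with a toy linear observation operator and toy dimensions standing in for the GMI radiative transfer setup: minimize the standard variational cost J(x) = (x - xb)ᵀB⁻¹(x - xb) + (y - H(x))ᵀR⁻¹(y - H(x)):

```python
import numpy as np
from scipy.optimize import minimize

n, m = 10, 5                       # state (snow water profile), observations (Tb channels)
rng = np.random.default_rng(1)
H = rng.normal(size=(m, n))        # toy linear observation operator
x_b = np.ones(n)                   # a priori (background) profile
B_inv = np.eye(n) / 0.5**2         # inverse background error covariance
R_inv = np.eye(m) / 1.0**2         # inverse observation error covariance
y = H @ (x_b + 0.3) + rng.normal(0.0, 1.0, m)  # synthetic brightness temperatures

def cost(x):
    dx, dy = x - x_b, y - H @ x
    return dx @ B_inv @ dx + dy @ R_inv @ dy

x_a = minimize(cost, x_b, method="BFGS").x     # analysis (optimized profile)
print(np.round(x_a - x_b, 2))
```

In the real retrieval H is the nonlinear radiative transfer model, so the gradient is supplied by the simulated brightness-temperature Jacobians mentioned in the abstract.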
Surgeon-tool force/torque signatures--evaluation of surgical skills in minimally invasive surgery.
Rosen, J; MacFarlane, M; Richards, C; Hannaford, B; Sinanan, M
1999-01-01
The best method of training for laparoscopic surgical skills is controversial. Some advocate observation in the operating room, while others promote animal and simulated models or a combination of surgery-related tasks. The mode of proficiency evaluation common to all of these methods has been subjective evaluation by a skilled surgeon. In order to define an objective means of evaluating performance, an instrumented laparoscopic grasper was developed to measure the force/torque at the surgeon hand/tool interface. The measured database demonstrated substantial differences between experienced and novice surgeon groups. Analyzing forces and torques combined with the state transitions during surgical procedures allows an objective measurement of skill in minimally invasive surgery (MIS). Teaching the novice surgeon to limit excessive loads and improve movement efficiency during surgical procedures can potentially result in less injury to soft tissues and less wasted time during laparoscopic surgery. Moreover, the force/torque database measured in this study may be used for developing realistic virtual reality simulators and for optimizing the performance of medical robots.
NASA Technical Reports Server (NTRS)
Dennison, J. R.; Thomson, C. D.; Kite, J.; Zavyalov, V.; Corbridge, Jodie
2004-01-01
In an effort to improve the reliability and versatility of spacecraft charging models designed to assist spacecraft designers in accommodating and mitigating the harmful effects of charging on spacecraft, the NASA Space Environments and Effects (SEE) Program has funded development of facilities at Utah State University for the measurement of the electronic properties of both conducting and insulating spacecraft materials. We present here an overview of our instrumentation and capabilities, which are particularly well suited to study electron emission as related to spacecraft charging. These measurements include electron-induced secondary and backscattered yields, spectra, and angular resolved measurements as a function of incident energy, species and angle, plus investigations of ion-induced electron yields, photoelectron yields, sample charging and dielectric breakdown. Extensive surface science characterization capabilities are also available to fully characterize the samples in situ. Our measurements for a wide array of conducting and insulating spacecraft materials have been incorporated into the SEE Charge Collector Knowledge-base as a Database of Electronic Properties of Materials Applicable to Spacecraft Charging. This Database provides an extensive compilation of electronic properties, together with parameterization of these properties in a format that can be easily used with existing spacecraft charging engineering tools and with next generation plasma, charging, and radiation models. Tabulated properties in the Database include: electron-induced secondary electron yield, backscattered yield and emitted electron spectra; He, Ar and Xe ion-induced electron yields and emitted electron spectra; photoyield and solar emittance spectra; and materials characterization including reflectivity, dielectric constant, resistivity, arcing, optical microscopy images, scanning electron micrographs, scanning tunneling microscopy images, and Auger electron spectra. Further details of the instrumentation used for insulator measurements and representative measurements of insulating spacecraft materials are provided in other Spacecraft Charging Conference presentations. The NASA Space Environments and Effects Program, the Air Force Office of Scientific Research, the Boeing Corporation, NASA Graduate Research Fellowships, and the NASA Rocky Mountain Space Grant Consortium have provided support.
Web application and database modeling of traffic impact analysis using Google Maps
NASA Astrophysics Data System (ADS)
Yulianto, Budi; Setiono
2017-06-01
Traffic impact analysis (TIA) is a traffic study that aims at identifying the impact of traffic generated by development or change in land use. In addition to identifying the traffic impact, TIA also includes mitigation measures to minimize the resulting impact. TIA has become increasingly important since it was defined in the act as one of the requirements in the proposal for a Building Permit. The act has encouraged a number of TIA studies in various cities in Indonesia, including Surakarta. For that reason, it is necessary to study the development of TIA by adopting the concept of Transportation Impact Control (TIC) in the implementation of the TIA standard document and multimodal modeling. This includes TIA standardization for technical guidelines, databases, and inspection, by providing TIA checklists, monitoring and evaluation. The research was undertaken by collecting historical data on junctions, modeling the data as a relational database, and building a web user interface with Google Maps libraries for CRUD (Create, Read, Update and Delete) operations on the TIA data. The result of the research is a system that provides information supporting the improvement and repair of today's TIA documents, making them more transparent, reliable and credible.
In need of combined topography and bathymetry DEM
NASA Astrophysics Data System (ADS)
Kisimoto, K.; Hilde, T.
2003-04-01
In many geoscience applications, digital elevation models (DEMs) are now commonly used at different scales and at greater resolutions due to great advances in computer technology. Increasing the accuracy/resolution of the models and the coverage of the terrain (global models) has been the goal of users as mapping technology has improved and computers have become faster and cheaper. The ETOPO5 model (5 arc-minute spatial resolution, land and seafloor), initially developed in 1988 by Margo Edwards, then at Washington University, St. Louis, MO, was for a long time the only global terrain model. It is now being replaced by three new topographic and bathymetric DEMs: ETOPO2 (2 arc-minute spatial resolution, land and seafloor), the GTOPO30 land model with a spatial resolution of 30 arc seconds (ca. 1 km at the equator), and the GEBCO One Minute Global Bathymetric Grid ocean-floor model with a spatial resolution of 1 arc minute (ca. 2 km at the equator). These DEMs are products of projects in which existing and/or new datasets were compiled and reprocessed to meet users' new requirements. These ongoing efforts are valuable, and support should be continued to refine and update these DEMs. On the other hand, a different approach to creating a global bathymetric (seafloor) database exists. A method to estimate seafloor topography from satellite altimetry combined with existing ships' conventional sounding data was devised, and a remarkable global seafloor database was created and made public by W.H. Smith and D.T. Sandwell in 1997. The big advantage of this database is the uniformity of its coverage, i.e. there is no large area where depths are missing. It has a spatial resolution of 2 arc minutes. Another important effort is the creation of regional, not global, seafloor databases with much finer resolutions in many countries. The Japan Hydrographic Department compiled and released a 500 m grid topography database around Japan, J-EGG500, in 1999. Although this database covers only a small portion of the Earth, it has been highly appreciated in the academic community and was received with surprise by the general public when it was displayed in 3D imagery showing its quality. This database could be rather smoothly combined with the finer land DEM of 250 m spatial resolution (Japan250m.grd, K. Kisimoto, 2000). One of the most important applications of such a combined DEM of topography and bathymetry is tsunami modeling. Understanding of the coastal environment and management and development of the coastal region are other fields in need of these data. There is, however, an important issue to consider when we create a combined DEM of topography and bathymetry at finer resolutions. The problem arises from the discrepancy between the standard datum planes, or reference levels, used for topographic leveling and for bathymetric sounding. Land topography (altitude) is defined by leveling from a single reference point determined by average mean sea level; in other words, land height is measured from the geoid. Depth charts, on the other hand, are based on depths measured from a locally determined reference sea-surface level, and this reference level is taken as the long-term average of the lowest tidal height. So, to create a combined DEM of topography and bathymetry at very fine scales, we need to remove this inconsistency between height and depth across the coastal region.
Height and depth should be physically continuous relative to a single reference datum across the coast within such new high-resolution DEMs. (N.B. The coastline is neither the 'altitude-zero line' nor the 'depth-zero line'; it is defined locally as the long-term average of the highest tide level.) All of this said, we still need a lot of work on the ocean side. Global coverage with detailed bathymetric mapping is still poor. Seafloor imaging and other geophysical measurements/experiments should be organized and conducted internationally and in interdisciplinary ways more than ever. We always need greater technological advancement and application of this technology in marine sciences, and more enthusiastic seagoing researchers as well. Recent seafloor mapping technology and quality, both in bathymetry and imagery, are very promising and compare favorably with terrain mapping. We discuss and present recent achievements and needs in seafloor mapping, using several of the most up-to-date global and regional DEMs available to the science community, at the poster session.
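A minimal sketch of the datum issue described above, using toy grids and a single scalar offset (real corrections vary spatially with the tidal regime): shift charted depths from the local chart datum to the land levelling datum before merging the two surfaces:

```python
import numpy as np

# Toy co-registered grids: land heights (m above geoid) with NaN at sea,
# and charted depths (m below local lowest-tide datum) with NaN on land.
land = np.array([[12.0, 3.5, np.nan], [8.0, 1.2, np.nan]])
depth = np.array([[np.nan, np.nan, 4.0], [np.nan, np.nan, 7.5]])

# Offset of the chart datum relative to the geoid at this locality
# (hypothetical value: chart zero sits 1.1 m below the geoid).
chart_datum_offset = -1.1  # metres

# Express both surfaces as elevation relative to the single geoid datum.
sea_floor = chart_datum_offset - depth        # depths become negative elevations
combined = np.where(np.isnan(land), sea_floor, land)
print(combined)
```

Skipping this step introduces an artificial step of up to a few metres exactly at the coastline, which is precisely where applications such as tsunami run-up modeling are most sensitive.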
Characterizing the genetic structure of a forensic DNA database using a latent variable approach.
Kruijver, Maarten
2016-07-01
Several problems in forensic genetics require a representative model of a forensic DNA database. Obtaining an accurate representation of the offender database can be difficult, since databases typically contain groups of persons with unregistered ethnic origins in unknown proportions. We propose to estimate the allele frequencies of the subpopulations comprising the offender database and their proportions from the database itself using a latent variable approach. We present a model for which parameters can be estimated using the expectation maximization (EM) algorithm. This approach does not rely on relatively small and possibly unrepresentative population surveys, but is driven by the actual genetic composition of the database only. We fit the model to a snapshot of the Dutch offender database (2014), which contains close to 180,000 profiles, and find that three subpopulations suffice to describe a large fraction of the heterogeneity in the database. We demonstrate the utility and reliability of the approach with three applications. First, we use the model to predict the number of false leads obtained in database searches. We assess how well the model predicts the number of false leads obtained in mock searches in the Dutch offender database, both for the case of familial searching for first degree relatives of a donor and searching for contributors to three-person mixtures. Second, we study the degree of partial matching between all pairs of profiles in the Dutch database and compare this to what is predicted using the latent variable approach. Third, we use the model to provide evidence to support that the Dutch practice of estimating match probabilities using the Balding-Nichols formula with a native Dutch reference database and θ=0.03 is conservative. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
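A minimal sketch of the expectation maximization idea behind such a latent-subpopulation model, reduced to a single locus with categorical allele frequencies; a real implementation would multiply genotype likelihoods across loci and model Hardy-Weinberg proportions within groups:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic database: each profile contributes one observed allele (toy setup).
true_freqs = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])  # two latent groups
true_props = np.array([0.6, 0.4])
z = rng.choice(2, size=5000, p=true_props)
alleles = np.array([rng.choice(3, p=true_freqs[g]) for g in z])

K, A = 2, 3
props = np.full(K, 1.0 / K)
freqs = rng.dirichlet(np.ones(A), size=K)

for _ in range(200):
    # E-step: posterior responsibility of each latent group for each profile.
    lik = props * freqs[:, alleles].T          # shape (n, K)
    resp = lik / lik.sum(axis=1, keepdims=True)
    # M-step: update mixing proportions and allele frequencies.
    props = resp.mean(axis=0)
    for a in range(A):
        freqs[:, a] = resp[alleles == a].sum(axis=0) / resp.sum(axis=0)

print(np.round(props, 2), np.round(freqs, 2))
```

As in the abstract, nothing but the database itself is used: the subpopulation proportions and allele frequencies are recovered without any external population survey.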
Kireeva, N; Baskin, I I; Gaspar, H A; Horvath, D; Marcou, G; Varnek, A
2012-04-01
Here, the utility of Generative Topographic Maps (GTM) for data visualization, structure-activity modeling and database comparison is evaluated using subsets of the Database of Useful Decoys (DUD). Unlike other popular dimensionality reduction approaches such as Principal Component Analysis, Sammon Mapping or Self-Organizing Maps, the great advantage of GTMs is that they provide data probability distribution functions (PDF), both in the high-dimensional space defined by the molecular descriptors and in the 2D latent space. PDFs for the molecules of different activity classes were successfully used to build classification models in the framework of the Bayesian approach. Because the PDFs are represented by a mixture of Gaussian functions, the Bhattacharyya kernel has been proposed as a measure of the overlap of datasets, which leads to an elegant method for the global comparison of chemical libraries. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
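A minimal sketch of the Bhattacharyya overlap for the simplest case, two 1-D Gaussians, extended to mixtures by a weighted pairwise sum; note the pairwise sum is an approximation used here for illustration, not the exact mixture integral, and the component lists are hypothetical GTM-style output:

```python
import numpy as np

def bhattacharyya_coeff_1d(mu1, s1, mu2, s2):
    """Closed-form Bhattacharyya coefficient between two 1-D Gaussians."""
    var_sum = s1**2 + s2**2
    return np.sqrt(2.0 * s1 * s2 / var_sum) * np.exp(-((mu1 - mu2) ** 2) / (4.0 * var_sum))

def mixture_overlap(comp_a, comp_b):
    """Weighted pairwise-sum approximation of the kernel between two mixtures.
    comp_* are lists of (weight, mu, sigma) tuples."""
    return sum(
        wa * wb * bhattacharyya_coeff_1d(ma, sa, mb, sb)
        for wa, ma, sa in comp_a
        for wb, mb, sb in comp_b
    )

lib_a = [(0.5, 0.0, 1.0), (0.5, 3.0, 0.5)]   # hypothetical library PDF
lib_b = [(1.0, 0.5, 1.2)]                    # hypothetical library PDF
print(f"overlap = {mixture_overlap(lib_a, lib_b):.3f}")
```

The appeal noted in the abstract is exactly this: once each chemical library is summarized as a Gaussian mixture, comparing whole libraries reduces to cheap closed-form component-pair terms.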
A study of the Immune Epitope Database for some fungi species using network topological indices.
Vázquez-Prieto, Severo; Paniagua, Esperanza; Solana, Hugo; Ubeira, Florencio M; González-Díaz, Humberto
2017-08-01
In recent years, the encoding of system structure information with different network topological indices has been a very active field of research. In the present study, we assembled for the first time a complex network using data obtained from the Immune Epitope Database for fungi species, and we then considered the general topology, the node degree distribution, and the local structure of this network. We also calculated eight node centrality measures for the observed network and compared it with three theoretical models. In view of the results obtained, we may expect the present approach to become a valuable tool for exploring the complexity of this database, as well as for the storage, manipulation, comparison, and retrieval of the information contained therein.
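A minimal sketch of computing a handful of node centrality measures of the kind listed above, using networkx on a toy species-epitope graph (hypothetical edges, not IEDB data):

```python
import networkx as nx

# Toy graph linking fungal species to shared epitopes (illustrative only).
G = nx.Graph()
G.add_edges_from([
    ("Candida", "ep1"), ("Candida", "ep2"),
    ("Aspergillus", "ep2"), ("Aspergillus", "ep3"),
    ("Cryptococcus", "ep3"), ("Cryptococcus", "ep1"),
])

centralities = {
    "degree": nx.degree_centrality(G),
    "closeness": nx.closeness_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
}
for name, values in centralities.items():
    top = max(values, key=values.get)
    print(f"{name:12s} top node: {top} ({values[top]:.2f})")
```

Comparing such measures against randomized null models (e.g., Erdős-Rényi or degree-preserving rewirings) is the usual way to judge whether the observed structure is meaningful rather than accidental.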
Sooting turbulent jet flame: characterization and quantitative soot measurements
NASA Astrophysics Data System (ADS)
Köhler, M.; Geigle, K. P.; Meier, W.; Crosland, B. M.; Thomson, K. A.; Smallwood, G. J.
2011-08-01
Computational fluid dynamics (CFD) modelers require high-quality experimental data sets for validation of their numerical tools. Preferred features for numerical simulations of a sooting, turbulent test case flame are simplicity (no pilot flame), well-defined boundary conditions, and sufficient soot production. This paper proposes a non-premixed C2H4/air turbulent jet flame to fill this role and presents an extensive database for soot model validation. The sooting turbulent jet flame has a total visible flame length of approximately 400 mm and a fuel-jet Reynolds number of 10,000. The flame has a measured lift-off height of 26 mm, which acts as a sensitive marker for CFD model validation, while this newly compiled experimental database of soot properties, temperature and velocity maps is useful for the validation of kinetic soot models and numerical flame simulations. Due to the relatively simple burner design, which produces a flame with sufficient soot concentration while meeting modelers' needs with respect to boundary conditions and flame specifications, and given the present lack of a sooting "standard flame", this flame is suggested as a new reference turbulent sooting flame. The flame characterization presented here involved a variety of optical diagnostics, including quantitative 2D laser-induced incandescence (2D-LII), shifted-vibrational coherent anti-Stokes Raman spectroscopy (SV-CARS), and particle image velocimetry (PIV). Producing an accurate and comprehensive characterization of a transient sooting flame was challenging and required optimization of these diagnostics. In this respect, we present the first simultaneous, instantaneous PIV and LII measurements in a heavily sooting flame environment. Simultaneous soot and flow field measurements can provide new insights into the interaction between a turbulent vortex and flame chemistry, especially since soot structures in turbulent flames are known to be small and are often treated in a statistical manner.
NASA Astrophysics Data System (ADS)
Trolliet, Mélodie; Wald, Lucien
2017-04-01
The solar radiation impinging at the sea surface is an essential variable of the climate system. There are several means of assessing the daily irradiation at the surface, such as pyranometers aboard ships or on buoys, meteorological re-analyses and satellite-derived databases. Among the latter, assessments made from the series of geostationary Meteosat satellites offer synoptic views of the tropical and equatorial Atlantic Ocean every 15 min with a spatial resolution of approximately 5 km. Such Meteosat-derived databases are fairly recent, and the quality of their estimates of daily irradiation must be established. Efforts have been made for the land masses and must be repeated for the Atlantic Ocean. The Prediction and Research Moored Array in the Tropical Atlantic (PIRATA) network of moorings in the Tropical Atlantic Ocean is considered a reference for oceanographic data. It consists of 17 long-term Autonomous Temperature Line Acquisition System (ATLAS) buoys equipped with sensors to measure near-surface meteorological and subsurface oceanic parameters, including downward solar irradiation. Corrected downward solar daily irradiation data from PIRATA were downloaded from the NOAA web site and compared to several databases: CAMS RAD, HelioClim-1, HelioClim-3 v4 and HelioClim-3 v5. CAMS RAD, the CAMS radiation service, combines products of the Copernicus Atmosphere Monitoring Service (CAMS) on gaseous content and aerosols in the atmosphere with cloud optical properties deduced every 15 min from Meteosat imagery to supply estimates of solar irradiation. Part of this service is the McClear clear-sky model, which provides estimates of the solar irradiation that would be observed in cloud-free conditions. The second and third databases, HelioClim-1 and HelioClim-3 v4, are derived from Meteosat images using the Heliosat-2 method and the ESRA clear-sky model, based on the Linke turbidity factor. HelioClim-3 v5 is the fourth database and differs from v4 by the partial use of McClear and CAMS products. HelioClim-1 covers the period 1985-2005, while the others start in 2004 and are updated daily. Deviations between PIRATA measurements and estimates were computed and summarized by the usual statistics. Biases and root mean square errors differ from one database to another. As a whole, the correlation coefficients are large, meaning that each database reproduces the day-to-day changes in irradiation well. These good results will support the development of a satellite-derived database of daily irradiation created by MINES ParisTech within the HelioClim project. The size of the cells will be 0.25°. HelioClim-1 and HelioClim-3 v5 will be combined, yielding a period coverage of 32 years, from 1985 to 2016, thus allowing analyses of the long-term variability of downward shortwave solar radiation over the Atlantic Ocean.
Building a Database for a Quantitative Model
NASA Technical Reports Server (NTRS)
Kahn, C. Joseph; Kleinhammer, Roger
2014-01-01
A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way does not aid in linking the Basic Events to the data sources. The best way to organize large amounts of data on a computer is with a database. But a model does not require a large, enterprise-level database with dedicated developers and administrators. A database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it. This database can also contain the manipulations appropriate to how the data are used in the model. These manipulations include stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian updating based on flight and testing experience. A simple, unique metadata field in both the model and the database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility of the entire model often rests on the credibility and traceability of the data.
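A minimal sketch of the linking idea (the original is described as an Excel workbook; here the equivalent join is rendered in pandas, with hypothetical field names and values): a shared metadata key ties each Basic Event to its data source and the manipulations applied:

```python
import pandas as pd

# Basic Events as they appear in the risk model, keyed by a metadata field.
events = pd.DataFrame({
    "meta_key": ["VLV-001", "PMP-014", "SNS-203"],
    "basic_event": ["Valve fails to open", "Pump fails to run", "Sensor drift"],
})

# Data-source table with raw failure rates and stressing factors.
sources = pd.DataFrame({
    "meta_key": ["VLV-001", "PMP-014", "SNS-203"],
    "source": ["Handbook A", "Test program", "Vendor data"],   # hypothetical
    "base_rate": [1.2e-6, 4.0e-5, 8.0e-6],   # failures per hour
    "stress_factor": [2.0, 1.0, 0.5],        # duty-cycle / dormancy adjustment
})

db = events.merge(sources, on="meta_key")
db["model_rate"] = db["base_rate"] * db["stress_factor"]
print(db[["basic_event", "source", "model_rate"]])
```

The point of the unique key is traceability: any rate in the model can be walked back to its source and to every manipulation applied along the way.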
Beyond relevance and recall: testing new user-centred measures of database performance.
Stokes, Peter; Foster, Allen; Urquhart, Christine
2009-09-01
Measures of the effectiveness of databases have traditionally focused on recall and precision, with some debate on how relevance can be assessed, and by whom. New measures of database performance are required when users are familiar with search engines and expect full-text availability. This research ascertained which of four bibliographic databases (BNI, CINAHL, MEDLINE and EMBASE) could be considered most useful to nursing and midwifery students searching for information for an undergraduate dissertation. Searches on title were performed for dissertation topics supplied by nursing students (n = 9), who made the relevance judgements. Measures of recall and precision were combined with additional factors to provide measures of effectiveness, while efficiency combined measures of novelty and originality, and accessibility combined measures of availability and retrievability, based on obtainability. There were significant differences among the databases in precision, originality and availability, but other differences were not significant (Friedman test). Odds ratio tests indicated that BNI, followed by CINAHL, was the most effective, CINAHL the most efficient, and BNI the most accessible. The methodology could help library services in purchase decisions, as the measure for accessibility and odds ratio testing helped to differentiate database performance.
ForC: a global database of forest carbon stocks and fluxes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson-Teixeira, Kristina J.; Wang, Maria M. H.; McGarvey, Jennifer C.
Forests play an influential role in the global carbon (C) cycle, storing roughly half of terrestrial C and annually exchanging with the atmosphere more than ten times the carbon dioxide (CO2) emitted by anthropogenic activities. Yet, scaling up from ground-based measurements of forest C stocks and fluxes to understand global-scale C cycling and its climate sensitivity remains an important challenge. Tens of thousands of forest C measurements have been made, but these data have yet to be integrated into a single database that makes them accessible for integrated analyses. Here we present an open-access global Forest Carbon database (ForC) containing records of ground-based measurements of ecosystem-level C stocks and annual fluxes, along with disturbance history and methodological information. ForC expands upon the previously published tropical portion of this database, TropForC (DOI: 10.5061/dryad.t516f), now including 17,538 records (previously 3,568) representing 2,731 plots (previously 845) in 826 geographically distinct areas (previously 178). The database covers all forested biogeographic and climate zones, represents forest stands of all ages, and includes 89 C cycle variables collected between 1934 and 2015. We expect that ForC will prove useful for macroecological analyses of forest C cycling, for evaluation of model predictions or remote sensing products, for quantifying the contribution of forests to the global C cycle, and for supporting international efforts to inventory forest carbon and greenhouse gas exchange. A dynamic version of ForC-db is maintained at https://github.com/forc-db, and we encourage the research community to collaborate in updating, correcting, expanding, and utilizing this database.
CARINA data synthesis project: pH data scale unification and cruise adjustments
NASA Astrophysics Data System (ADS)
Velo, A.; Pérez, F. F.; Lin, X.; Key, R. M.; Tanhua, T.; de La Paz, M.; van Heuven, S.; Jutterström, S.; Ríos, A. F.
2009-10-01
Data on carbon and carbon-relevant hydrographic and hydrochemical parameters from previously non-publicly available cruise data sets in the Arctic Mediterranean Seas (AMS), Atlantic and Southern Ocean have been retrieved and merged into a new database: CARINA (CARbon IN the Atlantic). These data have gone through rigorous quality control (QC) procedures to assure the highest possible quality and consistency. The data for most of the measured parameters in the CARINA database were objectively examined in order to quantify systematic differences in the reported values, i.e. secondary quality control. Systematic biases found in the data have been corrected in the data products, i.e. three merged data files with measured, calculated and interpolated data for each of the three CARINA regions: AMS, Atlantic and Southern Ocean. Out of a total of 188 cruise entries in the CARINA database, 59 reported measured pH values. Here we present details of the secondary QC on pH for the CARINA database. Procedures of quality control, including crossover analysis between cruises and inversion analysis of all crossover data, are briefly described. Adjustments were applied to the pH values for 21 of the cruises in the CARINA dataset. With these adjustments the CARINA database is consistent both internally and with the GLODAP data, an oceanographic data set based on the World Hydrographic Program in the 1990s. Based on our analysis we estimate the internal accuracy of the CARINA pH data to be 0.005 pH units. The CARINA data are now suitable for accurate assessments of, for example, oceanic carbon inventories and uptake rates, and for model validation.
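A minimal sketch of the inversion step behind such secondary QC (toy cruise names and offsets, not CARINA values): given pH offsets estimated where cruise tracks cross, solve for per-cruise additive adjustments in a least-squares sense:

```python
import numpy as np

# Crossover offsets d_ij = bias_i - bias_j, estimated where cruises i, j overlap.
cruises = ["A", "B", "C", "D"]
crossovers = [("A", "B", 0.012), ("B", "C", -0.008), ("A", "C", 0.005), ("C", "D", 0.010)]

idx = {c: k for k, c in enumerate(cruises)}
rows, d = [], []
for i, j, offset in crossovers:
    r = np.zeros(len(cruises))
    r[idx[i]], r[idx[j]] = 1.0, -1.0
    rows.append(r)
    d.append(offset)

# Fix the gauge freedom (biases are only defined up to a constant): sum = 0.
A = np.vstack(rows + [np.ones(len(cruises))])
b = np.array(d + [0.0])

bias, *_ = np.linalg.lstsq(A, b, rcond=None)
adjustments = -bias  # subtracting the estimated bias aligns the cruises
for c in cruises:
    print(f"cruise {c}: adjust pH by {adjustments[idx[c]]:+.4f}")
```

In practice the crossover offsets are weighted by their uncertainties, and adjustments smaller than the stated accuracy (here 0.005 pH units) would simply not be applied.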
Asquith, William H.; Herrmann, George R.; Cleveland, Theodore G.
2013-01-01
A database containing more than 17,700 discharge values and ancillary hydraulic properties was assembled from summaries of discharge measurement records for 424 U.S. Geological Survey streamflow-gauging stations (stream gauges) in Texas. Each discharge exceeds the 90th-percentile daily mean streamflow as determined by period-of-record, stream-gauge-specific, flow-duration curves. Each discharge therefore is assumed to represent a discharge measurement made during direct-runoff conditions. The hydraulic properties of each discharge measurement included concomitant cross-sectional flow area, water-surface top width, and reported mean velocity. Systematic and statewide investigation of these data in pursuit of regional models for the estimation of discharge and mean velocity has not been previously attempted. Generalized additive regression modeling is used to develop procedures, readily implemented by end-users, for estimation of discharge and mean velocity from select predictor variables at ungauged stream locations. The discharge model uses predictor variables of cross-sectional flow area, top width, stream location, mean annual precipitation, and a generalized terrain and climate index (OmegaEM) derived for a previous flood-frequency regionalization study. The mean velocity model uses predictor variables of discharge, top width, stream location, mean annual precipitation, and OmegaEM. The discharge model has an adjusted R-squared value of about 0.95 and a residual standard error (RSE) of about 0.22 base-10 logarithm (cubic meters per second); the mean velocity model has an adjusted R-squared value of about 0.67 and an RSE of about 0.063 fifth root (meters per second). Example applications and computations using both regression models are provided.
Development and applications of 3D-DIVIMP(HC) Monte Carlo impurity modeling code
NASA Astrophysics Data System (ADS)
Mu, Yarong
A self-contained gas injection system for the Divertor Material Evaluation System (DiMES) on DIII-D, the Porous Plug Injector (PPI), has been employed by A. McLean for in-situ study of chemical erosion in the tokamak divertor environment by injection of CH4. The principal contribution of the present thesis is a new interpretive code, 3D-DIVIMP(HC), which has been developed and successfully applied to the interpretation of the CH, C I, and C II emissions measured during the PPI experiments. The two principal types of experimental data which are compared here with 3D-DIVIMP(HC) code modeling are (a) absolute emissivities measured with a high resolution spectrometer, and (b) 2D filtered camera (TV) pictures taken from a view essentially straight down on the PPI. Incorporating the Janev-Reiter database for the breakup reactions of methane molecules in a plasma, 3D-DIVIMP(HC) is able to replicate these measurements to within the combined experimental and database uncertainties. It is therefore concluded that the basic elements of the physics and chemistry controlling the breakup of methane entering an attached divertor plasma have been identified and are incorporated in 3D-DIVIMP(HC).
Anechoic Chambers: Aerospace Applications. (Latest Citations from the Aerospace Database)
NASA Technical Reports Server (NTRS)
1996-01-01
The bibliography contains citations concerning the design, development, performance, and applications of anechoic chambers in the aerospace industry. Anechoic chamber testing equipment, techniques for evaluation of aerodynamic noise, microwave and radio antennas, and other acoustic measurement devices are considered. Shock wave studies on aircraft models and components, electromagnetic measurements, jet flow studies, and antenna radiation pattern measurements for industrial and military aerospace equipment are discussed. (Contains 50-250 citations and includes a subject term index and title list.)
Anechoic Chambers: Aerospace Applications. (Latest Citations from the Aerospace Database)
NASA Technical Reports Server (NTRS)
1995-01-01
The bibliography contains citations concerning the design, development, performance, and applications of anechoic chambers in the aerospace industry. Anechoic chamber testing equipment, techniques for evaluation of aerodynamic noise, microwave and radio antennas, and other acoustic measurement devices are considered. Shock wave studies on aircraft models and components, electromagnetic measurements, jet flow studies, and antenna radiation pattern measurements for industrial and military aerospace equipment are discussed. (Contains 50-250 citations and includes a subject term index and title list.)
Burnett, Leslie; Barlow-Stewart, Kris; Proos, Anné L; Aizenberg, Harry
2003-05-01
This article describes a generic model for access to samples and information in human genetic databases. The model utilises a "GeneTrustee", a third-party intermediary independent of the subjects and of the investigators or database custodians. The GeneTrustee model has been implemented successfully in various community genetics screening programs and has facilitated research access to genetic databases while protecting the privacy and confidentiality of research subjects. The GeneTrustee model could also be applied to various types of non-conventional genetic databases, including neonatal screening Guthrie card collections, and to forensic DNA samples.
Chess databases as a research vehicle in psychology: Modeling large data.
Vaci, Nemanja; Bilalić, Merim
2017-08-01
The game of chess has often been used for psychological investigations, particularly in cognitive science. The clear-cut rules and well-defined environment of chess provide a model for investigations of basic cognitive processes, such as perception, memory, and problem solving, while the precise rating system for the measurement of skill has enabled investigations of individual differences and expertise-related effects. In the present study, we focus on another appealing feature of chess, namely the large archive databases associated with the game. The German national chess database presented in this study represents fruitful ground for the investigation of multiple longitudinal research questions, since it collects the data of over 130,000 players and spans more than 25 years. The German chess database collects the data of all players, including hobby players, and all tournaments played. This results in a rich and complete collection of the skill, age, and activity of the whole population of chess players in Germany. The database therefore complements the commonly used expertise approach in cognitive science by opening up new possibilities for the investigation of the multiple factors that underlie expertise and skill acquisition. Since large datasets are not common in psychology, their introduction also raises the question of optimal and efficient statistical analysis. We offer the database for download and illustrate how it can be used by providing concrete examples and a step-by-step tutorial using different statistical analyses on a range of topics, including skill development over the lifetime, birth cohort effects, effects of activity and inactivity on skill, and gender differences.
SNPs selection using support vector regression and genetic algorithms in GWAS
2014-01-01
Introduction: This paper proposes a new methodology to simultaneously select the most relevant SNP markers for the characterization of any measurable phenotype described by a continuous variable, using Support Vector Regression with the Pearson Universal kernel as the fitness function of a binary genetic algorithm. The proposed methodology is multi-attribute, in that it considers several markers simultaneously to explain the phenotype, and is based jointly on statistical tools, machine learning and computational intelligence. Results: The suggested method showed potential in simulated database 1, with additive effects only, and in the real database. In simulated database 1, with a total of 1,000 markers, 7 with a major effect on the phenotype and the other 993 SNPs representing noise, the method identified 21 markers. Of this total, 5 are among the 7 relevant SNPs, while 16 are false positives. In the real database, initially with 50,752 SNPs, we reduced the set to 3,073 markers, increasing the accuracy of the model. In simulated database 2, with additive effects and interactions (epistasis), the proposed method matched the methodology most commonly used in GWAS. Conclusions: The method suggested in this paper demonstrates its effectiveness in explaining the real phenotype (PTA for milk): with the application of the wrapper based on a genetic algorithm and Support Vector Regression with the Pearson Universal kernel, many redundant markers were eliminated, increasing the prediction quality and accuracy of the model on the real database without quality control filters. The PUK demonstrated that it can replicate the performance of linear and RBF kernels. PMID:25573332
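A minimal sketch of the wrapper idea on tiny toy data: a binary genetic algorithm whose fitness is the cross-validated SVR score on the selected marker subset. scikit-learn does not ship a Pearson VII (PUK) kernel, so the RBF kernel stands in here; all sizes and GA parameters are illustrative:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(3)
n, p, causal = 120, 60, 5
X = rng.integers(0, 3, size=(n, p)).astype(float)   # SNP genotypes coded 0/1/2
beta = np.zeros(p); beta[:causal] = rng.normal(1.0, 0.2, causal)
y = X @ beta + rng.normal(0.0, 0.5, n)              # additive phenotype

def fitness(mask):
    if mask.sum() == 0:
        return -np.inf
    r2 = cross_val_score(SVR(kernel="rbf"), X[:, mask.astype(bool)], y, cv=3, scoring="r2")
    return r2.mean() - 0.01 * mask.sum()            # penalize large subsets

pop = (rng.random((30, p)) < 0.1).astype(int)       # sparse initial population
for gen in range(40):
    fit = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(fit)[-10:]]            # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, p)
        child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
        flip = rng.random(p) < 0.01                 # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.array(children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected markers:", np.flatnonzero(best))
```

Being a wrapper, the fitness re-trains the regressor for every candidate mask, which is exactly what makes the approach multi-attribute: markers are judged jointly, not one at a time.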
Zhao, Lei; Guo, Yi; Wang, Wei; Yan, Li-juan
2011-08-01
To evaluate the effectiveness of acupuncture as a treatment for neurovascular headache and to analyze the current state of acupuncture treatment. The PubMed database (1966-2010), EMBASE database (1986-2010), Cochrane Library (Issue 1, 2010), Chinese Biomedical Literature Database (1979-2010), China HowNet Knowledge Database (1979-2010), VIP Journals Database (1989-2010), and Wanfang database (1998-2010) were searched. Randomized or quasi-randomized controlled studies were included, with priority given to high-quality randomized controlled trials. Statistical outcome indicators were measured using RevMan 5.0.20 software. A total of 16 articles and 1,535 cases were included. Meta-analysis showed a significant difference between acupuncture therapy and Western medicine therapy [combined RR (random-effects model)=1.46, 95% CI (1.21, 1.75), Z=3.96, P<0.0001], indicating a clearly superior effect of acupuncture; a significant difference also existed between comprehensive acupuncture therapy and acupuncture therapy alone [combined RR (fixed-effects model)=3.35, 95% CI (1.92, 5.82), Z=4.28, P<0.0001], indicating that acupuncture combined with other therapies, such as point injection, scalp acupuncture, and auricular acupuncture, was superior to conventional body acupuncture alone. The clinical studies included, although limited, support the efficacy of acupuncture in the treatment of neurovascular headache. Although acupuncture and its combined therapies provide certain advantages, most clinical studies have small sample sizes. Large-sample randomized controlled trials are needed in the future for more definitive results.
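For readers unfamiliar with the pooling step behind the reported combined RRs, a minimal sketch follows: inverse-variance weighting of log relative risks in the fixed-effect form (the random-effects variant adds a between-study variance component). The study-level numbers are placeholders, not data from this review.

```python
import numpy as np

log_rr = np.log(np.array([1.30, 1.55, 1.42]))   # per-study RR estimates
se = np.array([0.12, 0.20, 0.15])               # standard errors of log RR

w = 1.0 / se**2                                 # inverse-variance weights
pooled_log_rr = np.sum(w * log_rr) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
z = pooled_log_rr / pooled_se

print("combined RR:", np.exp(pooled_log_rr))
print("95% CI:", np.exp(pooled_log_rr - 1.96 * pooled_se),
      np.exp(pooled_log_rr + 1.96 * pooled_se))
print("Z:", z)
```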
Storage Optimization of Educational System Data
ERIC Educational Resources Information Center
Boja, Catalin
2006-01-01
Methods used to minimize the size of data files are described, and indicators for measuring the size of files and databases are defined. The storage optimization process is based on selecting, from a multitude of data storage models, the one that satisfies the proposed problem objective: maximization or minimization of the optimum criterion that is…
Fairman, Kathleen A; Motheral, Brenda R
2003-01-01
Pharmacoeconomic models of Helicobacter (H) pylori eradication have been frequently cited but never validated. To examine retrospectively whether H pylori pharmacoeconomic models direct decision makers to cost-effective therapeutic choices, we first replicated and then validated 2 models, replacing model assumptions with empirical data from a multipayer claims database. Database subjects were 435 commercially insured U.S. patients treated with bismuth-metronidazole-tetracycline (BMT), proton pump inhibitor (PPI)-clarithromycin, or PPI-amoxicillin. Patients met one or more clinical requirements (ulcer disease, gastritis/duodenitis, stomach function disorder, abdominal pain, H pylori infection, endoscopy, or H pylori assay). Sensitivity analyses included only patients with an ulcer diagnosis or gastrointestinal specialist care. Outcome measures were: (1) rates of eradication retreatment; (2) use of office visits, hospitalizations, endoscopies, and antisecretory medication; and (3) cost per effectively treated (nonretreated) patient. Model results overstated the cost-effectiveness of PPI-clarithromycin and underestimated the cost-effectiveness of BMT. Prior to empirical adjustment, costs per effectively treated patient were 1,001 US dollars, 980 US dollars, and 1,730 US dollars for BMT, PPI-clarithromycin, and PPI-amoxicillin, respectively. Estimates after adjustment were US dollars for BMT, 1,118 US dollars for PPI-clarithromycin, and 1,131 US dollars for PPI-amoxicillin. Key model assumptions that proved retrospectively incorrect were largely unsupported by either empirical evidence or systematic assessment of expert opinion. Organizations with access to medical and pharmacy claims databases should test key assumptions of influential models to determine their validity. Journal peer-review processes should pay particular attention to the basis of model assumptions.
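The headline metric is simple to state: assuming "effectively treated" means not retreated, the cost per effectively treated patient is the regimen cohort's total cost divided by the count of non-retreated patients. A minimal sketch with placeholder numbers:

```python
def cost_per_effectively_treated(total_cost_usd: float,
                                 n_patients: int,
                                 n_retreated: int) -> float:
    # Total regimen cost spread over patients who did not need retreatment.
    return total_cost_usd / (n_patients - n_retreated)

# Placeholder cohort: 150 patients, $150,000 total cost, 12 retreated.
print(f"${cost_per_effectively_treated(150_000.0, 150, 12):,.0f} per effectively treated patient")
```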
GIS model for identifying urban areas vulnerable to noise pollution: case study
NASA Astrophysics Data System (ADS)
Bilaşco, Ştefan; Govor, Corina; Roşca, Sanda; Vescan, Iuliu; Filip, Sorin; Fodorean, Ioan
2017-04-01
The unprecedented expansion of national car ownership over the last few years has been driven by economic growth and the need for the population and economic agents to reduce travel time in progressively expanding large urban centres. This has led to an increase in the level of road noise and a stronger impact on the quality of the environment. Noise pollution generated by means of transport represents one of the most important types of pollution, with negative effects on a population's health in large urban areas. As a consequence, tolerable limits of sound intensity for the comfort of inhabitants have been determined worldwide, and the generation of sound maps has been made compulsory in order to identify vulnerable zones and to make recommendations on how to decrease the negative impact on humans. In this context, the present study presents a GIS spatial analysis model-based methodology for identifying and mapping zones vulnerable to noise pollution. The developed GIS model is based on the analysis of all the components influencing sound propagation, represented as vector databases (points of sound intensity measurements, buildings, land use, transport infrastructure), raster databases (DEM), and numerical databases (wind direction and speed, sound intensity). In addition, hourly changes (for representative hours) were analysed to identify the hotspots characterised by the major traffic flows specific to rush hours. The validated results of the model are GIS databases and maps that the local public administration can use as a source of information and as support for decision making.
Empirical study of fuzzy compatibility measures and aggregation operators
NASA Astrophysics Data System (ADS)
Cross, Valerie V.; Sudkamp, Thomas A.
1992-02-01
Two fundamental requirements for the generation of support using incomplete and imprecise information are the ability to measure the compatibility of discriminatory information with domain knowledge and the ability to fuse information obtained from disparate sources. A generic architecture utilizing the generalized fuzzy relational database model has been developed to empirically investigate the support generation capabilities of various compatibility measures and aggregation operators. This paper examines the effectiveness of combinations of compatibility measures from the set-theoretic, geometric distance, and logic-based classes paired with t-norm and generalized mean families of aggregation operators.
NASA Technical Reports Server (NTRS)
Walls, Laurie K.; Kirk, Daniel; deLuis, Kavier; Haberbusch, Mark S.
2011-01-01
As space programs increasingly investigate various options for long-duration space missions, the accurate prediction of propellant behavior over long periods of time in a microgravity environment has become increasingly imperative. This has driven the development of a detailed, physics-based understanding of the slosh behavior of cryogenic propellants over a range of conditions and environments that are relevant for rocket and space storage applications. Recent advancements in computational fluid dynamics (CFD) models and hardware capabilities have enabled the modeling of complex fluid behavior in a microgravity environment. Historically, launch vehicles with moderate-duration upper stage coast periods have carried very limited instrumentation to quantify propellant stratification and boil-off in these environments, so the ability to benchmark these complex computational models is of great consequence. To benchmark enhanced CFD models, recent work focuses on establishing an extensive experimental database of liquid slosh under a wide range of relevant conditions. In addition, a mass gauging system specifically designed to provide high-fidelity measurements of both liquid stratification and liquid/ullage position in a microgravity environment has been developed. This publication summarizes the various experimental programs established to produce this comprehensive database and the unique flight measurement techniques.
Evaluation of Smartphone Inertial Sensor Performance for Cross-Platform Mobile Applications
Kos, Anton; Tomažič, Sašo; Umek, Anton
2016-01-01
Smartphone sensors are being increasingly used in mobile applications. The performance of sensors varies considerably among different smartphone models, and the development of a cross-platform mobile application can be a very complex and demanding task. A publicly accessible resource containing real-life-situation smartphone sensor parameters could be of great help to cross-platform developers. To address this issue we have designed and implemented a pilot participatory sensing application for measuring, gathering, and analyzing smartphone sensor parameters. We start with smartphone accelerometer and gyroscope bias and noise parameters. The application database presently includes sensor parameters of more than 60 different smartphone models across platforms. It is a modest but important start, offering information on several statistical parameters of the measured smartphone sensors and insights into their performance. The next step, a large-scale cloud-based version of the application, is already planned. The large database of smartphone sensor parameters may prove particularly useful for cross-platform developers. It may also be interesting for individual participants, who would be able to check and compare their smartphone sensors against a large number of similar or identical models. PMID:27049391
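A minimal sketch of the two parameters the application starts with, for one accelerometer axis logged while the phone lies flat and still: bias as the offset of the mean from the expected gravity reading, and noise as the standard deviation of the stationary signal. The data here are simulated; the application's actual estimators are not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
g = 9.80665                                          # expected z-axis reading [m/s^2]
# Simulated stationary samples with an assumed bias of 0.05 and noise sigma of 0.02.
z_axis = g + 0.05 + rng.normal(0.0, 0.02, 10_000)

bias = z_axis.mean() - g                             # systematic offset
noise_std = z_axis.std(ddof=1)                       # random noise level
print(f"bias ~ {bias:.4f} m/s^2, noise sigma ~ {noise_std:.4f} m/s^2")
```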
Design and Establishment of Quality Model of Fundamental Geographic Information Database
NASA Astrophysics Data System (ADS)
Ma, W.; Zhang, J.; Zhao, Y.; Zhang, P.; Dang, Y.; Zhao, T.
2018-04-01
In order to make the quality evaluation of Fundamental Geographic Information Databases (FGIDB) more comprehensive, objective, and accurate, this paper studies and establishes a quality model of FGIDB, formed by the standardization of database construction and quality control, the conformity of data-set quality, and the functionality of the database management system. It also designs the overall principles, contents, and methods of quality evaluation for FGIDB, providing the basis and reference for carrying out quality control and quality evaluation. This paper designs the quality elements, evaluation items, and properties of the Fundamental Geographic Information Database step by step based on the quality model framework. Connected organically, these quality elements and evaluation items constitute the quality model of the Fundamental Geographic Information Database. This model is the foundation for stipulating quality requirements and for quality evaluation of the Fundamental Geographic Information Database, and is of great significance for quality assurance in the design and development stage, requirement formulation in the testing and evaluation stage, and the construction of a standard system for quality evaluation technology.
Temsch, W; Luger, A; Riedl, M
2008-01-01
This article presents a mathematical model to calculate HbA1c values based on self-measured blood glucose and past HbA1c levels, thereby enabling patients to monitor diabetes therapy between scheduled checkups. This method could help physicians to make treatment decisions if implemented in a system where glucose data are transferred to a remote server. The method, however, cannot replace HbA1c measurements; past HbA1c values are needed to calibrate the method. The mathematical model of HbA1c formation was developed based on biochemical principles. Unlike an existing HbA1c formula, the new model respects the decreasing contribution of older glucose levels to current HbA1c values. About 12 standard SQL statements embedded in a PHP program were used to perform the Fourier transform. Regression analysis was used to calibrate results against previous HbA1c values. The method can be readily implemented in any SQL database. The predicted HbA1c values thus obtained were in accordance with measured values. They also matched the results of the HbA1c formula in the elevated range. By contrast, the formula was too "optimistic" in the range of better glycemic control. Individual analysis of two subjects improved the accuracy of values and reflected the bias introduced by different glucometers and individual measurement habits.
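A minimal sketch of the key idea (older glucose readings contribute less to the current HbA1c), not the paper's exact model: exponentially decaying weights over past self-measured glucose, with the weighted mean mapped to HbA1c by inverting the ADAG linear relation between HbA1c and estimated average glucose. The half-life is an assumed illustrative value.

```python
import numpy as np

def estimate_hba1c(glucose_mg_dl: np.ndarray, days_ago: np.ndarray,
                   half_life_days: float = 30.0) -> float:
    # Newer readings weigh more; the half-life here is an assumption.
    w = 0.5 ** (days_ago / half_life_days)
    mean_glucose = np.sum(w * glucose_mg_dl) / np.sum(w)
    # Inverted ADAG relation eAG = 28.7 * HbA1c - 46.7 (mg/dl).
    return (mean_glucose + 46.7) / 28.7

glucose = np.array([160.0, 150.0, 140.0, 135.0, 128.0])   # self-measured values
age_days = np.array([80.0, 60.0, 40.0, 20.0, 5.0])        # how old each reading is
print(f"estimated HbA1c: {estimate_hba1c(glucose, age_days):.2f} %")
```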
Analysis of a Shock-Associated Noise Prediction Model Using Measured Jet Far-Field Noise Data
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Sharpe, Jacob A.
2014-01-01
A code for predicting supersonic jet broadband shock-associated noise was assessed using a database containing noise measurements of a jet issuing from a convergent nozzle. The jet was operated at 24 conditions covering six fully expanded Mach numbers with four total temperature ratios. To enable comparisons of the predicted shock-associated noise component spectra with data, the measured total jet noise spectra were separated into mixing noise and shock-associated noise component spectra. Comparisons between predicted and measured shock-associated noise component spectra were used to identify deficiencies in the prediction model. Proposed revisions to the model, based on a study of the overall sound pressure levels for the shock-associated noise component of the measured data, a sensitivity analysis of the model parameters with emphasis on the definition of the convection velocity parameter, and a least-squares fit of the predicted to the measured shock-associated noise component spectra, resulted in a new definition for the source strength spectrum in the model. An error analysis showed that the average error in the predicted spectra was reduced by as much as 3.5 dB for the revised model relative to the average error for the original model.
Modeling and predicting low-speed vehicle emissions as a function of driving kinematics.
Hao, Lijun; Chen, Wei; Li, Lei; Tan, Jianwei; Wang, Xin; Yin, Hang; Ding, Yan; Ge, Yunshan
2017-05-01
An instantaneous emission model was developed to model and predict the real driving emissions of low-speed vehicles. The emission database used in the model was measured using a portable emission measurement system (PEMS) under actual traffic conditions in a rural area, and the characteristics of the emission data were determined in relation to the driving kinematics (speed and acceleration) of the low-speed vehicle. The input to the emission model is a driving cycle: the model takes instantaneous vehicle speed and acceleration as input variables and uses them to interpolate the pollutant emission rate maps, yielding transient pollutant emission rates that are accumulated to give the total emissions released over the whole driving cycle. Vehicle fuel consumption was determined through the carbon balance method. The model's predictions of emissions and fuel consumption for an in-use low-speed vehicle model agreed well with the measured data.
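A minimal sketch of the model structure described above: a lookup map of pollutant emission rate over (speed, acceleration) is interpolated at each one-second sample of a driving cycle and the rates are summed. The grid and map values are toy numbers, not measured PEMS data.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Toy emission-rate map over (speed, acceleration); real maps come from PEMS data.
speed_grid = np.array([0.0, 10.0, 20.0, 30.0])            # km/h
accel_grid = np.array([-1.0, 0.0, 1.0, 2.0])              # m/s^2
rate_map = np.array([[0.05, 0.10, 0.30, 0.60],            # g/s at each grid node
                     [0.10, 0.20, 0.50, 0.90],
                     [0.20, 0.30, 0.80, 1.30],
                     [0.30, 0.50, 1.20, 1.80]])
rate = RegularGridInterpolator((speed_grid, accel_grid), rate_map)

# One-second driving-cycle samples: a speed trace and its derived acceleration.
speed = np.array([0.0, 4.0, 10.0, 16.0, 22.0, 26.0])      # km/h at 1 Hz
accel = np.diff(speed, prepend=speed[0]) / 3.6            # (km/h)/s -> m/s^2

total_g = rate(np.column_stack([speed, accel])).sum()     # g emitted over the cycle
print(f"cycle emissions: {total_g:.2f} g")
```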
NASA Astrophysics Data System (ADS)
Cole, Ryan Kenneth; Schroeder, Paul James; Diego Draper, Anthony; Rieker, Gregory Brian
2018-06-01
Modelling absorption spectra in high pressure, high temperature environments is complicated by the increased relevance of higher order collisional phenomena (e.g. line mixing, collision-induced absorption, finite duration of collisions) that alter the spectral lineshape. Accurate reference spectroscopy in these conditions is of interest for mineralogy and radiative transfer studies of Venus as well as other dense planetary atmospheres. We present a new, high pressure, high temperature absorption spectroscopy facility at the University of Colorado Boulder. This facility employs a dual frequency comb absorption spectrometer to record broadband (500 nm), high resolution (~0.002 nm) spectra in conditions comparable to the Venus surface (730 K, 90 bar). Measurements of the near-infrared spectrum of carbon dioxide at high pressure and temperature will be compared to modeled spectra extrapolated from the HITRAN 2016 database as well as other published models that include additional collisional physics. This comparison gives insight into the effectiveness of existing absorption databases for modeling the lower Venus atmosphere as well as the need to expand absorption models to suit these conditions.
NASA Astrophysics Data System (ADS)
Ringerud, S.; Skofronick Jackson, G.; Kulie, M.; Randel, D.
2016-12-01
NASA's Global Precipitation Measurement Mission (GPM) provides a wealth of both active and passive microwave observations aimed at furthering understanding of global precipitation and the hydrologic cycle. Employing a constellation of passive microwave radiometers increases global coverage and sampling, while the core satellite acts as a transfer standard, enabling consistent retrievals across individual constellation members. The transfer standard is applied in the form of a physically based a priori database constructed for use in Bayesian retrieval algorithms for each radiometer. The database is constructed using hydrometeor profiles optimized for the best fit to simultaneous active/passive core satellite measurements via the GPM Combined Algorithm. Initial validation of GPM rainfall products using the combined database suggests high retrieval errors for convective precipitation over land and at high latitudes. In such regimes, the signal from ice scattering observed at the higher microwave frequencies becomes particularly important for detecting and retrieving precipitation. For cross-track sounders such as MHS and SAPHIR, this signal is crucial. It is therefore important that the scattering signals associated with precipitation are accurately represented and modeled in the retrieval database. In the current GPM combined retrieval and constellation databases, ice hydrometeors are represented as "fluffy spheres", with assumed density and scattering parameters calculated using Mie theory. The resulting simulated brightness temperatures (Tb) agree reasonably well at frequencies up to 89 GHz, but show significant biases at higher frequencies. In this work the database is recreated using an ensemble of non-spherical ice particles with single-scattering properties calculated using the discrete dipole approximation. Simulated Tb agreement is significantly improved across the high frequencies, decreasing biases by an order of magnitude in several of the channels. The new database is applied to a sample of GPM constellation retrievals and the retrieved precipitation rates are compared, to demonstrate areas where the use of more complex ice particles has the greatest effect upon the final retrievals.
Geospatial Database for Strata Objects Based on Land Administration Domain Model (ladm)
NASA Astrophysics Data System (ADS)
Nasorudin, N. N.; Hassan, M. I.; Zulkifli, N. A.; Rahman, A. Abdul
2016-09-01
Recently in Malaysia, the construction of buildings has become more complex, and a strata-objects database has become more important for registering the real world as people now own and use multiple levels of space. Furthermore, strata titles are increasingly important and need to be well managed. LADM, also known as ISO 19152, is a standard model for land administration that allows integrated 2D and 3D representation of spatial units. The aim of this paper is to develop a strata-objects database using LADM. The paper discusses the current 2D geospatial database and the need for a 3D geospatial database in the future, develops a strata-objects database using the standard data model (LADM), and analyzes the developed database against the LADM data model. The current cadastre system in Malaysia, including strata titles, is discussed, the problems of the 2D geospatial database are listed, and the need for a future 3D geospatial database is examined. The processes to design the strata-objects database are conceptual, logical, and physical database design. The strata-objects database allows information on both non-spatial and spatial strata-title data to be found, thus showing the location of each strata unit. This development of a strata-objects database may help in handling strata titles and the associated information.
NoSQL technologies for the CMS Conditions Database
NASA Astrophysics Data System (ADS)
Sipos, Roland
2015-12-01
With the restart of the LHC in 2015, the growth of the CMS Conditions dataset will continue, and the need for consistent and highly available access to the Conditions is a strong motivation to revisit different aspects of the current data storage solutions. We present a study of alternative data storage backends for the Conditions Databases, evaluating some of the most popular NoSQL databases to support a key-value representation of the CMS Conditions. The definition of the database infrastructure is based on the need to store the conditions as BLOBs. Because of this, individual conditions can reach sizes that may require special treatment (splitting) in these NoSQL databases. As big binary objects may be problematic in several database systems, and also to establish an accurate baseline, a testing framework extension was implemented to measure the characteristics of the handling of arbitrary binary data in these databases. Based on the evaluation, prototypes using a document store, a column-oriented store, and a plain key-value store were deployed. An adaptation layer to access the backends in the CMS Offline software was developed to provide transparent support for these NoSQL databases in the CMS context. Additional data modelling approaches and considerations in the software layer, as well as deployment and automation of the databases, are also covered in the research. In this paper we present the results of the evaluation as well as a performance comparison of the prototypes studied.
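A minimal sketch of the splitting treatment mentioned above: a large condition payload is cut into fixed-size chunks stored under derived keys, with a small manifest entry to reassemble it. The key scheme, chunk size, and condition name are illustrative, not the CMS layout.

```python
from typing import Dict

CHUNK = 1 << 20  # 1 MiB per chunk (illustrative choice)

def put_blob(store: Dict[str, bytes], key: str, blob: bytes) -> None:
    # Record the chunk count in a manifest, then store each slice.
    n = (len(blob) + CHUNK - 1) // CHUNK or 1
    store[f"{key}:manifest"] = str(n).encode()
    for i in range(n):
        store[f"{key}:{i}"] = blob[i * CHUNK:(i + 1) * CHUNK]

def get_blob(store: Dict[str, bytes], key: str) -> bytes:
    # Reassemble the payload in chunk order.
    n = int(store[f"{key}:manifest"])
    return b"".join(store[f"{key}:{i}"] for i in range(n))

store: Dict[str, bytes] = {}                     # stand-in for a key-value backend
payload = b"\x00" * (3 * CHUNK + 123)            # a condition larger than one chunk
put_blob(store, "ecal_pedestals_v1", payload)    # hypothetical condition key
assert get_blob(store, "ecal_pedestals_v1") == payload
```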
Multispectrum analysis of the oxygen A-band.
Drouin, Brian J; Benner, D Chris; Brown, Linda R; Cich, Matthew J; Crawford, Timothy J; Devi, V Malathy; Guillaume, Alexander; Hodges, Joseph T; Mlawer, Eli J; Robichaud, David J; Oyafuso, Fabiano; Payne, Vivienne H; Sung, Keeyoon; Wishnow, Edward H; Yu, Shanshan
2017-01-01
Retrievals of atmospheric composition from near-infrared measurements require measurements of airmass to better than the desired precision of the composition. The oxygen bands are obvious choices to quantify airmass since the mixing ratio of oxygen is fixed over the full range of atmospheric conditions. The OCO-2 mission is currently retrieving carbon dioxide concentration using the oxygen A-band for airmass normalization. The 0.25% accuracy desired for the carbon dioxide concentration has pushed the required state-of-the-art for oxygen spectroscopy. To measure O2 A-band cross-sections with such accuracy through the full range of atmospheric pressure requires a sophisticated line-shape model (Rautian or Speed-Dependent Voigt) with line mixing (LM) and collision-induced absorption (CIA). Models of each of these phenomena exist; however, this work presents an integrated self-consistent model developed to ensure the best accuracy. It is also important to consider multiple sources of spectroscopic data for such a study in order to improve the dynamic range of the model and to minimize effects of instrumentation and associated systematic errors. The techniques of Fourier Transform Spectroscopy (FTS) and Cavity Ring-Down Spectroscopy (CRDS) provide complementary information for such an analysis. We utilize multispectrum fitting software to generate a comprehensive new database with improved accuracy based on these datasets. The extensive information will be made available as a multi-dimensional cross-section (ABSCO) table and the parameterization will be offered for inclusion in the HITRANonline database.
The implementation of non-Voigt line profiles in the HITRAN database: H2 case study
NASA Astrophysics Data System (ADS)
Wcisło, P.; Gordon, I. E.; Tran, H.; Tan, Y.; Hu, S.-M.; Campargue, A.; Kassi, S.; Romanini, D.; Hill, C.; Kochanov, R. V.; Rothman, L. S.
2016-07-01
Experimental capabilities of molecular spectroscopy and its applications nowadays require a sub-percent or even sub-per mille accuracy of the representation of the shapes of molecular transitions. This implies the necessity of using more advanced line-shape models which are characterized by many more parameters than a simple Voigt profile. It is a great challenge for modern molecular spectral databases to store and maintain the extended set of line-shape parameters as well as their temperature dependences. It is even more challenging to reliably retrieve these parameters from experimental spectra over a large range of pressures and temperatures. In this paper we address this problem starting from the case of the H2 molecule for which the non-Voigt line-shape effects are exceptionally pronounced. For this purpose we reanalyzed the experimental data reported in the literature. In particular, we performed detailed line-shape analysis of high-quality spectra obtained with cavity-enhanced techniques. We also report the first high-quality cavity-enhanced measurement of the H2 fundamental vibrational mode. We develop a correction to the Hartmann-Tran profile (HTP) which adjusts the HTP to the particular model of the velocity-changing collisions. This allows the measured spectra to be better represented over a wide range of pressures. The problem of storing the HTP parameters in the HITRAN database together with their temperature dependences is also discussed.
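For orientation, the baseline that profiles such as the HTP refine is the Voigt profile, which is conveniently evaluated through the Faddeeva function. A minimal sketch with illustrative parameter values (this is the simple profile the paper moves beyond, not the HTP itself):

```python
import numpy as np
from scipy.special import wofz
from scipy.integrate import trapezoid

def voigt(nu, nu0, sigma, gamma):
    # Voigt profile via the Faddeeva function w(z): Gaussian width sigma
    # (Doppler) convolved with Lorentzian half-width gamma (pressure).
    z = ((nu - nu0) + 1j * gamma) / (sigma * np.sqrt(2.0))
    return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

nu = np.linspace(-1.0, 1.0, 2001)            # detuning from line centre [cm^-1]
profile = voigt(nu, 0.0, sigma=0.05, gamma=0.02)
print("area ~", trapezoid(profile, nu))      # unit-normalized: close to 1
```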
The radiopurity.org material database
NASA Astrophysics Data System (ADS)
Cooley, J.; Loach, J. C.; Poon, A. W. P.
2018-01-01
The database at http://www.radiopurity.org is the world's largest public database of material radiopurity measurements. These measurements are used by members of the low-background physics community to build experiments that search for neutrinos, neutrinoless double-beta decay, WIMP dark matter, and other exciting physics. This paper summarizes the current status and future plans of this database.
Eronen, Lauri; Toivonen, Hannu
2012-06-06
Biological databases contain large amounts of data concerning the functions and associations of genes and proteins. Integration of data from several such databases into a single repository can aid the discovery of previously unknown connections spanning multiple types of relationships and databases. Biomine is a system that integrates cross-references from several biological databases into a graph model with multiple types of edges, such as protein interactions, gene-disease associations and gene ontology annotations. Edges are weighted based on their type, reliability, and informativeness. We present Biomine and evaluate its performance in link prediction, where the goal is to predict pairs of nodes that will be connected in the future, based on current data. In particular, we formulate protein interaction prediction and disease gene prioritization tasks as instances of link prediction. The predictions are based on a proximity measure computed on the integrated graph. We consider and experiment with several such measures, and perform a parameter optimization procedure where different edge types are weighted to optimize link prediction accuracy. We also propose a novel method for disease-gene prioritization, defined as finding a subset of candidate genes that cluster together in the graph. We experimentally evaluate Biomine by predicting future annotations in the source databases and prioritizing lists of putative disease genes. The experimental results show that Biomine has strong potential for predicting links when a set of selected candidate links is available. The predictions obtained using the entire Biomine dataset are shown to clearly outperform ones obtained using any single source of data alone, when different types of links are suitably weighted. In the gene prioritization task, an established reference set of disease-associated genes is useful, but the results show that under favorable conditions, Biomine can also perform well when no such information is available. The Biomine system is a proof of concept. Its current version contains 1.1 million entities and 8.1 million relations between them, with focus on human genetics. Some of its functionalities are available in a public query interface at http://biomine.cs.helsinki.fi, allowing searching for and visualizing connections between given biological entities.
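A minimal sketch of one proximity measure in the spirit described above: treat edge weights as probabilities and score a node pair by the probability product of its best path, computed as a shortest path over -log(weight). The graph content and weights are a toy example, not Biomine data.

```python
import math
import networkx as nx

g = nx.Graph()
edges = [("geneA", "proteinA", 0.9),      # e.g. a coding relation
         ("proteinA", "proteinB", 0.7),   # e.g. a protein interaction
         ("proteinB", "diseaseX", 0.8),   # e.g. a disease association
         ("geneA", "diseaseX", 0.3)]      # a weaker direct link
for u, v, p in edges:
    # Dijkstra minimizes summed -log(p), i.e. maximizes the product of p.
    g.add_edge(u, v, cost=-math.log(p))

def proximity(a: str, b: str) -> float:
    return math.exp(-nx.dijkstra_path_length(g, a, b, weight="cost"))

# Best of the direct 0.3 link and the indirect 0.9 * 0.7 * 0.8 = 0.504 path.
print(proximity("geneA", "diseaseX"))
```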
Carey, George B; Kazantsev, Stephanie; Surati, Mosmi; Rolle, Cleo E; Kanteti, Archana; Sadiq, Ahad; Bahroos, Neil; Raumann, Brigitte; Madduri, Ravi; Dave, Paul; Starkey, Adam; Hensing, Thomas; Husain, Aliya N; Vokes, Everett E; Vigneswaran, Wickii; Armato, Samuel G; Kindler, Hedy L; Salgia, Ravi
2012-01-01
Objective An area of need in cancer informatics is the ability to store images in a comprehensive database as part of translational cancer research. To meet this need, we have implemented a novel tandem database infrastructure that facilitates image storage and utilisation. Background We had previously implemented the Thoracic Oncology Program Database Project (TOPDP) database for our translational cancer research needs. While useful for many research endeavours, it is unable to store images, hence our need to implement an imaging database which could communicate easily with the TOPDP database. Methods The Thoracic Oncology Research Program (TORP) imaging database was designed using the Research Electronic Data Capture (REDCap) platform, which was developed by Vanderbilt University. To demonstrate proof of principle and evaluate utility, we performed a retrospective investigation into tumour response for malignant pleural mesothelioma (MPM) patients treated at the University of Chicago Medical Center with either of two analogous chemotherapy regimens and consented to at least one of two UCMC IRB protocols, 9571 and 13473A. Results A cohort of 22 MPM patients was identified using clinical data in the TOPDP database. After measurements were acquired, two representative CT images and 0–35 histological images per patient were successfully stored in the TORP database, along with clinical and demographic data. Discussion We implemented the TORP imaging database to be used in conjunction with our comprehensive TOPDP database. While it requires an additional effort to use two databases, our database infrastructure facilitates more comprehensive translational research. Conclusions The investigation described herein demonstrates the successful implementation of this novel tandem imaging database infrastructure, as well as the potential utility of investigations enabled by it. The data model presented here can be utilised as the basis for further development of other larger, more streamlined databases in the future. PMID:23103606
Assessing efficiency of software production for NASA-SEL data
NASA Technical Reports Server (NTRS)
Vonmayrhauser, Anneliese; Roeseler, Armin
1993-01-01
This paper uses production models to identify and quantify efficient allocation of resources and key drivers of software productivity for project data in the NASA-SEL database. While analysis allows identification of efficient projects, many of the metrics that could have provided a more detailed analysis are not at a level of measurement to allow production model analysis. Production models must be used with proper parameterization to be successful. This may mean a new look at which metrics are helpful for efficiency assessment.
2012-01-01
Background Pathway data are important for understanding the relationship between genes, proteins and many other molecules in living organisms. Pathway gene relationships are crucial information for guidance, prediction, reference and assessment in biochemistry, computational biology, and medicine. Many well-established databases (e.g., KEGG, WikiPathways, and BioCyc) are dedicated to collecting pathway data for public access. However, the effectiveness of these databases is hindered by issues such as incompatible data formats, inconsistent molecular representations, inconsistent molecular relationship representations, inconsistent referrals to pathway names, and non-comprehensive data coverage across different databases. Results In this paper, we overcome these issues through extraction, normalization and integration of pathway data from several major public databases (KEGG, WikiPathways, BioCyc, etc). We build a database that not only hosts our integrated pathway gene relationship data for public access but also maintains the necessary updates in the long run. This public repository is named IntPath (Integrated Pathway gene relationship database for model organisms and important pathogens). Four organisms (S. cerevisiae, M. tuberculosis H37Rv, H. sapiens and M. musculus) are included in this version (V2.0) of IntPath. IntPath uses the "full unification" approach to ensure no deletion and no introduced noise in this process. Therefore, IntPath contains much richer pathway-gene and pathway-gene pair relationships and a much larger number of non-redundant genes and gene pairs than any of the single-source databases. The gene relationships of each gene (measured by average node degree) per pathway are significantly richer. The gene relationships in each pathway (measured by average number of gene pairs per pathway) are also considerably richer in the integrated pathways. Moderate manual curation is involved to remove errors and noise from the source data (e.g., the gene ID errors in WikiPathways and relationship errors in KEGG). We turn complicated and incompatible XML data formats and inconsistent gene and gene relationship representations from different source databases into normalized and unified pathway-gene and pathway-gene pair relationships neatly recorded in simple tab-delimited text format and MySQL tables, which facilitates convenient automatic computation and large-scale referencing in many related studies. IntPath data can be downloaded in text format or as a MySQL dump. IntPath data can also be retrieved and analyzed conveniently through web services by local programs or through the web interface by mouse clicks. Several useful analysis tools are also provided in IntPath. Conclusions We have overcome in IntPath the issues of compatibility, consistency, and comprehensiveness that often hamper effective use of pathway databases. We have included four organisms in the current release of IntPath. Our methodology and programs described in this work can be easily applied to other organisms, and we will include more model organisms and important pathogens in future releases of IntPath. IntPath maintains regular updates and is freely available at http://compbio.ddns.comp.nus.edu.sg:8080/IntPath. PMID:23282057
pE-DB: a database of structural ensembles of intrinsically disordered and of unfolded proteins.
Varadi, Mihaly; Kosol, Simone; Lebrun, Pierre; Valentini, Erica; Blackledge, Martin; Dunker, A Keith; Felli, Isabella C; Forman-Kay, Julie D; Kriwacki, Richard W; Pierattelli, Roberta; Sussman, Joel; Svergun, Dmitri I; Uversky, Vladimir N; Vendruscolo, Michele; Wishart, David; Wright, Peter E; Tompa, Peter
2014-01-01
The goal of pE-DB (http://pedb.vib.be) is to serve as an openly accessible database for the deposition of structural ensembles of intrinsically disordered proteins (IDPs) and of denatured proteins, based on nuclear magnetic resonance spectroscopy, small-angle X-ray scattering and other data measured in solution. Owing to the inherent flexibility of IDPs, solution techniques are particularly appropriate for characterizing their biophysical properties, and structural ensembles in agreement with these data provide a convenient tool for describing the underlying conformational sampling. Database entries consist of (i) primary experimental data with descriptions of the acquisition methods and algorithms used for the ensemble calculations, and (ii) the structural ensembles consistent with these data, provided as a set of models in Protein Data Bank format. pE-DB is open for submissions from the community, and is intended as a forum for disseminating the structural ensembles and the methodologies used to generate them. While the need to represent IDP structures is clear, methods for determining and evaluating the structural ensembles are still evolving. The availability of the pE-DB database is expected to promote the development of new modeling methods and lead to a better understanding of how function arises from disordered states.
Advancing Consumer Product Composition and Chemical ...
This presentation describes EPA efforts to collect, model, and measure publicly available consumer product data for use in exposure assessment. The development of the ORD Chemicals and Products database will be described, as will machine-learning-based models for predicting chemical function. Finally, the talk describes new mass spectrometry-based methods for measuring chemicals in formulations and articles. This presentation is an invited talk at the ICCA-LRI workshop "Fit-For-Purpose Exposure Assessments For Risk-Based Decision Making". The talk will share EPA efforts to characterize the components of consumer products for use in exposure assessment with the international exposure science community.
NASA Astrophysics Data System (ADS)
Turkoglu, Danyal
Precise knowledge of prompt gamma-ray intensities following neutron capture is critical for elemental and isotopic analyses, homeland security, modeling nuclear reactors, etc. A recently developed database of prompt gamma-ray production cross sections and nuclear structure information in the form of a decay scheme, called the Evaluated Gamma-ray Activation File (EGAF), is under revision. Statistical model calculations are useful for checking the consistency of the decay scheme, providing insight on its completeness and accuracy. Furthermore, these statistical model calculations are necessary to estimate the contribution of continuum gamma-rays, which cannot be experimentally resolved due to the high density of excited states in medium- and heavy-mass nuclei. Decay-scheme improvements in EGAF lead to improvements to other databases (Evaluated Nuclear Structure Data File, Reference Input Parameter Library) that are ultimately used in nuclear-reaction models to generate the Evaluated Nuclear Data File (ENDF). Gamma-ray transitions following neutron capture in 93Nb have been studied at the cold-neutron beam facility at the Budapest Research Reactor. Measurements have been performed using a coaxial HPGe detector with Compton suppression. Partial gamma-ray production capture cross sections at a neutron velocity of 2200 m/s have been deduced relative to that of the 255.9-keV transition after cold-neutron capture by 93Nb. With the measurement of a niobium chloride target, this partial cross section was internally standardized to the cross section for the 1951-keV transition after cold-neutron capture by 35Cl. The resulting (0.1377 +/- 0.0018) barn (b) partial cross section produced a calibration factor that was 23% lower than previously measured for the EGAF database. The thermal-neutron cross sections were deduced for the 93Nb(n,gamma)94mNb and 93Nb(n,gamma)94gNb reactions by summing the experimentally measured partial gamma-ray production cross sections associated with the ground-state transitions below the 396-keV level and combining that summation with the contribution to the ground state from the quasi-continuum above 396 keV, determined with Monte Carlo statistical model calculations using the DICEBOX computer code. These values, sigma_m and sigma_0, were (0.83 +/- 0.05) b and (1.16 +/- 0.11) b, respectively, and found to be in agreement with literature values. Comparison of the modeled population and experimental depopulation of individual levels confirmed tentative spin assignments and suggested changes where imbalances existed.
Reconsidering the conceptualization of nursing workload: literature review.
Morris, Roisin; MacNeela, Padraig; Scott, Anne; Treacy, Pearl; Hyde, Abbey
2007-03-01
This paper reports a literature review that aimed to analyse the way in which nursing intensity and patient dependency have been considered to be conceptually similar to nursing workload, and to propose a model to show how these concepts actually differ in both theoretical and practical terms. The literature on nursing workload considers the concepts of patient 'dependency' and nursing 'intensity' in the realm of nursing workload. These concepts differ by definition but are used to measure the same phenomenon, i.e. nursing workload. The literature search was undertaken in 2004 using electronic databases, reference lists and other available literature. Papers were sourced from the Medline, Psychlit, CINAHL and Cochrane databases and through the general search engine Google. The keywords focussed on nursing workload, nursing intensity and patient dependency. Nursing work and workload concepts and labels are defined and measured in different and often contradictory ways. It is vitally important to understand these differences when using such conceptualizations to measure nursing workload. A preliminary model is put forward to clarify the relationships between nursing workload concepts. In presenting a preliminary model of nursing workload, it is hoped that nursing workload might be better understood so that it becomes more visible and recognizable. Increasing the visibility of nursing workload should have a positive impact on nursing workload management and on the provision of patient care.
Fenelon, Joseph M.
2006-01-01
More than 1,200 water-level measurements from 1957 to 2005 in the Rainier Mesa area of the Nevada Test Site were quality assured and analyzed. Water levels were measured from 50 discrete intervals within 18 boreholes and from 4 tunnel sites. An interpretive database was constructed that describes water-level conditions for each water level measured in the Rainier Mesa area. Multiple attributes were assigned to each water-level measurement in the database to describe the hydrologic conditions at the time of measurement. General quality, temporal variability, regional significance, and hydrologic conditions are attributed for each water-level measurement. The database also includes hydrograph narratives that describe the water-level history of each well.
Pharmacophore Modelling and Synthesis of Quinoline-3-Carbohydrazide as Antioxidants
El Bakkali, Mustapha; Ismaili, Lhassane; Tomassoli, Isabelle; Nicod, Laurence; Pudlo, Marc; Refouvelet, Bernard
2011-01-01
From well-known antioxidant agents, we developed a first pharmacophore model containing four common chemical features: one aromatic ring and three hydrogen bond acceptors. This model served as a template in virtual screening of the Maybridge and NCI databases, which resulted in the selection of sixteen compounds. The selected compounds showed good antioxidant activity measured by three chemical tests: DPPH radical, hydroxyl radical, and superoxide radical scavenging. New synthetic compounds with a good correlation to the model were prepared, and some of them presented good antioxidant activity. PMID:25954520
NASA Technical Reports Server (NTRS)
Wang, Yi; Pant, Kapil; Brenner, Martin J.; Ouellette, Jeffrey A.
2018-01-01
This paper presents a data analysis and modeling framework to tailor and develop a linear parameter-varying (LPV) aeroservoelastic (ASE) model database for flexible aircraft across a broad 2D flight parameter space. The Kriging surrogate model is constructed using ASE models at a fraction of the grid points within the original model database, and the ASE model at any flight condition can then be obtained simply through surrogate model interpolation. A greedy sampling algorithm is developed to select as the next sample point the one that carries the worst relative error between the surrogate model prediction and the benchmark model in the frequency domain among all input-output channels. The process is iterated to incrementally improve surrogate model accuracy until a pre-determined tolerance or iteration budget is met. The methodology is applied to the ASE model database of a flexible aircraft currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that the proposed method can reduce the number of models in the original database by 67%. Even so, the ASE models obtained through Kriging interpolation match the models in the original database, constructed directly from the physics-based tool, with the worst relative error far below 1%. The interpolated ASE model exhibits continuously varying gains along a set of prescribed flight conditions. More importantly, the selected grid points are distributed non-uniformly in the parameter space, a) capturing the distinctly different dynamic behavior and its dependence on flight parameters, and b) reiterating the need and utility of adaptive space sampling techniques for ASE model database compaction. The present framework extends directly to high-dimensional flight parameter spaces, and can be used to guide ASE model development, model order reduction, robust control synthesis, and novel vehicle design for flexible aircraft.
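A minimal sketch of the greedy sampling loop, reduced to a scalar response (think of one frequency-response magnitude for one channel) over a 2D flight-parameter grid; the full method applies the worst-case relative error across all channels and frequencies. The response function, grid, and tolerance are synthetic stand-ins.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Benchmark "database": a response value at every grid point (in the real
# problem these are physics-based ASE models and are expensive to build).
mach, alt = np.meshgrid(np.linspace(0.3, 0.9, 13), np.linspace(0.0, 1.0, 11))
grid = np.column_stack([mach.ravel(), alt.ravel()])
truth = np.sin(6.0 * grid[:, 0]) + 0.5 * grid[:, 1] ** 2   # stand-in response

sampled = [0, len(grid) // 2, len(grid) - 1]               # small seed set
for _ in range(30):
    gp = GaussianProcessRegressor(kernel=RBF(0.2), normalize_y=True)
    gp.fit(grid[sampled], truth[sampled])
    rel_err = np.abs(gp.predict(grid) - truth) / (np.abs(truth) + 1e-3)
    rel_err[sampled] = 0.0                                 # already in the set
    if rel_err.max() < 0.01:                               # tolerance met
        break
    sampled.append(int(rel_err.argmax()))                  # add the worst point

print(f"kept {len(sampled)} of {len(grid)} grid points, "
      f"worst relative error {rel_err.max():.4f}")
```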
BioModels Database: a repository of mathematical models of biological processes.
Chelliah, Vijayalakshmi; Laibe, Camille; Le Novère, Nicolas
2013-01-01
BioModels Database is a public online resource that allows storing and sharing of published, peer-reviewed, quantitative, dynamic models of biological processes. The model components and behaviour are thoroughly checked to correspond to the original publication and manually curated to ensure reliability. Furthermore, the model elements are annotated with terms from controlled vocabularies as well as linked to relevant external data resources. This greatly helps in model interpretation and reuse. Models are accepted in SBML and CellML formats, stored in SBML format, and made available for download in various other common formats such as BioPAX, Octave, SciLab, VCML, XPP and PDF, in addition to SBML. The reaction network diagram of the models is also available in several formats. BioModels Database features a search engine, which provides simple and more advanced searches. Features such as online simulation and creation of smaller models (submodels) from the selected model elements of a larger one are provided. BioModels Database can be accessed both via a web interface and programmatically via web services. New models are added to BioModels Database at regular releases, about every 4 months.
Compartmental and Data-Based Modeling of Cerebral Hemodynamics: Linear Analysis.
Henley, B C; Shin, D C; Zhang, R; Marmarelis, V Z
Compartmental and data-based modeling of cerebral hemodynamics are alternative approaches that utilize distinct model forms and have been employed in the quantitative study of cerebral hemodynamics. This paper examines the relation between a compartmental equivalent-circuit and a data-based input-output model of dynamic cerebral autoregulation (DCA) and CO2-vasomotor reactivity (DVR). The compartmental model is constructed as an equivalent-circuit utilizing putative first principles and previously proposed hypothesis-based models. The linear input-output dynamics of this compartmental model are compared with data-based estimates of the DCA-DVR process. This comparative study indicates that there are some qualitative similarities between the two-input compartmental model and experimental results.
Ablation Predictions for Carbonaceous Materials Using Two Databases for Species Thermodynamics
NASA Technical Reports Server (NTRS)
Milos, F. S.; Chen, Y.-K.
2013-01-01
During previous work at NASA Ames Research Center, most ablation predictions were obtained using a species thermodynamics database derived primarily from the JANAF thermochemical tables. However, the Chemical Equilibrium with Applications (CEA) thermodynamics database, also used by NASA, is considered more up to date. In this work, ablation analyses were performed for carbon and carbon phenolic materials using both sets of species thermodynamics. The ablation predictions are comparable at low and moderate heat fluxes, where the dominant mechanism is carbon oxidation. For high heat fluxes where sublimation is important, the predictions differ, with the CEA model predicting a lower ablation rate. The disagreement is greater for carbon phenolic than for carbon, and this difference is attributed to hydrocarbon species that may contribute to the ablation rate. Sample calculations for representative Orion and Stardust environments show significant differences only in the sublimation regime. For Stardust, if the calculations include a nominal environmental uncertainty for aeroheating, then the CEA model predicts a range of recession that is consistent with measurements for both heatshield cores.
The Application of Lidar to Synthetic Vision System Integrity
NASA Technical Reports Server (NTRS)
Campbell, Jacob L.; UijtdeHaag, Maarten; Vadlamani, Ananth; Young, Steve
2003-01-01
One goal in the development of a Synthetic Vision System (SVS) is to create a system that can be certified by the Federal Aviation Administration (FAA) for use at various flight criticality levels. As part of NASA's Aviation Safety Program, Ohio University and NASA Langley have been involved in the research and development of real-time terrain database integrity monitors for SVS. Integrity monitors based on a consistency check with onboard sensors may be required if the inherent terrain database integrity is not sufficient for a particular operation. Sensors such as the radar altimeter and weather radar, which are available on most commercial aircraft, are currently being investigated for use in a real-time terrain database integrity monitor. This paper introduces the concept of using a Light Detection And Ranging (LiDAR) sensor as part of a real-time terrain database integrity monitor. A LiDAR system consists of a scanning laser ranger, an inertial measurement unit (IMU), and a Global Positioning System (GPS) receiver. Information from these three sensors can be combined to generate synthesized terrain models (profiles), which can then be compared to the stored SVS terrain model. This paper discusses an initial performance evaluation of the LiDAR-based terrain database integrity monitor using LiDAR data collected over Reno, Nevada. The paper addresses the consistency-checking mechanism and test statistic, sensitivity to position errors, and a comparison of the LiDAR-based integrity monitor to a radar altimeter-based integrity monitor.
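A minimal sketch of a consistency check of the kind described: the sensor-synthesized terrain profile is compared against the stored database profile along the flight path, and an alert is raised when a disparity statistic exceeds a threshold. The mean-absolute-disparity statistic and the threshold are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)
# Stored database terrain elevations along the flight path [m] (simulated).
database_profile = 1500.0 + 50.0 * np.sin(np.linspace(0.0, 4.0, 200))
# LiDAR/IMU/GPS-synthesized profile: database plus sensor noise (simulated).
lidar_profile = database_profile + rng.normal(0.0, 1.5, 200)

disparity = lidar_profile - database_profile
test_statistic = np.mean(np.abs(disparity))   # mean absolute disparity [m]
THRESHOLD_M = 10.0                            # illustrative integrity threshold

print(f"test statistic: {test_statistic:.2f} m")
print("terrain database consistent" if test_statistic < THRESHOLD_M
      else "integrity alert: database/sensor disagreement")
```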
Burstyn, I; Kromhout, H; Cruise, P J; Brennan, P
2000-01-01
The objective of this project was to construct a database of exposure measurements which would be used to retrospectively assess the intensity of various exposures in an epidemiological study of cancer risk among asphalt workers. The database was developed as a stand-alone Microsoft Access 2.0 application, which could work in each of the national centres. Exposure data included in the database comprised measurements of exposure levels, plus supplementary information on production characteristics which was analogous to that used to describe companies enrolled in the study. The database has been successfully implemented in eight countries, demonstrating the flexibility and data security features adequate to the task. The database allowed retrieval and consistent coding of 38 data sets of which 34 have never been described in peer-reviewed scientific literature. We were able to collect most of the data intended. As of February 1999 the database consisted of 2007 sets of measurements from persons or locations. The measurements appeared to be free from any obvious bias. The methodology embodied in the creation of the database can be usefully employed to develop exposure assessment tools in epidemiological studies.
NASA Astrophysics Data System (ADS)
Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia
2017-05-01
A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to find the least-squares fit between the measured and calculated values over time, which may encounter problems such as overfitting of model parameters and a lack of reproducibility, especially when handling noisy or erroneous data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model that deals with noisy data directly rather than trying to smooth the noise in the image. Also, owing to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, which can find a balance between fitting the historical data and the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.
Sauvé, Jean-François; Beaudry, Charles; Bégin, Denis; Dion, Chantal; Gérin, Michel; Lavoué, Jérôme
2012-09-01
A quantitative determinants-of-exposure analysis of respirable crystalline silica (RCS) levels in the construction industry was performed using a database compiled from an extensive literature review. Statistical models were developed to predict work-shift exposure levels by trade. Monte Carlo simulation was used to recreate exposures derived from summarized measurements which were combined with single measurements for analysis. Modeling was performed using Tobit models within a multimodel inference framework, with year, sampling duration, type of environment, project purpose, project type, sampling strategy and use of exposure controls as potential predictors. 1346 RCS measurements were included in the analysis, of which 318 were non-detects and 228 were simulated from summary statistics. The model containing all the variables explained 22% of total variability. Apart from trade, sampling duration, year and strategy were the most influential predictors of RCS levels. The use of exposure controls was associated with an average decrease of 19% in exposure levels compared to none, and increased concentrations were found for industrial, demolition and renovation projects. Predicted geometric means for year 1999 were the highest for drilling rig operators (0.238 mg m(-3)) and tunnel construction workers (0.224 mg m(-3)), while the estimated exceedance fraction of the ACGIH TLV by trade ranged from 47% to 91%. The predicted geometric means in this study indicated important overexposure compared to the TLV. However, the low proportion of variability explained by the models suggests that the construction trade is only a moderate predictor of work-shift exposure levels. The impact of the different tasks performed during a work shift should also be assessed to provide better management and control of RCS exposure levels on construction sites.
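As an illustration of the censored-regression machinery behind such models, the following is a minimal Tobit log-likelihood sketch for left-censored (non-detect) measurements on the log scale; it is not the authors' model, and the predictor and data below are simulated:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_negloglik(params, X, y, censored):
    """Negative log-likelihood of a left-censored (Tobit) regression on
    log-transformed exposure levels. `censored` marks non-detects, for
    which y holds the detection limit."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    mu = X @ beta
    ll_obs = norm.logpdf(y, mu, sigma)[~censored].sum()
    ll_cens = norm.logcdf((y - mu) / sigma)[censored].sum()
    return -(ll_obs + ll_cens)

# Tiny simulated example: one predictor (e.g., presence of exposure controls).
rng = np.random.default_rng(1)
n = 300
X = np.column_stack([np.ones(n), rng.integers(0, 2, n)])
y_true = X @ np.array([-1.0, -0.2]) + rng.normal(0, 0.8, n)
lod = -2.0                           # detection limit on the log scale
censored = y_true < lod
y = np.where(censored, lod, y_true)

res = minimize(tobit_negloglik, x0=np.zeros(3), args=(X, y, censored))
print(res.x)  # [intercept, effect of controls, log sigma]
```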
Data Model for Multi Hazard Risk Assessment Spatial Support Decision System
NASA Astrophysics Data System (ADS)
Andrejchenko, Vera; Bakker, Wim; van Westen, Cees
2014-05-01
The goal of the CHANGES Spatial Decision Support System is to support end-users in making decisions related to risk reduction measures for areas at risk from multiple hydro-meteorological hazards. The crucial parts in the design of the system are the user requirements, the data model, the data storage and management, and the relationships between the objects in the system. The implementation of the data model is carried out entirely with an open source database management system with a spatial extension. The web application is implemented using open source geospatial technologies, with PostGIS as the database, Python for scripting, and GeoServer and JavaScript libraries for visualization and the client-side user interface. The model can handle information from different study areas (currently, study areas from France, Romania, Italy, and Poland are considered). Furthermore, the data model handles information about administrative units; projects accessible by different types of users; user-defined hazard types (floods, snow avalanches, debris flows, etc.); hazard intensity maps of different return periods; spatial probability maps; elements-at-risk maps (buildings, land parcels, linear features, etc.); and economic and population vulnerability information dependent on the hazard type and the type of the element at risk, in the form of vulnerability curves. The system has an inbuilt database of vulnerability curves, but users can also add their own. Included in the model is the management of a combination of different scenarios (e.g. related to climate change, land use change or population change) and alternatives (possible risk-reduction measures), as well as data structures for saving the calculated economic or population loss or exposure per element at risk, aggregating the loss and exposure using the administrative unit maps, and, finally, producing the risk maps. The risk data can be used for cost-benefit analysis (CBA) and spatial multi-criteria evaluation (SMCE); the data model includes data structures for both. The model is at the stage where risk and cost-benefit calculations can be stored, while the remaining parts are currently under development. Multi-criteria information, user management, and their relation to the rest of the model are our next steps. A carefully designed data model plays a crucial role in the development of the whole system: it enables rapid development, keeps the data consistent, and, in the end, supports the end-user in making good decisions on risk-reduction measures related to multiple natural hazards. This work is part of the EU FP7 Marie Curie ITN "CHANGES" project (www.changes-itn.edu)
Air traffic control specialist performance measurement database.
DOT National Transportation Integrated Search
1999-06-01
The Air Traffic Control Specialist (ATCS) Performance Measurement Database is a compilation of performance measures and measurement techniques that researchers have used. It may be applicable to other human factor research related to air traffic co...
Combining computational models, semantic annotations and simulation experiments in a graph database
Henkel, Ron; Wolkenhauer, Olaf; Waltemath, Dagmar
2015-01-01
Model repositories such as the BioModels Database, the CellML Model Repository, or JWS Online are frequently accessed to retrieve computational models of biological systems. However, their storage concepts support only restricted types of queries, and not all data inside the repositories can be retrieved. In this article we present a storage concept that meets this challenge. It is grounded in a graph database, reflects the models' structure, incorporates semantic annotations and simulation descriptions, and ultimately connects different types of model-related data. The connections between heterogeneous model-related data and bio-ontologies enable efficient search via biological facts and grant access to new model features. The introduced concept notably improves access to computational models and associated simulations in a model repository. This has positive effects on tasks such as model search, retrieval, ranking, matching, and filtering. Furthermore, our work for the first time enables CellML- and Systems Biology Markup Language-encoded models to be effectively maintained in one database. We show how these models can be linked via annotations and queried. Database URL: https://sems.uni-rostock.de/projects/masymos/ PMID:25754863
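As a hypothetical illustration of the kind of query such a storage concept enables, the sketch below uses the Neo4j Python driver with made-up node labels and relationship types; the actual MaSyMoS schema is described in the article and differs from this:

```python
from neo4j import GraphDatabase

# Hypothetical node labels and relationship types -- not the MaSyMoS schema.
QUERY = """
MATCH (m:Model)-[:HAS_ANNOTATION]->(a:Annotation)-[:REFERS_TO]->(t:OntologyTerm)
WHERE t.name CONTAINS $term
RETURN m.name AS model, t.name AS ontology_term
LIMIT 25
"""

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))
with driver.session() as session:
    # Find models annotated with an ontology term mentioning "glycolysis".
    for record in session.run(QUERY, term="glycolysis"):
        print(record["model"], "--", record["ontology_term"])
driver.close()
```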
2014-10-01
variability with well-trained readers. Figure 7: comparison between the PD (percent density using Cumulus area) and the automatic PD. ... evaluation of outlier correction, comparison of several different software methods, precision measurement, and evaluation of variation by mammography ... chart review for selected cases (months 4-6). Comparison of information from the Breast Cancer Database and medical records showed good consistency.
2010-01-01
Background: Quantitative models of biochemical and cellular systems are used to answer a variety of questions in the biological sciences. The number of published quantitative models is growing steadily thanks to increasing interest in the use of models as well as the development of improved software systems and the availability of better, cheaper computer hardware. To maximise the benefits of this growing body of models, the field needs centralised model repositories that will encourage, facilitate and promote model dissemination and reuse. Ideally, the models stored in these repositories should be extensively tested and encoded in community-supported and standardised formats. In addition, the models and their components should be cross-referenced with other resources in order to allow their unambiguous identification. Description: BioModels Database http://www.ebi.ac.uk/biomodels/ is aimed at addressing exactly these needs. It is a freely-accessible online resource for storing, viewing, retrieving, and analysing published, peer-reviewed quantitative models of biochemical and cellular systems. The structure and behaviour of each simulation model distributed by BioModels Database are thoroughly checked; in addition, model elements are annotated with terms from controlled vocabularies as well as linked to relevant data resources. Models can be examined online or downloaded in various formats. Reaction network diagrams generated from the models are also available in several formats. BioModels Database also provides features such as online simulation and the extraction of components from large scale models into smaller submodels. Finally, the system provides a range of web services that external software systems can use to access up-to-date data from the database. Conclusions: BioModels Database has become a recognised reference resource for systems biology. It is being used by the community in a variety of ways; for example, it is used to benchmark different simulation systems, and to study the clustering of models based upon their annotations. Model deposition to the database today is advised by several publishers of scientific journals. The models in BioModels Database are freely distributed and reusable; the underlying software infrastructure is also available from SourceForge https://sourceforge.net/projects/biomodels/ under the GNU General Public License. PMID:20587024
Use of FIA plot data in the LANDFIRE project
Chris Toney; Matthew Rollins; Karen Short; Tracey Frescino; Ronald Tymcio; Birgit Peterson
2007-01-01
LANDFIRE is an interagency project that will generate consistent maps and data describing vegetation, fire, and fuel characteristics across the United States within a 5-year timeframe. Modeling and mapping in LANDFIRE depend extensively on a large database of georeferenced field measurements describing vegetation, site characteristics, and fuel. The LANDFIRE Reference...
USDA-ARS?s Scientific Manuscript database
High frequency in situ measurements of nitrate can greatly reduce the uncertainty in nitrate flux estimates. Water quality databases maintained by various federal and state agencies often consist of pollutant concentration data obtained from periodic grab samples collected from gauged reaches of a s...
ERIC Educational Resources Information Center
Gresham, Frank M.; Dart, Evan H.; Collins, Tai A.
2017-01-01
The concept of treatment integrity is an essential component of data-based decision making within a response-to-intervention model. Although treatment integrity is a topic receiving increased attention in the school-based intervention literature, relatively few studies have been conducted regarding the technical adequacy of treatment integrity…
Badhwar - O'Neill 2014 Galactic Cosmic Ray Flux Model Description
NASA Technical Reports Server (NTRS)
O'Neill, P. M.; Golge, S.; Slaba, T. C.
2014-01-01
The Badhwar-O'Neill (BON) Galactic Cosmic Ray (GCR) model is based on GCR measurements from particle detectors. The model has mainly been used by NASA to certify microelectronic systems and to analyze radiation health risks to astronauts on space missions. The BON14 model numerically solves the Fokker-Planck differential equation to account for particle transport in the heliosphere due to diffusion, convection, and adiabatic deceleration under the assumption of a spherically symmetric heliosphere. The model also incorporates an empirical time-delay function to account for the lag between solar activity and its effect at the boundary of the heliosphere. This technical paper describes the most recent improvements in parameter fits to the BON model (BON14). Using a comprehensive measurement database, it is shown that BON14 is significantly improved over the previous version, BON11.
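For reference, the spherically symmetric Fokker-Planck (Parker) transport equation that heliospheric GCR models of this type solve — with terms for diffusion, convection, and adiabatic deceleration — is commonly written in the textbook form below; the exact formulation and boundary treatment used in BON14 are given in the technical paper itself:

```latex
\frac{\partial f}{\partial t}
  = \nabla \cdot \left( \boldsymbol{\kappa} \cdot \nabla f \right)
  - \mathbf{V}_{sw} \cdot \nabla f
  + \frac{1}{3}\left( \nabla \cdot \mathbf{V}_{sw} \right)
    \frac{\partial f}{\partial \ln p}
```

where f(r, p, t) is the omnidirectional phase-space density, κ the diffusion tensor, V_sw the solar wind velocity, and p the particle momentum.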
High-Temperature Cast Aluminum for Efficient Engines
NASA Astrophysics Data System (ADS)
Bobel, Andrew C.
Accurate thermodynamic databases are the foundation of predictive microstructure and property models. An initial assessment of the commercially available Thermo-Calc TCAL2 database and the proprietary aluminum database of QuesTek demonstrated a large degree of deviation with respect to equilibrium precipitate phase prediction in the compositional region of interest when compared to 3-D atom probe tomography (3DAPT) and transmission electron microscopy (TEM) experimental results. New compositional measurements of the Q-phase (Al-Cu-Mg-Si phase) led to a remodeling of the Q-phase thermodynamic description in the CALPHAD databases, which has produced significant improvements in the phase prediction capabilities of the thermodynamic model. Due to the unique morphologies of strengthening precipitate phases commonly utilized in high-strength cast aluminum alloys, the development of new microstructural evolution models to describe both rod and plate particle growth was critical for accurate mechanistic strength models, which rely heavily on precipitate size and shape. Particle size measurements from both 3DAPT and TEM experiments were used in conjunction with literature results for many alloy compositions to develop a physical growth model for the independent prediction of rod radius and rod length evolution. In addition, a machine learning (ML) model was developed for the independent prediction of plate thickness and plate diameter evolution as a function of alloy composition, aging temperature, and aging time. The developed models are then compared with physical growth laws developed for spheres and modified for ellipsoidal morphology effects. Analysis of the effect of particle morphology on strength enhancement has been undertaken by modification of the Orowan-Ashby equation for 〈110〉 alpha-Al oriented finite rods, in addition to an appropriate version for similarly oriented plates. A mechanistic strengthening model was developed for cast aluminum alloys containing both rod- and plate-like precipitates. The model accurately accounts for the temperature dependence of particle nucleation and growth, solid solution strengthening, Si eutectic strength, and base aluminum yield strength. Strengthening model predictions of tensile yield strength are in excellent agreement with experimental observations over a wide range of aluminum alloy systems, aging temperatures, and test conditions. The developed models enable the prediction of the particle morphology and volume fraction required to achieve target property goals in the design of future aluminum alloys. The effect of partitioning elements to the Q-phase was also considered for its potential to control the nucleation rate, reduce coarsening, and control the evolution of particle morphology. Elements were selected based on density functional theory (DFT) calculations showing the propensity of certain elements to partition to the Q-phase. 3DAPT experiments were performed on Q-phase-containing wrought alloys with these additions and show segregation of certain elements to the Q-phase, in relative agreement with DFT predictions.
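For context, the classical Orowan bowing stress and one commonly quoted form of its Ashby-Orowan refinement are shown below; the thesis modifies relations of this type for finite-rod and plate morphologies, and those modified expressions are not reproduced here:

```latex
\Delta\tau_{\mathrm{Orowan}} = \frac{G b}{\lambda},
\qquad
\Delta\tau_{\mathrm{Ashby\text{-}Orowan}}
  = \frac{0.13\, G b}{\lambda}\,\ln\!\left(\frac{\bar{x}}{2b}\right)
```

where G is the shear modulus, b the Burgers vector magnitude, λ the inter-particle spacing, and x̄ the mean particle size.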
QSAR Modeling Using Large-Scale Databases: Case Study for HIV-1 Reverse Transcriptase Inhibitors.
Tarasova, Olga A; Urusova, Aleksandra F; Filimonov, Dmitry A; Nicklaus, Marc C; Zakharov, Alexey V; Poroikov, Vladimir V
2015-07-27
Large-scale databases are important sources of training sets for various QSAR modeling approaches. Generally, these databases contain information extracted from different sources. This variety of sources can produce inconsistency in the data, defined as sometimes widely diverging activity results for the same compound against the same target. Because such inconsistency can reduce the accuracy of predictive models built from these data, we are addressing the question of how best to use data from publicly and commercially accessible databases to create accurate and predictive QSAR models. We investigate the suitability of commercially and publicly available databases for QSAR modeling of antiviral activity (HIV-1 reverse transcriptase (RT) inhibition). We present several methods for the creation of modeling (i.e., training and test) sets from two, either commercially or freely available, databases: Thomson Reuters Integrity and ChEMBL. We found that the typical predictivities of QSAR models obtained using these different modeling set compilation methods differ significantly from each other. The best results were obtained using training sets compiled for compounds tested using only one method and material (i.e., a specific type of biological assay). Compound sets aggregated by target only typically yielded poorly predictive models. We discuss the possibility of "mix-and-matching" assay data across aggregating databases such as ChEMBL and Integrity and their current severe limitations for this purpose. One of these is that the databases generally lack complete, semantically/computer-parsable descriptions of assay methodology that would allow one to determine the mix-and-matchability of result sets at the assay level.
Bifactor model of WISC-IV: Applicability and measurement invariance in low and normal IQ groups.
Gomez, Rapson; Vance, Alasdair; Watson, Shaun
2017-07-01
This study examined the applicability and measurement invariance of the bifactor model of the 10 Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) core subtests in groups of children and adolescents (age range from 6 to 16 years) with low (IQ ≤79; N = 229; % male = 75.9) and normal (IQ ≥80; N = 816; % male = 75.0) IQ scores. Results supported this model in both groups, and there was good support for measurement invariance for this model across these groups. For all participants together, the omega hierarchical and explained common variance (ECV) values were high for the general factor and low to negligible for the specific factors. Together, the findings favor the use of the Full Scale IQ (FSIQ) scores of the WISC-IV, but not the subscale index scores.
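For readers unfamiliar with these indices, omega hierarchical and ECV can be computed from bifactor loadings as sketched below; the formulas follow their standard definitions, and the loadings shown are made up, not WISC-IV estimates:

```python
import numpy as np

def omega_hierarchical_and_ecv(general, specifics, uniquenesses):
    """general: (n_items,) general-factor loadings.
    specifics: list of arrays, loadings on each specific factor
               (each item loads on exactly one specific factor).
    uniquenesses: (n_items,) unique variances."""
    gen_sq_sum = general.sum() ** 2
    spec_sq_sums = sum(s.sum() ** 2 for s in specifics)
    total_var = gen_sq_sum + spec_sq_sums + uniquenesses.sum()
    omega_h = gen_sq_sum / total_var           # general-factor saturation
    ecv = (general ** 2).sum() / ((general ** 2).sum()
                                  + sum((s ** 2).sum() for s in specifics))
    return omega_h, ecv

# Illustrative loadings: 10 subtests, one general and three specific factors.
g = np.full(10, 0.7)
specs = [np.full(4, 0.3), np.full(3, 0.3), np.full(3, 0.3)]
u = 1 - g**2 - 0.3**2 * np.ones(10)
print(omega_hierarchical_and_ecv(g, specs, u))  # high omega_h and ECV
```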
[The future of clinical laboratory database management system].
Kambe, M; Imidy, D; Matsubara, A; Sugimoto, Y
1999-09-01
To assess the present status of clinical laboratory database management systems, this study explains the difference between the Clinical Laboratory Information System and the Clinical Laboratory System. Three kinds of database management systems (DBMS) are described, namely the relational model, the tree model, and the network model; the relational model was found to be the best DBMS for the clinical laboratory database, based on our experience and on the development of several clinical laboratory expert systems. As a future clinical laboratory database management system, an IC card system connected to an automatic chemical analyzer is proposed for personal health data management, and a microscope/video system is proposed for dynamic data management of leukocytes or bacteria.
NASA Technical Reports Server (NTRS)
Saile, Lynn; Lopez, Vilma; Bickham, Grandin; FreiredeCarvalho, Mary; Kerstman, Eric; Byrne, Vicky; Butler, Douglas; Myers, Jerry; Walton, Marlei
2011-01-01
This slide presentation reviews the Integrated Medical Model (IMM) database, an organized evidence base for assessing in-flight crew health risk. The database is a relational database accessible to many users. It quantifies the model inputs by ranking each input by the highest level of its supporting data (Level of Evidence, LOE) and assigning a Quality of Evidence (QOE) score that provides an assessment of the evidence base for each medical condition. The IMM evidence base has already provided invaluable information for designers and for other uses.
Rossa, Carlos; Lehmann, Thomas; Sloboda, Ronald; Usmani, Nawaid; Tavakoli, Mahdi
2017-08-01
Global modelling has traditionally been the approach taken to estimate needle deflection in soft tissue. In this paper, we propose a new method based on local data-driven modelling of needle deflection. External measurements of needle-tissue interactions are collected from several insertions in ex vivo tissue to form a cloud of data. Inputs to the system are the needle insertion depth, axial rotations, and the forces and torques measured at the needle base by a force sensor. When a new insertion is performed, the just-in-time learning method estimates the model outputs given the current inputs to the needle-tissue system and the historical database. The query is compared to every observation in the database, and each observation is given a weight according to similarity criteria. Only a subset of the historical data that is most relevant to the query is selected, and a local linear model is fit to the selected points to estimate the query output. The model outputs the 3D deflection of the needle tip and the needle insertion force. The proposed approach is validated in ex vivo multilayered biological tissue in different needle insertion scenarios. Experimental results in five case studies indicate an accuracy in predicting needle deflection of 0.81 and 1.24 mm in the horizontal and vertical planes, respectively, and an accuracy of 0.5 N in predicting the needle insertion force over 216 needle insertions.
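A minimal sketch of just-in-time (locally weighted) estimation of the kind described — weight historical observations by similarity to the query, then fit a local linear model to the most relevant subset — assuming synthetic data and illustrative parameter choices:

```python
import numpy as np

def jit_predict(query, X_hist, Y_hist, bandwidth=1.0, k=50):
    """Just-in-time (lazy) local linear estimate.

    query  : (d,) current inputs (e.g., depth, rotation, base forces/torques).
    X_hist : (n, d) historical inputs; Y_hist : (n, m) historical outputs
             (e.g., 3-D tip deflection and insertion force).
    Only the k most similar observations receive non-negligible weight."""
    dists = np.linalg.norm(X_hist - query, axis=1)
    idx = np.argsort(dists)[:k]                      # most relevant subset
    w = np.exp(-(dists[idx] / bandwidth) ** 2)       # Gaussian similarity
    Xl = np.hstack([np.ones((k, 1)), X_hist[idx]])   # local linear model
    W = np.sqrt(w)[:, None]                          # weighted least squares
    coef, *_ = np.linalg.lstsq(W * Xl, W * Y_hist[idx], rcond=None)
    return np.concatenate([[1.0], query]) @ coef

# Synthetic demo: learn y = 2*x0 - x1 locally from noisy history.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (500, 2))
Y = (2 * X[:, :1] - X[:, 1:]) + rng.normal(0, 0.05, (500, 1))
print(jit_predict(np.array([0.2, -0.4]), X, Y))      # expect ~0.8
```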
A novel methodology for interpreting air quality measurements from urban streets using CFD modelling
NASA Astrophysics Data System (ADS)
Solazzo, Efisio; Vardoulakis, Sotiris; Cai, Xiaoming
2011-09-01
In this study, a novel computational fluid dynamics (CFD) based methodology has been developed to interpret long-term averaged measurements of pollutant concentrations collected at roadside locations. The methodology is applied to the analysis of pollutant dispersion in Stratford Road (SR), a busy street canyon in Birmingham (UK), where a one-year sampling campaign was carried out between August 2005 and July 2006. Firstly, a number of dispersion scenarios are defined by combining sets of synoptic wind velocity and direction. Assuming neutral atmospheric stability, CFD simulations are conducted for all the scenarios, by applying the standard k-ɛ turbulence model, with the aim of creating a database of normalised pollutant concentrations at specific locations within the street. Modelled concentrations for all wind scenarios were compared with hourly observed NOx data. In order to compare with long-term averaged measurements, a weighted average of the CFD-calculated concentration fields was derived, with the weighting coefficients being proportional to the frequency of each scenario observed during the examined period (either monthly or annually). In summary the methodology consists of (i) identifying the main dispersion scenarios for the street based on wind speed and direction data, (ii) creating a database of CFD-calculated concentration fields for the identified dispersion scenarios, and (iii) combining the CFD results based on the frequency of occurrence of each dispersion scenario during the examined period. The methodology has been applied to calculate monthly and annually averaged benzene concentrations at several locations within the street canyon so that a direct comparison with observations could be made. The results of this study indicate that, within the simplifying assumption of non-buoyant flow, CFD modelling can aid understanding of long-term air quality measurements, and help assess the representativeness of monitoring locations for population exposure studies.
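The combination step reduces to a frequency-weighted average of the scenario concentration fields; a minimal sketch with illustrative numbers:

```python
import numpy as np

def long_term_average(conc_fields, frequencies):
    """Combine CFD-computed normalised concentration fields into a
    long-term average, weighting each dispersion scenario by its observed
    frequency during the averaging period (month or year).

    conc_fields : (n_scenarios, n_receptors) normalised concentrations.
    frequencies : (n_scenarios,) counts or fractions of occurrence."""
    w = np.asarray(frequencies, dtype=float)
    w /= w.sum()                      # normalise to weighting coefficients
    return w @ np.asarray(conc_fields)

# Three wind scenarios, four receptor locations in the canyon.
fields = [[1.0, 0.6, 0.3, 0.2],
          [0.4, 0.9, 0.5, 0.3],
          [0.2, 0.3, 0.8, 0.6]]
hours_observed = [420, 250, 80]       # hours each scenario occurred
print(long_term_average(fields, hours_observed))
```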
77 FR 38277 - Wind and Water Power Program
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-27
..., modeling, and database efforts. This meeting will be a technical discussion to provide those involved in... ecological survey, modeling, and database efforts in the waters off the Mid-Atlantic. The workshop aims to... models and compatible Federal and regional databases. It is not the object of this session to obtain any...
System, method and apparatus for conducting a keyterm search
NASA Technical Reports Server (NTRS)
McGreevy, Michael W. (Inventor)
2004-01-01
A keyterm search is a method of searching a database for subsets of the database that are relevant to an input query. First, a number of relational models of subsets of a database are provided. A query is then input. The query can include one or more keyterms. Next, a gleaning model of the query is created. The gleaning model of the query is then compared to each one of the relational models of subsets of the database. The identifiers of the relevant subsets are then output.
System, method and apparatus for conducting a phrase search
NASA Technical Reports Server (NTRS)
McGreevy, Michael W. (Inventor)
2004-01-01
A phrase search is a method of searching a database for subsets of the database that are relevant to an input query. First, a number of relational models of subsets of a database are provided. A query is then input. The query can include one or more sequences of terms. Next, a relational model of the query is created. The relational model of the query is then compared to each one of the relational models of subsets of the database. The identifiers of the relevant subsets are then output.
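The patented gleaning/relational models in these two abstracts are not specified in detail; as a generic stand-in, the sketch below builds simple co-occurrence models of text subsets and ranks them against a query model by cosine similarity (all data and the windowing scheme are illustrative):

```python
from collections import Counter
import math

def relational_model(text, window=3):
    """Crude contextual-association model: counts of term pairs that
    co-occur within a sliding window (a stand-in for the patents'
    relational/gleaning models, whose details are not reproduced here)."""
    terms = text.lower().split()
    pairs = Counter()
    for i in range(len(terms)):
        for j in range(i + 1, min(i + window, len(terms))):
            pairs[tuple(sorted((terms[i], terms[j])))] += 1
    return pairs

def similarity(model_a, model_b):
    """Cosine similarity between two pair-count models."""
    dot = sum(c * model_b.get(p, 0) for p, c in model_a.items())
    na = math.sqrt(sum(c * c for c in model_a.values()))
    nb = math.sqrt(sum(c * c for c in model_b.values()))
    return dot / (na * nb) if na and nb else 0.0

subsets = {
    "rep1": "engine fire warning during climb engine shutdown",
    "rep2": "runway incursion during taxi at night",
}
query = relational_model("engine fire warning")
scores = {k: similarity(query, relational_model(v)) for k, v in subsets.items()}
print(sorted(scores.items(), key=lambda kv: -kv[1]))  # most relevant first
```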
Evaluation of low wind modeling approaches for two tall-stack databases.
Paine, Robert; Samani, Olga; Kaplan, Mary; Knipping, Eladio; Kumar, Naresh
2015-11-01
The performance of the AERMOD air dispersion model under low wind speed conditions, especially for applications with only one level of meteorological data and no direct turbulence measurements or vertical temperature gradient observations, is the focus of this study. The analysis documented in this paper addresses evaluations for low wind conditions involving tall stack releases for which multiple years of concurrent emissions, meteorological data, and monitoring data are available. AERMOD was tested on two field-study databases involving several SO2 monitors and hourly emissions data, with sub-hourly meteorological data (e.g., 10-min averages) available, using several technical options: default mode, various low wind speed beta options, and the available sub-hourly meteorological data. These field-study databases included (1) Mercer County, a North Dakota database featuring five SO2 monitors within 10 km of the Dakota Gasification Company's plant and the Antelope Valley Station power plant in an area of both flat and elevated terrain, and (2) a flat-terrain database with four SO2 monitors within 6 km of the Gibson Generating Station in southwest Indiana. Both sites featured regionally representative 10-m meteorological databases, with no significant terrain obstacles between the meteorological site and the emission sources. The low wind beta options show improvement in model performance, helping to reduce some of the over-prediction biases currently present in AERMOD when run with regulatory default options. The overall findings from the low wind speed testing on these tall-stack field-study databases indicate that the AERMOD low wind speed options have a minor effect at flat-terrain locations but can have a significant effect at elevated-terrain locations. The use of the low wind speed options also leads to improved consistency of the meteorological conditions associated with the highest observed and predicted concentration events. The available sub-hourly modeling results using the Sub-Hourly AERMOD Run Procedure (SHARP) are relatively unbiased and show that this alternative approach should be seriously considered to address situations dominated by low-wind meander conditions. In summary, AERMOD was evaluated with two tall-stack databases (in North Dakota and Indiana) in areas of both flat and elevated terrain, with cases including the regulatory default mode, the low wind speed beta options, and use of SHARP. The low wind beta options show improvement in model performance (especially in higher terrain areas), helping to reduce some of the over-prediction biases currently present in regulatory default AERMOD, and the SHARP results are relatively unbiased and show that this approach should be seriously considered for situations dominated by low-wind meander conditions.
Data model and relational database design for the New Jersey Water-Transfer Data System (NJWaTr)
Tessler, Steven
2003-01-01
The New Jersey Water-Transfer Data System (NJWaTr) is a database design for the storage and retrieval of water-use data. NJWaTr can manage data encompassing many facets of water use, including (1) the tracking of various types of water-use activities (withdrawals, returns, transfers, distributions, consumptive-use, wastewater collection, and treatment); (2) the storage of descriptions, classifications and locations of places and organizations involved in water-use activities; (3) the storage of details about measured or estimated volumes of water associated with water-use activities; and (4) the storage of information about data sources and water resources associated with water use. In NJWaTr, each water transfer occurs unidirectionally between two site objects, and the sites and conveyances form a water network. The core entities in the NJWaTr model are site, conveyance, transfer/volume, location, and owner. Other important entities include water resource (used for withdrawals and returns), data source, permit, and alias. Multiple water-exchange estimates based on different methods or data sources can be stored for individual transfers. Storage of user-defined details is accommodated for several of the main entities. Many tables contain classification terms to facilitate the detailed description of data items and can be used for routine or custom data summarization. NJWaTr accommodates single-user and aggregate-user water-use data, can be used for large or small water-network projects, and is available as a stand-alone Microsoft Access database. Data stored in the NJWaTr structure can be retrieved in user-defined combinations to serve visualization and analytical applications. Users can customize and extend the database, link it to other databases, or implement the design in other relational database applications.
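A much-simplified sketch of the core entity relationships (owner, site, conveyance, transfer) using SQLite; the table and column names here are illustrative, not the published NJWaTr schema:

```python
import sqlite3

# Simplified sketch of the core NJWaTr entities; names are illustrative.
ddl = """
CREATE TABLE owner      (owner_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE site       (site_id INTEGER PRIMARY KEY, name TEXT,
                         owner_id INTEGER REFERENCES owner(owner_id));
CREATE TABLE conveyance (conveyance_id INTEGER PRIMARY KEY,
                         from_site INTEGER REFERENCES site(site_id),
                         to_site   INTEGER REFERENCES site(site_id));
CREATE TABLE transfer   (transfer_id INTEGER PRIMARY KEY,
                         conveyance_id INTEGER
                             REFERENCES conveyance(conveyance_id),
                         volume_mgd REAL, method TEXT, data_source TEXT);
"""
con = sqlite3.connect(":memory:")
con.executescript(ddl)

# Each transfer occurs unidirectionally between two sites; multiple volume
# estimates (different methods/sources) may exist for one transfer.
con.execute("INSERT INTO owner VALUES (1, 'Example Water Co.')")
con.execute("INSERT INTO site VALUES (1, 'Well field', 1), "
            "(2, 'Treatment plant', 1)")
con.execute("INSERT INTO conveyance VALUES (1, 1, 2)")
con.execute("INSERT INTO transfer VALUES (1, 1, 2.4, 'metered', 'permit report')")
print(con.execute("SELECT * FROM transfer").fetchall())
```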
Kang, Young Gon; Suh, Eunkyung; Lee, Jae-woo; Kim, Dong Wook; Cho, Kyung Hee; Bae, Chul-Young
2018-01-01
Purpose: A comprehensive health index is needed to measure an individual's overall health and aging status, predict the risk of death and age-related disease incidence, and evaluate the effect of a health management program. The purpose of this study is to demonstrate the validity of estimated biological age (BA) in relation to all-cause mortality and age-related disease incidence based on a National Sample Cohort database. Patients and methods: This study was based on the National Sample Cohort database of the National Health Insurance Service – Eligibility database and the National Health Insurance Service – Medical and Health Examination database for the years 2002 through 2013. The BA model was developed based on the National Health Insurance Service – National Sample Cohort (NHIS – NSC) database, and Cox proportional hazards analysis was done for mortality and major age-related disease incidence. Results: For every 1-year increase in the difference between calculated BA and chronological age, the hazard ratio for mortality significantly increased by 1.6% (1.5% in men and 2.0% in women), and likewise for hypertension, diabetes mellitus, heart disease, stroke, and cancer incidence by 2.5%, 4.2%, 1.3%, 1.6%, and 0.4%, respectively (p<0.001). Conclusion: Estimated BA from the BA model developed on the NHIS – NSC database is expected to be used not only as an index for assessing health and aging status and predicting mortality and major age-related disease incidence, but can also be applied to various health care fields. PMID:29593385
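In Cox proportional hazards form, the reported mortality result corresponds to a model of the hazard as a function of the BA-CA difference; a sketch of the implied relationship, with the covariate coding assumed rather than taken from the paper:

```latex
h(t \mid \Delta) = h_0(t)\, e^{\beta \Delta},
\qquad \Delta = \mathrm{BA} - \mathrm{CA},
\qquad e^{\beta} = 1.016 \;\Rightarrow\; \beta = \ln(1.016) \approx 0.0159
```

so each additional year of estimated biological age over chronological age multiplies the mortality hazard by about 1.016.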
Nikjou, A; Sadeghi, M
2018-06-01
The 123I radionuclide (T1/2 = 13.22 h, β+ = 100%) is one of the most potent gamma emitters for nuclear medicine. In this study, the cyclotron production of this radionuclide via different nuclear reactions, namely 121Sb(α,2n), 122Te(d,n), 123Te(p,n), 124Te(p,2n), 124Xe(p,2n), 127I(p,5n), and 127I(d,6n), was investigated. The effects of various phenomenological nuclear level density models, such as the Fermi gas model (FGM), the back-shifted Fermi gas model (BSFGM), the generalized superfluid model (GSM), and the enhanced generalized superfluid model (EGSM), as well as three microscopic level density models, were evaluated for cross-section and production yield predictions. The SRIM code was used to obtain the target thickness. The 123I excitation functions of the reactions were calculated using the TALYS-1.8 and EMPIRE-3.2 nuclear codes and with data taken from the TENDL-2015 database; finally, the theoretical calculations were compared with reported experimental measurements taken from the EXFOR database.
A Non-Arrhenian Viscosity Model for Natural Silicate Melts with Applications to Volcanology
NASA Astrophysics Data System (ADS)
Russell, J. K.; Giordano, D.; Dingwell, D. B.
2005-12-01
Silicate melt viscosity is the most important physical property in volcanic systems. It governs styles and rates of flow, velocity distributions in flowing magma, rates of vesiculation, and, ultimately, sets limits on coherent (vs. fragmented or disrupted) flow. The prediction of melt viscosity over the range of conditions found on terrestrial planets remains a challenge. However, the extraordinary increase in the number and quality of published measurements of melt viscosity suggests the possibility of new models. Here we review the attributes of previous models for silicate melt viscosity and then present a new predictive model for natural silicate melts. The importance of silicate melt viscosity was recognized early [1] and culminated in two models for predicting silicate melt viscosity [2,3]. These models used an Arrhenian T-dependence and were limited by an experimental database dominated by high-T measurements. Subsequent models have aimed: (i) to extend the compositional range of Arrhenian T-dependent models [4,5]; (ii) to develop non-Arrhenian models for limited ranges of composition [6,7,8]; (iii) to develop new strategies for modelling the composition and T-dependence of viscosity [9,10,11]; and, finally, (iv) to create chemical models for the non-Arrhenian T-dependence of natural melts [12]. We present a multicomponent model for the compositional and T-dependence of silicate melt viscosity based on data spanning a wide range of anhydrous melt compositions. The experimental data include micropenetration and concentric cylinder viscometry measurements covering a viscosity range of 10^-1 to 10^12 Pa s and a T-range from 700 to 1650°C. These published data provide a high-quality database comprising ~800 experimental data on 44 well-characterized melt compositions. Our model uses the Adam-Gibbs equation to capture T-dependence: log η = A + B/[T · log(T/C)], where A, B, and C are adjustable parameters that vary for different melt compositions. We assume that all silicate melts converge to a common, but unknown, high-T limit (e.g., A) and that all compositional dependence is accommodated by B and C. We adopt a linear compositional dependence for B and C: B = Σ_{i=1..n} x_i β_i and C = Σ_{i=1..n} x_i γ_i, where the x_i are the mole fractions of oxide components (n = 8) and the β_i and γ_i are adjustable parameters. The model therefore comprises 2n + 1 adjustable parameters (a common value of A plus the compositional coefficients for B and C) which are optimized against the experimental database. The new model reproduces the original database to within experimental uncertainty and can predict the viscosity of silicate melts across the full range of conditions found in Nature. References Cited: [1] Friedman et al., 1963. J Geophys Res 68, 6523-6535. [2] Bottinga Y & Weill D 1972. Am J Sci 272, 438-475. [3] Shaw HR 1972. Am J Sci 272, 438-475. [4] Persikov ES 1991. Adv Phys Geochem 9, 1-40. [5] Prusevich AA 1988. Geol Geofiz 29, 67-69. [6] Baker DR 1996. Am Min 81, 126-134. [7] Hess KU & Dingwell DB 1996. Am Min 81, 1297-1300. [8] Zhang et al. 2003. Am Min 88, 1741-1752. [9] Russell et al. 2002. Eur J Min 14, 417-428. [10] Russell et al. 2003. Am Min 8, 1390-1394. [11] Russell JK & Giordano D, in press. Geochim Cosmochim Acta. [12] Giordano D & Dingwell DB 2003. Earth Planet Sci Lett 208, 337-349.
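A direct transcription of the stated model into code, with placeholder coefficients; the fitted values of A, β_i, and γ_i are given in the study, not here, and the logarithm base is assumed to be 10:

```python
import numpy as np

def log_viscosity(T, x, beta, gamma, A=-4.3):
    """log10 viscosity (Pa s) from the Adam-Gibbs-type model in the
    abstract: log eta = A + B / [T * log(T/C)], with B and C linear in
    the oxide mole fractions. All numbers below are placeholders, not
    the fitted coefficients from the study.

    T           : temperature (K)
    x           : (n,) oxide mole fractions
    beta, gamma : (n,) coefficients defining B and C
    A           : common high-T limit of log viscosity (placeholder)"""
    B = x @ beta
    C = x @ gamma
    return A + B / (T * np.log10(T / C))

x = np.array([0.60, 0.15, 0.10, 0.15])            # e.g., SiO2, Al2O3, CaO, Na2O
beta = np.array([9000.0, 6000.0, 3000.0, 2000.0])  # placeholder values
gamma = np.array([3.0, 2.0, 1.5, 1.0])             # placeholder values
print(log_viscosity(1200.0, x, beta, gamma))
```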
NASA Technical Reports Server (NTRS)
Goetz, Michael B.
2011-01-01
The Instrument Simulator Suite for Atmospheric Remote Sensing (ISSARS) entered its third and final year of development with the overall goal of providing a unified tool to simulate active and passive spaceborne atmospheric remote sensing instruments. These simulations focus on the atmosphere, ranging from the UV to microwaves. ISSARS handles all assumptions and uses various models of scattering and microphysics to fill the gaps left unspecified by the atmospheric models in order to create each instrument's measurements. This will benefit mission design and reduce mission cost, enable efficient implementation of multi-instrument/platform Observing System Simulation Experiments (OSSE), and improve existing models as well as new advanced models in development. In this effort, various aerosol particles are incorporated into the system; for each spherical test particle, the input wavelength and spectral refractive indices are used to generate its scattering properties and phase functions. The atmospheric particles being integrated into the system comprise those observed by the Multi-angle Imaging SpectroRadiometer (MISR) and by the Multiangle SpectroPolarimetric Imager (MSPI). In addition, a complex scattering database generated by Prof. Ping Yang (Texas A&M) is incorporated into this aerosol database. Future development with a radiative transfer code will generate a series of results that can be validated against results obtained by the MISR and MSPI instruments; in the meantime, test cases are simulated to determine the validity of the various plugin libraries used to determine or gather the scattering properties of particles studied by MISR and MSPI, or within the database of single-scattering properties of tri-axial ellipsoidal mineral dust particles created by Prof. Ping Yang.
Evaluation of the National Solar Radiation Database (NSRDB) Using Ground-Based Measurements
NASA Astrophysics Data System (ADS)
Xie, Y.; Sengupta, M.; Habte, A.; Lopez, A.
2017-12-01
The solar resource is essential for a wide spectrum of applications including renewable energy, climate studies, and solar forecasting. Solar resource information can be obtained from ground-based measurement stations and/or from modeled data sets. While measurements provide data for the development and validation of solar resource models and other applications, modeled data expand the ability to address the needs for increased accuracy and spatial and temporal resolution. The National Renewable Energy Laboratory (NREL) has developed, and regularly updates, modeled solar resource data through the National Solar Radiation Database (NSRDB). The recent NSRDB dataset was developed using the physics-based Physical Solar Model (PSM) and provides gridded solar irradiance (global horizontal irradiance (GHI), direct normal irradiance (DNI), and diffuse horizontal irradiance) at a 4-km by 4-km spatial and half-hourly temporal resolution covering the 18 years from 1998-2015. A comprehensive validation of the performance of the NSRDB (1998-2015) was conducted to quantify the accuracy of the spatial and temporal variability of the solar radiation data. Further, the study assessed the ability of the NSRDB (1998-2015) to accurately capture inter-annual variability, which is essential information for solar energy conversion projects and grid integration studies. Comparisons of the NSRDB (1998-2015) with ground measurements from nine selected sites were conducted under both clear- and cloudy-sky conditions. These locations provide high-quality data covering a variety of geographical locations and climates. The comparison of the NSRDB to the ground-based data demonstrated that biases were within +/-5% for GHI and +/-10% for DNI. A comprehensive uncertainty estimation methodology was established to analyze the performance of the gridded NSRDB, including all sources of uncertainty at various time-averaged periods, a method that is not often used in model evaluation. Further, the study analyzed the inter-annual variability and mean anomalies of the 18 years of solar radiation data. This presentation will outline the validation methodology and provide detailed results of the comparison.
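The bias statistics quoted above reduce to simple relative-bias computations of modeled against ground-measured irradiance; a minimal sketch with illustrative values:

```python
import numpy as np

def relative_bias_percent(modeled, observed):
    """Mean bias of modeled irradiance relative to the ground-based mean,
    in percent -- the kind of station-level statistic behind the
    '+/-5% GHI, +/-10% DNI' comparison reported above."""
    modeled, observed = np.asarray(modeled), np.asarray(observed)
    return 100.0 * (modeled - observed).mean() / observed.mean()

ghi_model = [450, 610, 720, 300]   # W m^-2, illustrative half-hourly values
ghi_obs   = [460, 590, 700, 310]
print(round(relative_bias_percent(ghi_model, ghi_obs), 2))
```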
Assessing animal welfare in sow herds using data on meat inspection, medication and mortality.
Knage-Rasmussen, K M; Rousing, T; Sørensen, J T; Houe, H
2015-03-01
This paper aims to contribute to the development of a cost-effective alternative to expensive on-farm animal-based welfare assessment systems. The objective of the study was to design an animal welfare index based on central database information (DBWI) and to validate it against an animal welfare index based on on-farm animal-based measurements (AWI). Data on 63 Danish sow herds, with herd sizes of 80 to 2500 sows and an average herd size of 501, were collected from three central databases containing: meat inspection data collected at animal level at the abattoir, mortality data at herd level from the rendering plants of DAKA, and medicine records at both herd and animal-group level (sows with piglets, weaners, or finishers) from the central database Vetstat. Selected measurements taken from these central databases were used to construct the DBWI. The relative welfare impacts of both individual database measurements and the databases overall were assigned in consultation with a panel of 12 experts drawn from production advisory activities, animal science, and, in one case, an animal welfare organization. The expert panel weighted each measurement on a scale from 1 (not important) to 5 (very important). The experts also gave opinions on the relative weightings of the measurements for each of the three databases by stating a relative weight for each database in the DBWI. On the basis of this, the aggregated DBWI was normalized. The aggregation of AWI was based on a weighted summary of herd prevalences of 20 clinical and behavioural measurements originating from a one-day data collection. AWI did not show a linear dependence on DBWI, which suggests that DBWI is not suited to replace an animal welfare index using on-farm animal-based measurements.
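A minimal sketch of the weighted aggregation idea — expert-weighted measure scores within each database, then database-level weights across the three sources; the 0-1 normalization and all names are illustrative, not the published scoring rules:

```python
import numpy as np

def database_welfare_index(measures, expert_weights, db_weights):
    """Aggregate per-database measurements into a single DBWI.

    measures      : dict db -> measure values normalized to 0-1
    expert_weights: dict db -> per-measure importance scores (1-5)
    db_weights    : dict db -> relative weight of the database itself"""
    total, wsum = 0.0, 0.0
    for db, vals in measures.items():
        w = np.asarray(expert_weights[db], dtype=float)
        score = (w * np.asarray(vals)).sum() / w.sum()  # within-database score
        total += db_weights[db] * score                 # across-database mix
        wsum += db_weights[db]
    return total / wsum

measures = {"meat_inspection": [0.2, 0.5], "mortality": [0.3],
            "medicine": [0.1, 0.4]}
expert_w = {"meat_inspection": [4, 5], "mortality": [5], "medicine": [3, 4]}
db_w = {"meat_inspection": 0.4, "mortality": 0.3, "medicine": 0.3}
print(database_welfare_index(measures, expert_w, db_w))
```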
The XSD-Builder Specification Language—Toward a Semantic View of XML Schema Definition
NASA Astrophysics Data System (ADS)
Fong, Joseph; Cheung, San Kuen
In the present database market, the XML database model is a main structure for forthcoming database systems in the Internet environment. As a conceptual schema for XML databases, the XML model has limitations in presenting data semantics, and system analysts have lacked a toolset for modeling and analyzing XML systems. We apply the XML Tree Model (shown in Figure 2) as a conceptual schema of an XML database to model and analyze the structure of an XML database. It is important not only for visualizing, specifying, and documenting structural models, but also for constructing executable systems. The tree model represents the inter-relationships among elements inside different logical schemas such as XML Schema Definition (XSD), DTD, Schematron, XDR, SOX, and DSD (shown in Figure 1; an explanation of the terms in the figure is given in Table 1). The XSD-Builder consists of the XML Tree Model, a source language, a translator, and XSD. The source language, called XSD-Source, mainly provides a user-friendly environment for writing an XSD. The source language is consequently translated by XSD-Translator, whose output is an XSD; this is our target and is called the object language.
Boué, Stéphanie; Talikka, Marja; Westra, Jurjen Willem; Hayes, William; Di Fabio, Anselmo; Park, Jennifer; Schlage, Walter K.; Sewer, Alain; Fields, Brett; Ansari, Sam; Martin, Florian; Veljkovic, Emilija; Kenney, Renee; Peitsch, Manuel C.; Hoeng, Julia
2015-01-01
With the wealth of publications and data available, powerful and transparent computational approaches are required to represent measured data and scientific knowledge in a computable and searchable format. We developed a set of biological network models, scripted in the Biological Expression Language, that reflect causal signaling pathways across a wide range of biological processes, including cell fate, cell stress, cell proliferation, inflammation, tissue repair and angiogenesis in the pulmonary and cardiovascular context. This comprehensive collection of networks is now freely available to the scientific community in a centralized web-based repository, the Causal Biological Network database, which is composed of over 120 manually curated and well annotated biological network models and can be accessed at http://causalbionet.com. The website accesses a MongoDB, which stores all versions of the networks as JSON objects and allows users to search for genes, proteins, biological processes, small molecules and keywords in the network descriptions to retrieve biological networks of interest. The content of the networks can be visualized and browsed. Nodes and edges can be filtered and all supporting evidence for the edges can be browsed and is linked to the original articles in PubMed. Moreover, networks may be downloaded for further visualization and evaluation. Database URL: http://causalbionet.com PMID:25887162
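As a hypothetical illustration of keyword retrieval against such a MongoDB store — the collection and field names below are assumptions, not taken from the CBN implementation:

```python
from pymongo import MongoClient, TEXT

client = MongoClient("mongodb://localhost:27017")
db = client["cbn"]                 # database/collection names are assumed
networks = db["networks"]

# One-time: build a text index over the searchable fields (names assumed).
networks.create_index([("description", TEXT), ("nodes.label", TEXT)])

# Retrieve networks whose description or node labels mention a keyword.
for net in networks.find({"$text": {"$search": "angiogenesis"}},
                         {"name": 1, "version": 1}).limit(10):
    print(net.get("name"), net.get("version"))
```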
Health and productivity as a business strategy.
Loeppke, Ronald; Taitel, Michael; Richling, Dennis; Parry, Thomas; Kessler, Ronald C; Hymel, Pam; Konicki, Doris
2007-07-01
The objective of this study is to assess the magnitude of health-related lost productivity relative to medical and pharmacy costs for four employers and to assess the business implications of a "full-cost" approach to managing health. A database was developed by integrating medical and pharmacy claims data with employee self-reported productivity and health information collected through the Health and Work Performance Questionnaire (HPQ). Information collected on employer business measures was combined with this database to model health-related lost productivity. 1) Health-related productivity costs were more than four times greater than medical and pharmacy costs. 2) The full cost of poor health is driven by different health conditions than those driving medical and pharmacy costs alone. This study demonstrates that Integrated Population Health & Productivity Management should be built on a foundation of Integrated Population Health & Productivity Measurement. By measuring the full health and productivity costs related to the burdens of illness and health risk in their population, employers can reveal a blueprint for action for their integrated health and productivity enhancement strategies.
Advanced transportation system studies. Alternate propulsion subsystem concepts: Propulsion database
NASA Technical Reports Server (NTRS)
Levack, Daniel
1993-01-01
The Advanced Transportation System Studies alternate propulsion subsystem concepts propulsion database interim report is presented. The objective of the database development task is to produce a propulsion database which is easy to use and modify while also being comprehensive in the level of detail available. The database is to be available on the Macintosh computer system. The task extends across all three years of the contract. Consequently, a significant fraction of the effort in this first year of the task was devoted to the development of the database structure to ensure a robust base for the following years' efforts. Nonetheless, significant point-design propulsion system descriptions and parametric models were also produced. Each of the two propulsion databases, the parametric propulsion database and the propulsion system database, is described. The descriptions include a user's guide to each code, write-ups for the models used, and sample output. The parametric database has models for LOX/H2 and LOX/RP liquid engines, solid rocket boosters using three different propellants, a hybrid rocket booster, and a NERVA-derived nuclear thermal rocket engine.
Implementation of the CUAHSI information system for regional hydrological research and workflow
NASA Astrophysics Data System (ADS)
Bugaets, Andrey; Gartsman, Boris; Bugaets, Nadezhda; Krasnopeyev, Sergey; Krasnopeyeva, Tatyana; Sokolov, Oleg; Gonchukov, Leonid
2013-04-01
Environmental research and education have become increasingly data-intensive as a result of the proliferation of digital technologies, instrumentation, and pervasive networks through which data are collected, generated, shared, and analyzed. Over the next decade, it is likely that science and engineering research will produce more scientific data than has been created over the whole of human history (Cox et al., 2006). Successfully using these data to achieve new scientific breakthroughs depends on the ability to access, organize, integrate, and analyze these large datasets. The new project of PGI FEB RAS (http://tig.dvo.ru), FERHRI (www.ferhri.org) and Primgidromet (www.primgidromet.ru) focuses on the creation of an open, unified hydrological information system, built to international standards, to support hydrological investigations, water management, and forecast systems. Within the hydrologic science community, the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (http://his.cuahsi.org) has been developing a distributed network of data sources and functions that are integrated using web services and that provide access to data, tools, and models that enable synthesis, visualization, and evaluation of hydrologic system behavior. On top of the CUAHSI technologies, the first two template databases were developed for primary datasets of special observations on experimental basins in the Far East region of Russia. The first database contains data from the special observations performed at the former (1957-1994) Primorskaya Water-Balance Station (1500 km2). Measurements were carried out at 20 hydrological and 40 rain gauging stations and were published as special series, but only as hardcopy books. The database provides raw data from loggers with hourly and daily time support. The second database, called «FarEastHydro», provides published standard daily measurements from the Roshydromet observation network (200 hydrological and meteorological stations) for the period from 1930 through 1990. Both data resources are maintained in test mode at the project site http://gis.dvo.ru:81/, which is permanently updated. After this first success, the decision was made to use the CUAHSI technology as a basis for the development of a hydrological information system to support data publishing and the workflow of Primgidromet, the regional office of the Federal State Hydrometeorological Agency. At the moment, the Primgidromet observation network is equipped with 34 automatic SEBA PS-Light-2 pneumatic pressure-sensor hydrological gauges and 36 automatic SEBA weather stations. Large datasets generated by the sensor networks are organized and stored within a central ODM database, which allows the data to be unambiguously interpreted with sufficient metadata and provides a traceable heritage from raw measurements to usable information. Organization of the data within a central CUAHSI ODM database was the most critical step, with several important implications. This technology is widespread and well documented, and it ensures that all datasets are publicly available and readily used by other investigators and developers to support additional analyses and hydrological modeling. Implementation of ODM within a relational database management system eliminates potential data-manipulation errors and intermediate data-processing steps. Wrapping the CUAHSI WaterOneFlow web service into an OpenMI 2.0 linkable component (www.openmi.org) allows seamless integration with well-known hydrological modeling systems.
Keller, Gordon R.; Hildenbrand, T.G.; Kucks, R.; Webring, M.; Briesacher, A.; Rujawitz, K.; Hittleman, A.M.; Roman, D.R.; Winester, D.; Aldouri, R.; Seeley, J.; Rasillo, J.; Torres, R.; Hinze, W. J.; Gates, A.; Kreinovich, V.; Salayandia, L.
2006-01-01
Potential field data (gravity and magnetic measurements) are both useful and cost-effective tools for many geologic investigations. Significant amounts of these data are traditionally in the public domain. A new magnetic database for North America was released in 2002, and as a result, a cooperative effort between government agencies, industry, and universities to compile an upgraded digital gravity anomaly database, grid, and map for the conterminous United States was initiated and is the subject of this paper. This database is being crafted into a data system that is accessible through a Web portal. This data system features the database, software tools, and convenient access. The Web portal will enhance the quality and quantity of data contributed to the gravity database, which will be a shared community resource. The system's totally digital nature ensures that it will be flexible so that it can grow and evolve as new data, processing procedures, and modeling and visualization tools become available. Another goal of this Web-based data system is facilitation of the efforts of researchers and students who wish to collect data from regions currently not represented adequately in the database. The primary goal of upgrading the United States gravity database and this data system is to provide more reliable data that support societal and scientific investigations of national importance. An additional motivation is the international intent to compile an enhanced North American gravity database, which is critical to understanding regional geologic features, the tectonic evolution of the continent, and other issues that cross national boundaries.
PathCase-SB architecture and database design
2011-01-01
Background: Integration of metabolic pathway resources and regulatory metabolic network models, and deploying new tools on the integrated platform, can help perform more effective and more efficient systems biology research on understanding the regulation of metabolic networks. Therefore, the tasks of (a) integrating regulatory metabolic networks and existing models under a single database environment, and (b) building tools to help with modeling and analysis are desirable and intellectually challenging computational tasks. Description: PathCase Systems Biology (PathCase-SB) has been built and released. The PathCase-SB database provides data and an API for multiple user interfaces and software tools. The current PathCase-SB system provides a database-enabled framework and web-based computational tools towards facilitating the development of kinetic models for biological systems. PathCase-SB aims to integrate data from selected biological data sources on the web (currently, the BioModels database and KEGG), and to provide more powerful and/or new capabilities via the new web-based integrative framework. This paper describes architecture and database design issues encountered in PathCase-SB's design and implementation, and presents the current design of PathCase-SB's architecture and database. Conclusions: The PathCase-SB architecture and database provide a highly extensible and scalable environment with easy and fast (real-time) access to the data in the database. PathCase-SB itself is already being used by researchers across the world. PMID:22070889
Technical Work Plan for: Thermodynamic Database for Chemical Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
C.F. Jovecolon
The objective of the work scope covered by this Technical Work Plan (TWP) is to correct and improve the Yucca Mountain Project (YMP) thermodynamic databases, to update their documentation, and to ensure reasonable consistency among them. In addition, the work scope will continue to generate database revisions, which are organized and named so as to be transparent to internal and external users and reviewers. Regarding consistency among databases, it is noted that aqueous speciation and mineral solubility data for a given system may differ according to how solubility was determined, and the method used for subsequent retrieval of thermodynamic parameter values from measured data. Of particular concern are the details of the determination of ''infinite dilution'' constants, which involve the use of specific methods for activity coefficient corrections. That is, equilibrium constants developed for a given system for one set of conditions may not be consistent with constants developed for other conditions, depending on the species considered in the chemical reactions and the methods used in the reported studies. Hence, there will be some differences (for example in log K values) between the Pitzer and ''B-dot'' database parameters for the same reactions or species.
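To make the ''B-dot'' activity coefficient correction mentioned above concrete, a minimal sketch follows; the Debye-Hückel constants are approximate 25 °C values and the B-dot coefficient shown is only a typical magnitude, so treat the numbers as illustrative rather than as database values.

```python
import math

def log10_gamma_bdot(z, ion_size_a, ionic_strength,
                     A=0.5092, B=0.3283, b_dot=0.041):
    """Approximate B-dot activity coefficient (25 C constants assumed).

    log10(gamma) = -A z^2 sqrt(I) / (1 + a B sqrt(I)) + b_dot * I
    where z is the ion charge, a the ion-size parameter (angstroms),
    and I the ionic strength (mol/kg). Constants are illustrative.
    """
    sqrt_i = math.sqrt(ionic_strength)
    return (-A * z**2 * sqrt_i / (1.0 + ion_size_a * B * sqrt_i)
            + b_dot * ionic_strength)

# Example: a divalent cation (z=2, a ~ 6 angstroms) at I = 0.1 mol/kg
print(log10_gamma_bdot(z=2, ion_size_a=6.0, ionic_strength=0.1))
```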
Component, Context and Manufacturing Model Library (C2M2L)
2013-03-01
[Report excerpt fragments:] Data produced by the Penn State team were stored in a relational database for easy access, storage, and maintainability; the relational database was PostGres. A custom application converted files into a format that could be imported into the PostGres database, and the same application was used to generate Microsoft Excel templates. Per Section 4.14 (Manufacturing Model Library Database Structure), the data storage mechanism for the ARL PSU MML was a PostGres database.
An Object-Relational Ifc Storage Model Based on Oracle Database
NASA Astrophysics Data System (ADS)
Li, Hang; Liu, Hua; Liu, Yong; Wang, Yuan
2016-06-01
As building models become increasingly complicated, the level of collaboration across professionals is attracting more attention in the architecture, engineering and construction (AEC) industry. To accommodate this change, buildingSMART developed the Industry Foundation Classes (IFC) to facilitate interoperability between software platforms. However, IFC data are currently shared in the form of text files, which has significant drawbacks. In this paper, considering the object-based inheritance hierarchy of IFC and the storage features of different database management systems (DBMS), we propose a novel object-relational storage model that uses an Oracle database to store IFC data. First, we establish the mapping rules between data types in the IFC specification and the Oracle database. Second, we design the IFC database according to the relationships among IFC entities. Third, we parse the IFC file and extract the IFC data. Finally, we store the IFC data in the corresponding tables of the IFC database. In our experiments, three different building models are selected to demonstrate the effectiveness of the storage model. The comparison of experimental statistics shows that IFC data remain lossless during data exchange.
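A toy sketch of the first step, mapping IFC (EXPRESS) attribute types to Oracle column types and emitting DDL; the specific type choices are illustrative assumptions, not the mapping rules defined in the paper.

```python
# Illustrative IFC (EXPRESS) -> Oracle type mapping; assumed, not the
# paper's exact rules.
IFC_TO_ORACLE = {
    "INTEGER": "NUMBER(10)",
    "REAL":    "BINARY_DOUBLE",
    "STRING":  "VARCHAR2(255)",
    "BOOLEAN": "CHAR(1)",
    "ENTITY":  "NUMBER(10)",   # foreign key to the referenced entity's table
}

def create_table_ddl(entity, attrs):
    """attrs: list of (name, express_type) pairs for one IFC entity."""
    cols = ["id NUMBER(10) PRIMARY KEY"]
    cols += [f"{name} {IFC_TO_ORACLE[t]}" for name, t in attrs]
    return f"CREATE TABLE {entity} (\n  " + ",\n  ".join(cols) + "\n);"

print(create_table_ddl("IfcWall",
                       [("GlobalId", "STRING"), ("OwnerHistory", "ENTITY")]))
```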
Near Earth Asteroid Characteristics for Asteroid Threat Assessment
NASA Technical Reports Server (NTRS)
Dotson, Jessie
2015-01-01
Information about the physical characteristics of Near Earth Asteroids (NEAs) is needed to model behavior during atmospheric entry, to assess the risk of an impact, and to model possible mitigation techniques. The intrinsic properties of interest to entry and mitigation modelers, however, are rarely directly measurable. Instead, we measure other properties and infer the intrinsic physical properties, so determining the complete set of characteristics of interest is far from straightforward. In addition, for the majority of NEAs only basic measurements exist, so properties must often be inferred from statistics of the population of more completely characterized objects. We will provide an assessment of the current state of knowledge about the physical characteristics of importance to asteroid threat assessment. In addition, an ongoing effort to collate NEA characteristics into a readily accessible database for use by the planetary defense community will be discussed.
Dai, Qi; Yang, Yanchun; Wang, Tianming
2008-10-15
Many proposed statistical measures can efficiently compare biological sequences to further infer their structures, functions and evolutionary information. They are related in spirit because all the ideas for sequence comparison try to use the information in the k-word distributions, a Markov model, or both. Motivated by adding k-word distributions to the Markov model directly, we investigated two novel statistical measures for sequence comparison, called wre.k.r and S2.k.r. The proposed measures were tested by similarity search, evaluation on functionally related regulatory sequences, and phylogenetic analysis. This offers a systematic and quantitative experimental assessment of our measures. Moreover, we compared our results with those based on alignment or alignment-free methods. We grouped our experiments into two sets. The first, performed via ROC (receiver operating characteristic) analysis, aims at assessing the intrinsic ability of our statistical measures to search for similar sequences in a database and to discriminate functionally related regulatory sequences from unrelated sequences. The second aims at assessing how well our statistical measures can be used for phylogenetic analysis. The experimental assessment demonstrates that our similarity measures, which incorporate k-word distributions into the Markov model, are more efficient.
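As a simplified illustration of the k-word side of such measures (the published wre.k.r and S2.k.r additionally weight the counts against Markov-model expectations, which is omitted here), one can compare normalized k-word frequency vectors directly:

```python
from collections import Counter
import math

def kword_freqs(seq, k):
    """Normalized k-word (k-mer) frequency vector of a sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def kword_distance(s1, s2, k=3):
    """Euclidean distance between k-word distributions (illustrative only;
    not the wre.k.r or S2.k.r measures of the paper)."""
    f1, f2 = kword_freqs(s1, k), kword_freqs(s2, k)
    words = set(f1) | set(f2)
    return math.sqrt(sum((f1.get(w, 0.0) - f2.get(w, 0.0)) ** 2
                         for w in words))

print(kword_distance("ACGTACGTGACG", "ACGTTCGTGACG"))
```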
Re-thinking organisms: The impact of databases on model organism biology.
Leonelli, Sabina; Ankeny, Rachel A
2012-03-01
Community databases have become crucial to the collection, ordering and retrieval of data gathered on model organisms, as well as to the ways in which these data are interpreted and used across a range of research contexts. This paper analyses the impact of community databases on research practices in model organism biology by focusing on the history and current use of four community databases: FlyBase, Mouse Genome Informatics, WormBase and The Arabidopsis Information Resource. We discuss the standards used by the curators of these databases for what counts as reliable evidence, acceptable terminology, appropriate experimental set-ups and adequate materials (e.g., specimens). On the one hand, these choices are informed by the collaborative research ethos characterising most model organism communities. On the other hand, the deployment of these standards in databases reinforces this ethos and gives it concrete and precise instantiations by shaping the skills, practices, values and background knowledge required of the database users. We conclude that the increasing reliance on community databases as vehicles to circulate data is having a major impact on how researchers conduct and communicate their research, which affects how they understand the biology of model organisms and its relation to the biology of other species. Copyright © 2011 Elsevier Ltd. All rights reserved.
System, method and apparatus for generating phrases from a database
NASA Technical Reports Server (NTRS)
McGreevy, Michael W. (Inventor)
2004-01-01
Phrase generation is a method of generating sequences of terms, such as phrases, that may occur within a database of subsets containing sequences of terms, such as text. A database is provided and a relational model of the database is created. A query is then input. The query may include a single term, a sequence of terms, multiple individual terms, multiple sequences of terms, or combinations thereof. Next, several sequences of terms that are contextually related to the query are assembled from contextual relations in the model of the database. The sequences of terms are then sorted and output. Phrase generation can also be an iterative process used to produce sequences of terms from a relational model of a database.
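A toy analogue of this idea, assuming a bigram successor table as the "relational model" of contextual relations; this is a simplified illustration, not the patented method:

```python
from collections import defaultdict

def build_bigram_model(texts):
    """Contextual relations as bigram successor counts (a toy relational model)."""
    model = defaultdict(lambda: defaultdict(int))
    for text in texts:
        terms = text.lower().split()
        for a, b in zip(terms, terms[1:]):
            model[a][b] += 1
    return model

def generate_phrase(model, query, length=3):
    """Greedily extend the query term with its most frequent successors."""
    phrase = [query]
    for _ in range(length - 1):
        successors = model.get(phrase[-1])
        if not successors:
            break
        phrase.append(max(successors, key=successors.get))
    return " ".join(phrase)

docs = ["engine failure during takeoff", "engine failure warning light",
        "hydraulic failure during landing"]
print(generate_phrase(build_bigram_model(docs), "engine"))
```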
Ontological interpretation of biomedical database content.
Santana da Silva, Filipe; Jansen, Ludger; Freitas, Fred; Schulz, Stefan
2017-06-26
Biological databases store data about laboratory experiments, together with semantic annotations, in order to support data aggregation and retrieval. The exact meaning of such annotations in the context of a database record is often ambiguous. We address this problem by grounding implicit and explicit database content in a formal-ontological framework. By using a typical extract from the databases UniProt and Ensembl, annotated with content from GO, PR, ChEBI and NCBI Taxonomy, we created four ontological models (in OWL), which generate explicit, distinct interpretations under the BioTopLite2 (BTL2) upper-level ontology. The first three models interpret database entries as individuals (IND), defined classes (SUBC), and classes with dispositions (DISP), respectively; the fourth model (HYBR) is a combination of SUBC and DISP. For the evaluation of these four models, we consider (i) database content retrieval, using ontologies as query vocabulary; (ii) information completeness; and, (iii) DL complexity and decidability. The models were tested under these criteria against four competency questions (CQs). IND does not raise any ontological claim, besides asserting the existence of sample individuals and relations among them. Modelling patterns have to be created for each type of annotation referent. SUBC is interpreted regarding maximally fine-grained defined subclasses under the classes referred to by the data. DISP attempts to extract truly ontological statements from the database records, claiming the existence of dispositions. HYBR is a hybrid of SUBC and DISP and is more parsimonious regarding expressiveness and query answering complexity. For each of the four models, the four CQs were submitted as DL queries. This shows the ability to retrieve individuals with IND, and classes in SUBC and HYBR. DISP does not retrieve anything because the axioms with disposition are embedded in General Class Inclusion (GCI) statements. Ambiguity of biological database content is addressed by a method that identifies implicit knowledge behind semantic annotations in biological databases and grounds it in an expressive upper-level ontology. The result is a seamless representation of database structure, content and annotations as OWL models.
Integration of NASA/GSFC and USGS Rock Magnetic Databases.
NASA Astrophysics Data System (ADS)
Nazarova, K. A.; Glen, J. M.
2004-05-01
A global Magnetic Petrology Database (MPDB) was developed and continues to be updated at NASA/Goddard Space Flight Center. The purpose of this database is to provide the geomagnetic community with a comprehensive and user-friendly method of accessing magnetic petrology data via the Internet for a more realistic interpretation of satellite (as well as aeromagnetic and ground) lithospheric magnetic anomalies. The MPDB contains data on rocks from localities around the world (about 19,000 samples), including the Ukrainian and Baltic Shields, Kamchatka, Iceland, the Ural Mountains, etc. The MPDB is designed, managed and presented on the web as a research-oriented database. Several database applications have been specifically developed for data manipulation and analysis of the MPDB. The geophysics unit at the USGS in Menlo Park has over 17,000 rock-property records, largely from sites within the western U.S. This database contains rock-density and rock-magnetic parameters collected for use in gravity and magnetic field modeling, and in paleomagnetic studies. Most of these data were taken from surface outcrops, and together they span a broad range of rock types. Measurements were made either in situ at the outcrop, or in the laboratory on hand samples and paleomagnetic cores acquired in the field. The USGS and NASA/GSFC data will be integrated as part of an effort to provide public access to a single, uniformly maintained database. Due to the large number of data and the very large area sampled, the database can yield rock-property statistics on a broad range of rock types; it is thus applicable to study areas beyond the geographic scope of the database. The intent of this effort is to provide incentive for others to further contribute to the database, and a tool with which the geophysical community can entertain studies formerly precluded.
Key features for ATA / ATR database design in missile systems
NASA Astrophysics Data System (ADS)
Özertem, Kemal Arda
2017-05-01
Automatic target acquisition (ATA) and automatic target recognition (ATR) are two vital tasks for missile systems, and having robust detection and recognition algorithms is crucial for overall system performance. A robust target detection and recognition algorithm requires an extensive image database. Automatic target recognition algorithms use the image database in the training and testing steps of the algorithm. This directly affects recognition performance, since training accuracy is driven by the quality of the image database. In addition, the performance of an automatic target detection algorithm can be measured effectively by using an image database. There are two main ways to design an ATA / ATR database. The first and easier way is to use a scene generator. A scene generator can model objects by considering their material information, the atmospheric conditions, the detector type and the territory. Designing an image database using a scene generator is inexpensive, and it allows many different scenarios to be created quickly and easily. However, the major drawback of using a scene generator is its low fidelity, since the images are created virtually. The second and more difficult way is to design the database using real-world images. Designing an image database with real-world images is far more costly and time consuming; however, it offers high fidelity, which is critical for missile algorithms. In this paper, critical concepts in ATA / ATR database design with real-world images are discussed. Each concept is discussed from the perspectives of ATA and ATR separately. For the implementation stage, some possible solutions and trade-offs for creating the database are proposed, and all proposed approaches are compared with regard to their pros and cons.
Results on Three Predictions for July 2012 Federal Elections in Mexico Based on Past Regularities
Hernández-Saldaña, H.
2013-01-01
The Presidential Election in Mexico of July 2012 was the third time that the Previous Electoral Results Program (PREP) has operated. PREP reports voting outcomes based on the electoral certificates from each polling station that arrive at the capture centers. In previous elections, some statistical regularities had been observed; three of them were selected to make predictions and were published in arXiv:1207.0078 [physics.soc-ph]. Using the database made public in July 2012, two of the predictions were completely fulfilled, while the third was measured and confirmed using the database obtained upon request from the electoral authorities. The first two predictions confirmed by actual measurements are: (ii) the Partido Revolucionario Institucional, PRI, is a sprinter and performs better in polling stations arriving late at the capture centers during the process; (iii) the distribution of this party's vote is well described by a smooth function named a Daisy model; a Gamma distribution, also compatible with a Daisy model, fits the distribution as well. The third prediction confirms that errare humanum est, since the error distributions of all the self-consistency variables appeared as a central power law with lateral lobes, as in the 2000 and 2006 electoral processes. The three measured regularities appeared regardless of the political environment. PMID:24386103
A retrieval algorithm of hydrometer profile for submillimeter-wave radiometer
NASA Astrophysics Data System (ADS)
Liu, Yuli; Buehler, Stefan; Liu, Heguang
2017-04-01
Vertical profiles of particle microphysics are vital for the estimation of climate feedbacks. This paper proposes a new algorithm to retrieve profiles of hydrometeor parameters (i.e., ice, snow, rain, liquid cloud, graupel) based on passive submillimeter-wave measurements. These parameters include water content and particle size. The first part of the algorithm builds the database and retrieves the integrated quantities. The database is built with the Atmospheric Radiative Transfer Simulator (ARTS), which uses atmosphere data to simulate the corresponding brightness temperatures. A neural network, trained on the precalculated database, is developed to retrieve the water path for each type of particle. The second part of the algorithm analyzes the statistical relationship between water path and the vertical parameter profiles. Based on the strong dependence between vertical layers in the profiles, the Principal Component Analysis (PCA) technique is applied. The third part of the algorithm uses the forward model explicitly to retrieve the hydrometeor profiles. A cost function is calculated in each iteration, and a Differential Evolution (DE) algorithm is used to adjust the parameter values during the evolutionary process. The performance of this algorithm is planned to be verified for both the simulation database and measurement data, by comparing retrieved profiles with the initial ones. Results show that this algorithm has the ability to retrieve the hydrometeor profiles efficiently. The combination of ARTS and an optimization algorithm yields much better results than the commonly used database approach. Meanwhile, the concept that ARTS can be used explicitly in the retrieval process shows great potential for solving other retrieval problems.
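The third part of the algorithm can be sketched as follows, with scipy's differential evolution minimizing a brightness-temperature misfit; the linear toy forward model stands in for an actual ARTS simulation, and all numbers are placeholders:

```python
import numpy as np
from scipy.optimize import differential_evolution

def forward_model(params):
    """Stand-in for an ARTS radiative-transfer run: params -> brightness temps (K)."""
    return np.array([200.0, 240.0]) + np.array([[0.8, -0.3],
                                                [0.2,  0.5]]) @ params

observed_tb = np.array([210.0, 244.0])  # placeholder observation (K)

def cost(params):
    """Squared misfit between simulated and observed brightness temperatures."""
    return np.sum((forward_model(params) - observed_tb) ** 2)

# Bounds on the two toy profile parameters (e.g., scaled water content, size)
result = differential_evolution(cost, bounds=[(0.0, 20.0), (0.0, 20.0)])
print(result.x, result.fun)  # retrieved parameters and residual misfit
```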
Applying manifold learning techniques to the CAESAR database
NASA Astrophysics Data System (ADS)
Mendoza-Schrock, Olga; Patrick, James; Arnold, Gregory; Ferrara, Matthew
2010-04-01
Understanding and organizing data is the first step toward exploiting sensor phenomenology for dismount tracking. What image features are good for distinguishing people, and what measurements, or combinations of measurements, can be used to classify the dataset by demographics including gender, age, and race? A particular technique, Diffusion Maps, has demonstrated the potential to extract features that intuitively make sense [1]. We want to develop an understanding of this tool by validating existing results on the Civilian American and European Surface Anthropometry Resource (CAESAR) database. This database, provided by the Air Force Research Laboratory (AFRL) Human Effectiveness Directorate and SAE International, is a rich dataset which includes 40 traditional anthropometric measurements of 4400 human subjects. If we could specifically measure the defining features for classification from this database, then a future question will be to determine a subset of these features that can be measured from imagery. This paper briefly describes the Diffusion Map technique, shows its potential for dimension reduction of the CAESAR database, and describes interesting problems to be further explored.
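A bare-bones diffusion map can be sketched in a few lines: build Gaussian affinities between subjects' measurement vectors, row-normalize to a Markov matrix, and use the leading nontrivial eigenvectors as coordinates. The random data below merely stand in for CAESAR measurements.

```python
import numpy as np

def diffusion_map(X, epsilon=1.0, n_components=2):
    """Basic diffusion map: Gaussian affinities, row-normalized, eigendecomposed."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. distances
    K = np.exp(-d2 / epsilon)                             # affinity kernel
    P = K / K.sum(axis=1, keepdims=True)                  # Markov transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)[1:n_components + 1]    # skip trivial eigenvalue 1
    return vecs.real[:, order] * vals.real[order]         # diffusion coordinates

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 40))   # stand-in for 40 anthropometric measurements
print(diffusion_map(X).shape)    # (100, 2)
```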
NASA Astrophysics Data System (ADS)
Hassani, B.; Atkinson, G. M.
2015-12-01
One of the most important issues in developing accurate ground-motion prediction equations (GMPEs) is the effective use of limited regional site information in developing a site effects model. In modern empirical GMPE models, site effects are usually characterized by simplified parameters that describe the overall near-surface effects on input ground-motion shaking. The most common site effects parameter is the time-averaged shear-wave velocity in the upper 30 m (VS30), which has been used in the Next Generation Attenuation-West (NGA-West) and NGA-East GMPEs, and is widely used in building code applications. For the NGA-East GMPE database, only 6% of the stations have measured VS30 values, while the rest have proxy-based VS30 values. Proxy-based VS30 values are derived from a weighted average of different proxies' estimates, such as topographic slope and surface geology proxies. For the proxy-based approaches, the uncertainty in the estimation of VS30 is significantly higher (~0.25 log10 units) than that for stations with measured VS30 (0.04 log10 units); this translates into error in site amplification and hence increased ground-motion variability. We introduce a new VS30 proxy as a function of the site fundamental frequency (fpeak) using the NGA-East database, and show that fpeak is a particularly effective proxy for sites in central and eastern North America. We first use horizontal-to-vertical spectral ratios (H/V) of 5%-damped pseudo-spectral acceleration (PSA) to find the fpeak values for the recording stations. We develop an fpeak-based VS30 proxy by correlating the measured VS30 values with the corresponding fpeak values. The uncertainty of the VS30 estimate using the fpeak-based model is much lower (0.14 log10 units) than that for the proxy-based methods used in the NGA-East database (0.25 log10 units). The results of this study can be used to recalculate the VS30 values more accurately for stations with known fpeak values (23% of the stations), and potentially reduce the overall variability of the developed NGA-East GMPE models.
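A hedged sketch of constructing such a proxy: regress log10(VS30) on log10(fpeak) at stations with measured VS30 and use the fit to predict VS30 elsewhere. The synthetic numbers below are placeholders, not NGA-East values.

```python
import numpy as np

# Synthetic stand-in data: fpeak (Hz) and log10(VS30) at "measured" stations.
rng = np.random.default_rng(1)
fpeak = 10 ** rng.uniform(-0.5, 1.0, size=50)
log_vs30 = 2.4 + 0.3 * np.log10(fpeak) + rng.normal(0.0, 0.14, size=50)

# Fit log10(VS30) = intercept + slope * log10(fpeak)
slope, intercept = np.polyfit(np.log10(fpeak), log_vs30, 1)

def predict_vs30(f):
    """Proxy VS30 (m/s) for a site with fundamental frequency f (Hz)."""
    return 10 ** (intercept + slope * np.log10(f))

print(predict_vs30(2.0))
```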
Advances in Global Adjoint Tomography - Data Assimilation and Inversion Strategy
NASA Astrophysics Data System (ADS)
Ruan, Y.; Lei, W.; Lefebvre, M. P.; Modrak, R. T.; Smith, J. A.; Bozdag, E.; Tromp, J.
2016-12-01
Seismic tomography provides the most direct way to understand Earth's interior by imaging elastic heterogeneity, anisotropy and anelasticity. Resolving the fine structure of these properties requires accurate simulations of seismic wave propagation in complex 3-D Earth models. On the supercomputer "Titan" at Oak Ridge National Laboratory, we are employing a spectral-element method (Komatitsch & Tromp 1999, 2002) in combination with an adjoint method (Tromp et al., 2005) to accurately calculate theoretical seismograms and Frechet derivatives. Using 253 carefully selected events, Bozdag et al. (2016) iteratively determined a transversely isotropic earth model (GLAD_M15) using 15 preconditioned conjugate-gradient iterations. To obtain higher-resolution images of the mantle, we have expanded our database to more than 4,220 Mw 5.0-7.0 events that occurred between 1995 and 2014. Instead of using the entire database all at once, we draw subsets of about 1,000 events from our database for each iteration to achieve a faster convergence rate with limited computing resources. To provide good coverage of deep structures, we selected approximately 700 deep and intermediate-depth earthquakes and 300 shallow events to start a new iteration. We reinverted the CMT solutions of these events in the latest model and recalculated synthetic seismograms. Using the synthetics as reference seismograms, we selected time windows that show good agreement with the data and made measurements within the windows. From the measurements we further assess the overall quality of each event and station, and exclude bad measurements based upon certain criteria. So far, with very conservative criteria, we have assimilated more than 8.0 million windows from 1,000 earthquakes in three period bands for the new iteration. For subsequent iterations, we will change the period bands and window selection criteria to include more windows. In the inversion, dense array data (e.g., USArray) usually dominate model updates. In order to better handle this issue, we introduced weighting of stations and events based upon their relative distances, and showed that the contribution from dense arrays is better balanced in the Frechet derivatives. We will present a summary of this form of data assimilation and preliminary results of the first few iterations.
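The station weighting idea can be sketched as down-weighting stations by their local density; the Gaussian form and the reference distance used below are illustrative assumptions, not the exact scheme used in this work.

```python
import numpy as np

def station_weights(lons, lats, delta0=1.0):
    """Down-weight stations in dense arrays: weight ~ 1 / local station density.

    delta0 is an assumed reference distance (degrees); the planar distance
    approximation below is only for illustration.
    """
    pts = np.column_stack([lons, lats]).astype(float)
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    density = np.exp(-(d / delta0) ** 2).sum(axis=1)  # each station counts itself
    w = 1.0 / density
    return w / w.sum()

# Two nearly co-located stations share weight; the isolated one keeps more.
print(station_weights([0.0, 0.1, 30.0], [0.0, 0.1, 10.0]))
```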
Maciejewski, Matthew L; Liu, Chuan-Fen; Fihn, Stephan D
2009-01-01
To compare the ability of generic comorbidity and risk adjustment measures, a diabetes-specific measure, and a self-reported functional status measure to explain variation in health care expenditures for individuals with diabetes. This study included a retrospective cohort of 3,092 diabetic veterans participating in a multisite trial. Two comorbidity measures, four risk adjusters, a functional status measure, a diabetes complication count, and baseline expenditures were constructed from administrative and survey data. Outpatient, inpatient, and total expenditure models were estimated using ordinary least squares regression. Adjusted R² statistics and predictive ratios were compared across measures to assess overall explanatory power and explanatory power of low- and high-cost subgroups. Administrative data-based risk adjusters performed better than the comorbidity, functional status, and diabetes-specific measures in all expenditure models. The diagnostic cost groups (DCGs) measure had the greatest predictive power overall and for the low- and high-cost subgroups, while the diabetes-specific measure had the lowest predictive power. A model with DCGs and the diabetes-specific measure modestly improved predictive power. Existing generic measures can be useful for diabetes-specific research and policy applications, but more predictive diabetes-specific measures are needed.
MatProps: Material Properties Database and Associated Access Library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Durrenberger, J K; Becker, R C; Goto, D M
2007-08-13
Coefficients for analytic constitutive and equation of state (EOS) models, which are used by many hydro codes at LLNL, are currently stored in a legacy material database (Steinberg, UCRL-MA-106349). Parameters for numerous materials are available through this database, and include Steinberg-Guinan and Steinberg-Lund constitutive models for metals, JWL equations of state for high explosives, and Mie-Grüneisen equations of state for metals. These constitutive models are used in most of the simulations done by ASC codes at Livermore today. Analytic EOSs are also still used, but have been superseded in many cases by tabular representations in LEOS (http://leos.llnl.gov). Numerous advanced constitutive models have been developed and implemented in ASC codes over the past 20 years. These newer models have more physics and better representations of material strength properties than their predecessors, and therefore more model coefficients. However, a material database of these coefficients is not readily available. Therefore, incorporating these coefficients, along with those of the legacy models, into a portable database that could be shared among codes would be most welcome. The goal of this paper is to describe the MatProp effort at LLNL to create such a database and an associated access library that could be used by codes throughout the DOE complex and beyond. We have written an initial version of the MatProp database and access library, and our DOE/ASC code ALE3D (Nichols et al., UCRL-MA-152204) is able to import information from the database. The database, a link to which exists on the Sourceforge server at LLNL, contains coefficients for many materials and models (see Appendix), and includes material parameters in the following categories: flow stress, shear modulus, strength, damage, and equation of state. Future versions of the MatProp database and access library will include the ability to read and write material descriptions that can be exchanged between codes. They will also include the ability to perform unit changes, i.e., to have the library return parameters in user-specified unit systems. In addition, further material categories can be added (e.g., phase change kinetics). The MatProp database and access library are part of a larger set of tools used at LLNL for assessing material model behavior. One of these is MSlib, a shared constitutive material model library. Another is the Material Strength Database (MSD), which allows users to compare parameter fits for specific constitutive models to available experimental data. Together with MatProp, these tools create a suite of capabilities that provide state-of-the-art models and parameters for those models to integrated simulation codes. This document is broken into several appendices. Appendix A contains a code example to retrieve several material coefficients. Appendix B contains the API for the MatProp data access library. Appendix C contains a list of the material names and model types currently available in the MatProp database. Appendix D contains a list of the parameter names for the currently recognized model types. Appendix E contains a full XML description of the material tantalum.
Models and Measurements Intercomparison 2
NASA Technical Reports Server (NTRS)
Park, Jae H. (Editor); Ko, Malcolm K. W. (Editor); Jackman, Charles H. (Editor); Plumb, R. Alan (Editor); Kaye, Jack A. (Editor); Sage, Karen H. (Editor)
1999-01-01
Models and Measurement Intercomparison II (MM II) summarizes the intercomparison of results from model simulations and observations of stratospheric species. Representatives from twenty-three modeling groups using twenty-nine models participated in these MM II exercises between 1996 and 1999. Twelve of the models were two-dimensional zonal-mean models, while seventeen were three-dimensional models. This was an international effort, as seven of the groups were from outside the United States. Six transport experiments and five chemistry experiments were designed for the various models. Models participating in the transport experiments performed simulations of chemically inert tracers, providing diagnostics for transport. The chemistry experiments involved simulating the distributions of chemically active trace gases, including ozone. The model run conditions for dynamics and chemistry were prescribed in order to minimize the factors that caused differences between the models. The report includes a critical review of the results by the participants and a discussion of the causes of differences between modeled and measured results, as well as between results from different models. A sizable effort went into preparation of the database of observations, including a new climatology for ozone. The report should help in evaluating the results from various predictive models for assessing humankind's perturbations of the stratosphere.
The Global Earthquake Model - Past, Present, Future
NASA Astrophysics Data System (ADS)
Smolka, Anselm; Schneider, John; Stein, Ross
2014-05-01
The Global Earthquake Model (GEM) is a unique collaborative effort that aims to provide organizations and individuals with tools and resources for transparent assessment of earthquake risk anywhere in the world. By pooling data, knowledge and people, GEM acts as an international forum for collaboration and exchange. Sharing of data and risk information, best practices, and approaches across the globe is key to assessing risk more effectively. Through consortium-driven global projects, open-source IT development and collaborations with more than 10 regions, leading experts are developing unique global datasets, best practice, open tools and models for seismic hazard and risk assessment. The year 2013 saw the completion of ten global datasets or components addressing various aspects of earthquake hazard and risk, as well as two GEM-related but independently managed regional projects, SHARE and EMME. Notably, the International Seismological Centre (ISC) led the development of a new ISC-GEM global instrumental earthquake catalogue, which was made publicly available in early 2013. It has set a new standard for global earthquake catalogues and has found widespread acceptance and application in the global earthquake community. By the end of 2014, GEM's OpenQuake computational platform will provide the OpenQuake hazard/risk assessment software and integrate all GEM data and information products. The public release of OpenQuake is planned for the end of 2014 and will comprise the following datasets and models:
• ISC-GEM Instrumental Earthquake Catalogue (released January 2013)
• Global Earthquake History Catalogue [1000-1903]
• Global Geodetic Strain Rate Database and Model
• Global Active Fault Database
• Tectonic Regionalisation Model
• Global Exposure Database
• Buildings and Population Database
• Earthquake Consequences Database
• Physical Vulnerabilities Database
• Socio-Economic Vulnerability and Resilience Indicators
• Seismic Source Models
• Ground Motion (Attenuation) Models
• Physical Exposure Models
• Physical Vulnerability Models
• Composite Index Models (social vulnerability, resilience, indirect loss)
• Repository of national hazard models
• Uniform global hazard model
Armed with these tools and databases, stakeholders worldwide will be able to calculate, visualise and investigate earthquake risk, capture new data, and share their findings for joint learning. Earthquake hazard information can be combined with data on exposure (buildings, population) and on their vulnerability for risk assessment around the globe. Furthermore, for a truly integrated view of seismic risk, users will be able to add social vulnerability and resilience indices and estimate the costs and benefits of different risk management measures. Having finished its first five-year Work Program at the end of 2013, GEM has entered its second five-year Work Program (2014-2018). Beyond maintaining and enhancing the products developed in Work Program 1, the second phase will have a stronger focus on regional hazard and risk activities, and on seeing GEM products used for risk assessment and risk-management practice at regional, national and local scales. Furthermore, GEM intends to partner with similar initiatives underway for other natural perils, which together are needed to meet the need for advanced risk assessment methods, tools and data to underpin global disaster risk reduction efforts under the Hyogo Framework for Action #2, to be launched in Sendai, Japan in spring 2015.
2018-01-01
[Only front-matter fragments of this report are recoverable: attachment titles (Profile Database; NRMM Data Input Requirements; General Physics-Based Model Data Input Requirements) and figure/table captions (Examples of Unique Surface Types; Correlating Physical Testing with Simulation; Simplified Tire ...; Scoring Values; Accuracy - Physics-Based; Accuracy - Validation Through Measurement).]
Insertion algorithms for network model database management systems
NASA Astrophysics Data System (ADS)
Mamadolimov, Abdurashid; Khikmat, Saburov
2017-12-01
The network model is a database model conceived as a flexible way of representing objects and their relationships. Its distinguishing feature is that the schema, viewed as a graph in which object types are nodes and relationship types are arcs, forms a partial order. When a database is large and query comparisons are expensive, the efficiency requirement for managing algorithms is to minimize the number of query comparisons. We consider the updating operation for network model database management systems and develop a new sequential algorithm for the updating operation. We also suggest a distributed version of the algorithm, sketched generically below.
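A generic sketch of the core constraint such insertion algorithms must maintain, namely that adding a relationship keeps the schema a partial order (acyclic); this is an illustration, not the paper's algorithm:

```python
def reachable(graph, src, dst):
    """DFS reachability test in a schema graph (adjacency-list dict)."""
    stack, seen = [src], set()
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, ()))
    return False

def insert_relationship(graph, owner, member):
    """Insert owner -> member only if the schema remains a partial order
    (i.e., the new arc introduces no cycle)."""
    if reachable(graph, member, owner):
        raise ValueError("insertion would create a cycle")
    graph.setdefault(owner, set()).add(member)

g = {"A": {"B"}, "B": {"C"}}
insert_relationship(g, "A", "C")   # fine: A -> C respects the partial order
print(g)
```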
John Hof; Curtis Flather; Tony Baltic; Stephen Davies
1999-01-01
The 1999 forest and rangeland condition indicator model is a set of independent econometric production functions for environmental outputs (measured with condition indicators) at the national scale. This report documents the development of the database and the statistical estimation required by this particular production structure with emphasis on two special...
Ezra Tsur, Elishai
2017-01-01
Databases are imperative for research in bioinformatics and computational biology. Current challenges in database design include data heterogeneity and context-dependent interconnections between data entities. These challenges drove the development of unified data interfaces and specialized databases. The curation of specialized databases is an ever-growing challenge due to the introduction of new data sources and the emergence of new relational connections between established datasets. Here, an open-source framework for the curation of specialized databases is proposed. The framework supports user-designed models of data encapsulation, object persistence and structured interfaces to local and external data sources such as MalaCards, Biomodels and the National Centre for Biotechnology Information (NCBI) databases. The proposed framework was implemented using Java as the development environment, EclipseLink as the data persistency agent and Apache Derby as the database manager. Syntactic analysis was based on the J3D, jsoup, Apache Commons and w3c.dom open libraries. Finally, the construction of a specialized database for aneurysm-associated vascular diseases is demonstrated. This database contains 3-dimensional geometries of aneurysms, patients' clinical information, articles, biological models, related diseases and our recently published model of aneurysms' risk of rupture. The framework is available at: http://nbel-lab.com.
Thermodynamic properties for arsenic minerals and aqueous species
Nordstrom, D. Kirk; Majzlan, Juraj; Königsberger, Erich; Bowell, Robert J.; Alpers, Charles N.; Jamieson, Heather E.; Nordstrom, D. Kirk; Majzlan, Juraj
2014-01-01
Quantitative geochemical calculations are not possible without thermodynamic databases and considerable advances in the quantity and quality of these databases have been made since the early days of Lewis and Randall (1923), Latimer (1952), and Rossini et al. (1952). Oelkers et al. (2009) wrote, “The creation of thermodynamic databases may be one of the greatest advances in the field of geochemistry of the last century.” Thermodynamic data have been used for basic research needs and for a countless variety of applications in hazardous waste management and policy making (Zhu and Anderson 2002; Nordstrom and Archer 2003; Bethke 2008; Oelkers and Schott 2009). The challenge today is to evaluate thermodynamic data for internal consistency, to reach a better consensus of the most reliable properties, to determine the degree of certainty needed for geochemical modeling, and to agree on priorities for further measurements and evaluations.
NASA Astrophysics Data System (ADS)
Regberg, A. B.; Singha, K.; Picardal, F.; Brantley, S. L.
2011-12-01
Previous research has linked measured changes in the bulk electrical conductivity (σb) of water-saturated sediments to the respiration and growth of anaerobic bacteria. If the mechanism causing this signal is understood and characterized, it could be used to identify and monitor zones of bacterial activity in the subsurface. The 1-D reactive transport model PHREEQC was used to understand σb signals by modeling chemical gradients within two column reactors and the corresponding changes in effluent chemistry. The flow-through column reactors were packed with Fe(III)-bearing sediment from Oyster, VA and inoculated with an environmental consortium of microorganisms. Influent in the first reactor was amended with 1 mM Na-acetate to encourage the growth of iron-reducing bacteria. Influent in the second reactor was amended with 0.1 mM Na-acetate and 2 mM NaNO3 to encourage the growth of nitrate-reducing bacteria. While effluent concentrations of acetate, Fe(II), NO3-, NO2-, and NH4+ remained at steady state, we measured a 3-fold increase (0.055 S/m - 0.2 S/m) in σb in the iron-reducing column and a 10-fold increase in σb (0.07 S/m - 0.8 S/m) in the nitrate-reducing column over 198 days. The ionic strength in both reactors remained constant through time, indicating that the measured increases in σb were not caused by changing effluent concentrations. PHREEQC successfully matched the measured changes in effluent concentrations for both columns when the reaction database was modified in the following manner. For the iron-reducing column, kinetic expressions governing the rate of iron reduction, the rate of bacterial growth, and the production of methane were added to the reaction database. Additionally, surface adsorption and cation exchange reactions were added so that the model was consistent with the measured effluent chemistry. For the nitrate-reducing column, kinetic expressions governing nitrate reduction and bacterial growth were added to the reaction database. Additionally, immobile porosity was added along with adsorption and cation exchange reactions. Although the model revealed the existence of chemical and biological gradients within the columns that were not discernible as changes in effluent concentrations, none of the chemical reactions or gradients could explain the measured σb increases in either column. This result is not consistent with chemical gradients within the column reactors causing the measured changes in σb. To test the alternative hypothesis that microbial biofilms are electrically conductive, we used the output from PHREEQC to calculate the amount of biomass produced within the column reactors. If biofilm causes the σb changes, our model is consistent with an electrical conductivity for biomass in the iron-reducing column between 2.75 and 220 S/m. The model is also consistent with an electrical conductivity for biomass in the nitrate-reducing column between 350 and 35,000 S/m. These estimates of biomass electrical conductivity are poorly constrained but represent a first step towards understanding the electrical properties associated with respiring biofilms.
Clary, Christelle M; Kestens, Yan
2013-06-19
Validation studies of secondary datasets used to characterize neighborhood food businesses generally evaluate how accurately the database represents the true situation on the ground. Depending on the research objectives, the characterization of the business environment may tolerate some inaccuracies (e.g. minor imprecisions in location or errors in business names). Furthermore, if the number of false negatives (FNs) and false positives (FPs) is balanced within a given area, one could argue that the database still provides a "fair" representation of existing resources in this area. Yet, traditional validation measures do not relax matching criteria, and treat FNs and FPs independently. Through the field validation of food businesses found in a Canadian database, this paper proposes alternative criteria for validity. Field validation of the 2010 Enhanced Points of Interest (EPOI) database (DMTI Spatial®) was performed in 2011 in 12 census tracts (CTs) in Montreal, Canada. Some 410 food outlets were extracted from the database and 484 were observed in the field. First, traditional measures of sensitivity and positive predictive value (PPV) accounting for every single mismatch between the field and the database were computed. Second, relaxed measures of sensitivity and PPV that tolerate mismatches in business names or slight imprecisions in location were assessed. A novel measure of representativity that further allows for compensation between FNs and FPs within the same business category and area was proposed. Representativity was computed at CT level as ((TPs +|FPs-FNs|)/(TPs+FNs)), with TPs meaning true positives, and |FPs-FNs| being the absolute value of the difference between the number of FNs and the number of FPs within each outlet category. The EPOI database had a "moderate" capacity to detect an outlet present in the field (sensitivity: 54.5%) or to list only the outlets that actually existed in the field (PPV: 64.4%). Relaxed measures of sensitivity and PPV were respectively 65.5% and 77.3%. The representativity of the EPOI database was 77.7%. The novel measure of representativity might serve as an alternative to traditional validity measures, and could be more appropriate in certain situations, depending on the nature and scale of the research question.
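The three measures can be computed directly from per-category counts; the sketch below implements the traditional measures and the representativity formula exactly as stated above, with made-up counts as input.

```python
def validity_measures(tps, fps, fns):
    """Traditional and compensation-based validity measures.

    representativity follows the paper's formula per outlet category/area:
    (TPs + |FPs - FNs|) / (TPs + FNs)
    """
    sensitivity = tps / (tps + fns)
    ppv = tps / (tps + fps)
    representativity = (tps + abs(fps - fns)) / (tps + fns)
    return sensitivity, ppv, representativity

# e.g., one census tract and outlet category (counts invented for illustration):
# 40 matches, 12 false positives, 15 false negatives
print(validity_measures(tps=40, fps=12, fns=15))
```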
Fuzzy queries above relational database
NASA Astrophysics Data System (ADS)
Smolka, Pavel; Bradac, Vladimir
2017-11-01
The aim of this paper is to introduce the possibility of fuzzy queries implemented over relational databases. The issue is described using a model which identifies the part of the problem domain appropriate for a fuzzy approach. The model is demonstrated on a database of wines, focused on searching within it. The construction of the database complies with the law of the Czech Republic.
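A minimal sketch of the approach, assuming a toy wine table and a trapezoidal membership function for a fuzzy term such as "medium-priced"; the schema and membership bounds are invented for illustration:

```python
import sqlite3

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function for a fuzzy term (bounds assumed)."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE wines (name TEXT, price REAL)")
conn.executemany("INSERT INTO wines VALUES (?, ?)",
                 [("A", 5.0), ("B", 12.0), ("C", 30.0)])

# Fuzzy query 'medium-priced wine': crisp SQL fetch, then membership ranking
threshold = 0.5
rows = conn.execute("SELECT name, price FROM wines").fetchall()
hits = [(n, p, trapezoid(p, 8, 10, 18, 25)) for n, p in rows]
print([h for h in hits if h[2] >= threshold])  # matches to degree >= 0.5
```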
The establishment of the atmospheric emission inventories of the ESCOMPTE program
NASA Astrophysics Data System (ADS)
François, S.; Grondin, E.; Fayet, S.; Ponche, J.-L.
2005-03-01
Within the framework of the ESCOMPTE program, a spatial emission inventory and an emission database aimed at tropospheric photochemistry intercomparison modeling have been developed under the scientific supervision of the LPCA, with the help of the regional air quality network AIRMARAIX. This inventory has been established for all categories of sources (stationary, mobile and biogenic) over a domain of 19,600 km2 centered on the cities of Marseilles and Aix-en-Provence in the southeastern part of France, with a spatial resolution of 1 km2. A yearly inventory for 1999 has been established, and hourly emission inventories for 23 days of June and July 2000 and 2001, corresponding to the intensive measurement periods, have been produced. The 104 chemical species in the inventory have been selected to be relevant to photochemistry modeling according to the available data. The entire list of species in the inventory numbers 216, which will allow other future applications of this database. This database is presently the most detailed and complete regional emission database in France. In addition, the database structure and the emission calculation modules have been designed to ensure better sustainability and upgradeability, being provided with appropriate maintenance software. The general organization and method are summarized, and the results obtained for both yearly and hourly emissions are detailed and discussed. Some comparisons have been performed with existing results for this region to ensure the congruency of the results. This confirms the relevance and consistency of the ESCOMPTE emission inventory.
NASA Technical Reports Server (NTRS)
Kidd, Chris; Matsui, Toshi; Chern, Jiundar; Mohr, Karen; Kummerow, Christian; Randel, Dave
2015-01-01
The estimation of precipitation across the globe from satellite sensors provides a key resource in the observation and understanding of our climate system. Estimates from all pertinent satellite observations are critical in providing the necessary temporal sampling. However, consistency in these estimates from instruments with different frequencies and resolutions is essential. This paper details the physically based retrieval scheme to estimate precipitation from cross-track (XT) passive microwave (PM) sensors on board the constellation satellites of the Global Precipitation Measurement (GPM) mission. Here the Goddard profiling algorithm (GPROF), a physically based Bayesian scheme developed for conically scanning (CS) sensors, is adapted for use with XT PM sensors. The present XT GPROF scheme utilizes a model-generated database to overcome issues encountered with an observational database as used by the CS scheme. The model database ensures greater consistency across meteorological regimes and surface types by providing a more comprehensive set of precipitation profiles. The database is corrected for bias against the CS database to ensure consistency in the final product. Statistical comparisons over western Europe and the United States show that the XT GPROF estimates are comparable with those from the CS scheme. Indeed, the XT estimates have higher correlations against surface radar data, while maintaining similar root-mean-square errors. Latitudinal profiles of precipitation show the XT estimates are generally comparable with the CS estimates, although in the southern midlatitudes the peak precipitation is shifted equatorward, while over the Arctic large differences are seen between the XT and CS retrievals.
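The Bayesian core of a GPROF-style scheme can be sketched as a database average weighted by how well each profile's simulated brightness temperatures match the observation; the Gaussian error model with a single scalar sigma, and all numbers below, are simplifying assumptions.

```python
import numpy as np

def bayesian_retrieval(tb_obs, tb_db, precip_db, sigma=1.5):
    """Bayesian database average: weight each database profile by the
    Gaussian likelihood of its simulated brightness temperatures (K)
    given the observation. sigma is an assumed channel error (K)."""
    w = np.exp(-0.5 * np.sum((tb_db - tb_obs) ** 2, axis=1) / sigma**2)
    return np.sum(w * precip_db) / np.sum(w)

# Toy database: 3 profiles x 2 channels, with associated surface rain rates
tb_db = np.array([[220.0, 250.0], [230.0, 255.0], [210.0, 245.0]])  # K
precip_db = np.array([1.0, 4.0, 0.2])                               # mm/h
print(bayesian_retrieval(np.array([228.0, 254.0]), tb_db, precip_db))
```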
Data management of a multilaboratory field program using distributed processing. [PRECP]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tichler, J.L.
The PRECP program is a multilaboratory research effort conducted by the US Department of Energy as a part of the National Acid Precipitation Assessment Program (NAPAP). The primary objective of PRECP is to provide essential information for the quantitative description of chemical wet deposition as a function of air pollution loadings, geographic location, and atmospheric processing. The program is broken into four closely interrelated sectors: Diagnostic Modeling; Field Measurements; Laboratory Measurements; and Climatological Evaluation. Data management tasks are: compile databases of the data collected in field studies; verify the contents of data sets; make data available to program participants either on-line or by means of computer tapes; perform requested analyses, graphical displays, and data aggregations; provide an index of what data are available; and provide documentation for field programs, both as part of the computer database and as data reports.
Detailed Uncertainty Analysis of the Ares I A106 Liftoff/Transition Database
NASA Technical Reports Server (NTRS)
Hanke, Jeremy L.
2011-01-01
The Ares I A106 Liftoff/Transition Force and Moment Aerodynamics Database describes the aerodynamics of the Ares I Crew Launch Vehicle (CLV) from the moment of liftoff through the transition from high to low total angles of attack at low subsonic Mach numbers. The database includes uncertainty estimates that were developed using a detailed uncertainty quantification procedure. The Ares I Aerodynamics Panel developed both the database and the uncertainties from wind tunnel test data acquired in the NASA Langley Research Center's 14- by 22-Foot Subsonic Wind Tunnel (Test 591) using a 1.75 percent scale model of the Ares I and the tower assembly. The uncertainty modeling contains three primary uncertainty sources: experimental uncertainty, database modeling uncertainty, and database query interpolation uncertainty. The final database and uncertainty model represent a significant improvement in the quality of the aerodynamic predictions for this regime of flight over the estimates previously used by the Ares Project. In a dispersed case using this database, the maximum possible aerodynamic force pushing the vehicle toward the launch tower assembly is 40 percent lower than the worst-case scenario in previously released data for Ares I.
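If the three uncertainty sources are treated as independent, they could be combined in quadrature, as in the sketch below; the root-sum-square combination and the example magnitudes are illustrative assumptions, not necessarily the database's actual uncertainty model.

```python
import math

def total_uncertainty(u_experimental, u_model, u_interp):
    """Combine independent uncertainty sources in quadrature (RSS).

    An assumed combination rule for illustration; the actual uncertainty
    quantification procedure may differ.
    """
    return math.sqrt(u_experimental**2 + u_model**2 + u_interp**2)

# Placeholder magnitudes in force-coefficient units
print(total_uncertainty(0.010, 0.015, 0.005))
```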
NASA Astrophysics Data System (ADS)
Czerepicki, A.; Koniak, M.
2017-06-01
The paper presents a method for modelling the aging processes of lithium-ion batteries, its implementation as a computer application, and results for battery state estimation. The authors use a previously developed behavioural battery model, which was built using battery operating characteristics obtained from experiments. This model was implemented in the form of a computer program that uses a database to store the battery characteristics. The battery aging process is a new, extended functionality of the model. The computer simulation algorithm uses real measurements of battery capacity as a function of the number of battery charge and discharge cycles. The simulation allows incomplete charge or discharge cycles, which are characteristic of electrically powered transport, to be taken into account. The developed model was used to simulate battery state estimation for different load profiles, obtained by measuring the movement of selected means of transport.
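A hedged sketch of the bookkeeping this implies: tabulate measured capacity against full-cycle count, accumulate partial swings as equivalent full cycles, and interpolate; the capacity values below are placeholders, not measurements from the paper.

```python
import numpy as np

# Placeholder capacity-fade table: relative capacity vs. full cycles
cycles_meas = np.array([0, 200, 400, 600, 800])
capacity_meas = np.array([1.00, 0.97, 0.93, 0.88, 0.82])

def equivalent_full_cycles(soc_deltas):
    """Sum of |SOC| swings; one full cycle = one discharge plus one charge."""
    return sum(abs(d) for d in soc_deltas) / 2.0

def capacity_now(soc_deltas):
    """Interpolate current relative capacity from equivalent full cycles."""
    efc = equivalent_full_cycles(soc_deltas)
    return np.interp(efc, cycles_meas, capacity_meas)

# e.g., a transport duty cycle made of many partial charge/discharge swings
print(capacity_now([0.3, -0.3, 0.5, -0.4] * 500))
```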
A passive and active microwave-vector radiative transfer (PAM-VRT) model
NASA Astrophysics Data System (ADS)
Yang, Jun; Min, Qilong
2015-11-01
A passive and active microwave vector radiative transfer (PAM-VRT) package has been developed. This fast and accurate forward microwave model, with flexible and versatile input and output components, self-consistently and realistically simulates measurements/radiation of passive and active microwave sensors. The core of PAM-VRT, the microwave radiative transfer model, consists of five modules: gas absorption (two line-by-line databases and four fast models); hydrometeor properties of water droplets and ice (spherical and nonspherical) particles; surface emissivity (from the Community Radiative Transfer Model (CRTM)); vector radiative transfer with successive orders of scattering (VSOS); and passive and active microwave simulation. The PAM-VRT package has been validated against other existing models, demonstrating good accuracy. PAM-VRT can be used not only to simulate or assimilate measurements of existing microwave sensors, but also to simulate observations by new microwave sensors.
Deriving the expected utility of a predictive model when the utilities are uncertain.
Cooper, Gregory F; Visweswaran, Shyam
2005-01-01
Predictive models are often constructed from clinical databases with the goal of eventually helping make better clinical decisions. Evaluating models using decision theory is therefore natural. When constructing a model using statistical and machine learning methods, however, we are often uncertain about precisely how the model will be used. Thus, decision-independent measures of classification performance, such as the area under an ROC curve, are popular. As a complementary method of evaluation, we investigate techniques for deriving the expected utility of a model under uncertainty about the model's utilities. We demonstrate an example of the application of this approach to the evaluation of two models that diagnose coronary artery disease.
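A small sketch of the idea: place distributions over the uncertain utilities, sample utility tables, and average the expected utility of the induced decisions; the utility ranges below are invented for illustration.

```python
import numpy as np

def expected_utility(p_disease, utilities, treat):
    """EU of a treat/no-treat decision given P(disease) and a utility table."""
    u = utilities["treat" if treat else "no_treat"]
    return p_disease * u["disease"] + (1 - p_disease) * u["no_disease"]

rng = np.random.default_rng(0)
p = 0.2  # model's predicted probability of disease for one patient
eus = []
for _ in range(10000):
    # Sample a utility table from assumed (illustrative) distributions
    utilities = {
        "treat":    {"disease": rng.uniform(0.7, 0.9),
                     "no_disease": rng.uniform(0.8, 0.95)},
        "no_treat": {"disease": rng.uniform(0.0, 0.3),
                     "no_disease": 1.0},
    }
    # Best decision under this sampled utility table
    eus.append(max(expected_utility(p, utilities, True),
                   expected_utility(p, utilities, False)))

# Expected utility of acting on the model, averaged over utility uncertainty
print(np.mean(eus))
```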
Evolution of the social network of scientific collaborations
NASA Astrophysics Data System (ADS)
Barabási, A. L.; Jeong, H.; Néda, Z.; Ravasz, E.; Schubert, A.; Vicsek, T.
2002-08-01
The co-authorship network of scientists represents a prototype of complex evolving networks. In addition, it offers one of the most extensive databases to date on social networks. By mapping the electronic database containing all relevant journals in mathematics and neuroscience for an 8-year period (1991-98), we infer the dynamic and structural mechanisms that govern the evolution and topology of this complex system. Three complementary approaches allow us to obtain a detailed characterization. First, empirical measurements uncover the topological measures that characterize the network at a given moment, as well as the time evolution of these quantities. The results indicate that the network is scale-free and that its evolution is governed by preferential attachment, affecting both internal and external links. However, in contrast with most model predictions, the average degree increases in time and the node separation decreases. Second, we propose a simple model that captures the network's time evolution. In some limits the model can be solved analytically, predicting a two-regime scaling in agreement with the measurements. Third, numerical simulations are used to uncover the behavior of quantities that could not be predicted analytically. The combined numerical and analytical results underline the important role internal links play in determining the observed scaling behavior and network topology. The results and methodologies developed in the context of the co-authorship network could be useful for a systematic study of other complex evolving networks as well, such as the world wide web, the Internet, or other social networks.
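The growth mechanism inferred here is easy to sketch. The toy simulation below (deliberately simplified; not the authors' model, and duplicate edges are ignored for brevity) grows a network by degree-proportional preferential attachment with both external links (new nodes) and internal links (between existing nodes):

```python
# Illustrative preferential-attachment growth with internal and external links.
import random

degrees = {0: 1, 1: 1}
edges = [(0, 1)]

def pick_preferential():
    # Probability of choosing a node is proportional to its current degree.
    return random.choices(list(degrees), weights=list(degrees.values()))[0]

for new_node in range(2, 2000):
    # External link: a newcomer attaches to a preferentially chosen incumbent.
    target = pick_preferential()
    degrees[new_node] = 1
    degrees[target] += 1
    edges.append((new_node, target))
    # Internal link: two incumbents connect, also chosen preferentially.
    a, b = pick_preferential(), pick_preferential()
    if a != b:
        degrees[a] += 1
        degrees[b] += 1
        edges.append((a, b))

print(max(degrees.values()), sum(degrees.values()) / len(degrees))  # hub size, mean degree
```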
NASA Astrophysics Data System (ADS)
Desservettaz, M.; Fisher, J. A.; Jones, N. B.; Bukosa, B.; Greenslade, J.; Luhar, A.; Woodhouse, M.; Griffith, D. W. T.; Velazco, V. A.
2016-12-01
Australia contributes approximately 6% of global biomass burning CO2 emissions, mostly from savanna-type fires. This estimate comes from biomass burning inventories that use emission factors derived from field campaigns performed outside Australia. The relevance of these emission factors to the Australian environment has not previously been evaluated and therefore needs to be tested. Here we compare predictions from the chemical transport model GEOS-Chem and the global chemistry-climate model ACCESS-UKCA, run using different biomass burning inventories, to total column measurements of CO, C2H6 and HCHO, in order to identify the most representative inventory for Australian fire emissions. The measurements come from the Network for the Detection of Atmospheric Composition Change (NDACC) and Total Carbon Column Observing Network (TCCON) solar remote sensing Fourier transform spectrometers and from IASI and OMI satellite measurements over Australia. We evaluate four inventories: the Global Fire Emissions Database version 4 - GFED4 (Giglio et al. 2013); the Fire Inventory from NCAR - FINN (Wiedinmyer et al. 2011); the Quick Fire Emission Database - QFED from NASA; and the MACCity emission inventory (from the MACC/CityZEN EU projects; Angiola et al. 2010). From this evaluation we aim to give recommendations for the most appropriate inventory to use for different Australian environments. We also plan to examine any significant concentration variations arising from the differences between the two model setups.
Research Directions in Database Security IV
1993-07-01
second algorithm, which is based on multiversion timestamp ordering, is that high-level transactions can be forced to read arbitrarily old data values... system. The first, the single-version model, stores only the latest version of each data item, while the second, the multiversion model, stores... Multiversion Database Model: In the standard database model, where there is only one version of each data item, all transactions compete for the most recent
Sharrow, David J; Anderson, James J
2016-12-01
The rise in human life expectancy has involved declines in intrinsic and extrinsic mortality processes associated, respectively, with senescence and environmental challenges. To better understand the factors driving this rise, we apply a two-process vitality model to data from the Human Mortality Database. Model parameters yield intrinsic and extrinsic cumulative survival curves from which we derive intrinsic and extrinsic expected life spans (ELS). Intrinsic ELS, a measure of longevity acted on by intrinsic, physiological factors, changed slowly over two centuries and then entered a second phase of increasing longevity ostensibly brought on by improvements in old-age death reduction technologies and cumulative health behaviors throughout life. The model partitions the majority of the increase in life expectancy before 1950 to increasing extrinsic ELS driven by reductions in environmental, event-based health challenges in both childhood and adulthood. In the post-1950 era, the extrinsic ELS of females appears to be converging to the intrinsic ELS, whereas the extrinsic ELS of males is approximately 20 years lower than the intrinsic ELS.
NASA Astrophysics Data System (ADS)
Couach, O.; Balin, I.; Jimenez, R.; Quaglia, P.; Kirchner, F.; Ristori, P.; Simeonov, V.; Clappier, A.; van den Bergh, H.; Calpini, B.
In order to understand, predict, and elaborate solutions concerning the photochemical and meteorological processes that often occur in summer over the city of Grenoble and its three surrounding valleys, both modeling and measurement approaches were considered. Two intensive air pollution and meteorological measurement campaigns were performed in 1998 and 1999. Ozone (O3) and other pollutants (NOx, CH2O, SO2, etc.) as well as wind, temperature, solar radiation and relative humidity were intensively measured at surface level, combined with 3D measurements using an instrumented aircraft (Metair), two ozone lidars (e.g. the EPFL ozone DIAL lidar) and wind profilers (e.g. Degreane). This poster focuses on the main results of these measurements, such as the 3D ozone distribution, the evolution of the mixing height/planetary boundary layer, the meteorological behavior, and the evaluation of the other pollutants. The paper also highlights the use of these measurements as a necessary database for comparison against and validation of model performance, thus allowing modeling solutions that predict air pollution events and support the design of the right abatement strategies.
Physiological Parameters Database for PBPK Modeling (External Review Draft)
EPA released for public comment a physiological parameters database (created using Microsoft ACCESS) intended to be used in PBPK modeling. The database contains physiological parameter values for humans from early childhood through senescence. It also contains similar data for an...
Willemet, Marie; Vennin, Samuel; Alastruey, Jordi
2016-12-08
Many physiological indexes and algorithms based on pulse wave analysis have been suggested in order to better assess cardiovascular function. Because these tools are often computed from in-vivo hemodynamic measurements, their validation is time-consuming, challenging, and biased by measurement errors. Recently, a new methodology has been suggested to assess these computed tools theoretically: a database of virtual subjects generated using numerical 1D-0D modeling of arterial hemodynamics. The generated set of simulations encloses a wide selection of healthy cases that could be encountered in a clinical study. We applied this new methodology to three case studies that demonstrate the potential of our new tool, and illustrated each of them with a clinically relevant example: (i) we assessed the accuracy of indexes estimating pulse wave velocity; (ii) we validated and refined an algorithm that computes central blood pressure; and (iii) we investigated theoretical mechanisms behind the augmentation index. Our database of virtual subjects is a new tool to assist the clinician: it provides insight into the physical mechanisms underlying the correlations observed in clinical practice. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Aerodynamic Optimization of Rocket Control Surface Geometry Using Cartesian Methods and CAD Geometry
NASA Technical Reports Server (NTRS)
Nelson, Andrea; Aftosmis, Michael J.; Nemec, Marian; Pulliam, Thomas H.
2004-01-01
Aerodynamic design is an iterative process involving geometry manipulation and complex computational analysis subject to physical constraints and aerodynamic objectives. A design cycle consists of first establishing the performance of a baseline design, which is usually created with low-fidelity engineering tools, and then progressively optimizing the design to maximize its performance. Optimization techniques have evolved from relying exclusively on designer intuition and insight in traditional trial-and-error methods, to sophisticated local and global search methods. Recent attempts at automating the search through a large design space with formal optimization methods include both database-driven and direct evaluation schemes. Databases are being used in conjunction with surrogate and neural network models as a basis on which to run optimization algorithms. Optimization algorithms are also being driven by the direct evaluation of objectives and constraints using high-fidelity simulations. Surrogate methods use data points obtained from simulations, and possibly gradients evaluated at the data points, to create mathematical approximations of a database. Neural network models work in a similar fashion, using a number of high-fidelity database calculations as training iterations to create a database model. Optimal designs are obtained by coupling an optimization algorithm to the database model. Evaluation of the current best design then either yields a new local optimum or increases the fidelity of the approximation model for the next iteration. Surrogate methods have also been developed that iterate on the selection of data points to decrease the uncertainty of the approximation model prior to searching for an optimal design. The database approximation models for each of these cases, however, become computationally expensive as dimensionality increases. Thus the method of using optimization algorithms to search a database model becomes problematic as the number of design variables grows.
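The surrogate-based loop described above can be illustrated compactly. The sketch below (an assumed toy objective standing in for a high-fidelity simulation; not the paper's method) fits an RBF approximation to a small database of design points and searches the surrogate instead of the simulation:

```python
# Minimal surrogate-based optimization sketch: build a database approximation
# model from sampled points, then optimize over the cheap surrogate.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def expensive_simulation(x):  # stand-in for a high-fidelity CFD evaluation
    return (x[0] - 0.3) ** 2 + (x[1] + 0.2) ** 2 + 0.1 * np.sin(5 * x[0])

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(40, 2))            # database of design points
y = np.array([expensive_simulation(x) for x in X])

surrogate = RBFInterpolator(X, y)               # database approximation model

# Search the surrogate; the best point would then be re-evaluated at high
# fidelity and added back to the database in the next design iteration.
res = minimize(lambda x: surrogate(x[None, :])[0], x0=np.zeros(2),
               bounds=[(-1, 1), (-1, 1)])
print(res.x, expensive_simulation(res.x))
```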
Levy, C.; Beauchamp, C.
1996-01-01
This poster describes the methods used and the working prototype developed by abstracting a relational model from the VA's hierarchical DHCP database. Overlaying the relational model on DHCP permits multiple user views of the physical data structure, enhances access to the database by providing a link to commercial (SQL-based) software, and supports a conceptual managed care data model based on primary and longitudinal patient care. The goal of this work was to create a relational abstraction of the existing hierarchical database; to construct, using SQL data definition language, user views of the database that reflect the clinical conceptual view of DHCP; and to allow users to work directly with the logical view of the data using GUI-based commercial software of their choosing. The workstation is intended to serve as a platform from which a managed care information model could be implemented and evaluated.
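To make the view-building step concrete, here is a hypothetical sketch (illustrative table and view names only; the real DHCP file structure is far richer) of using SQL data definition language to layer a clinical user view over relationally abstracted data:

```python
# Illustrative only: a clinical "user view" defined over relational tables,
# of the kind SQL-based commercial tools could then query directly.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE patient (dfn INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE visit (visit_id INTEGER PRIMARY KEY, dfn INTEGER, visit_date TEXT,
                    clinic TEXT, FOREIGN KEY (dfn) REFERENCES patient (dfn));
-- A conceptual, longitudinal-care view layered over the physical tables.
CREATE VIEW primary_care_history AS
SELECT p.name, v.visit_date, v.clinic
FROM patient p JOIN visit v ON v.dfn = p.dfn
ORDER BY v.visit_date;
INSERT INTO patient VALUES (1, 'DOE,JOHN');
INSERT INTO visit VALUES (10, 1, '1995-03-02', 'PRIMARY CARE');
""")
print(con.execute("SELECT * FROM primary_care_history").fetchall())
```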
Crasto, Chiquito J.; Marenco, Luis N.; Liu, Nian; Morse, Thomas M.; Cheung, Kei-Hoi; Lai, Peter C.; Bahl, Gautam; Masiar, Peter; Lam, Hugo Y.K.; Lim, Ernest; Chen, Huajin; Nadkarni, Prakash; Migliore, Michele; Miller, Perry L.; Shepherd, Gordon M.
2009-01-01
This article presents the latest developments in neuroscience information dissemination through the SenseLab suite of databases: NeuronDB, CellPropDB, ORDB, OdorDB, OdorMapDB, ModelDB and BrainPharm. These databases include information related to: (i) neuronal membrane properties and neuronal models, and (ii) genetics, genomics, proteomics and imaging studies of the olfactory system. We describe here: the new features for each database, the evolution of SenseLab’s unifying database architecture and instances of SenseLab database interoperation with other neuroscience online resources. PMID:17510162
Observational database for studies of nearby universe
NASA Astrophysics Data System (ADS)
Kaisina, E. I.; Makarov, D. I.; Karachentsev, I. D.; Kaisin, S. S.
2012-01-01
We present a description of a database of galaxies of the Local Volume (LVG) located within 10 Mpc of the Milky Way; it contains more than 800 objects. Based on an analysis of functional capabilities, we chose the PostgreSQL DBMS as the management system for our LVG database. Applying semantic modelling methods, we developed a physical ER model of the database. We describe the architecture of the database table structure and the implemented web access, available at http://www.sao.ru/lv/lvgdb.
NASA Astrophysics Data System (ADS)
Gay, D. A.; Schmeltz, D.; Prestbo, E.; Olson, M.; Sharac, T.; Tordon, R.
2013-04-01
The National Atmospheric Deposition Program (NADP) developed and operates a collaborative network of atmospheric mercury monitoring sites based in North America - the Atmospheric Mercury Network (AMNet). The justification for the network was growing interest and demand from many scientists and policy makers for a robust database of measurements to improve model development, assess policies and programs, and improve estimates of mercury dry deposition. Many different agencies and groups support the network, including federal, state, tribal, and international governments, academic institutions, and private companies. AMNet has added two high elevation sites outside of continental North America in Hawaii and Taiwan because of new partnerships forged within NADP. Network sites measure concentrations of atmospheric mercury fractions using automated, continuous mercury speciation systems. The procedures that NADP developed for field operations, data management, and quality assurance ensure that the network makes scientifically valid and consistent measurements. AMNet reports concentrations of hourly gaseous elemental mercury (GEM), two-hour gaseous oxidized mercury (GOM), and two-hour particulate-bound mercury less than 2.5 microns in size (PBM2.5). As of January 2012, over 450 000 valid observations are available from 30 stations. The AMNet also collects ancillary meteorological data and information on land-use and vegetation, when available. We present atmospheric mercury data comparisons by time (3 yr) at 22 unique site locations. Highlighted are contrasting values for site locations across the network: urban versus rural, coastal versus high-elevation, and the range of maximum observations. The data presented should catalyze the formation of many scientific questions that may be answered through further in-depth analysis and modeling studies of the AMNet database. All data and methods are publicly available through an online database on the NADP website (http://nadp.isws.illinois.edu/amn/). Future network directions are to foster new network partnerships and continue to collect, quality assure, and post data, including dry deposition estimates, for each fraction.
NASA Astrophysics Data System (ADS)
Gay, D. A.; Schmeltz, D.; Prestbo, E.; Olson, M.; Sharac, T.; Tordon, R.
2013-11-01
The National Atmospheric Deposition Program (NADP) developed and operates a collaborative network of atmospheric-mercury-monitoring sites based in North America - the Atmospheric Mercury Network (AMNet). The justification for the network was growing interest and demand from many scientists and policy makers for a robust database of measurements to improve model development, assess policies and programs, and improve estimates of mercury dry deposition. Many different agencies and groups support the network, including federal, state, tribal, and international governments, academic institutions, and private companies. AMNet has added two high-elevation sites outside of continental North America in Hawaii and Taiwan because of new partnerships forged within NADP. Network sites measure concentrations of atmospheric mercury fractions using automated, continuous mercury speciation systems. The procedures that NADP developed for field operations, data management, and quality assurance ensure that the network makes scientifically valid and consistent measurements. AMNet reports concentrations of hourly gaseous elemental mercury (GEM), two-hour gaseous oxidized mercury (GOM), and two-hour particulate-bound mercury less than 2.5 microns in size (PBM2.5). As of January 2012, over 450 000 valid observations are available from 30 stations. AMNet also collects ancillary meteorological data and information on land use and vegetation, when available. We present atmospheric mercury data comparisons by time (3 yr) at 21 individual sites and instruments. Highlighted are contrasting values for site locations across the network: urban versus rural, coastal versus high elevation, and the range of maximum observations. The data presented should catalyze the formation of many scientific questions that may be answered through further in-depth analysis and modeling studies of the AMNet database. All data and methods are publicly available through an online database on the NADP website (http://nadp.sws.uiuc.edu/amn/). Future network directions are to foster new network partnerships and continue to collect, quality assure, and post data, including dry deposition estimates, for each fraction.
Advances in Satellite Microwave Precipitation Retrieval Algorithms Over Land
NASA Astrophysics Data System (ADS)
Wang, N. Y.; You, Y.; Ferraro, R. R.
2015-12-01
Precipitation plays a key role in the earth's climate system, particularly in its water and energy balance. Satellite microwave (MW) observations of precipitation provide a viable means of achieving global measurement of precipitation with sufficient sampling density and accuracy. However, obtaining accurate precipitation information over land from satellite MW remains a challenging problem. The Goddard Profiling Algorithm (GPROF) for the Global Precipitation Measurement (GPM) mission is built around a Bayesian formulation (Evans et al., 1995; Kummerow et al., 1996). GPROF uses the likelihood function and the prior probability distribution function to calculate the expected value of the precipitation rate, given the observed brightness temperatures. It is particularly convenient to draw samples for the prior PDF from a predefined database of observations or models. The GPROF algorithm does not search all database entries, only the subset thought to correspond to the actual observation. The GPM GPROF V1 database focuses on stratification by surface emissivity class, land surface temperature, and total precipitable water. However, there is much uncertainty as to the optimal information needed to subset the database under different conditions. To this end, we conduct a database stratification study using National Mosaic and Multi-Sensor Quantitative Precipitation Estimation data, Special Sensor Microwave Imager/Sounder (SSMIS) and Advanced Technology Microwave Sounder (ATMS) observations, and reanalysis data from the Modern-Era Retrospective Analysis for Research and Applications (MERRA). Our database study (You et al., 2015) shows that environmental factors such as surface elevation, relative humidity, storm vertical structure and height, and ice thickness can help stratify a single large database into smaller, more homogeneous subsets in which the surface conditions and precipitation vertical profiles are similar. It is found that the probability of detection (POD) increases by about 8% and 12% when using stratified databases for rainfall and snowfall detection, respectively. In addition, by considering the relative humidity in the lower troposphere and the vertical velocity at 700 hPa in the precipitation detection process, the POD for snowfall detection is further increased by 20.4%, from 56.0% to 76.4%.
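The Bayesian expected-value calculation described above reduces to a weighted average over database entries. The sketch below (synthetic database, Gaussian likelihood, assumed channel count; not the GPROF code) shows the core computation and where database stratification would enter:

```python
# Hedged sketch of a Bayesian retrieval: the retrieved rain rate is the
# database average weighted by how well each entry's simulated brightness
# temperatures match the observation.
import numpy as np

rng = np.random.default_rng(2)
n_entries, n_channels = 5000, 9

# Assumed a priori database: rain rates and associated brightness temperatures.
rain = rng.gamma(shape=0.8, scale=2.0, size=n_entries)
tb_db = 280 - 5 * rain[:, None] + rng.normal(0, 2, (n_entries, n_channels))

def retrieve(tb_obs, sigma=2.0):
    """Posterior-mean rain rate: E[R|Tb] = sum_i R_i w_i / sum_i w_i,
    with Gaussian likelihood weights w_i = exp(-0.5 ||Tb_obs - Tb_i||^2 / sigma^2)."""
    d2 = np.sum((tb_db - tb_obs) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / sigma**2)
    return np.sum(w * rain) / np.sum(w)

# Stratifying the database (e.g. by surface class or humidity, as in the text)
# amounts to restricting the sum to the matching subset before weighting.
print(retrieve(tb_db[17]))
```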
MODBASE, a database of annotated comparative protein structure models
Pieper, Ursula; Eswar, Narayanan; Stuart, Ashley C.; Ilyin, Valentin A.; Sali, Andrej
2002-01-01
MODBASE (http://guitar.rockefeller.edu/modbase) is a relational database of annotated comparative protein structure models for all available protein sequences matched to at least one known protein structure. The models are calculated by MODPIPE, an automated modeling pipeline that relies on PSI-BLAST, IMPALA and MODELLER. MODBASE uses the MySQL relational database management system for flexible and efficient querying, and the MODVIEW Netscape plugin for viewing and manipulating multiple sequences and structures. It is updated regularly to reflect the growth of the protein sequence and structure databases, as well as improvements in the software for calculating the models. For ease of access, MODBASE is organized into different datasets. The largest dataset contains models for domains in 304 517 out of 539 171 unique protein sequences in the complete TrEMBL database (23 March 2001); only models based on significant alignments (PSI-BLAST E-value < 10^-4) and models assessed to have the correct fold are included. Other datasets include models for target selection and structure-based annotation by the New York Structural Genomics Research Consortium, models for prediction of genes in the Drosophila melanogaster genome, models for structure determination of several ribosomal particles and models calculated by the MODWEB comparative modeling web server. PMID:11752309
CARINA data synthesis project: pH data scale unification and cruise adjustments
NASA Astrophysics Data System (ADS)
Velo, A.; Pérez, F. F.; Lin, X.; Key, R. M.; Tanhua, T.; de La Paz, M.; Olsen, A.; van Heuven, S.; Jutterström, S.; Ríos, A. F.
2010-05-01
Data on carbon and carbon-relevant hydrographic and hydrochemical parameters from 188 previously non-publicly available cruise data sets in the Arctic Mediterranean Seas (AMS), Atlantic Ocean and Southern Ocean have been retrieved and merged into a new database: CARINA (CARbon IN the Atlantic Ocean). These data have gone through rigorous quality control (QC) procedures to assure the highest possible quality and consistency. The data for most of the measured parameters in the CARINA database were objectively examined in order to quantify systematic differences in the reported values. Systematic biases found in the data have been corrected in the data products: three merged data files with measured, calculated and interpolated data for each of the three CARINA regions (AMS, Atlantic Ocean and Southern Ocean). Out of a total of 188 cruise entries in the CARINA database, 59 reported measured pH values. All reported pH data have been unified to the Sea-Water Scale (SWS) at 25 °C. Here we present details of the secondary QC of pH in the CARINA database and the scale unification to SWS at 25 °C. The pH scale has been converted for 36 cruises. Procedures of quality control, including crossover analysis between cruises and inversion analysis, are described. Adjustments were applied to the pH values for 21 of the cruises in the CARINA dataset. With these adjustments the CARINA database is consistent both internally and with the GLODAP data, an oceanographic data set based on the World Hydrographic Program in the 1990s. Based on our analysis we estimate the internal consistency of the CARINA pH data to be 0.005 pH units. The CARINA data are now suitable for accurate assessments of, for example, oceanic carbon inventories and uptake rates, for ocean acidification assessment and for model validation.
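The crossover step of the secondary QC can be sketched simply: deep-water profiles from two cruises near a shared location are interpolated to common depths, and their mean difference becomes a candidate adjustment. The example below uses invented profile values purely for illustration (it is not the CARINA procedure in full):

```python
# Schematic crossover analysis between two cruises (illustrative data only).
import numpy as np

# Assumed example profiles: (depth m, pH_SWS25) for a crossover station pair.
depth_a, ph_a = np.array([1500, 2000, 2500, 3000]), np.array([7.562, 7.558, 7.555, 7.553])
depth_b, ph_b = np.array([1600, 2100, 2600, 3100]), np.array([7.568, 7.565, 7.561, 7.559])

common = np.linspace(1600, 3000, 15)  # deep layer only, where pH is stable
offset = np.mean(np.interp(common, depth_b, ph_b) - np.interp(common, depth_a, ph_a))
print(round(float(offset), 4))  # compare against the ~0.005 consistency estimate
```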
Origin and transport of high energy particles in the galaxy
NASA Technical Reports Server (NTRS)
Wefel, John P.
1987-01-01
The origin, confinement, and transport of cosmic ray nuclei in the galaxy was studied. The work involves interpretations of the existing cosmic ray physics database derived from both balloon and satellite measurements, combined with an effort directed towards defining the next generation of instruments for the study of cosmic radiation. The shape and the energy dependence of the cosmic ray pathlength distribution in the galaxy was studied, demonstrating that the leaky box model is not a good representation of the detailed particle transport over the energy range covered by the database. Alternative confinement methods were investigated, analyzing the confinement lifetime in these models based upon the available data for radioactive secondary isotopes. The source abundances of several isotopes were studied using compiled nuclear physics data and the detailed transport calculations. The effects of distributed particle acceleration on the secondary to primary ratios were investigated.
Peters, Susan; Vermeulen, Roel; Olsson, Ann; Van Gelder, Rainer; Kendzia, Benjamin; Vincent, Raymond; Savary, Barbara; Williams, Nick; Woldbæk, Torill; Lavoué, Jérôme; Cavallo, Domenico; Cattaneo, Andrea; Mirabelli, Dario; Plato, Nils; Dahmann, Dirk; Fevotte, Joelle; Pesch, Beate; Brüning, Thomas; Straif, Kurt; Kromhout, Hans
2012-01-01
SYNERGY is a large pooled analysis of case-control studies on the joint effects of occupational carcinogens and smoking in the development of lung cancer. A quantitative job-exposure matrix (JEM) will be developed to assign exposures to five major lung carcinogens [asbestos, chromium, nickel, polycyclic aromatic hydrocarbons (PAH), and respirable crystalline silica (RCS)]. We assembled an exposure database, called ExpoSYN, to enable such a quantitative exposure assessment. Existing exposure databases were identified and European and Canadian research institutes were approached to identify pertinent exposure measurement data. Results of individual air measurements were entered anonymized according to a standardized protocol. The ExpoSYN database currently includes 356 551 measurements from 19 countries. In total, 140 666 personal and 215 885 stationary data points were available. Measurements were distributed over the five agents as follows: RCS (42%), asbestos (20%), chromium (16%), nickel (15%), and PAH (7%). The measurement data cover the time period from 1951 to present. However, only a small portion of measurements (1.4%) were performed prior to 1975. The major contributing countries for personal measurements were Germany (32%), UK (22%), France (14%), and Norway and Canada (both 11%). ExpoSYN is a unique occupational exposure database with measurements from 18 European countries and Canada covering a time period of >50 years. This database will be used to develop a country-, job-, and time period-specific quantitative JEM. This JEM will enable data-driven quantitative exposure assessment in a multinational pooled analysis of community-based lung cancer case-control studies.
Hudson, Lawrence N; Newbold, Tim; Contu, Sara; Hill, Samantha L L; Lysenko, Igor; De Palma, Adriana; Phillips, Helen R P; Senior, Rebecca A; Bennett, Dominic J; Booth, Hollie; Choimes, Argyrios; Correia, David L P; Day, Julie; Echeverría-Londoño, Susy; Garon, Morgan; Harrison, Michelle L K; Ingram, Daniel J; Jung, Martin; Kemp, Victoria; Kirkpatrick, Lucinda; Martin, Callum D; Pan, Yuan; White, Hannah J; Aben, Job; Abrahamczyk, Stefan; Adum, Gilbert B; Aguilar-Barquero, Virginia; Aizen, Marcelo A; Ancrenaz, Marc; Arbeláez-Cortés, Enrique; Armbrecht, Inge; Azhar, Badrul; Azpiroz, Adrián B; Baeten, Lander; Báldi, András; Banks, John E; Barlow, Jos; Batáry, Péter; Bates, Adam J; Bayne, Erin M; Beja, Pedro; Berg, Åke; Berry, Nicholas J; Bicknell, Jake E; Bihn, Jochen H; Böhning-Gaese, Katrin; Boekhout, Teun; Boutin, Céline; Bouyer, Jérémy; Brearley, Francis Q; Brito, Isabel; Brunet, Jörg; Buczkowski, Grzegorz; Buscardo, Erika; Cabra-García, Jimmy; Calviño-Cancela, María; Cameron, Sydney A; Cancello, Eliana M; Carrijo, Tiago F; Carvalho, Anelena L; Castro, Helena; Castro-Luna, Alejandro A; Cerda, Rolando; Cerezo, Alexis; Chauvat, Matthieu; Clarke, Frank M; Cleary, Daniel F R; Connop, Stuart P; D'Aniello, Biagio; da Silva, Pedro Giovâni; Darvill, Ben; Dauber, Jens; Dejean, Alain; Diekötter, Tim; Dominguez-Haydar, Yamileth; Dormann, Carsten F; Dumont, Bertrand; Dures, Simon G; Dynesius, Mats; Edenius, Lars; Elek, Zoltán; Entling, Martin H; Farwig, Nina; Fayle, Tom M; Felicioli, Antonio; Felton, Annika M; Ficetola, Gentile F; Filgueiras, Bruno K C; Fonte, Steven J; Fraser, Lauchlan H; Fukuda, Daisuke; Furlani, Dario; Ganzhorn, Jörg U; Garden, Jenni G; Gheler-Costa, Carla; Giordani, Paolo; Giordano, Simonetta; Gottschalk, Marco S; Goulson, Dave; Gove, Aaron D; Grogan, James; Hanley, Mick E; Hanson, Thor; Hashim, Nor R; Hawes, Joseph E; Hébert, Christian; Helden, Alvin J; Henden, John-André; Hernández, Lionel; Herzog, Felix; Higuera-Diaz, Diego; Hilje, Branko; Horgan, Finbarr G; Horváth, Roland; Hylander, Kristoffer; Isaacs-Cubides, Paola; Ishitani, Masahiro; Jacobs, Carmen T; Jaramillo, Víctor J; Jauker, Birgit; Jonsell, Mats; Jung, Thomas S; Kapoor, Vena; Kati, Vassiliki; Katovai, Eric; Kessler, Michael; Knop, Eva; Kolb, Annette; Kőrösi, Ádám; Lachat, Thibault; Lantschner, Victoria; Le Féon, Violette; LeBuhn, Gretchen; Légaré, Jean-Philippe; Letcher, Susan G; Littlewood, Nick A; López-Quintero, Carlos A; Louhaichi, Mounir; Lövei, Gabor L; Lucas-Borja, Manuel Esteban; Luja, Victor H; Maeto, Kaoru; Magura, Tibor; Mallari, Neil Aldrin; Marin-Spiotta, Erika; Marshall, E J P; Martínez, Eliana; Mayfield, Margaret M; Mikusinski, Grzegorz; Milder, Jeffrey C; Miller, James R; Morales, Carolina L; Muchane, Mary N; Muchane, Muchai; Naidoo, Robin; Nakamura, Akihiro; Naoe, Shoji; Nates-Parra, Guiomar; Navarrete Gutierrez, Dario A; Neuschulz, Eike L; Noreika, Norbertas; Norfolk, Olivia; Noriega, Jorge Ari; Nöske, Nicole M; O'Dea, Niall; Oduro, William; Ofori-Boateng, Caleb; Oke, Chris O; Osgathorpe, Lynne M; Paritsis, Juan; Parra-H, Alejandro; Pelegrin, Nicolás; Peres, Carlos A; Persson, Anna S; Petanidou, Theodora; Phalan, Ben; Philips, T Keith; Poveda, Katja; Power, Eileen F; Presley, Steven J; Proença, Vânia; Quaranta, Marino; Quintero, Carolina; Redpath-Downing, Nicola A; Reid, J Leighton; Reis, Yana T; Ribeiro, Danilo B; Richardson, Barbara A; Richardson, Michael J; Robles, Carolina A; Römbke, Jörg; Romero-Duque, Luz Piedad; Rosselli, Loreta; Rossiter, Stephen J; Roulston, T'ai H; Rousseau, Laurent; 
Sadler, Jonathan P; Sáfián, Szabolcs; Saldaña-Vázquez, Romeo A; Samnegård, Ulrika; Schüepp, Christof; Schweiger, Oliver; Sedlock, Jodi L; Shahabuddin, Ghazala; Sheil, Douglas; Silva, Fernando A B; Slade, Eleanor M; Smith-Pardo, Allan H; Sodhi, Navjot S; Somarriba, Eduardo J; Sosa, Ramón A; Stout, Jane C; Struebig, Matthew J; Sung, Yik-Hei; Threlfall, Caragh G; Tonietto, Rebecca; Tóthmérész, Béla; Tscharntke, Teja; Turner, Edgar C; Tylianakis, Jason M; Vanbergen, Adam J; Vassilev, Kiril; Verboven, Hans A F; Vergara, Carlos H; Vergara, Pablo M; Verhulst, Jort; Walker, Tony R; Wang, Yanping; Watling, James I; Wells, Konstans; Williams, Christopher D; Willig, Michael R; Woinarski, John C Z; Wolf, Jan H D; Woodcock, Ben A; Yu, Douglas W; Zaitsev, Andrey S; Collen, Ben; Ewers, Rob M; Mace, Georgina M; Purves, Drew W; Scharlemann, Jörn P W; Purvis, Andy
2014-01-01
Biodiversity continues to decline in the face of increasing anthropogenic pressures such as habitat destruction, exploitation, pollution and introduction of alien species. Existing global databases of species’ threat status or population time series are dominated by charismatic species. The collation of datasets with broad taxonomic and biogeographic extents, and that support computation of a range of biodiversity indicators, is necessary to enable better understanding of historical declines and to project – and avert – future declines. We describe and assess a new database of more than 1.6 million samples from 78 countries representing over 28,000 species, collated from existing spatial comparisons of local-scale biodiversity exposed to different intensities and types of anthropogenic pressures, from terrestrial sites around the world. The database contains measurements taken in 208 (of 814) ecoregions, 13 (of 14) biomes, 25 (of 35) biodiversity hotspots and 16 (of 17) megadiverse countries. The database contains more than 1% of the total number of all species described, and more than 1% of the described species within many taxonomic groups – including flowering plants, gymnosperms, birds, mammals, reptiles, amphibians, beetles, lepidopterans and hymenopterans. The dataset, which is still being added to, is therefore already considerably larger and more representative than those used by previous quantitative models of biodiversity trends and responses. The database is being assembled as part of the PREDICTS project (Projecting Responses of Ecological Diversity In Changing Terrestrial Systems – http://www.predicts.org.uk). We make site-level summary data available alongside this article. The full database will be publicly available in 2015. PMID:25558364
GMODWeb: a web framework for the generic model organism database
O'Connor, Brian D; Day, Allen; Cain, Scott; Arnaiz, Olivier; Sperling, Linda; Stein, Lincoln D
2008-01-01
The Generic Model Organism Database (GMOD) initiative provides species-agnostic data models and software tools for representing curated model organism data. Here we describe GMODWeb, a GMOD project designed to speed the development of model organism database (MOD) websites. Sites created with GMODWeb provide integration with other GMOD tools and allow users to browse and search through a variety of data types. GMODWeb was built using the open source Turnkey web framework and is available from . PMID:18570664
New laboratory approach to study Titan ionospheric chemistry
NASA Astrophysics Data System (ADS)
Thissen, R.; Dutuit, O.; Pernot, P.; Carrasco, N.; Lilensten, J.; Quirico, E.; Schmitt, B.
The exploration of Titan reveals a very complex chemistry occurring in the ionospheric region of the atmosphere. In order to interpret the observations performed by the Cassini spectrometers, we need to improve our description of the ion-molecule chemistry involving nitrogen and hydrocarbons. Up to now, models have been based on databases compiled over the years. These are quite complete for describing the major ions but lack accuracy for some of them; they totally neglect questions of isomerization and chemical functionality in the description of ionic species, and they still miss many inputs for ionic species heavier than 50 daltons. We propose to improve the databases through systematic measurements of ion-molecule reaction rates, with further structural description by means of a high-resolution mass spectrometer allowing MS/MS structural analysis of the ionic species. A thorough evaluation of today's databases by means of uncertainty propagation will guide our choice of the most important reactions to study. This study should also lead to educated choices for chemistry simplification, which is mandatory in order to include the chemistry in 3D or fluid models of the atmosphere. We also plan to use extracts from tholins as a molecular source for our analysis.
S&MPO - An information system for ozone spectroscopy on the WEB
NASA Astrophysics Data System (ADS)
Babikov, Yurii L.; Mikhailenko, Semen N.; Barbe, Alain; Tyuterev, Vladimir G.
2014-09-01
Spectroscopy and Molecular Properties of Ozone ("S&MPO") is an Internet-accessible information system devoted to high-resolution spectroscopy of the ozone molecule, related properties and data sources. S&MPO contains information on original spectroscopic data (line positions, line intensities, energies, transition moments, spectroscopic parameters) recovered from comprehensive analyses and modeling of experimental spectra, as well as associated software for data representation written in PHP, JavaScript, C++ and FORTRAN. The line-by-line list of vibration-rotation transitions and other information is organized as a relational database under control of MySQL database tools. The main S&MPO goal is to provide access to all available information on vibration-rotation molecular states and transitions under extended conditions based on extrapolations of laboratory measurements using validated theoretical models. Applications of the S&MPO may include: education/training in molecular physics, radiative processes, laser physics; spectroscopic applications (analysis, Fourier transform spectroscopy, atmospheric optics, optical standards, spectroscopic atlases); applications to environmental studies and atmospheric physics (remote sensing); data supply for specific databases; and photochemistry (laser excitation, multiphoton processes). The system is accessible via the Internet on two sites: http://smpo.iao.ru and http://smpo.univ-reims.fr.
Chen, R S; Nadkarni, P; Marenco, L; Levin, F; Erdos, J; Miller, P L
2000-01-01
The entity-attribute-value representation with classes and relationships (EAV/CR) provides a flexible and simple database schema for storing heterogeneous biomedical data. In certain circumstances, however, the EAV/CR model is known to retrieve data less efficiently than conventional database schemas. Objectives: To perform a pilot study that systematically quantifies performance differences for database queries directed at real-world microbiology data modeled with EAV/CR and conventional representations, and to explore the relative merits of different EAV/CR query implementation strategies. Methods: Clinical microbiology data obtained over a ten-year period were stored using both database models. Query execution times were compared for four clinically oriented attribute-centered and entity-centered queries operating under varying conditions of database size and system memory. The performance characteristics of three different EAV/CR query strategies were also examined. Results: Performance was similar for entity-centered queries in the two database models, but the EAV/CR model was approximately three to five times less efficient than its conventional counterpart for attribute-centered queries. The differences in query efficiency became slightly greater as database size increased, although they were reduced by the addition of system memory. The authors found that EAV/CR queries formulated as multiple simple SQL statements executed in batch were more efficient than single large SQL statements. Conclusions: This paper describes a pilot project exploring query performance for EAV/CR and conventional database representations. Although attribute-centered queries were less efficient in the EAV/CR model, these inefficiencies may be addressable, at least in part, by the use of more powerful hardware, more memory, or both.
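The efficiency gap has a simple structural cause: in an EAV table, every attribute tested adds a self-join. The toy comparison below (hypothetical microbiology schema, not the study's actual tables) shows a conventional attribute-centered query next to its EAV equivalent:

```python
# Toy illustration of why attribute-centered queries cost more under EAV:
# each attribute requires a self-join rather than a simple column filter.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE culture_conv (id INTEGER PRIMARY KEY, organism TEXT, antibiotic TEXT, result TEXT);
CREATE TABLE eav (entity INTEGER, attribute TEXT, value TEXT);
INSERT INTO culture_conv VALUES (1, 'E. coli', 'ampicillin', 'resistant');
INSERT INTO eav VALUES (1, 'organism', 'E. coli'), (1, 'antibiotic', 'ampicillin'),
                       (1, 'result', 'resistant');
""")

# Conventional attribute-centered query: one predicate per column.
conv = con.execute(
    "SELECT id FROM culture_conv WHERE organism = 'E. coli' AND result = 'resistant'")

# EAV equivalent: one self-join per attribute tested.
eav = con.execute("""
SELECT a.entity FROM eav a JOIN eav b ON a.entity = b.entity
WHERE a.attribute = 'organism' AND a.value = 'E. coli'
  AND b.attribute = 'result'  AND b.value = 'resistant'""")
print(conv.fetchall(), eav.fetchall())
```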
Who's Gonna Pay the Piper for Free Online Databases?
ERIC Educational Resources Information Center
Jacso, Peter
1996-01-01
Discusses new pricing models for some online services and considers the possibilities for the traditional online database market. Topics include multimedia music databases, including copyright implications; other retail-oriented databases; and paying for free databases with advertising. (LRW)
DSSTox Website Launch: Improving Public Access to Databases for Building Structure-Toxicity Prediction Models
Ann M. Richard
US Environmental Protection Agency, Research Triangle Park, NC, USA
Distributed: Decentralized set of standardized, field-delimited databases,...
Modeling the natural UV irradiation and comparative UV measurements at Moussala BEO (BG)
NASA Astrophysics Data System (ADS)
Tyutyundzhiev, N.; Angelov, Ch; Lovchinov, K.; Nitchev, Hr; Petrov, M.; Arsov, T.
2018-03-01
Studying and modeling the impact of natural UV irradiation on the human population is of significant importance for human activity and economics. The sharp increase of environmental problems – extraordinary temperature changes, solar irradiation abnormalities, icy rains – raises the question of developing novel means of assessing and predicting potential UV effects. In this paper, we discuss new UV irradiation modeling based on recent real-time measurements at the Moussala Basic Environmental Observatory (BEO) on Moussala Peak (2925 m ASL) in Rila Mountain, Bulgaria, and highlight the development and initial validation of portable embedded devices for UV-A and UV-B monitoring using open-source software architecture, narrow-bandpass UV sensors, and the popular Arduino controllers. Despite the high temporal resolution of the VIS and UV irradiation measurements, the results obtained reveal the need for new assumptions in order to minimize the discrepancy with available databases.
NASA Astrophysics Data System (ADS)
Howes, N. C.; Georgiou, I. Y.; Hughes, Z. J.; Wolinsky, M. A.
2012-12-01
Channels in fluvio-deltaic and coastal plain settings undergo a progressive series of downstream transitions in hydrodynamics and sediment transport, which is consequently reflected in their morphology and stratigraphic architecture. Conditions progress from uniform fluvial flow to backwater conditions with non-uniform flow, and finally to bi-directional tidal flow or estuarine circulation at the ocean boundary. While significant attention has been given to geomorphic scaling relationships in purely fluvial settings, there have been far fewer studies on the backwater and tidal reaches, and no systematic comparisons. Our study addresses these gaps by analyzing geometric scaling relationships independently in each of the above hydrodynamic regimes and establishes a comparison. To accomplish this goal we have constructed a database of planform geometries including more than 150 channels. In terms of hydrodynamics studies, much of the work on backwater dynamics has concentrated on the Mississippi River, which has very limited tidal influence. We will extend this analysis to include systems with appreciable offshore tidal range, using a numerical hydrodynamic model to study the interaction between backwater dynamics and tides. The database is comprised of systems with a wide range of tectonic, climatic, and oceanic forcings. The scale of these systems, as measured by bankfull width, ranges over three orders of magnitude from the Amazon River in Brazil to the Palix River in Washington. Channel centerlines are extracted from processed imagery, enabling continuous planform measurements of bankfull width, meander wavelength, and sinuosity. Digital terrain and surface models are used to estimate floodplain slopes. Downstream tidal boundary conditions are obtained from the TOPEX 7.1 global tidal model, while upstream boundary conditions such as basin area, relief, and discharge are obtained by linking the databases of Milliman and Meade (2011) and Syvitski (2005). Backwater and tidal length-scales are computed from published data as well as from numerical simulations. An analysis of the database combined with numerical hydrodynamic simulations allows us to organize the results into a process-based classification of coastal rivers. The classification describes the scale, shape, and flow field transitions of coastal rivers as a function of discharge, floodplain slope, and offshore tidal range.
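The backwater and tidal length scales mentioned here are commonly approximated with simple formulas; the sketch below uses the standard textbook estimates (an assumption of this illustration, not necessarily the authors' exact computation):

```python
# Back-of-the-envelope length scales for coastal rivers:
# backwater length ~ depth / slope; tidal excursion ~ velocity amplitude * T / pi.
import math

def backwater_length(depth_m: float, slope: float) -> float:
    """L_b ~ h / S: distance over which the ocean boundary influences the river."""
    return depth_m / slope

def tidal_excursion(u_amplitude_ms: float, period_s: float = 12.42 * 3600) -> float:
    """Distance a water parcel travels over half an M2 tidal cycle."""
    return u_amplitude_ms * period_s / math.pi

# Mississippi-like example: ~15 m deep, slope ~4e-5 -> L_b on the order of 400 km.
print(backwater_length(15, 4e-5) / 1000, tidal_excursion(1.0) / 1000)  # km
```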
Illuminating the Depths of the MagIC (Magnetics Information Consortium) Database
NASA Astrophysics Data System (ADS)
Koppers, A. A. P.; Minnett, R.; Jarboe, N.; Jonestrask, L.; Tauxe, L.; Constable, C.
2015-12-01
The Magnetics Information Consortium (http://earthref.org/MagIC/) is a grass-roots cyberinfrastructure effort envisioned by the paleo-, geo-, and rock magnetic scientific community. Its mission is to archive their wealth of peer-reviewed raw data and interpretations from magnetics studies on natural and synthetic samples. Many of these valuable data are legacy datasets that were never published in their entirety, some resided in other databases that are no longer maintained, and others were never digitized from the field notebooks and lab work. Due to the volume of data collected, most studies, modern and legacy, only publish the interpreted results and, occasionally, a subset of the raw data. MagIC is making an extraordinary effort to archive these data in a single data model, including the raw instrument measurements if possible. This facilitates the reproducibility of the interpretations, the re-interpretation of the raw data as the community introduces new techniques, and the compilation of heterogeneous datasets that are otherwise distributed across multiple formats and physical locations. MagIC has developed tools to assist the scientific community in many stages of their workflow. Contributors easily share studies (in a private mode if so desired) in the MagIC Database with colleagues and reviewers prior to publication, publish the data online after the study is peer reviewed, and visualize their data in the context of the rest of the contributions to the MagIC Database. From organizing their data in the MagIC Data Model with an online editable spreadsheet, to validating the integrity of the dataset with automated plots and statistics, MagIC is continually lowering the barriers to transforming dark data into transparent and reproducible datasets. Additionally, this web application generalizes to other databases in MagIC's umbrella website (EarthRef.org) so that the Geochemical Earth Reference Model (http://earthref.org/GERM/) portal, Seamount Biogeosciences Network (http://earthref.org/SBN/), EarthRef Digital Archive (http://earthref.org/ERDA/) and EarthRef Reference Database (http://earthref.org/ERR/) benefit from its development.
NASA Astrophysics Data System (ADS)
Pasteka, Roman; Zahorec, Pavol; Mikuska, Jan; Szalaiova, Viktoria; Papco, Juraj; Krajnak, Martin; Kusnirak, David; Panisova, Jaroslava; Vajda, Peter; Bielik, Miroslav
2014-05-01
In this contribution, results of the ongoing project "Bouguer anomalies of new generation and the gravimetrical model of Western Carpathians (APVV-0194-10)" are presented. The existing homogenized regional database (212,478 points) was enlarged by approximately 107,500 archived detailed gravity measurements. These added gravity values were measured from 1976 to the present and therefore needed to be unified and reprocessed. The improved positions of more than 8,500 measured points were acquired by digitizing archive maps (we recognized some local errors within particular data sets). Besides the local errors (due to wrong positions, heights, or gravity values of measured points), we found some areas of systematic errors, probably due to gravity measurement or processing errors. Some of them were confirmed and subsequently corrected by field measurements within the frame of the current project. Special attention is paid to the recalculation of the terrain corrections: we used newly developed software as well as the latest version of the digital terrain model of Slovakia, DMR-3. The main improvement of the new terrain-correction algorithm is the ability to evaluate corrections at the real gravimeter position and to use 3D polyhedral body approximations (accepting the spherical approximation of the Earth's curvature). We performed several tests introducing non-standard distant relief effects. A new complete Bouguer anomaly map was constructed and transformed by means of higher-derivative operators (tilt derivatives, TDX, theta derivatives, and the new TDXAS transformation), using a regularization approach. A new, interesting regional lineament of probably neotectonic character was recognized in the new map of complete Bouguer anomalies, and it was also confirmed by in-situ field measurements.
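For orientation, the complete Bouguer anomaly combines the observed gravity with normal-gravity, free-air, Bouguer-slab, and terrain reductions. The sketch below uses the standard planar-slab constants purely as a simplified illustration of what the project computes far more carefully (spherical approximation, polyhedral terrain corrections):

```python
# Simplified textbook complete Bouguer anomaly (planar-slab constants assumed).
def complete_bouguer_anomaly(g_obs_mgal: float, g_normal_mgal: float,
                             height_m: float, terrain_corr_mgal: float,
                             density_gcc: float = 2.67) -> float:
    free_air = 0.3086 * height_m                      # free-air correction, mGal
    bouguer_slab = 0.04193 * density_gcc * height_m   # infinite-slab correction, mGal
    return g_obs_mgal - g_normal_mgal + free_air - bouguer_slab + terrain_corr_mgal

# Illustrative numbers only.
print(complete_bouguer_anomaly(980123.4, 980150.0, 850.0, 3.2))
```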
Claims-based risk model for first severe COPD exacerbation.
Stanford, Richard H; Nag, Arpita; Mapel, Douglas W; Lee, Todd A; Rosiello, Richard; Schatz, Michael; Vekeman, Francis; Gauthier-Loiselle, Marjolaine; Merrigan, J F Philip; Duh, Mei Sheng
2018-02-01
Objectives: To develop and validate a predictive model for first severe chronic obstructive pulmonary disease (COPD) exacerbation using health insurance claims data, and to validate the risk measure of the controller medication to total COPD treatment (controller and rescue) ratio (CTR). Methods: A predictive model was developed and validated in 2 managed care databases: the Truven Health MarketScan database and the Reliant Medical Group database. This secondary analysis assessed risk factors, including CTR, during the baseline period (Year 1) to predict the risk of severe exacerbation in the at-risk period (Year 2). Patients with COPD who were 40 years or older and had at least 1 COPD medication dispensed during the year following COPD diagnosis were included; subjects with severe exacerbations in the baseline year were excluded. Risk factors in the baseline period were included as potential predictors in multivariate analysis, and performance was evaluated using C-statistics. Results: The analysis included 223,824 patients. The greatest risk factors for first severe exacerbation were advanced age, chronic oxygen therapy usage, COPD diagnosis type, dispensing of 4 or more canisters of rescue medication, and having 2 or more moderate exacerbations. A CTR of 0.3 or greater was associated with a 14% lower risk of severe exacerbation. The model performed well, with C-statistics ranging from 0.711 to 0.714. Conclusions: This claims-based risk model can predict the likelihood of a first severe COPD exacerbation. The CTR could also potentially be used to target populations at greatest risk for severe exacerbations, which could be relevant for providers and payers in approaches to prevent severe exacerbations and reduce costs.
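The CTR measure validated here is straightforward to compute from claims data. A minimal sketch follows (hypothetical field names; the threshold is taken from the abstract):

```python
# Minimal sketch of the controller-to-total-treatment ratio (CTR).
def controller_to_total_ratio(controller_fills: int, rescue_fills: int) -> float:
    """CTR = controller / (controller + rescue); returns 0.0 if no COPD fills."""
    total = controller_fills + rescue_fills
    return controller_fills / total if total else 0.0

def lower_risk_flag(ctr: float, threshold: float = 0.3) -> bool:
    """Per the abstract, CTR >= 0.3 was associated with ~14% lower exacerbation risk."""
    return ctr >= threshold

print(controller_to_total_ratio(3, 7), lower_risk_flag(0.3))  # 0.3 True
```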
Introducing the GRACEnet/REAP Data Contribution, Discovery, and Retrieval System.
Del Grosso, S J; White, J W; Wilson, G; Vandenberg, B; Karlen, D L; Follett, R F; Johnson, J M F; Franzluebbers, A J; Archer, D W; Gollany, H T; Liebig, M A; Ascough, J; Reyes-Fox, M; Pellack, L; Starr, J; Barbour, N; Polumsky, R W; Gutwein, M; James, D
2013-07-01
Difficulties in accessing high-quality data on trace gas fluxes and performance of bioenergy/bioproduct feedstocks limit the ability of researchers and others to address environmental impacts of agriculture and the potential to produce feedstocks. To address those needs, the GRACEnet (Greenhouse gas Reduction through Agricultural Carbon Enhancement network) and REAP (Renewable Energy Assessment Project) research programs were initiated by the USDA Agricultural Research Service (ARS). A major product of these programs is the creation of a database with greenhouse gas fluxes, soil carbon stocks, biomass yield, nutrient, and energy characteristics, and input data for modeling cropped and grazed systems. The data include site descriptors (e.g., weather, soil class, spatial attributes), experimental design (e.g., factors manipulated, measurements performed, plot layouts), management information (e.g., planting and harvesting schedules, fertilizer types and amounts, biomass harvested, grazing intensity), and measurements (e.g., soil C and N stocks, plant biomass amount and chemical composition). To promote standardization of data and ensure that experiments were fully described, sampling protocols and a spreadsheet-based data-entry template were developed. Data were first uploaded to a temporary database for checking and then transferred to the central database. A Web-accessible application allows registered users to query and download data, including measurement protocols. Separate portals have been provided for each project (GRACEnet and REAP) at nrrc.ars.usda.gov/slgracenet/#/Home and nrrc.ars.usda.gov/slreap/#/Home. The database architecture and data entry template have proven flexible and robust for describing a wide range of field experiments and thus appear suitable for other natural resource research projects. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
Crawford, April D; Zucker, Tricia A; Williams, Jeffrey M; Bhavsar, Vibhuti; Landry, Susan H
2013-12-01
Although coaching is a popular approach for enhancing the quality of Tier 1 instruction, limited research has addressed observational measures specifically designed to focus coaching on evidence-based practices. This study explains the development of the prekindergarten (pre-k) Classroom Observation Tool (COT) designed for use in a data-based coaching model. We examined psychometric characteristics of the COT and explored how coaches and teachers used the COT goal-setting system. The study included 193 coaches working with 3,909 pre-k teachers in a statewide professional development program. Classrooms served 3 and 4 year olds (n = 56,390) enrolled mostly in Title I, Head Start, and other need-based pre-k programs. Coaches used the COT during a 2-hr observation at the beginning of the academic year. Teachers collected progress-monitoring data on children's language, literacy, and math outcomes three times during the year. Results indicated a theoretically supported eight-factor structure of the COT across language, literacy, and math instructional domains. Overall interrater reliability among coaches was good (.75). Although correlations with an established teacher observation measure were small, significant positive relations between COT scores and children's literacy outcomes indicate promising predictive validity. Patterns of goal-setting behaviors indicate teachers and coaches set an average of 43.17 goals during the academic year, and coaches reported that 80.62% of goals were met. Both coaches and teachers reported the COT was a helpful measure for enhancing quality of Tier 1 instruction. Limitations of the current study and implications for research and data-based coaching efforts are discussed. PsycINFO Database Record (c) 2013 APA, all rights reserved.
NASA Technical Reports Server (NTRS)
Winckelmans, G. S.; Lund, T. S.; Carati, D.; Wray, A. A.
1996-01-01
Subgrid-scale models for Large Eddy Simulation (LES) in both the velocity-pressure and the vorticity-velocity formulations were evaluated and compared in a priori tests using spectral Direct Numerical Simulation (DNS) databases of isotropic turbulence: 128³ DNS of forced turbulence (Re_λ = 95.8) filtered, using the sharp cutoff filter, to both 32³ and 16³ synthetic LES fields; 512³ DNS of decaying turbulence (Re_λ = 63.5) filtered to both 64³ and 32³ LES fields. Gaussian and top-hat filters were also used with the 128³ database. Different LES models were evaluated for each formulation: eddy-viscosity models, hyper eddy-viscosity models, mixed models, and scale-similarity models. Correlations between exact versus modeled subgrid-scale quantities were measured at three levels: tensor (traceless), vector (solenoidal 'force'), and scalar (dissipation) levels, and for both cases of uniform and variable coefficient(s). Different choices for the 1/T scaling appearing in the eddy viscosity were also evaluated. It was found that the models for the vorticity-velocity formulation produce higher correlations with the filtered DNS data than their counterparts in the velocity-pressure formulation. It was also found that the hyper eddy-viscosity model performs better than the eddy-viscosity model, in both formulations.
ERIC Educational Resources Information Center
Berman, Paul; And Others
This first-year report of the National Effective Transfer Consortium (NETC) summarizes the progress made by the member colleges in creating standardized measures of actual and expected transfer rates and of transfer effectiveness, and establishing a database that would enable valid comparisons among NETC colleges. Following background information…
Data-Based Detection of Potential Terrorist Attacks: Statistical and Graphical Methods
2010-06-01
[Extraction fragment: only citation and activity fragments of this report survive, including …Naren; Vasquez-Robinet, Cecilia; Watkinson, Jonathan, "A General Probabilistic Model of the PCR Process," Applied Mathematics and Computation 182(1…), September 2006; a seminar, Measuring the Effect of Length-Biased Sampling, Mathematical Sciences Section, National Security Agency, 19 September 2006; Committee on National Statistics, 9 February 2007; and an invited seminar, Statistical Tests for Bullet Lead Comparisons, Department of Mathematics, Butler…]
The Design and Implement of Tourism Information System Based on GIS
NASA Astrophysics Data System (ADS)
Chunchang, Fu; Nan, Zhang
Starting from the concept of a geographic information system (GIS), this paper discusses the main contents of geographic information systems and the key technological measures a GIS currently provides for a tourism information system, describes the specific requirements and goals for applying a tourism information system, and analyzes methods of realizing a relational database model for a tourism information system within a GIS application.
The Structure of Autism Symptoms as Measured by the Autism Diagnostic Observation Schedule
ERIC Educational Resources Information Center
Norris, Megan; Lecavalier, Luc; Edwards, Michael C.
2012-01-01
The current study tested several competing models of the autism phenotype using data from modules 1 and 3 of the ADOS. Participants included individuals with ASDs aged 3-18 years (N = 1,409) from the AGRE database. Confirmatory factor analyses were performed on total samples and subsamples based on age and level of functioning. Three primary…
Ice Nucleating Particles around the world - a global review
NASA Astrophysics Data System (ADS)
Kanji, Zamin A.; Atkinson, James; Sierau, Berko; Lohmann, Ulrike
2017-04-01
In the atmosphere, the formation of new ice particles at temperatures above -36 °C is due to a subset of aerosol called Ice Nucleating Particles (INP). However, the spatial and temporal evolution of such particles is poorly understood. Current modelling attempts to estimate the sources and transport of INP, but is hampered by the limited availability and accessibility of INP observations. As part of the EU FP7 project impact of Biogenic versus Anthropogenic emissions on Clouds and Climate: towards a Holistic UnderStanding (BACCHUS), historical and contemporary observations of INP have been collated into a database (http://www.bacchus-env.eu/in/) and are reviewed here. Outside of Europe and North America the coverage of measurements is sparse, especially for the modern-day climate - in many areas the only measurements available are from the mid-20th century. As well as an overview of all the data in the database, correlations with several accompanying variables are presented. For example, immersion-freezing INP appear to be negatively correlated with altitude, whereas CFDC-based condensation-freezing INP show no correlation with height. An initial global parameterisation of INP concentrations for use in modelling, taking into account freezing temperature and relative humidity, is provided.
Measurement and application of bidirectional reflectance distribution function
NASA Astrophysics Data System (ADS)
Liao, Fei; Li, Lin; Lu, Chengwen
2016-10-01
When a beam of light with a certain intensity and distribution reaches the surface of a material, the distribution of the diffused light is related to the incident angle, the receiving angle, the wavelength of the light, and the type of material. The Bidirectional Reflectance Distribution Function (BRDF) describes this distribution. For an optical system, the BRDFs of its optical and mechanical materials are unique, and calculating the stray light of the system requires correct BRDF data for all of these materials. BRDF is of fundamental importance for space remote sensors, where it is needed for precise radiometric calibration, and in the military field, where it can be used for object identification, target tracking, etc. In this paper, the BRDFs of 11 kinds of aerospace materials are measured, yielding more than 310,000 groups of BRDF data, and a BRDF database is established in China for the first time. With the BRDF data in the database, we can create the detector model and build the stray-light radiation surface model in stray-light analysis software, so that the stray radiation on the detector can be calculated correctly.
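For reference, the standard radiometric definition of the BRDF (general radiometry, not quoted from this paper) relates reflected radiance to incident irradiance:

```latex
% Standard BRDF definition: the ratio of reflected radiance dL_r in
% direction (theta_r, phi_r) to incident irradiance dE_i from
% direction (theta_i, phi_i), at wavelength lambda; units sr^{-1}.
f_r(\theta_i,\phi_i;\theta_r,\phi_r;\lambda)
  = \frac{\mathrm{d}L_r(\theta_r,\phi_r;\lambda)}
         {\mathrm{d}E_i(\theta_i,\phi_i;\lambda)}
  \quad [\mathrm{sr}^{-1}]
```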
An effective model for store and retrieve big health data in cloud computing.
Goli-Malekabadi, Zohreh; Sargolzaei-Javan, Morteza; Akbari, Mohammad Kazem
2016-08-01
The volume of healthcare data, including varied text, sound, and image types, is increasing day by day. Therefore, the storage and processing of these data is a necessary and challenging issue. Generally, relational databases are used for storing health data, but they are unable to handle the massive and diverse nature of these data. This study aimed at presenting a model based on NoSQL databases for the storage of healthcare data. Among the different types of NoSQL databases, document-based DBs were selected after a survey of the nature of health data. The presented model was implemented in a cloud environment to access its distribution properties. The data were then distributed across the database by applying sharding. The efficiency of the model was evaluated in comparison with the previous data model, the relational database, considering query time, data preparation, flexibility, and extensibility parameters. The results showed that the presented model performed approximately the same as SQL Server for "read" queries while acting more efficiently than SQL Server for "write" queries. The performance of the presented model was also better than SQL Server in terms of flexibility, data preparation, and extensibility. Based on these observations, the proposed model was more effective than relational databases for handling health data. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
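A minimal sketch of the document-oriented approach described here, using MongoDB via pymongo (the collection name, shard key, and record fields are illustrative assumptions, not taken from the paper):

```python
from pymongo import MongoClient

# Connect to a MongoDB deployment (for sharding, this would be a
# mongos router in front of a sharded cluster).
client = MongoClient("mongodb://localhost:27017")
records = client["health"]["patient_records"]

# Document databases tolerate heterogeneous records: one document may
# carry lab values, another a reference to an image blob, without a
# fixed relational schema.
records.insert_one({
    "patient_id": "P-0001",            # assumed shard key
    "visit_date": "2016-03-14",
    "notes": "routine follow-up, BP stable",
    "lab_results": {"hba1c": 6.1, "ldl": 102},
})
records.insert_one({
    "patient_id": "P-0002",
    "visit_date": "2016-03-15",
    "imaging": {"modality": "MRI", "file_ref": "gridfs://..."},
})

# A "read" query by the shard key is routed to a single shard.
print(records.find_one({"patient_id": "P-0001"}))
```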
The Material Supply Adjustment Process in RAMF-SM, Step 2
2016-06-01
The Risk Assessment and Mitigation Framework for Strategic Materials (RAMF-SM) is a suite of mathematical models and databases used to support the … and computes material shortfalls. Several mathematical models and dozens of databases, encompassing thousands of data items, support the …
NASA Astrophysics Data System (ADS)
Suckow, A. O.
2013-12-01
Measurements need post-processing to obtain results that are comparable between laboratories. Raw data may need to be corrected for blank, memory, drift (change of reference values with time), and linearity (dependence of reference on signal height), and normalized to international reference materials. Post-processing parameters need to be stored for the traceability of results. State-of-the-art stable-isotope correction schemes are available based on MS Excel (Geldern and Barth, 2012; Gröning, 2011) or MS Access (Coplen, 1998). These are specialized to stable isotope measurements only, often only to the post-processing of a single run. Embedding of the algorithms into a multipurpose database system was missing. This is necessary to combine results of different tracers (3H, 3He, 2H, 18O, CFCs, SF6...) or geochronological tools (sediment dating, e.g. with 210Pb, 137Cs), to relate them to attribute data (submitter, batch, project, geographical origin, depth in core, well information, etc.) and to apply further interpretation tools (e.g. lumped-parameter modelling). Database sub-systems to the LabData laboratory management system (Suckow and Dumke, 2001) are presented for stable isotopes and for gas chromatographic CFC and SF6 measurements. The sub-system for stable isotopes allows the following post-processing: 1. automated import from measurement software (Isodat, Picarro, LGR); 2. correction for sample-to-sample memory, linearity, and drift, and renormalization of the raw data. The sub-system for gas chromatography covers: 1. storage of all raw data; 2. storage of peak integration parameters; 3. correction for blank, efficiency, and linearity. The user interface allows interactive and graphical control of the post-processing and all corrections by export to and plotting in MS Excel, and is a valuable tool for quality control. The sub-databases are integrated into LabData, a multi-user client-server architecture using MS SQL Server as the back-end and an MS Access front-end, installed in four laboratories to date. Attribute data storage (unique ID for each subsample, origin, project context, etc.) and laboratory management features are included. Export routines to Excel (depth profiles, time series, all possible tracer-versus-tracer plots...) and modelling capabilities are add-ons. The source code is public domain and available under the GNU General Public License (GNU-GPL). References: Coplen, T.B., 1998. A manual for a laboratory information management system (LIMS) for light stable isotopes. Version 7.0. USGS open file report 98-284. Geldern, R.v., Barth, J.A.C., 2012. Optimization of instrument setup and post-run corrections for oxygen and hydrogen stable isotope measurements of water by isotope ratio infrared spectroscopy (IRIS). Limnology and Oceanography: Methods 10, 1024-1036. Gröning, M., 2011. Improved water δ2H and δ18O calibration and calculation of measurement uncertainty using a simple software tool. Rapid Communications in Mass Spectrometry 25, 2711-2720. Suckow, A., Dumke, I., 2001. A database system for geochemical, isotope hydrological and geochronological laboratories. Radiocarbon 43, 325-337.
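As an illustration of the kind of post-processing such a system automates, here is a minimal two-point normalization of raw δ values to an international reference scale (a generic scheme assuming two bracketing reference materials; this is not code from LabData, and the numbers are invented):

```python
def normalize_delta(raw, ref1_raw, ref1_true, ref2_raw, ref2_true):
    """Two-point normalization of a raw delta value (e.g. d18O).

    Generic stable-isotope practice: a linear mapping is fitted to two
    reference materials (e.g. VSMOW2 and SLAP2) measured in the same
    run, then applied to the samples.
    """
    slope = (ref2_true - ref1_true) / (ref2_raw - ref1_raw)
    return ref1_true + slope * (raw - ref1_raw)

# Hypothetical raw instrument readings for the two references and one
# sample, normalized to the VSMOW-SLAP d18O scale (SLAP = -55.5 permil).
d18o = normalize_delta(raw=-10.41,
                       ref1_raw=0.32,  ref1_true=0.0,    # VSMOW2
                       ref2_raw=-54.8, ref2_true=-55.5)  # SLAP2
print(f"normalized d18O = {d18o:.2f} permil")
```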
Boué, Stéphanie; Talikka, Marja; Westra, Jurjen Willem; Hayes, William; Di Fabio, Anselmo; Park, Jennifer; Schlage, Walter K; Sewer, Alain; Fields, Brett; Ansari, Sam; Martin, Florian; Veljkovic, Emilija; Kenney, Renee; Peitsch, Manuel C; Hoeng, Julia
2015-01-01
With the wealth of publications and data available, powerful and transparent computational approaches are required to represent measured data and scientific knowledge in a computable and searchable format. We developed a set of biological network models, scripted in the Biological Expression Language, that reflect causal signaling pathways across a wide range of biological processes, including cell fate, cell stress, cell proliferation, inflammation, tissue repair and angiogenesis in the pulmonary and cardiovascular context. This comprehensive collection of networks is now freely available to the scientific community in a centralized web-based repository, the Causal Biological Network database, which is composed of over 120 manually curated and well annotated biological network models and can be accessed at http://causalbionet.com. The website accesses a MongoDB, which stores all versions of the networks as JSON objects and allows users to search for genes, proteins, biological processes, small molecules and keywords in the network descriptions to retrieve biological networks of interest. The content of the networks can be visualized and browsed. Nodes and edges can be filtered and all supporting evidence for the edges can be browsed and is linked to the original articles in PubMed. Moreover, networks may be downloaded for further visualization and evaluation. Database URL: http://causalbionet.com © The Author(s) 2015. Published by Oxford University Press.
Drozdovitch, Vladimir; Zhukova, Olga; Germenchuk, Maria; Khrutchinsky, Arkady; Kukhta, Tatiana; Luckyanov, Nickolas; Minenko, Victor; Podgaiskaya, Marina; Savkin, Mikhail; Vakulovsky, Sergey; Voillequé, Paul; Bouville, André
2012-01-01
Results of all available meteorological and radiation measurements that were performed in Belarus during the first three months after the Chernobyl accident were collected from various sources and incorporated into a single database. Meteorological information such as precipitation, wind speed and direction, and temperature in localities were obtained from meteorological station facilities. Radiation measurements include gamma-exposure rate in air, daily fallout, concentration of different radionuclides in soil, grass, cow’s milk and water as well as total beta-activity in cow’s milk. Considerable efforts were made to evaluate the reliability of the measurements that were collected. The electronic database can be searched according to type of measurement, date, and location. The main purpose of the database is to provide reliable data that can be used in the reconstruction of thyroid doses resulting from the Chernobyl accident. PMID:23103580
Computing diffuse fraction of global horizontal solar radiation: A model comparison.
Dervishi, Sokol; Mahdavi, Ardeshir
2012-06-01
For simulation-based prediction of buildings' energy use or of expected gains from building-integrated solar energy systems, information on both the direct and the diffuse component of solar radiation is necessary. Available measured data are, however, typically restricted to global horizontal irradiance. There have thus been many efforts in the past to develop algorithms for deriving the diffuse fraction of solar irradiance. In this context, the present paper compares eight models for estimating the diffuse fraction of irradiance based on a database of measured irradiance from Vienna, Austria. These models generally involve mathematical formulations with multiple coefficients whose values are typically valid only for a specific location. Following a first comparison of the eight models, the three better-performing models were selected for a more detailed analysis, in which their coefficients were recalibrated to the Vienna data. The results suggest that some models can provide relatively reliable estimates of the diffuse fraction of global irradiance. The calibration procedure only slightly improved the models' performance.
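Models of this family typically express the diffuse fraction as a piecewise function of the clearness index k_t (global horizontal irradiance divided by extraterrestrial horizontal irradiance). As one widely cited example of the genre — the Erbs et al. (1982) correlation, which may or may not be among the eight models compared in this paper — a sketch:

```python
def diffuse_fraction_erbs(kt: float) -> float:
    """Diffuse fraction of global horizontal irradiance as a function
    of the clearness index kt, after the widely used Erbs et al. (1982)
    correlation. Shown as an example of this model family, not as the
    paper's calibrated model.
    """
    if kt <= 0.22:
        return 1.0 - 0.09 * kt
    if kt <= 0.80:
        return (0.9511 - 0.1604 * kt + 4.388 * kt**2
                - 16.638 * kt**3 + 12.336 * kt**4)
    return 0.165

# Overcast skies (low kt) are almost entirely diffuse; clear skies are not.
for kt in (0.2, 0.5, 0.75):
    print(f"kt = {kt:.2f} -> diffuse fraction = {diffuse_fraction_erbs(kt):.3f}")
```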
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lamont, Stephen Philip; Brisson, Marcia; Curry, Michael
2011-02-17
Nuclear forensics assessments to determine material process history require careful comparison of sample data to both measured and modeled nuclear material characteristics. Developing centralized databases, or nuclear forensics libraries, to house this information is an important step to ensure all relevant data will be available for comparison during a nuclear forensics analysis and to help expedite the assessment of material history. The approach most widely accepted by the international community at this time is the implementation of National Nuclear Forensics libraries, which would be developed and maintained by individual nations. This is an attractive alternative to an international database since it provides an understanding that each country has data on materials produced and stored within their borders, but eliminates the need to reveal any proprietary or sensitive information to other nations. To support the concept of National Nuclear Forensics libraries, the United States Department of Energy has developed a model library, based on a data dictionary, or set of parameters designed to capture all nuclear-forensic-relevant information about a nuclear material. Specifically, the information includes material identification, collection background and current location, analytical laboratories where measurements were made, material packaging and container descriptions, physical characteristics including mass and dimensions, chemical and isotopic characteristics, particle morphology or metallurgical properties, process history including facilities, and measurement quality assurance information. While not necessarily required, it may also be valuable to store modeled data sets, including reactor burn-up or enrichment cascade data, for comparison. It is fully expected that only a subset of this information is available or relevant for many materials, and much of the data populating a National Nuclear Forensics library would be process analytical or material accountability measurement data as opposed to a complete forensic analysis of each material in the library.
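The data dictionary described above enumerates the kinds of fields such a library record would carry. A hypothetical, highly simplified record structure (field names and types are invented for illustration; the actual DOE data dictionary is far more detailed):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ForensicsLibraryRecord:
    """Hypothetical sketch of a National Nuclear Forensics library
    entry, mirroring the categories listed in the abstract; all field
    names and types are illustrative assumptions."""
    material_id: str
    collection_background: str            # provenance and current location
    analytical_laboratories: list[str]    # where measurements were made
    packaging_description: str
    mass_g: float
    dimensions_mm: tuple[float, float, float]
    isotopic_composition: dict[str, float]  # e.g. {"U-235": 0.036}
    chemical_impurities: dict[str, float]   # ppm by element
    morphology: Optional[str] = None         # particle/metallurgical notes
    process_history: list[str] = field(default_factory=list)
    qa_notes: Optional[str] = None           # measurement quality assurance
    modeled_datasets: list[str] = field(default_factory=list)  # e.g. burn-up runs

record = ForensicsLibraryRecord(
    material_id="UOX-2011-007",
    collection_background="seized material; stored at national facility",
    analytical_laboratories=["Lab A"],
    packaging_description="double-sealed steel container",
    mass_g=12.4,
    dimensions_mm=(10.0, 10.0, 5.0),
    isotopic_composition={"U-235": 0.036, "U-238": 0.963},
    chemical_impurities={"Fe": 120.0},
)
```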
Excitation function of alpha-particle-induced reactions on natNi from threshold to 44 MeV
NASA Astrophysics Data System (ADS)
Uddin, M. S.; Kim, K. S.; Nadeem, M.; Sudár, S.; Kim, G. N.
2017-05-01
Excitation functions of the natNi(α,x)62,63,65Zn, natNi(α,x)56,57Ni and natNi(α,x)56,57,58m+gCo reactions were measured from the respective thresholds to 44 MeV using the stacked-foil activation technique. The tests for the beam characterization are described. The radioactivity was measured using HPGe γ-ray detectors. Theoretical calculations of α-particle-induced reactions on natNi were performed using the nuclear model code TALYS-1.8. Some results are new; the others strengthen the existing database. Our experimental data were compared with the results of the nuclear model calculations to describe the reaction mechanism.
USDA-ARS?s Scientific Manuscript database
The use of swine in biomedical research has increased dramatically in the last decade. Diverse genomic and proteomic databases have been developed to facilitate research using human and rodent models. Current porcine gene databases, however, lack the robust annotation to study pig models that are...
Linking Multiple Databases: Term Project Using "Sentences" DBMS.
ERIC Educational Resources Information Center
King, Ronald S.; Rainwater, Stephen B.
This paper describes a methodology for use in teaching an introductory Database Management System (DBMS) course. Students master basic database concepts through the use of a multiple component project implemented in both relational and associative data models. The associative data model is a new approach for designing multi-user, Web-enabled…
Designing Corporate Databases to Support Technology Innovation
ERIC Educational Resources Information Center
Gultz, Michael Jarett
2012-01-01
Based on a review of the existing literature on database design, this study proposed a unified database model to support corporate technology innovation. This study assessed potential support for the model based on the opinions of 200 technology industry executives, including Chief Information Officers, Chief Knowledge Officers and Chief Learning…
Liu, Zhijian; Li, Hao; Cao, Guoqing
2017-07-30
Indoor airborne culturable bacteria are sometimes harmful to human health, so a quick estimate of their concentration is particularly valuable. However, measuring indoor microorganism concentrations (e.g., bacteria) usually requires substantial time, economic cost, and manpower. In this paper, we aim to provide a quicker solution: using knowledge-based machine learning to estimate the concentration of indoor airborne culturable bacteria from several measurable indoor environmental indicators: indoor particulate matter (PM2.5 and PM10), temperature, relative humidity, and CO₂ concentration. Our results show that a general regression neural network (GRNN) model can provide a quick and reasonably accurate estimate, based on model training and testing using an experimental database with 249 data groups.
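A general regression neural network is essentially Nadaraya-Watson kernel regression over the training set. A minimal sketch follows (the feature list mirrors the paper, but the data, target relation, and smoothing parameter are synthetic assumptions, not the authors' model):

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=0.5):
    """General regression neural network (after Specht): a Gaussian
    kernel-weighted average of training targets, weighted by distance
    from the query point. Minimal sketch, not the authors' code."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma**2))
    return np.dot(w, y_train) / np.sum(w)

# Hypothetical training data: [PM2.5, PM10, temperature (C),
# relative humidity (%), CO2 (ppm)] -> bacteria concentration (CFU/m3).
rng = np.random.default_rng(0)
X = rng.uniform([10, 20, 18, 30, 400], [80, 150, 28, 70, 1200], size=(249, 5))
y = 2.0 * X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 5, 249)  # synthetic target

# Standardize features so no single indicator dominates the distance.
mu, sd = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sd
query = (np.array([35, 60, 24, 55, 800]) - mu) / sd
print(f"estimated concentration: {grnn_predict(Xs, y, query):.1f} CFU/m3")
```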
2013-01-01
Background Validation studies of secondary datasets used to characterize neighborhood food businesses generally evaluate how accurately the database represents the true situation on the ground. Depending on the research objectives, the characterization of the business environment may tolerate some inaccuracies (e.g. minor imprecisions in location or errors in business names). Furthermore, if the number of false negatives (FNs) and false positives (FPs) is balanced within a given area, one could argue that the database still provides a “fair” representation of existing resources in this area. Yet, traditional validation measures do not relax matching criteria, and treat FNs and FPs independently. Through the field validation of food businesses found in a Canadian database, this paper proposes alternative criteria for validity. Methods Field validation of the 2010 Enhanced Points of Interest (EPOI) database (DMTI Spatial®) was performed in 2011 in 12 census tracts (CTs) in Montreal, Canada. Some 410 food outlets were extracted from the database and 484 were observed in the field. First, traditional measures of sensitivity and positive predictive value (PPV) accounting for every single mismatch between the field and the database were computed. Second, relaxed measures of sensitivity and PPV that tolerate mismatches in business names or slight imprecisions in location were assessed. A novel measure of representativity that further allows for compensation between FNs and FPs within the same business category and area was proposed. Representativity was computed at CT level as ((TPs +|FPs-FNs|)/(TPs+FNs)), with TPs meaning true positives, and |FPs-FNs| being the absolute value of the difference between the number of FNs and the number of FPs within each outlet category. Results The EPOI database had a "moderate" capacity to detect an outlet present in the field (sensitivity: 54.5%) or to list only the outlets that actually existed in the field (PPV: 64.4%). Relaxed measures of sensitivity and PPV were respectively 65.5% and 77.3%. The representativity of the EPOI database was 77.7%. Conclusions The novel measure of representativity might serve as an alternative to traditional validity measures, and could be more appropriate in certain situations, depending on the nature and scale of the research question. PMID:23782570
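The three measures compared in this study can be computed per outlet category and census tract. A small sketch implementing them exactly as defined in the abstract (the counts below are invented):

```python
def sensitivity(tp: int, fn: int) -> float:
    """Share of field-observed outlets that the database detected."""
    return tp / (tp + fn)

def ppv(tp: int, fp: int) -> float:
    """Positive predictive value: share of database outlets that were
    confirmed in the field."""
    return tp / (tp + fp)

def representativity(tp: int, fp: int, fn: int) -> float:
    """Representativity as stated in the abstract:
    (TPs + |FPs - FNs|) / (TPs + FNs), computed within one outlet
    category and census tract, allowing FPs and FNs to offset."""
    return (tp + abs(fp - fn)) / (tp + fn)

# Hypothetical counts for one outlet category in one census tract.
tp, fp, fn = 30, 12, 18
print(f"sensitivity      = {sensitivity(tp, fn):.3f}")
print(f"PPV              = {ppv(tp, fp):.3f}")
print(f"representativity = {representativity(tp, fp, fn):.3f}")
```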
NASA Astrophysics Data System (ADS)
Boichard, Jean-Luc; Brissebrat, Guillaume; Cloche, Sophie; Eymard, Laurence; Fleury, Laurence; Mastrorillo, Laurence; Moulaye, Oumarou; Ramage, Karim
2010-05-01
The AMMA project includes aircraft, ground-based and ocean measurements, an intensive use of satellite data and diverse modelling studies. The AMMA database therefore aims at storing a great amount and a large variety of data, and at providing the data as rapidly and safely as possible to the AMMA research community. In order to stimulate the exchange of information and collaboration between researchers from different disciplines or using different tools, the database provides a detailed description of the products and uses standardized formats. The AMMA database contains: - AMMA field campaign datasets; - historical data in West Africa from 1850 (operational networks and previous scientific programs); - satellite products from past and future satellites, (re-)mapped on a regular latitude/longitude grid and stored in NetCDF format (CF Convention); - model outputs from atmosphere or ocean operational (re-)analyses and forecasts, and from research simulations, processed in the same way as the satellite products. Before accessing the data, every user has to sign the AMMA data and publication policy. This charter only covers the use of data in the framework of scientific objectives, and categorically excludes the redistribution of data to third parties and usage for commercial applications. Some collaboration between data producers and users, and mention of the AMMA project in any publication, are also required. The AMMA database and the associated on-line tools have been fully developed and are managed by two teams in France (IPSL Database Centre, Paris, and OMP, Toulouse). Users can access the data of both data centres through a single web portal. This website is composed of different modules: - Registration: forms to register and to read and sign the data use charter when a user visits for the first time; - Data access interface: a user-friendly tool for building a data extraction request by selecting various criteria such as location, time and parameters; the request can concern local, satellite and model data; - Documentation: a catalogue of all the available data and their metadata. These tools have been developed using standard and free languages and software: - a Linux system with an Apache web server and a Tomcat application server; - J2EE tools: the JSF and Struts frameworks, Hibernate; - relational database management systems: PostgreSQL and MySQL; - an OpenLDAP directory. In order to facilitate access to the data by African scientists, the complete system has been mirrored at the AGRHYMET Regional Centre in Niamey and has been operational there since January 2009. Users can now access metadata and request data through either of two equivalent portals: http://database.amma-international.org or http://amma.agrhymet.ne/amma-data.
Financing a future for public biological data.
Ellis, L B; Kalumbi, D
1999-09-01
The public web-based biological database infrastructure is a source of both wonder and worry. Users delight in the ever increasing amounts of information available; database administrators and curators worry about long-term financial support. An earlier study of 153 biological databases (Ellis and Kalumbi, Nature Biotechnol., 16, 1323-1324, 1998) determined that near future (1-5 year) funding for over two-thirds of them was uncertain. More detailed data are required to determine the magnitude of the problem and offer possible solutions. This study examines the finances and use statistics of a few of these organizations in more depth, and reviews several economic models that may help sustain them. Six organizations were studied. Their administrative overhead is fairly low; non-administrative personnel and computer-related costs account for 77% of expenses. One smaller, more specialized US database, in 1997, had 60% of total access from US domains; a majority (56%) of its US accesses came from commercial domains, although only 2% of the 153 databases originally studied received any industrial support. The most popular model used to gain industrial support is asymmetric pricing: preferentially charging the commercial users of a database. At least five biological databases have recently begun using this model. Advertising is another model which may be useful for the more general, more heavily used sites. Microcommerce has promise, especially for databases that do not attract advertisers, but needs further testing. The least income reported for any of the databases studied was $50,000/year; applying this rate to 400 biological databases (a lower limit of the number of such databases, many of which require far larger resources) would mean annual support need of at least $20 million. To obtain this level of support is challenging, yet failure to accept the challenge could be catastrophic. lynda@tc.umn.edu
2012-01-01
Background Q-Sweat is a device used to evaluate post-ganglionic sudomotor function by assessing the sweat response. This study aimed to establish a normative database for the Q-Sweat test among Chinese individuals, since this type of information is currently lacking. Results One hundred fifty (150) healthy volunteers, 76 men and 74 women with an age range of 22–76 years, were included. Skin temperature and sweat onset latency measured at the four sites (i.e., the forearm, proximal leg, distal leg, and foot) did not significantly correlate with age, gender, body height (BH), body weight (BW), or body mass index (BMI), but the total sweat volume measured at all four sites significantly correlated with sex, BH, and BW. Except for the distal leg, the total sweat volume measured at the other three sites had a significant correlation with BMI. In terms of gender, men had larger total sweat volumes, with median differences at the forearm, proximal leg, distal leg, and foot of 0.591 μl, 0.693 μl, 0.696 μl, and 0.358 μl, respectively. Regarding BW (≥62 and <62 kg), those with BW ≥62 kg had larger total sweat volumes; median differences at the forearm, proximal leg, distal leg, and foot were 0.538 μl, 0.744 μl, 0.695 μl, and 0.338 μl, respectively. There was an uneven distribution of male and female participants between the two BW groups. In all conditions, the total sweat volume recorded at the foot was the smallest. Conclusion This is the first report of a normative database of sweat response in Chinese participants evaluated using the Q-Sweat device. This normative database can help guide further research on post-ganglionic sudomotor function and related clinical practice involving Chinese populations. PMID:22682097
NASA Astrophysics Data System (ADS)
Shchepashchenko, D.; Chave, J.; Phillips, O. L.; Davies, S. J.; Lewis, S. L.; Perger, C.; Dresel, C.; Fritz, S.; Scipal, K.
2017-12-01
Forest monitoring is high on the scientific and political agenda. Global measurements of forest height, biomass and how they change with time are urgently needed as essential climate and ecosystem variables. The Forest Observation System - FOS (http://forest-observation-system.net/) is an international cooperation to establish a global in-situ forest biomass database to support earth observation and to encourage investment in relevant field-based observations and science. FOS aims to link the Remote Sensing (RS) community with ecologists who measure forest biomass and estimate biodiversity in the field, for a common benefit. The benefit of FOS for the RS community is the partnering of the most established teams and networks that manage permanent forest plots globally, overcoming data-sharing issues and introducing a standard biomass data flow from tree-level measurement to plot-level aggregation, served in the form most suitable for the RS community. Ecologists benefit from FOS through improved access to global biomass information, data standards, gap identification and potentially improved funding opportunities to address the known gaps and deficiencies in the data. FOS collaborates closely with the Center for Tropical Forest Science (CTFS-ForestGEO), ForestPlots.net (incl. RAINFOR, AfriTRON and T-FORCES), AusCover, the Tropical managed Forests Observatory and the IIASA network. FOS is an open initiative, and other networks and teams are most welcome to join. The online database provides open access to both metadata (e.g. who conducted the measurements, where, and which parameters) and actual data for a subset of plots for which the authors have granted access. A minimum set of database values includes: principal investigator and institution, plot coordinates, number of trees, forest type and tree species composition, wood density, canopy height and above-ground biomass of trees. Plot size is 0.25 ha or larger. The database will be essential for validating and calibrating satellite observations and various models.
A Summary of the Naval Postgraduate School Research Program
1989-08-30
[Table-of-contents fragment; recoverable topic headings: Fundamental Theory for Automatically Combining Changes to Software Systems; Database-System Approach to Software Engineering Environments (SEEs); Multilevel Database Security; Temporal Database Management and Real-Time Database Computers; The Multi-lingual, Multi-Model, Multi-Backend Database.]
Photosynthesis-irradiance parameters of marine phytoplankton: synthesis of a global data set
NASA Astrophysics Data System (ADS)
Bouman, Heather A.; Platt, Trevor; Doblin, Martina; Figueiras, Francisco G.; Gudmundsson, Kristinn; Gudfinnsson, Hafsteinn G.; Huang, Bangqin; Hickman, Anna; Hiscock, Michael; Jackson, Thomas; Lutz, Vivian A.; Mélin, Frédéric; Rey, Francisco; Pepin, Pierre; Segura, Valeria; Tilstone, Gavin H.; van Dongen-Vogels, Virginie; Sathyendranath, Shubha
2018-02-01
The photosynthetic performance of marine phytoplankton varies in response to a variety of factors, environmental and taxonomic. One of the aims of the MArine primary Production: model Parameters from Space (MAPPS) project of the European Space Agency is to assemble a global database of photosynthesis-irradiance (P-E) parameters from a range of oceanographic regimes as an aid to examining the basin-scale variability in the photophysiological response of marine phytoplankton and to use this information to improve the assignment of P-E parameters in the estimation of global marine primary production using satellite data. The MAPPS P-E database, which consists of over 5000 P-E experiments, provides information on the spatio-temporal variability in the two P-E parameters (the assimilation number, Pm^B, and the initial slope, α^B, where the superscript B indicates normalisation to the concentration of chlorophyll) that are fundamental inputs for models (satellite-based and otherwise) of marine primary production that use chlorophyll as the state variable. Quality-control measures consisted of removing samples with abnormally high parameter values, and flags were added to denote whether the spectral quality of the incubator lamp was used to calculate a broad-band value of α^B. The MAPPS database provides a photophysiological data set that is unprecedented in number of observations and in spatial coverage. The database will be useful to a variety of research communities, including marine ecologists, biogeochemical modellers, remote-sensing scientists and algal physiologists. The compiled data are available at https://doi.org/10.1594/PANGAEA.874087 (Bouman et al., 2017).
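For context, these two parameters define the light-saturation curve used in such primary-production models. One common formulation without photoinhibition (given here for reference, not quoted from the MAPPS paper) is:

```latex
% Photosynthesis-irradiance curve: P^B(E) is chlorophyll-normalized
% photosynthesis at irradiance E, alpha^B the initial slope, and
% P^B_m the assimilation number (the saturation level).
P^{B}(E) = P^{B}_{m}\left(1 - \exp\!\left(-\frac{\alpha^{B} E}{P^{B}_{m}}\right)\right)
```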
Rapid Response Tools and Datasets for Post-fire Hydrological Modeling
NASA Astrophysics Data System (ADS)
Miller, Mary Ellen; MacDonald, Lee H.; Billmire, Michael; Elliot, William J.; Robichaud, Pete R.
2016-04-01
Rapid response is critical following natural disasters. Flooding, erosion, and debris flows are a major threat to life, property and municipal water supplies after moderate- and high-severity wildfires. The problem is that mitigation measures must be rapidly implemented if they are to be effective, but they are expensive and cannot be applied everywhere. Fires, runoff, and erosion risks are also highly heterogeneous in space, so there is an urgent need for a rapid, spatially-explicit assessment. Past post-fire modeling efforts have usually relied on lumped, conceptual models because of the lack of readily available, spatially-explicit data layers on the key controls of topography, vegetation type, climate, and soil characteristics. The purpose of this project is to develop a set of spatially-explicit data layers for use in process-based models such as WEPP, and to make these data layers freely available. The resulting interactive online modeling database (http://geodjango.mtri.org/geowepp/) is now operational and publicly available for 17 western states in the USA. After a fire, users only need to upload a soil burn severity map, and this is combined with the pre-existing data layers to generate the model inputs needed for spatially explicit models such as GeoWEPP (Renschler, 2003). The development of this online database has allowed us to predict post-fire erosion and various remediation scenarios in just 1-7 days for six fires ranging in size from 4 to 540 km². These initial successes have stimulated efforts to further improve the spatial extent and amount of data, and to add functionality to support the USGS debris flow model, batch processing for Disturbed WEPP (Elliot et al., 2004) and ERMiT (Robichaud et al., 2007), and erosion modeling for other land uses, such as agriculture or mining. The design and techniques used to create the database and the modeling interface are readily repeatable for any area or country that has the necessary topography, climate, soil, and land cover datasets.
Nonparametric Bayesian Modeling for Automated Database Schema Matching
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferragut, Erik M; Laska, Jason A
2015-01-01
The problem of merging databases arises in many government and commercial applications. Schema matching, a common first step, identifies equivalent fields between databases. We introduce a schema matching framework that builds nonparametric Bayesian models for each field and compares them by computing the probability that a single model could have generated both fields. Our experiments show that our method is more accurate and faster than the existing instance-based matching algorithms in part because of the use of nonparametric Bayesian models.
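The comparison at the heart of this framework is a Bayesian model-comparison (evidence) ratio: the probability that a single field model generated both columns versus two separate models. A simplified sketch for categorical fields using a Dirichlet-multinomial evidence (a parametric stand-in for the paper's nonparametric models; the data and prior are assumptions):

```python
import math
from collections import Counter

def log_evidence(counts, alpha=1.0):
    """Log marginal likelihood of categorical count data under a
    symmetric Dirichlet(alpha) prior (Dirichlet-multinomial evidence)."""
    n, k = sum(counts), len(counts)
    return (math.lgamma(k * alpha) - math.lgamma(k * alpha + n)
            + sum(math.lgamma(alpha + c) - math.lgamma(alpha) for c in counts))

def match_score(col_a, col_b):
    """Log Bayes factor: one shared model generating both fields vs.
    two independent models. Positive favors 'same field'."""
    support = sorted(set(col_a) | set(col_b))
    ca, cb = Counter(col_a), Counter(col_b)
    counts_a = [ca[v] for v in support]
    counts_b = [cb[v] for v in support]
    merged = [ca[v] + cb[v] for v in support]
    return log_evidence(merged) - (log_evidence(counts_a) + log_evidence(counts_b))

# Two gender-like columns with matching value distributions score positively...
print(match_score(["M", "F", "F", "M"] * 50, ["F", "M", "M", "F"] * 50))
# ...while a gender column vs. a state column scores strongly negatively.
print(match_score(["M", "F", "F", "M"] * 50, ["TX", "CA", "NY", "CA"] * 50))
```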
A database of microwave and sub-millimetre ice particle single scattering properties
NASA Astrophysics Data System (ADS)
Ekelund, Robin; Eriksson, Patrick
2016-04-01
Ice crystal particles are today a large contributing factor as to why cold-type clouds such as cirrus remain a large uncertainty in global climate models and measurements. The reason for this is the complex and varied morphology in which ice particles appear, as compared to liquid droplets, which are in general spheroidal; this makes the description of the electromagnetic properties of ice particles more complicated. Single scattering properties of frozen hydrometeors have traditionally been approximated by representing the particles as spheres using Mie theory. While such practices may work well in radio applications, where the size parameter of the particles is generally low, comparisons with measurements and simulations show that this assumption is insufficient when observing tropospheric cloud ice in the microwave or sub-millimetre regions. In order to assist the radiative transfer and remote sensing communities, a database of single scattering properties of semi-realistic particles is being produced. The data are being produced using DDA (Discrete Dipole Approximation) code, which can treat arbitrarily shaped particles, and T-matrix code for simpler shapes when found sufficiently accurate. The aim is mainly to cover frequencies used by the upcoming ICI (Ice Cloud Imager) mission, with launch in 2022. Examples of particles to be included are columns, plates, bullet rosettes, sector snowflakes and aggregates. The idea is to treat particles with good average optical properties with respect to the multitude of particle and aggregate types appearing in nature. The database will initially only cover macroscopically isotropic orientation, but will eventually also include horizontally aligned particles. Databases of DDA-computed particle properties already exist, with varying accessibility; the goal of this database is to complement existing data. Regarding distribution, the plan is for the database to be available in conjunction with the ARTS (Atmospheric Radiative Transfer Simulator) project.
Wang, Jingjing; Sun, Tao; Gao, Ni; Menon, Desmond Dev; Luo, Yanxia; Gao, Qi; Li, Xia; Wang, Wei; Zhu, Huiping; Lv, Pingxin; Liang, Zhigang; Tao, Lixin; Liu, Xiangtong; Guo, Xiuhua
2014-01-01
To determine the value of contourlet textural features obtained from solitary pulmonary nodules in two-dimensional CT images for the diagnosis of lung cancer. A total of 6,299 CT images were acquired from 336 patients: 1,454 benign pulmonary nodule images from 84 patients (50 male, 34 female) and 4,845 malignant from 252 patients (150 male, 102 female). In addition, nineteen patient information categories, comprising seven demographic parameters and twelve morphological features, were also collected. A contourlet transform was used to extract fourteen types of textural features. These were then used to establish three support vector machine models: one based on the nineteen collected patient information categories, another on the contourlet textural features, and a third containing both sets of information. Ten-fold cross-validation was used to evaluate the diagnostic results for the three databases, with sensitivity, specificity, accuracy, area under the curve (AUC), precision, Youden index, and F-measure as the assessment criteria. In addition, the synthetic minority over-sampling technique (SMOTE) was used to preprocess the unbalanced data. Using the database containing both textural features and patient information, sensitivity, specificity, accuracy, AUC, precision, Youden index, and F-measure were 0.95, 0.71, 0.89, 0.89, 0.92, 0.66, and 0.93, respectively. These results were higher than those obtained using the database without textural features (0.82, 0.47, 0.74, 0.67, 0.84, 0.29, and 0.83, respectively) as well as the database comprising only textural features (0.81, 0.64, 0.67, 0.72, 0.88, 0.44, and 0.85, respectively). Using SMOTE as a pre-processing procedure, a new balanced database was generated, comprising 5,816 benign and 5,815 malignant ROIs, and accuracy was 0.93. Our results indicate that combining the contourlet textural features of solitary pulmonary nodules in CT images with patient profile information could potentially improve the diagnosis of lung cancer.
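The modeling pipeline described — SMOTE rebalancing followed by an SVM evaluated with ten-fold cross-validation — can be sketched with scikit-learn and imbalanced-learn (the feature matrix and labels below are synthetic stand-ins for the contourlet features and patient data, not the study's dataset):

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: rows are ROIs, columns are contourlet texture
# features plus patient-profile variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 33))            # 14 texture + 19 profile features
y = np.r_[np.zeros(150), np.ones(450)]    # imbalanced benign/malignant labels
X[y == 1] += 0.6                          # give the classes some separation

# Rebalance the minority class with SMOTE, then fit an RBF-kernel SVM.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))

# Ten-fold cross-validation, as in the study.
scores = cross_val_score(model, X_bal, y_bal, cv=10, scoring="accuracy")
print(f"10-fold CV accuracy: {scores.mean():.3f}")
```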
NASA Astrophysics Data System (ADS)
Kuzma, H. A.; Boyle, K.; Pullman, S.; Reagan, M. T.; Moridis, G. J.; Blasingame, T. A.; Rector, J. W.; Nikolaou, M.
2010-12-01
A Self Teaching Expert System (SeTES) is being developed for the analysis, design and prediction of gas production from shales. An Expert System is a computer program designed to answer questions or clarify uncertainties that its designers did not necessarily envision which would otherwise have to be addressed by consultation with one or more human experts. Modern developments in computer learning, data mining, database management, web integration and cheap computing power are bringing the promise of expert systems to fruition. SeTES is a partial successor to Prospector, a system to aid in the identification and evaluation of mineral deposits developed by Stanford University and the USGS in the late 1970s, and one of the most famous early expert systems. Instead of the text dialogue used in early systems, the web user interface of SeTES helps a non-expert user to articulate, clarify and reason about a problem by navigating through a series of interactive wizards. The wizards identify potential solutions to queries by retrieving and combining together relevant records from a database. Inferences, decisions and predictions are made from incomplete and noisy inputs using a series of probabilistic models (Bayesian Networks) which incorporate records from the database, physical laws and empirical knowledge in the form of prior probability distributions. The database is mainly populated with empirical measurements, however an automatic algorithm supplements sparse data with synthetic data obtained through physical modeling. This constitutes the mechanism for how SeTES self-teaches. SeTES’ predictive power is expected to grow as users contribute more data into the system. Samples are appropriately weighted to favor high quality empirical data over low quality or synthetic data. Finally, a set of data visualization tools digests the output measurements into graphical outputs.
Human Thermal Model Evaluation Using the JSC Human Thermal Database
NASA Technical Reports Server (NTRS)
Cognata, T.; Bue, G.; Makinen, J.
2011-01-01
The human thermal database developed at the Johnson Space Center (JSC) is used to evaluate a set of widely used human thermal models. This database will facilitate a more accurate evaluation of human thermoregulatory response in a variety of situations, including those that might otherwise prove too dangerous for actual testing, such as extreme hot or cold splashdown conditions. This set includes the Wissler human thermal model, a model that has been widely used to predict the human thermoregulatory response to a variety of cold and hot environments. These models are statistically compared to the current database, which contains experiments on human subjects primarily in air, drawn from a literature survey ranging between 1953 and 2004 and from a suited experiment recently performed by the authors, for a quantitative study of the relative strength and predictive quality of the models. Human thermal modeling has considerable long-term utility to human space flight. Such models provide a tool to predict crew survivability in support of vehicle design and to evaluate crew response in untested environments. It is to the benefit of any such model not only to collect relevant experimental data to correlate it against, but also to maintain an experimental standard or benchmark for future development in a readily and rapidly searchable, software-accessible format. The human thermal database project is intended to do just that: to collect relevant data from literature and experimentation and to store the data in a database structure for immediate and future use as a benchmark against which to judge human thermal models, in identifying model strengths and weaknesses, to support model development and improve correlation, and to statistically quantify a model's predictive quality.
Bio-optical data integration based on a 4 D database system approach
NASA Astrophysics Data System (ADS)
Imai, N. N.; Shimabukuro, M. H.; Carmo, A. F. C.; Alcantara, E. H.; Rodrigues, T. W. P.; Watanabe, F. S. Y.
2015-04-01
Bio-optical characterization of water bodies requires spatio-temporal data on Inherent Optical Properties and Apparent Optical Properties, which allow the comprehension of the underwater light field and support the development of models for monitoring water quality. Measurements are taken to represent optical properties along a column of water, and the spectral data must then be related to depth. However, the spatial positions of measurements may differ because the collecting instruments vary, and the records may not refer to the same wavelengths. An additional difficulty is that distinct instruments store data in different formats. A data integration approach is needed to make these large, multi-source data sets suitable for analysis. It then becomes possible to evaluate semi-empirical models, even automatically, preceded by preliminary quality-control tasks. In this work, a solution for the stated scenario is presented, based on a spatial (geographic) database approach using an object-relational Database Management System (DBMS), chosen for its ability to represent all data collected in the field together with data obtained by laboratory analysis and Remote Sensing images taken at the time of field data collection. This data integration approach leads to a 4D representation, since its coordinate system includes 3D spatial coordinates (planimetric position and depth) and the time at which each measurement was taken. The PostgreSQL DBMS, extended by the PostGIS module, was adopted to provide the ability to manage spatial/geospatial data. A prototype was developed with the main tools an analyst needs to prepare the data sets for analysis.
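A minimal sketch of such a 4D-keyed table in PostgreSQL/PostGIS, accessed from Python via psycopg2 (the table and column names are invented for illustration; the paper's actual schema is not given):

```python
import psycopg2

# Hypothetical 4D bio-optical schema: horizontal position as a PostGIS
# point, plus depth and timestamp, one row per wavelength sample.
DDL = """
CREATE TABLE IF NOT EXISTS radiometry (
    id            SERIAL PRIMARY KEY,
    geom          GEOMETRY(Point, 4326) NOT NULL,  -- planimetric coordinates
    depth_m       DOUBLE PRECISION NOT NULL,       -- third spatial dimension
    measured_at   TIMESTAMPTZ NOT NULL,            -- fourth (time) dimension
    wavelength_nm DOUBLE PRECISION NOT NULL,
    value         DOUBLE PRECISION NOT NULL,
    quantity      TEXT NOT NULL                    -- e.g. 'Ed', 'Lu', 'Rrs'
);
"""

conn = psycopg2.connect("dbname=biooptics")
with conn, conn.cursor() as cur:
    cur.execute(DDL)
    cur.execute(
        "INSERT INTO radiometry (geom, depth_m, measured_at, "
        "wavelength_nm, value, quantity) "
        "VALUES (ST_SetSRID(ST_MakePoint(%s, %s), 4326), %s, %s, %s, %s)",
        (-48.63, -22.49, 1.5, "2014-10-02T14:30:00Z", 550.0, 0.87, "Ed"),
    )
```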
Vila, Javier; Bowman, Joseph D.; Richardson, Lesley; Kincl, Laurel; Conover, Dave L.; McLean, Dave; Mann, Simon; Vecchia, Paolo; van Tongeren, Martie; Cardis, Elisabeth
2016-01-01
Introduction: To date, occupational exposure assessment of electromagnetic fields (EMF) has relied on occupation-based measurements and exposure estimates. However, misclassification due to between-worker variability remains an unsolved challenge. A source-based approach, supported by detailed subject data on determinants of exposure, may allow for a more individualized exposure assessment. Detailed information on the use of occupational sources of exposure to EMF was collected as part of the INTERPHONE-INTEROCC study. To support a source-based exposure assessment effort within this study, this work aimed to construct a measurement database for the occupational sources of EMF exposure identified, assembling available measurements from the scientific literature. Methods: First, a comprehensive literature search was performed for published and unpublished documents containing exposure measurements for the EMF sources identified, both a priori and from the answers of study subjects. Then, the measurements identified were assessed for quality and relevance to the study objectives. Finally, the selected measurements and complementary information were compiled into an Occupational Exposure Measurement Database (OEMD). Results: Currently, the OEMD contains 1624 sets of measurements (>3000 entries) for 285 sources of EMF exposure, organized by frequency band (0 Hz to 300 GHz) and dosimetry type. Ninety-five documents were selected from the literature (almost 35% of them unpublished technical reports), containing measurements considered informative and valid for our purpose. The measurement data and complementary information collected from these documents came from 16 different countries and cover the time period between 1974 and 2013. Conclusion: We have constructed a database with measurements and complementary information for the most common sources of exposure to EMF in the workplace, based on the responses to the INTERPHONE-INTEROCC study questionnaire. This database covers the entire EMF frequency range and represents the most comprehensive resource of information on occupational EMF exposure. It is available at www.crealradiation.com/index.php/en/databases. PMID:26493616
Schell, Scott R
2006-02-01
Enforcement of the Health Insurance Portability and Accountability Act (HIPAA) began in April 2003. Designed as a law mandating health insurance availability when coverage was lost, HIPAA imposed sweeping and broad-reaching protections of patient privacy. These changes dramatically altered clinical research by placing sizeable regulatory burdens upon investigators, with the threat of severe and costly federal and civil penalties. This report describes the development of an algorithmic approach to clinical research database design based upon a central key-shared data (CK-SD) model, allowing researchers to easily analyze, distribute, and publish clinical research without disclosure of HIPAA Protected Health Information (PHI). Three clinical database formats (a small clinical trial, operating room performance, and genetic microchip array datasets) were modeled using standard structured query language (SQL)-compliant databases. The CK database was created to contain PHI data, whereas a shareable SD database was generated in real time containing relevant clinical outcome information while protecting PHI items. Small (< 100 records), medium (< 50,000 records), and large (> 10^8 records) model databases were created, and the resultant data models were evaluated in consultation with an HIPAA compliance officer. The SD database models complied fully with HIPAA regulations, and the resulting "shared" data could be distributed freely. Unique patient identifiers were not required for treatment or outcome analysis. Age data were resolved to single-integer years, grouping patients aged > 89 years. Admission, discharge, treatment, and follow-up dates were replaced with enrollment year, and follow-up/outcome intervals were calculated, eliminating the original data. Two additional data fields identified as PHI (treating physician and facility) were replaced with integer values, and the original data corresponding to these values were stored in the CK database. Use of the algorithm at the time of database design did not increase cost or design effort. The CK-SD model for clinical database design provides an algorithm for investigators to create, maintain, and share clinical research data compliant with HIPAA regulations. This model is applicable to new projects and large institutional datasets, and should decrease the regulatory effort required for the conduct of clinical research. Application of the design algorithm early in the clinical research enterprise does not increase cost or the effort of data collection.
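The de-identification steps described (integer ages capped at a > 89 group, dates reduced to enrollment year with computed intervals, physician and facility replaced by integer keys held only in the CK database) can be sketched as follows. This is an illustration of the described algorithm, not the authors' implementation, and all record field names are hypothetical:

```python
from datetime import date

ck_keys: dict[str, int] = {}   # PHI -> integer key, kept only in the CK database

def ck_key(value: str) -> int:
    """Replace an identifying value (physician, facility) with an
    integer; the mapping lives only in the protected CK database."""
    return ck_keys.setdefault(value, len(ck_keys) + 1)

def deidentify(record: dict) -> dict:
    """Build a shareable SD record following the CK-SD steps as
    described in the abstract (field names are hypothetical)."""
    age = record["age_years"]
    admitted, followup = record["admission_date"], record["followup_date"]
    return {
        "age_years": 90 if age > 89 else age,        # group ages > 89
        "enrollment_year": admitted.year,             # dates -> year only
        "followup_interval_days": (followup - admitted).days,
        "physician_key": ck_key(record["physician"]),
        "facility_key": ck_key(record["facility"]),
        "outcome": record["outcome"],                 # clinical data retained
    }

sd = deidentify({
    "age_years": 93, "admission_date": date(2004, 5, 2),
    "followup_date": date(2005, 1, 20), "physician": "Dr. X",
    "facility": "General Hospital", "outcome": "recovered",
})
print(sd)
```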
An Online Database for Informing Ecological Network Models: http://kelpforest.ucsc.edu
Beas-Luna, Rodrigo; Novak, Mark; Carr, Mark H.; Tinker, Martin T.; Black, August; Caselle, Jennifer E.; Hoban, Michael; Malone, Dan; Iles, Alison
2014-01-01
Ecological network models and analyses are recognized as valuable tools for understanding the dynamics and resiliency of ecosystems, and for informing ecosystem-based approaches to management. However, few databases exist that can provide the life history, demographic and species interaction information necessary to parameterize ecological network models. Faced with the difficulty of synthesizing the information required to construct models for kelp forest ecosystems along the West Coast of North America, we developed an online database (http://kelpforest.ucsc.edu/) to facilitate the collation and dissemination of such information. Many of the database's attributes are novel, yet the structure is applicable and adaptable to other ecosystem modeling efforts. Information for each taxonomic unit includes stage-specific life history, demography, and body-size allometries. Species interactions include trophic, competitive, facilitative, and parasitic forms. Each data entry is temporally and spatially explicit. The online data entry interface allows researchers anywhere to contribute and access information. Quality control is facilitated by attributing each entry to unique contributor identities and source citations. The database has proven useful as an archive of species- and ecosystem-specific information in the development of several ecological network models, for informing management actions, and for education purposes (e.g., undergraduate and graduate training). To facilitate adaptation of the database by other researchers for other ecosystems, the code and technical details on how to customize this database and apply it to other ecosystems are freely available at the following link (https://github.com/kelpforest-cameo/databaseui). PMID:25343723
NASA Technical Reports Server (NTRS)
Campbell, William J.; Short, Nicholas M., Jr.; Roelofs, Larry H.; Dorfman, Erik
1991-01-01
A methodology for optimizing organization of data obtained by NASA earth and space missions is discussed. The methodology uses a concept based on semantic data modeling techniques implemented in a hierarchical storage model. The modeling is used to organize objects in mass storage devices, relational database systems, and object-oriented databases. The semantic data modeling at the metadata record level is examined, including the simulation of a knowledge base and semantic metadata storage issues. The semantic data model hierarchy and its application for efficient data storage is addressed, as is the mapping of the application structure to the mass storage.
Soil organic carbon stocks in Alaska estimated with spatial and pedon data
Bliss, Norman B.; Maursetter, J.
2010-01-01
Temperatures in high-latitude ecosystems are increasing faster than the average rate of global warming, which may lead to a positive feedback for climate change by increasing the respiration rates of soil organic C. If a positive feedback is confirmed, soil C will represent a source of greenhouse gases that is not currently considered in international protocols to regulate C emissions. We present new estimates of the stocks of soil organic C in Alaska, calculated by linking spatial and field data developed by the USDA NRCS. The spatial data are from the State Soil Geographic database (STATSGO), and the field and laboratory data are from the National Soil Characterization Database, also known as the pedon database. The new estimates range from 32 to 53 Pg of soil organic C for Alaska, formed by linking the spatial and field data using the attributes of Soil Taxonomy. For modelers, we recommend an estimation method based on taxonomic subgroups with interpolation for missing areas, which yields an estimate of 48 Pg. This is a substantial increase over the 13 Pg estimated from the STATSGO data alone as originally distributed in 1994, but the increase reflects different estimation methods and is not a measure of the change in C on the landscape. Pedon samples were collected between 1952 and 2002, so the results do not represent a single point in time. The linked databases provide an improved basis for modeling the impacts of climate change on net ecosystem exchange.
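The linking step described here — joining map-unit areas to pedon-derived carbon densities through Soil Taxonomy attributes, with interpolation for unmatched areas — can be sketched as a simple table join. The subgroup names, areas, and densities below are illustrative placeholders, not values from the NRCS databases.

```python
import pandas as pd

# Hypothetical miniature stand-ins for the pedon and STATSGO tables.
pedons = pd.DataFrame({
    "subgroup": ["Typic Historthels", "Typic Haplorthels", "Typic Cryaquepts"],
    "c_density_kg_m2": [55.0, 30.0, 18.0],   # measured organic C per unit area
})
mapunits = pd.DataFrame({
    "subgroup": ["Typic Historthels", "Typic Cryaquepts", "Typic Cryorthents"],
    "area_km2": [120_000.0, 300_000.0, 80_000.0],
})

# Mean C density per taxonomic subgroup from the pedon (field) data.
density = pedons.groupby("subgroup")["c_density_kg_m2"].mean()

linked = mapunits.join(density, on="subgroup")
# Interpolate unmatched map units with the overall pedon mean, as the
# recommended method does for areas lacking matching pedon samples.
linked["c_density_kg_m2"] = linked["c_density_kg_m2"].fillna(density.mean())

# kg/m2 x km2 x 1e6 m2/km2 = kg; 1 Pg = 1e12 kg.
stock_pg = (linked["c_density_kg_m2"] * linked["area_km2"] * 1e6).sum() / 1e12
print(f"Soil organic C stock: {stock_pg:.2f} Pg")
```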
Wind tunnel measurements for dispersion modelling of vehicle wakes
NASA Astrophysics Data System (ADS)
Carpentieri, Matteo; Kumar, Prashant; Robins, Alan
2012-12-01
Wind tunnel measurements downwind of reduced-scale car models have been made to study the wake regions in detail, test the usefulness of existing vehicle wake models, and extract key information needed for dispersion modelling in vehicle wakes. The experiments simulated a car moving in still air. These objectives were met by (i) the experimental characterisation of the flow, turbulence and concentration fields in both the near- and far-wake regions, (ii) a preliminary assessment of existing wake models using the experimental database, and (iii) a comparison of previous field measurements in the wake of a real diesel car with the wind tunnel measurements. The experiments highlighted the very large gradients of velocity and concentration that exist, in particular, in the near wake. Of course, the measured fields are strongly dependent on the geometry of the modelled vehicle, and a generalisation to other vehicles may prove difficult. The methodology applied in the present study, although improvable, could constitute a first step towards the development of mathematical parameterisations. Experimental results were also compared with the estimates from two wake models. It was found that they can adequately describe the far wake of a vehicle in terms of velocities, but that a better characterisation in terms of turbulence and pollutant dispersion is needed. Parameterised models able to predict velocities and concentrations with fine enough detail at the near-wake scale do not exist.
A case study for a digital seabed database: Bohai Sea engineering geology database
NASA Astrophysics Data System (ADS)
Tianyun, Su; Shikui, Zhai; Baohua, Liu; Ruicai, Liang; Yanpeng, Zheng; Yong, Wang
2006-07-01
This paper discusses the design of an ORACLE-based Bohai Sea engineering geology database, covering requirements analysis, conceptual structure analysis, logical structure analysis, physical structure analysis and security design. In the study, we used the object-oriented Unified Modeling Language (UML) to model the conceptual structure of the database, and used the powerful data management functions provided by the object-relational database ORACLE to organize and manage the storage space and improve its security performance. By this means, the database can provide rapid and highly effective performance in data storage, maintenance and query, satisfying the application requirements of the Bohai Sea Oilfield Paradigm Area Information System.
Review of Methods for Buildings Energy Performance Modelling
NASA Astrophysics Data System (ADS)
Krstić, Hrvoje; Teni, Mihaela
2017-10-01
Research presented in this paper gives a brief review of methods used for modelling the energy performance of buildings. The paper also gives a comprehensive review of the advantages and disadvantages of the available methods, as well as the input parameters used for modelling buildings' energy performance. The European EPBD Directive obliges the implementation of an energy certification procedure, which gives insight into buildings' energy performance via existing energy certificate databases. Some of the methods for modelling buildings' energy performance mentioned in this paper were developed by employing data sets of buildings which have already undergone an energy certification procedure. Such a database is used in this paper, where the majority of buildings in the database have already undergone some form of partial retrofitting - replacement of windows or installation of thermal insulation - but still have poor energy performance. The case study presented in this paper utilizes an energy certificate database obtained from residential units in Croatia (over 400 buildings) in order to determine the dependence between buildings' energy performance and variables from the database by using statistical dependence tests. Building energy performance in the database is expressed as a building energy efficiency rate (from A+ to G), which is based on the specific annual energy needs for heating for referential climatic data [kWh/(m2a)]. Independent variables in the database are the surfaces and volume of the conditioned part of the building, building shape factor, energy used for heating, CO2 emission, building age and year of reconstruction. The research results give an insight into the possibilities of the methods used for modelling buildings' energy performance, together with an analysis of the dependencies between building energy performance as a dependent variable and the independent variables from the database. The presented results could be used for the development of a new predictive model of building energy performance.
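Since the efficiency rate is an ordinal grade (A+ to G), a rank-based dependence test is a natural fit for this kind of analysis. The snippet below is a generic illustration on simulated stand-in data, not the Croatian certificate database; the variable names and the simulated relationship are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical stand-ins for two certificate-database columns:
# building shape factor and specific annual energy needs for heating.
shape_factor = rng.uniform(0.3, 1.2, size=400)
heating_kwh_m2a = 120 * shape_factor + rng.normal(0, 15, size=400)

# Spearman's rank correlation works for ordinal, not just interval, scales,
# which suits grade-style dependent variables.
rho, p = stats.spearmanr(shape_factor, heating_kwh_m2a)
print(f"rho = {rho:.2f}, p = {p:.1e}")
```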
Doloc-Mihu, Anca; Calabrese, Ronald L
2016-01-01
The underlying mechanisms that support robustness in neuronal networks are as yet unknown. However, recent studies provide evidence that neuronal networks are robust to natural variations, modulation, and environmental perturbations of parameters, such as maximal conductances of intrinsic membrane and synaptic currents. Here we sought a method for assessing robustness, which might easily be applied to large brute-force databases of model instances. Starting with groups of instances with appropriate activity (e.g., tonic spiking), our method classifies instances into much smaller subgroups, called families, in which all members vary only by the one parameter that defines the family. By analyzing the structures of families, we developed measures of robustness for activity type. Then, we applied these measures to our previously developed model database, HCO-db, of a two-neuron half-center oscillator (HCO), a neuronal microcircuit from the leech heartbeat central pattern generator where the appropriate activity type is alternating bursting. In HCO-db, the maximal conductances of five intrinsic and two synaptic currents were varied over eight values (leak reversal potential also varied, five values). We focused on how variations of particular conductance parameters maintain normal alternating bursting activity while still allowing for functional modulation of period and spike frequency. We explored the trade-off between robustness of activity type and desirable change in activity characteristics when intrinsic conductances are altered and identified the hyperpolarization-activated (h) current as an ideal target for modulation. We also identified ensembles of model instances that closely approximate physiological activity and can be used in future modeling studies.
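The family construction described above — partitioning appropriate-activity instances into groups whose members differ in exactly one parameter — reduces to a group-by over all remaining parameters. A minimal pandas sketch under assumed column names (the actual HCO-db schema is not reproduced here):

```python
import pandas as pd

# Illustrative parameter columns; the real database varies seven maximal
# conductances and the leak reversal potential.
PARAMS = ["g_h", "g_leak", "g_CaS", "g_K2", "g_SynS", "E_leak"]

def families(db: pd.DataFrame, varied: str) -> list[pd.DataFrame]:
    """Families defined by `varied`: subsets of appropriate-activity instances
    whose members differ only in that one parameter."""
    fixed = [p for p in PARAMS if p != varied]
    return [g for _, g in db.groupby(fixed) if len(g) > 1]

def mean_family_size(db: pd.DataFrame, varied: str) -> float:
    """A simple robustness measure: the larger the families, the more values
    of `varied` preserve the appropriate activity type."""
    fams = families(db, varied)
    return sum(len(f) for f in fams) / len(fams) if fams else 0.0

# Hypothetical usage on an export of alternating-bursting instances:
# db = pd.read_csv("hco_db_bursting.csv")
# print(mean_family_size(db, "g_h"))
```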
NASA Technical Reports Server (NTRS)
Shepherd, J. Marshall; Einaudi, Franco (Technical Monitor)
2000-01-01
The Tropical Rainfall Measuring Mission (TRMM) as a part of NASA's Earth System Enterprise is the first mission dedicated to measuring tropical rainfall through microwave and visible sensors, and includes the first spaceborne rain radar. Tropical rainfall comprises two-thirds of global rainfall. It is also the primary distributor of heat through the atmosphere's circulation. It is this circulation that defines Earth's weather and climate. Understanding rainfall and its variability is crucial to understanding and predicting global climate change. Weather and climate models need an accurate assessment of the latent heating released as tropical rainfall occurs. Currently, cloud model-based algorithms are used to derive latent heating based on rainfall structure. Ultimately, these algorithms can be applied to actual data from TRMM. This study investigates key underlying assumptions used in developing the latent heating algorithms. For example, the standard algorithm is highly dependent on a system's rainfall amount and structure. It also depends on an a priori database of model-derived latent heating profiles based on the aforementioned rainfall characteristics. Unanswered questions remain concerning the sensitivity of latent heating profiles to environmental conditions (both thermodynamic and kinematic), regionality, and seasonality. This study investigates and quantifies such sensitivities and seeks to determine the optimal latent heating profile database based on the results. Ultimately, the study seeks to produce an optimized latent heating algorithm based not only on rainfall structure but also hydrometeor profiles.
NASA Astrophysics Data System (ADS)
Brissebrat, Guillaume; Mastrorillo, Laurence; Ramage, Karim; Boichard, Jean-Luc; Cloché, Sophie; Fleury, Laurence; Klenov, Ludmila; Labatut, Laurent; Mière, Arnaud
2013-04-01
The international HyMeX (HYdrological cycle in the Mediterranean EXperiment) project aims at a better understanding and quantification of the hydrological cycle and related processes in the Mediterranean, with emphasis on high-impact weather events, inter-annual to decadal variability of the Mediterranean coupled system, and associated trends in the context of global change. The project includes long-term monitoring of environmental parameters, intensive field campaigns, use of satellite data, modelling studies, as well as post-event field surveys and value-added product processing. The HyMeX database therefore incorporates various dataset types from different disciplines, either operational or research. The database relies on a strong collaboration between the OMP and IPSL data centres. Field data, which are 1D time series, maps or pictures, are managed by the OMP team, while gridded data (satellite products, model outputs, radar data...) are managed by the IPSL team. At present, the HyMeX database contains about 150 datasets, including 80 hydrological, meteorological, ocean and soil in situ datasets, 30 radar datasets, 15 satellite products, 15 atmosphere, ocean and land surface model outputs from operational (re-)analysis or forecasts and from research simulations, and 5 post-event survey datasets. The data catalogue complies with international standards (ISO 19115; INSPIRE; Directory Interchange Format; Global Change Master Directory Thesaurus). It includes all the datasets stored in the HyMeX database, as well as external datasets relevant to the project. All the data, whatever their type, are accessible through a single gateway. The database website http://mistrals.sedoo.fr/HyMeX offers different tools: - A registration procedure which enables any scientist to accept the data policy and apply for a user database account. - A search tool to browse the catalogue using thematic, geographic and/or temporal criteria. - Sorted lists of the datasets by thematic keywords, by measured parameters, by instruments or by platform type. - Forms to document observations or products that will be provided to the database. - A shopping-cart web interface to order in situ data files. - Ftp facilities to access gridded data. The website will soon offer new facilities. Many in situ datasets have already been homogenized and inserted in a relational database, in order to enable more accurate data selection and download of different datasets in a shared format. Interoperability between the two data centres will be enhanced by the OpenDAP communication protocol associated with the Thredds catalogue software, which may also be implemented in other data centres that manage data of interest for the HyMeX project. In order to meet the operational needs of the HyMeX 2012 campaigns, a day-to-day quick-look and report display website has also been developed: http://sop.hymex.org. It offers a convenient way to browse meteorological conditions and data during the campaign periods.
Steyaert, Louis T.; Loveland, Thomas R.; Brown, Jesslyn F.; Reed, Bradley C.
1993-01-01
Environmental modelers are testing and evaluating a prototype land cover characteristics database for the conterminous United States developed by the EROS Data Center of the U.S. Geological Survey and the University of Nebraska Center for Advanced Land Management Information Technologies. This database was developed from multitemporal, 1-kilometer advanced very high resolution radiometer (AVHRR) data for 1990 and various ancillary data sets such as elevation, ecological regions, and selected climatic normals. Several case studies using this database were analyzed to illustrate the integration of satellite remote sensing and geographic information systems technologies with land-atmosphere interaction models at a variety of spatial and temporal scales. The case studies are representative of contemporary environmental simulation modeling at local to regional levels in global change research, land and water resource management, and environmental risk assessment. The case studies feature land surface parameterizations for atmospheric mesoscale and global climate models; biogenic-hydrocarbon emissions models; distributed-parameter watershed and other hydrological models; and various ecological models of ecosystem dynamics, biogeochemical cycles, ecotone variability, and equilibrium vegetation. The case studies demonstrate the importance of multitemporal AVHRR data in developing and maintaining a flexible, near-realtime land cover characteristics database. Moreover, such a flexible database is needed to derive various vegetation classification schemes, to aggregate data for nested models, to develop remote sensing algorithms, and to provide data on dynamic landscape characteristics. The case studies illustrate how such a database supports research on spatial heterogeneity, land use, sensitivity analysis, and scaling issues involving regional extrapolations and parameterizations of dynamic land processes within simulation models.
A kinetics database and scripts for PHREEQC
NASA Astrophysics Data System (ADS)
Hu, B.; Zhang, Y.; Teng, Y.; Zhu, C.
2017-12-01
Kinetics of geochemical reactions has been increasingly used in numerical models to simulate coupled flow, mass transport, and chemical reactions. However, kinetic data are scattered in the literature, and assembling a kinetic dataset for a modeling project is an intimidating task for most. In order to facilitate the application of kinetics in geochemical modeling, we assembled kinetic parameters into a database for the geochemical simulation program PHREEQC (version 3.0). Kinetic data were collected from the literature, and our database includes kinetic data for over 70 minerals. The rate equations are also programmed into scripts in the Basic language. Using the new kinetic database, we simulated the reaction path of the albite dissolution process using various rate equations from the literature. The simulations with three different rate equations gave different reaction paths at different time scales. Another application involves a coupled reactive transport model simulating the advance of an acid plume at an acid mine drainage site associated with the Bear Creek Uranium tailings pond. Geochemical reactions including calcite, gypsum, and illite were simulated with PHREEQC using the new kinetic database. The simulation results successfully demonstrated the utility of the new kinetic database.
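Mineral dissolution rate laws of the kind collected in such databases commonly take a transition-state-theory form, r = k (A/V)(1 - Q/K). As a rough illustration of how one such rate equation drives a reaction-path calculation, the Python sketch below integrates that form for a single mineral; the rate constant, surface area, and equilibrium constant are placeholder values, not entries from the database, and real PHREEQC rate scripts are written in its embedded Basic.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (not database) values for a TST-style dissolution rate law:
# r = k * (A/V) * (1 - Q/K).
k = 1e-12    # rate constant, mol m^-2 s^-1 (assumed)
A_V = 10.0   # reactive surface area per unit fluid volume, m^2 L^-1 (assumed)
K = 1e-3     # equilibrium constant surrogate for the overall reaction (assumed)

def rate(t, c):
    Q = c[0]   # crude ion-activity-product surrogate: dissolved concentration
    return [k * A_V * max(0.0, 1.0 - Q / K)]

# Integrate over ~10 years; the rate slows as the solution approaches
# equilibrium (Q -> K), which is what shapes the reaction path.
sol = solve_ivp(rate, (0.0, 3.15e8), [0.0], max_step=1e6)
print(f"dissolved after 10 y: {sol.y[0, -1]:.3e} mol/L")
```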
A spatial-temporal system for dynamic cadastral management.
Nan, Liu; Renyi, Liu; Guangliang, Zhu; Jiong, Xie
2006-03-01
A practical spatio-temporal database (STDB) technique for dynamic urban land management is presented. One of the STDB models, the expanded Base State with Amendments (BSA) model, is selected as the basis for developing the dynamic cadastral management technique. Two approaches, Section Fast Indexing (SFI) and Storage Factors of Variable Granularity (SFVG), are used to improve the efficiency of the BSA model. Both spatial graphic data and attribute data are stored, through a succinct engine, in a standard relational database management system (RDBMS) for the actual implementation of the BSA model. The spatio-temporal database is divided into three interdependent sub-databases: the present DB, the history DB and the procedures-tracing DB. The efficiency of database operation is improved by connecting to the database in the bottom layer of Microsoft SQL Server. The spatio-temporal system can be provided at low cost while satisfying the basic needs of urban land management in China. The approaches presented in this paper may also be of significance to countries where land patterns change frequently or to agencies whose financial resources are limited.
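The core of the BSA model is that only one full snapshot (the base state) is stored, and any past or present state is reconstructed by replaying time-ordered amendments. A minimal sketch of that reconstruction logic, with hypothetical parcel attributes and assuming amendments arrive in time order:

```python
from bisect import bisect_right

class BSAStore:
    """Minimal Base State with Amendments: one full snapshot plus a
    time-ordered list of amendments; the state at time t is reconstructed
    by replaying every amendment made up to and including t."""

    def __init__(self, base_time, base_state: dict):
        self.base_time = base_time
        self.base_state = base_state
        self.amendments = []   # (time, parcel_id, new_attributes), time-ordered

    def amend(self, time, parcel_id, attrs):
        self.amendments.append((time, parcel_id, attrs))

    def state_at(self, t) -> dict:
        state = dict(self.base_state)
        i = bisect_right([a[0] for a in self.amendments], t)
        for _, parcel, attrs in self.amendments[:i]:
            state[parcel] = attrs   # a later amendment supersedes an earlier one
        return state

db = BSAStore(2000, {"parcel-7": {"owner": "A", "use": "farmland"}})
db.amend(2003, "parcel-7", {"owner": "B", "use": "residential"})
print(db.state_at(2001))   # base state, unchanged
print(db.state_at(2004))   # amendment applied
```

The indexing approaches named above (SFI, SFVG) exist precisely to avoid replaying long amendment chains; this sketch omits them.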
Aeroacoustic Measurements of a Wing/Slat Model
NASA Astrophysics Data System (ADS)
Mendoza, Jeff M.; Brooks, Thomas F.; Humphreys, William M.
2002-01-01
Aeroacoustic evaluations of high-lift devices have been carried out in the Quiet Flow Facility of the NASA Langley Research Center. The present paper deals with detailed flow and acoustic measurements that have been made to understand, and to possibly predict and reduce, the noise from a wing leading-edge slat configuration. The acoustic database is obtained by a moveable Small Aperture Directional Array (SADA) of microphones designed to electronically steer to different portions of the models under study. The slat is shown to be a uniformly distributed noise source. The data were processed such that spectra and directivity were determined with respect to a one-foot span of slat. The spectra are normalized in various fashions to demonstrate the character of the slat noise. In order to equate portions of the spectra to different slat noise components, trailing-edge noise predictions using measured slat boundary layer parameters as inputs are compared to the measured slat noise spectra.
The Footprint Database and Web Services of the Herschel Space Observatory
NASA Astrophysics Data System (ADS)
Dobos, László; Varga-Verebélyi, Erika; Verdugo, Eva; Teyssier, David; Exter, Katrina; Valtchanov, Ivan; Budavári, Tamás; Kiss, Csaba
2016-10-01
Data from the Herschel Space Observatory is freely available to the public but no uniformly processed catalogue of the observations has been published so far. To date, the Herschel Science Archive does not contain the exact sky coverage (footprint) of individual observations and supports search for measurements based on bounding circles only. Drawing on previous experience in implementing footprint databases, we built the Herschel Footprint Database and Web Services for the Herschel Space Observatory to provide efficient search capabilities for typical astronomical queries. The database was designed with the following main goals in mind: (a) provide a unified data model for meta-data of all instruments and observational modes, (b) quickly find observations covering a selected object and its neighbourhood, (c) quickly find every observation in a larger area of the sky, (d) allow for finding solar system objects crossing observation fields. As a first step, we developed a unified data model of observations of all three Herschel instruments for all pointing and instrument modes. Then, using telescope pointing information and observational meta-data, we compiled a database of footprints. As opposed to methods using pixellation of the sphere, we represent sky coverage in an exact geometric form allowing for precise area calculations. For easier handling of Herschel observation footprints with rather complex shapes, two algorithms were implemented to reduce the outline. Furthermore, a new visualisation tool to plot footprints with various spherical projections was developed. Indexing of the footprints using Hierarchical Triangular Mesh makes it possible to quickly find observations based on sky coverage, time and meta-data. The database is accessible via a web site http://herschel.vo.elte.hu and also as a set of REST web service functions, which makes it readily usable from programming environments such as Python or IDL. The web service allows downloading footprint data in various formats including Virtual Observatory standards.
Andreo, Verónica; Glass, Gregory; Shields, Timothy; Provensal, Cecilia; Polop, Jaime
2011-09-01
We constructed a model to predict the potential distribution of Oligoryzomys longicaudatus, the reservoir of Andes virus (genus Hantavirus), in Argentina. We developed an extensive database of occurrence records from published studies and our own surveys and compared two methods to model the probability of O. longicaudatus presence: logistic regression and the MaxEnt algorithm. The environmental variables used were tree, grass and bare soil cover from MODIS imagery, and altitude and 19 bioclimatic variables from the WorldClim database. The models' performances were evaluated and compared by both threshold-dependent and threshold-independent measures. The best models included tree and grass cover, mean diurnal temperature range, and precipitation of the warmest and coldest seasons. The potential distribution maps for O. longicaudatus predicted the highest occurrence probabilities along the Andes range from 32°S, narrowing southwards. They also predicted high probabilities for the south-central area of Argentina, reaching the Atlantic coast. The Hantavirus Pulmonary Syndrome (HPS) cases coincided with mean occurrence probabilities of 95 and 77% for the logistic and MaxEnt models, respectively. HPS transmission zones in Argentine Patagonia matched the areas with the highest probability of presence. Therefore, colilargo presence probability may provide an approximate risk of transmission and act as an early tool to guide control and prevention plans.
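Of the two methods compared, the logistic regression side is straightforward to sketch. The example below fits presence/absence against simulated covariates standing in for the MODIS cover fractions and bioclimatic variables; the coefficients and data are invented for illustration, and MaxEnt fitting is not shown.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500

# Hypothetical environmental covariates at presence/absence points.
X = np.column_stack([
    rng.uniform(0, 100, n),   # tree cover (%)
    rng.uniform(0, 100, n),   # grass cover (%)
    rng.uniform(5, 18, n),    # mean diurnal temperature range (degC)
])
# Simulated "truth": presence more likely with cover, less with large
# diurnal range (signs chosen only to make the example behave sensibly).
logit = 0.04 * X[:, 0] + 0.02 * X[:, 1] - 0.3 * X[:, 2] + 1.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)
p_presence = model.predict_proba(X)[:, 1]   # values a probability map would hold
print(model.coef_, p_presence[:5].round(2))
```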
GraQL: A Query Language for High-Performance Attributed Graph Databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chavarría-Miranda, Daniel; Castellana, Vito G.; Morari, Alessandro
Graph databases have gained increasing interest in the last few years due to the emergence of data sources which are not easily analyzable in traditional relational models or for which a graph data model is the natural representation. In order to understand the design and implementation choices for an attributed graph database backend and query language, we have started to design our infrastructure for attributed graph databases. In this paper, we describe the design considerations of our in-memory attributed graph database system with a particular focus on the data definition and query language components.
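As a toy illustration of the data model — vertices and edges both carrying free-form attribute sets that queries filter on — the following sketch implements a minimal in-memory attributed graph with an equality-match query. It illustrates the concept only; it does not reproduce GraQL syntax or the paper's backend.

```python
from dataclasses import dataclass, field

@dataclass
class AttributedGraph:
    """Minimal in-memory attributed graph: vertices and edges both carry
    free-form attribute dictionaries that queries can match against."""
    vertices: dict = field(default_factory=dict)   # id -> attributes
    edges: list = field(default_factory=list)      # (src, dst, attributes)

    def add_vertex(self, vid, **attrs):
        self.vertices[vid] = attrs

    def add_edge(self, src, dst, **attrs):
        self.edges.append((src, dst, attrs))

    def match_edges(self, **conditions):
        """Edges whose attributes satisfy all equality conditions -- a toy
        analogue of an attributed-graph pattern match."""
        return [e for e in self.edges
                if all(e[2].get(k) == v for k, v in conditions.items())]

g = AttributedGraph()
g.add_vertex("a", kind="person", name="Ada")
g.add_vertex("b", kind="account")
g.add_edge("a", "b", kind="owns", since=2014)
print(g.match_edges(kind="owns"))
```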
Patient satisfaction with nursing care: a concept analysis within a nursing framework.
Wagner, Debra; Bear, Mary
2009-03-01
This paper is a report of a concept analysis of patient satisfaction with nursing care. Patient satisfaction is an important indicator of quality of care, and healthcare facilities are interested in maintaining high levels of satisfaction in order to stay competitive in the healthcare market. Nursing care has a prominent role in patient satisfaction. Using a nursing model to measure patient satisfaction with nursing care helps define and clarify this concept. Rodgers' evolutionary method of concept analysis provided the framework for this analysis. Data were retrieved from the Cumulative Index of Nursing and Allied Health Literature and MEDLINE databases and the ABI/INFORM global business database. The literature search used the keywords patient satisfaction, nursing care and hospital. The sample included 44 papers published in English, between 1998 and 2007. Cox's Interaction Model of Client Health Behavior was used to analyse the concept of patient satisfaction with nursing care. The attributes leading to the health outcome of patient satisfaction with nursing care were categorized as affective support, health information, decisional control and professional/technical competencies. Antecedents embodied the uniqueness of the patient in terms of demographic data, social influence, previous healthcare experiences, environmental resources, intrinsic motivation, cognitive appraisal and affective response. Consequences of achieving patient satisfaction with nursing care included greater market share of healthcare finances, compliance with healthcare regimens and better health outcomes. The meaning of patient satisfaction continues to evolve. Using a nursing model to measure patient satisfaction with nursing care delineates the concept from other measures of patient satisfaction.
Trending in Probability of Collision Measurements
NASA Technical Reports Server (NTRS)
Vallejo, J. J.; Hejduk, M. D.; Stamey, J. D.
2015-01-01
A simple model is proposed to predict the behavior of Probabilities of Collision (P(sub c)) for conjunction events. The model attempts to predict the location and magnitude of the peak P(sub c) value for an event by assuming the progression of P(sub c) values can be modeled to first order by a downward-opening parabola. To incorporate prior information from a large database of past conjunctions, the Bayes paradigm is utilized; and the operating characteristics of the model are established through a large simulation study. Though the model is simple, it performs well in predicting the temporal location of the peak P(sub c) and thus shows promise as a decision aid in operational conjunction assessment risk analysis.
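The first-order model amounts to an ordinary quadratic fit. In the sketch below the fit is applied to a hypothetical series of log10 P(sub c) values (logs used purely for numerical convenience); the Bayesian incorporation of prior conjunction history described above is omitted.

```python
import numpy as np

# Hypothetical history of log10 Pc values at successive epochs (days to TCA).
t = np.array([-5.0, -4.0, -3.0, -2.5, -2.0])
log_pc = np.array([-7.2, -6.1, -5.4, -5.1, -5.0])

# First-order model: the progression follows a downward-opening parabola,
# log_pc(t) = a*t^2 + b*t + c with a < 0.
a, b, c = np.polyfit(t, log_pc, deg=2)

t_peak = -b / (2 * a)                       # predicted epoch of the peak Pc
log_pc_peak = np.polyval([a, b, c], t_peak) # predicted peak magnitude
print(f"peak expected at t = {t_peak:.2f} d, log10 Pc ~ {log_pc_peak:.2f}")
```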
ERIC Educational Resources Information Center
Hsiung, Chin-Min; Zheng, Xiang-Xiang
2015-01-01
The Measurements for Team Functioning (MTF) database contains a series of student academic performance measurements obtained at a national university in Taiwan. The measurements are acquired from unit tests and homework tests performed during a core mechanical engineering course, and provide an objective means of assessing the functioning of…
Reflective Database Access Control
ERIC Educational Resources Information Center
Olson, Lars E.
2009-01-01
"Reflective Database Access Control" (RDBAC) is a model in which a database privilege is expressed as a database query itself, rather than as a static privilege contained in an access control list. RDBAC aids the management of database access controls by improving the expressiveness of policies. However, such policies introduce new interactions…
New Data Bases and Standards for Gravity Anomalies
NASA Astrophysics Data System (ADS)
Keller, G. R.; Hildenbrand, T. G.; Webring, M. W.; Hinze, W. J.; Ravat, D.; Li, X.
2008-12-01
Ever since the use of high-precision gravimeters emerged in the 1950s, gravity surveys have been an important tool for geologic studies. Recent developments enabling geologically useful measurements from airborne and satellite platforms, the ready availability of the Global Positioning System that provides precise vertical and horizontal control, improved global data bases, and the increased availability of processing and modeling software have accelerated the use of the gravity method. As a result, efforts are being made to improve the gravity databases publicly available to the geoscience community by expanding their holdings and increasing the accuracy and precision of the data in them. Specifically, the North American Gravity Database as well as the individual databases of Canada, Mexico, and the United States are being revised using new formats and standards to improve their coverage, standardization, and accuracy. An important part of this effort is revision of procedures and standards for calculating gravity anomalies, taking into account the enhanced computational power available, modern satellite-based positioning technology, improved terrain databases, and increased interest in more accurately defining the different components of gravity anomalies. The most striking revision is the use of a single internationally accepted reference ellipsoid for the horizontal and vertical datums of gravity stations as well as for the computation of the calculated value of theoretical gravity. The new standards hardly impact the interpretation of local anomalies, but do improve regional anomalies in that long-wavelength artifacts are removed. Most importantly, such new standards can be consistently applied to gravity database compilations of nations, continents, and even the entire world. Although many types of gravity anomalies have been described, they fall into three main classes. The primary class incorporates planetary effects, which are analytically prescribed, to derive the predicted or modeled gravity; anomalies of this class are thus termed planetary. The most primitive version of a gravity anomaly is simply the difference between the value of gravity predicted by the effect of the reference ellipsoid and the observed gravity. When the height of the gravity station increases, the ellipsoidal gravity anomaly decreases because of the increased distance of the measurement from the anomaly-producing masses. The two primary anomalies in geophysics, which are appropriately classified as planetary anomalies, are the Free-air and Bouguer gravity anomalies. They employ models that account for planetary effects on gravity, including the topography of the earth. A second class, geological anomalies, includes in the predicted gravity the modeled effect of known or assumed masses, using geological data such as densities and crustal thickness. The third class, filtered anomalies, removes gravity effects of largely unknown sources that are empirically or analytically determined from the nature of the gravity anomalies by filtering.
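For concreteness, the planetary-class reductions can be written out directly. The sketch below uses the closed-form (Somigliana) theoretical gravity on the WGS84 ellipsoid together with first-order free-air and Bouguer-slab corrections; the constants are published WGS84 values, while the observation inputs are hypothetical.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def theoretical_gravity_mgal(lat_deg: float) -> float:
    """Somigliana closed-form theoretical gravity on the WGS84 ellipsoid."""
    ge = 978032.53359          # equatorial gravity, mGal
    k = 1.931852652458e-3      # Somigliana constant for WGS84
    e2 = 6.69437999014e-3      # first eccentricity squared
    s2 = math.sin(math.radians(lat_deg)) ** 2
    return ge * (1 + k * s2) / math.sqrt(1 - e2 * s2)

def free_air_anomaly_mgal(g_obs: float, lat_deg: float, h_m: float) -> float:
    """Free-air anomaly with the first-order height correction (0.3086 mGal/m),
    height taken relative to the ellipsoid under the new standards."""
    return g_obs - theoretical_gravity_mgal(lat_deg) + 0.3086 * h_m

def bouguer_anomaly_mgal(g_obs, lat_deg, h_m, density=2670.0):
    """Simple Bouguer anomaly: subtract the infinite-slab effect 2*pi*G*rho*h
    (about 0.1119 mGal per metre at the standard 2670 kg/m^3 density)."""
    slab_mgal = 2 * math.pi * G * density * h_m * 1e5   # m/s^2 -> mGal
    return free_air_anomaly_mgal(g_obs, lat_deg, h_m) - slab_mgal

# Hypothetical station: observed gravity 980600 mGal at 45 N, 250 m height.
print(round(free_air_anomaly_mgal(980600.0, 45.0, 250.0), 1))
print(round(bouguer_anomaly_mgal(980600.0, 45.0, 250.0), 1))
```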
Characterization of natural ventilation in wastewater collection systems.
Ward, Matthew; Corsi, Richard; Morton, Robert; Knapp, Tom; Apgar, Dirk; Quigley, Chris; Easter, Chris; Witherspoon, Jay; Pramanik, Amit; Parker, Wayne
2011-03-01
The purpose of the study was to characterize natural ventilation in full-scale gravity collection system components while measuring other parameters related to ventilation. Experiments were completed at four different locations in the wastewater collection systems of the Los Angeles County Sanitation Districts, Los Angeles, California, and the King County Wastewater Treatment District, Seattle, Washington. The subject components were concrete gravity pipes ranging in diameter from 0.8 to 2.4 m (33 to 96 in.). Air velocity was measured in each pipe using a carbon monoxide pulse tracer method. Air velocity was measured entering or exiting the components at vents using a standpipe and hot-wire anemometer arrangement. Ambient wind speed, temperature, and relative humidity; headspace temperature and relative humidity; and wastewater flow and temperature were also measured. The field experiments resulted in a large database of measured ventilation rates and related parameters for full-scale gravity sewers. Measured ventilation rates ranged from 23 to 840 L/s. The experimental data were used to evaluate existing ventilation models. Three models, based upon empirical extrapolation, computational fluid dynamics, and thermodynamics, respectively, were evaluated for predictive accuracy against the measured data. Strengths and weaknesses were found in each model, and these observations were used to propose a concept for an improved ventilation model.
Artificial neural network modelling of uncertainty in gamma-ray spectrometry
NASA Astrophysics Data System (ADS)
Dragović, S.; Onjia, A.; Stanković, S.; Aničin, I.; Bačić, G.
2005-03-01
An artificial neural network (ANN) model for the prediction of measuring uncertainties in gamma-ray spectrometry was developed and optimized. A three-layer feed-forward ANN with back-propagation learning algorithm was used to model uncertainties of measurement of activity levels of eight radionuclides (226Ra, 238U, 235U, 40K, 232Th, 134Cs, 137Cs and 7Be) in soil samples as a function of measurement time. It was shown that the neural network provides useful data even from small experimental databases. The performance of the optimized neural network was found to be very good, with correlation coefficients (R2) between measured and predicted uncertainties ranging from 0.9050 to 0.9915. The correlation coefficients did not significantly deteriorate when the network was tested on samples with greatly different uranium-to-thorium (238U/232Th) ratios. The differences between measured and predicted uncertainties were not influenced by the absolute values of uncertainties of measured radionuclide activities. Once the ANN is trained, it could be employed in analyzing soil samples regardless of the 238U/232Th ratio. It was concluded that a considerable saving in time could be obtained using the trained neural network model for predicting the measurement times needed to attain the desired statistical accuracy.
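A small feed-forward regressor reproduces the flavour of the approach: learn uncertainty as a function of measurement time, then invert the fit to find the counting time needed for a target uncertainty. The training data below are simulated from a rough 1/sqrt(t) counting-statistics trend, not the paper's spectrometry database, and the network layout is illustrative rather than the optimized architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Simulated pairs: measurement time (s) -> relative uncertainty (%).
# Counting statistics suggest roughly a 1/sqrt(t) trend.
t = rng.uniform(600, 86400, size=200)
u = 300.0 / np.sqrt(t) + rng.normal(0, 0.2, size=200)

X = np.log10(t).reshape(-1, 1)   # modest preprocessing helps small networks
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                   random_state=0).fit(X, u)

# Invert the trained model numerically: shortest time reaching 2% uncertainty.
grid = np.linspace(600, 86400, 1000)
pred = net.predict(np.log10(grid).reshape(-1, 1))
ok = grid[pred <= 2.0]
print(f"~{ok[0]/3600:.1f} h of counting for 2% uncertainty" if ok.size else "n/a")
```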
Real-time emissions from construction equipment compared with model predictions.
Heidari, Bardia; Marr, Linsey C
2015-02-01
The construction industry is a large source of greenhouse gases and other air pollutants. Measuring and monitoring real-time emissions will provide practitioners with information to assess environmental impacts and improve the sustainability of construction. We employed a portable emission measurement system (PEMS) for real-time measurement of carbon dioxide (CO2), nitrogen oxide (NOx), hydrocarbon, and carbon monoxide (CO) emissions from construction equipment to derive emission rates (mass of pollutant emitted per unit time) and emission factors (mass of pollutant emitted per unit volume of fuel consumed) under real-world operating conditions. Measurements were compared with emissions predicted by methodologies used in three models: NONROAD2008, OFFROAD2011, and a modal statistical model. Measured emission rates agreed with model predictions for some pieces of equipment but were up to 100 times lower for others. Much of the difference was driven by lower fuel consumption rates than predicted. Emission factors during idling and hauling were significantly different from each other and from those of other moving activities, such as digging and dumping. It appears that operating conditions introduce considerable variability in emission factors. Results of this research will aid researchers and practitioners in improving current emission estimation techniques, frameworks, and databases.
Data Base Design Using Entity-Relationship Models.
ERIC Educational Resources Information Center
Davis, Kathi Hogshead
1983-01-01
The entity-relationship (ER) approach to database design is defined, and a specific example of an ER model (personnel-payroll) is examined. The requirements for converting ER models into specific database management systems are discussed. (Author/MSE)
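A concrete instance of the conversion step: each entity set becomes a table keyed by its identifier, and a many-to-many relationship set becomes a table of foreign keys plus the relationship's own attributes. The personnel-payroll miniature below uses Python's built-in sqlite3 and is hypothetical, not the specific systems discussed in the article.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Entities "employee" and "payroll_run" become tables keyed by their
# identifiers; the M:N relationship "employee is paid on payroll run"
# becomes its own table holding both keys plus relationship attributes.
con.executescript("""
CREATE TABLE employee (
    emp_id    INTEGER PRIMARY KEY,
    name      TEXT NOT NULL,
    dept      TEXT
);
CREATE TABLE payroll_run (
    run_id    INTEGER PRIMARY KEY,
    run_date  TEXT NOT NULL
);
CREATE TABLE payment (              -- relationship set, with attributes
    emp_id    INTEGER REFERENCES employee(emp_id),
    run_id    INTEGER REFERENCES payroll_run(run_id),
    gross_pay REAL,
    PRIMARY KEY (emp_id, run_id)
);
""")
```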
The IAGOS Information System: From the aircraft measurements to the users.
NASA Astrophysics Data System (ADS)
Boulanger, Damien; Thouret, Valérie; Cammas, Jean-Pierre; Petzold, Andreas; Volz-Thomas, Andreas; Gerbig, Christoph; Brenninkmeijer, Carl A. M.
2013-04-01
IAGOS (In-service Aircraft for a Global Observing System, http://www.iagos.org) aims at the provision of long-term, frequent, regular, accurate, and spatially resolved in-situ observations of atmospheric chemical composition throughout the troposphere and in the UTLS. It builds on almost 20 years of scientific and technological expertise gained in the research projects MOZAIC (Measurement of Ozone and Water Vapour on Airbus In-service Aircraft) and CARIBIC (Civil Aircraft for the Regular Investigation of the Atmosphere Based on an Instrument Container). The European consortium includes research centres, universities, national weather services, airline operators and the aviation industry. IAGOS consists of two complementary building blocks providing a unique global observation system. IAGOS-CORE deploys newly developed instrumentation for regular in-situ measurements of reactive and greenhouse gases (O3, CO, NOx, NOy, H2O, CO2, CH4), aerosols and cloud particles. In IAGOS-CARIBIC, a cargo container is deployed monthly as a flying laboratory aboard one aircraft. The airlines involved ensure global operation of the network. Today, 5 aircraft are flying with MOZAIC (3) or IAGOS-CORE (2) instrumentation, namely 3 aircraft from Lufthansa, 1 from Air Namibia, and 1 from China Airlines Taiwan. A main improvement and new aspect of the IAGOS-CORE instrumentation compared to MOZAIC is the delivery of raw data in near real time (i.e. data are transmitted as soon as the aircraft lands). After a first and quick validation of the O3 and CO measurements, preliminary data are made available in the central database both for the MACC project (Monitoring Atmospheric Composition and Climate) and for scientific research groups. In addition to recorded measurements, the database also contains added-value products such as meteorological information (tropopause height, air mass backtrajectories) and Lagrangian model outputs (FLEXPART). Data access is handled by an open access policy based on the submission of research requests, which are reviewed by the PIs. Users can access the data through the following web sites: http://www.iagos.fr or http://www.pole-ether.fr, as the IAGOS database is part of the French atmospheric chemistry data centre ETHER (CNES and CNRS). The MOZAIC-IAGOS database today contains more than 35000 flights, covering mostly the northern hemisphere mid-latitudes but with reduced representation of the Pacific region. The recently equipped China Airlines Taiwan aircraft started filling this gap in July 2012. Aircraft scheduled to be equipped in 2013, from Air France, Cathay Pacific and Iberia, will cover the Asia-Oceania sector and Europe-South America transects. The database, as well as the research infrastructure itself, is in continuous development and improvement. In the framework of the newly starting IGAS project (IAGOS for GMES Atmospheric Service), major achievements will be reached, such as metadata and format standardisation in order to interoperate with international portals and other databases, QA/QC procedures and traceability, CARIBIC data integration within the central database, and real-time data transmission.
Device, Algorithm and Integrated Modeling Research for Performance-Drive Multi-Modal Optical Sensors
2012-12-17
... to feature-aided tracking using spectral information.
• A novel technique for spectral waveband selection was developed and used as part of ... of spectral information using the tunable single-pixel spectrometer concept.
• A database was developed of spectral reflectance measurements ... exploring the utility of spectral and polarimetric information to help with the vehicle tracking application. Through the use of both ...
NASA Technical Reports Server (NTRS)
Gaonkar, G. H.; Subramanian, S.
1996-01-01
Since the early 1990s the Aeroflightdynamics Directorate at the Ames Research Center has been conducting tests on isolated hingeless rotors in hover and forward flight. The primary objective is to generate a database on aeroelastic stability in trimmed flight for torsionally soft rotors at realistic tip speeds. The rotor test model has four soft inplane blades of NACA 0012 airfoil section with low torsional stiffness. The collective pitch and shaft tilt are set prior to each test run, and then the rotor is trimmed in the following sense: the longitudinal and lateral cyclic pitch controls are adjusted through a swashplate to minimize the 1/rev flapping moment at the 12 percent radial station. In hover, the database comprises lag regressive-mode damping with pitch variations. In forward flight the database comprises cyclic pitch controls, root flap moment and lag regressive-mode damping with advance ratio, shaft angle and pitch variations. This report presents the predictions and their correlation with the database. A modal analysis is used, in which nonrotating modes in flap bending, lag bending and torsion are computed from the measured blade mass and stiffness distributions. The airfoil aerodynamics is represented by the ONERA dynamic stall models of lift, drag and pitching moment, and the wake dynamics is represented by a state-space wake model. The trim analysis of finding the cyclic controls and the corresponding periodic responses is based on periodic shooting with damped Newton iteration; the Floquet transition matrix (FTM) comes out as a byproduct. The stability analysis of finding the frequencies and damping levels is based on the eigenvalue-eigenvector analysis of the FTM. All the structural and aerodynamic states are included from modeling to trim analysis. A major finding is that dynamic wake dramatically improves the correlation for the lateral cyclic pitch control. Overall, the correlation is fairly good.
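The last step of such a stability analysis — extracting frequencies and damping levels from the Floquet transition matrix — is compact enough to show directly: the characteristic exponents follow from the FTM eigenvalues as s_k = ln(lambda_k)/T. The 2x2 matrix below is a made-up stable example, not data from the test program.

```python
import numpy as np

def floquet_modes(ftm: np.ndarray, T: float):
    """Frequencies and damping levels from a Floquet transition matrix.
    Characteristic exponents s_k = ln(lambda_k)/T: the real part is the
    damping level and the imaginary part the (principal) frequency."""
    lam, _ = np.linalg.eig(ftm)
    s = np.log(lam.astype(complex)) / T
    return s.real, s.imag   # damping (1/s), frequency (rad/s)

# Toy FTM over one rotor revolution of period T; a stable lag-type mode
# shows up as a small negative real exponent (|lambda| < 1).
T = 0.2
ftm = np.array([[0.95, 0.10],
                [-0.10, 0.95]])
damping, freq = floquet_modes(ftm, T)
print(damping, freq)
```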
Analysis of a Shock-Associated Noise Prediction Model Using Measured Jet Far-Field Noise Data
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Sharpe, Jacob A.
2014-01-01
A code for predicting supersonic jet broadband shock-associated noise was assessed using a database containing noise measurements of a jet issuing from a convergent nozzle. The jet was operated at 24 conditions covering six fully expanded Mach numbers with four total temperature ratios. To enable comparisons of the predicted shock-associated noise component spectra with data, the measured total jet noise spectra were separated into mixing noise and shock-associated noise component spectra. Comparisons between predicted and measured shock-associated noise component spectra were used to identify deficiencies in the prediction model. Proposed revisions to the model, based on a study of the overall sound pressure levels for the shock-associated noise component of the measured data, a sensitivity analysis of the model parameters with emphasis on the definition of the convection velocity parameter, and a least-squares fit of the predicted to the measured shock-associated noise component spectra, resulted in a new definition for the source strength spectrum in the model. An error analysis showed that the average error in the predicted spectra was reduced by as much as 3.5 dB for the revised model relative to the average error for the original model.
The methodology of database design in organization management systems
NASA Astrophysics Data System (ADS)
Chudinov, I. L.; Osipova, V. V.; Bobrova, Y. V.
2017-01-01
The paper describes a unified methodology of database design for management information systems. Designing the conceptual information model for the domain area is the most important and labor-intensive stage in database design. Based on the proposed integrated approach to design, the conceptual information model, the main principles of developing relational databases, and users' information needs are considered. According to the methodology, the process of designing the conceptual information model includes three basic stages, which are defined in detail. Finally, the article describes the process of formalizing the results of the analysis of users' information needs and the rationale for the use of classifiers.
Heterogeneous database integration in biomedicine.
Sujansky, W
2001-08-01
The rapid expansion of biomedical knowledge, reduction in computing costs, and spread of internet access have created an ocean of electronic data. The decentralized nature of our scientific community and healthcare system, however, has resulted in a patchwork of diverse, or heterogeneous, database implementations, making access to and aggregation of data across databases very difficult. The database heterogeneity problem applies equally to clinical data describing individual patients and biological data characterizing our genome. Specifically, databases are highly heterogeneous with respect to the data models they employ, the data schemas they specify, the query languages they support, and the terminologies they recognize. Heterogeneous database systems attempt to unify disparate databases by providing uniform conceptual schemas that resolve representational heterogeneities, and by providing querying capabilities that aggregate and integrate distributed data. Research in this area has applied a variety of database and knowledge-based techniques, including semantic data modeling, ontology definition, query translation, query optimization, and terminology mapping. Existing systems have addressed heterogeneous database integration in the realms of molecular biology, hospital information systems, and application portability.
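Two of the techniques listed, query translation and terminology mapping, can be illustrated together. The toy mediator below rewrites one conceptual query into source-specific SQL using per-source schema and vocabulary maps; every table, field, and code value is invented for illustration.

```python
# A toy mediator: one conceptual schema, per-source schema and terminology
# mappings, and a translator that rewrites a unified query for each source.
SOURCES = {
    "hospital_a": {"table": "patients",  "fields": {"dx": "icd9_code"}},
    "hospital_b": {"table": "admission", "fields": {"dx": "diagnosis"}},
}
# Terminology mapping: mediated vocabulary -> each source's local codes.
TERMS = {"myocardial_infarction": {"hospital_a": "410.9", "hospital_b": "MI"}}

def translate(concept: str, field: str = "dx") -> dict[str, str]:
    """Rewrite 'find records with <concept>' into one SQL string per source."""
    queries = {}
    for name, src in SOURCES.items():
        local_field = src["fields"][field]
        local_term = TERMS[concept][name]
        queries[name] = (f"SELECT * FROM {src['table']} "
                         f"WHERE {local_field} = '{local_term}'")
    return queries

for name, q in translate("myocardial_infarction").items():
    print(name, "->", q)
```

A real mediator would also plan the order of sub-queries and merge the result sets, which is where the query optimization and aggregation capabilities mentioned above come in.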
RAId_DbS: Peptide Identification using Database Searches with Realistic Statistics
Alves, Gelio; Ogurtsov, Aleksey Y; Yu, Yi-Kuo
2007-01-01
Background The key to mass-spectrometry-based proteomics is peptide identification. A major challenge in peptide identification is to obtain realistic E-values when assigning statistical significance to candidate peptides. Results Using a simple scoring scheme, we propose a database search method with theoretically characterized statistics. Taking into account possible skewness in the random variable distribution and the effect of finite sampling, we provide a theoretical derivation for the tail of the score distribution. For every experimental spectrum examined, we collect the scores of peptides in the database, and find good agreement between the collected score statistics and our theoretical distribution. Using Student's t-tests, we quantify the degree of agreement between the theoretical distribution and the score statistics collected. These t-tests may be used to measure the reliability of the reported statistics. When combined with the reported P-value for a peptide hit using a score distribution model, this new measure prevents exaggerated statistics. Another feature of RAId_DbS is its capability of detecting multiple co-eluted peptides. The peptide identification performance and statistical accuracy of RAId_DbS are assessed and compared with several other search tools. The executables and data related to RAId_DbS are freely available upon request. PMID:17961253
Adaptive Neuro-Fuzzy Modeling of UH-60A Pilot Vibration
NASA Technical Reports Server (NTRS)
Kottapalli, Sesi; Malki, Heidar A.; Langari, Reza
2003-01-01
Adaptive neuro-fuzzy relationships have been developed to model the UH-60A Black Hawk pilot floor vertical vibration. A 200 point database that approximates the entire UH-60A helicopter flight envelope is used for training and testing purposes. The NASA/Army Airloads Program flight test database was the source of the 200 point database. The present study is conducted in two parts. The first part involves level flight conditions and the second part involves the entire (200 point) database including maneuver conditions. The results show that a neuro-fuzzy model can successfully predict the pilot vibration. Also, it is found that the training phase of this neuro-fuzzy model takes only two or three iterations to converge for most cases. Thus, the proposed approach produces a potentially viable model for real-time implementation.
Liggett, Jacqueline; Sellbom, Martin
2018-06-21
The current study evaluated the continuity between the diagnostic operationalizations of obsessive-compulsive personality disorder (OCPD) in the Diagnostic and Statistical Manual for Mental Disorders, Fifth Edition, both as traditionally operationalized and from the perspective of the alternative model of personality disorders. Using both self-report and informant measures, the study had the following four aims: (a) to examine the extent to which self-report and informant data correspond, (b) to investigate whether both self-report and informant measures of the alternative model of OCPD can predict traditional OCPD, (c) to determine if any traits additional to those proposed in the alternative model of OCPD can predict traditional OCPD, and (d) to investigate whether a measure of OCPD-specific impairment is better at predicting traditional OCPD than are measures of general impairment in personality functioning. A mental health sample of 214 participants was recruited and administered measures of both the traditional and alternative models of OCPD. Self-report data moderately corresponded with informant data, which is consistent with the literature. Results further confirmed rigid perfectionism as the core trait of OCPD. Perseveration and workaholism were also associated with OCPD. Hostility was identified as a trait deserving further research. A measure of OCPD-specific impairment demonstrated its ability to incrementally predict OCPD over general measures of impairment. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Analysis of a Meteorological Database for London Heathrow in the Context of Wake Vortex Hazards
NASA Astrophysics Data System (ADS)
Agnew, P.; Ogden, D. J.; Hoad, D. J.
2003-04-01
A database of meteorological parameters collected by aircraft arriving at LHR has recently been compiled. We have used the recorded variation of temperature and wind with height to deduce the 'wake vortex behaviour class' (WVBC) along the glide slope, as experienced by each flight. The integrated state of the glide slope has been investigated, allowing us to estimate the proportion of time for which the wake vortex threat is reduced, due to either rapid decay or transport off the glide slope. A numerical weather prediction model was used to forecast the meteorological parameters for periods coinciding with the aircraft data. This allowed us to compare the forecast WVBCs with those deduced from the aircraft measurements.
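The abstract does not define the WVBC scheme; purely as an illustration of a profile-based classification rule, here is a sketch in which the class names, thresholds, and the stable-stratification decay criterion are all assumptions:

    import numpy as np

    def wvbc(crosswind_ms, lapse_rate_k_per_m):
        """Toy classifier: the wake threat is reduced if vortices are blown off
        the glide slope by crosswind or decay rapidly in stable stratification."""
        if abs(crosswind_ms) > 5.0:
            return "transported off glide slope"
        if lapse_rate_k_per_m > 0.005:   # strongly stable layer
            return "rapid decay"
        return "threat persists"

    # One flight's aircraft-recorded profile (heights and values hypothetical).
    heights_m = np.array([100.0, 300.0, 600.0])
    crosswind_ms = np.array([2.0, 6.5, 3.0])
    lapse_k_per_m = np.array([0.002, 0.001, 0.008])

    for h, c, l in zip(heights_m, crosswind_ms, lapse_k_per_m):
        print(f"{h:5.0f} m: {wvbc(c, l)}")

The integrated state of the glide slope would then be summarized from the per-level classes, as the abstract describes.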
SIMS: addressing the problem of heterogeneity in databases
NASA Astrophysics Data System (ADS)
Arens, Yigal
1997-02-01
The heterogeneity of remotely accessible databases -- with respect to contents, query language, semantics, organization, etc. -- presents serious obstacles to convenient querying. The SIMS (single interface to multiple sources) system addresses this global integration problem. It does so by defining a single language for describing the domain about which information is stored in the databases and using this language as the query language. Each database to which SIMS is to provide access is modeled using this language. The model describes a database's contents, organization, and other relevant features. SIMS uses these models, together with a planning system drawing on techniques from artificial intelligence, to decompose a given user's high-level query into a series of queries against the databases and other data manipulation steps. The retrieval plan is constructed so as to minimize data movement over the network and maximize parallelism to increase execution speed. SIMS can recover from network failures during plan execution by obtaining data from alternate sources, when possible. SIMS has been demonstrated in the domains of medical informatics and logistics, using real databases.
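As a toy illustration of the architecture (not SIMS's actual modeling language or planner), the sketch below decomposes a domain-level query into per-database subqueries using per-source models; every name is hypothetical:

    # Toy illustration: one domain model, per-source models mapping domain
    # concepts to local schemas, and a "planner" that picks, per requested
    # field, a database that can supply it. All names are hypothetical.

    DOMAIN_QUERY = {"concept": "patient", "fields": ["name", "blood_type"]}

    SOURCE_MODELS = {
        "hospital_db": {"patient": {"name": "pt_name", "blood_type": None}},
        "lab_db":      {"patient": {"name": "subject", "blood_type": "abo_rh"}},
    }

    def plan(query):
        """Rewrite a domain-level query into (source, local column) steps."""
        steps = []
        for field in query["fields"]:
            for source, model in SOURCE_MODELS.items():
                column = model[query["concept"]].get(field)
                if column is not None:
                    steps.append((source, column))
                    break
        return steps

    print(plan(DOMAIN_QUERY))  # [('hospital_db', 'pt_name'), ('lab_db', 'abo_rh')]

A real planner would additionally order and merge such steps to minimize data movement over the network and maximize parallelism, as the abstract notes.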
Assembling a biogenic hydrocarbon emissions inventory for the SCOS97-NARSTO modeling domain
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benjamin, M.T.; Winer, A.M.; Karlik, J.
1998-12-31
To assist in developing ozone control strategies for Southern California, the California Air Resources Board is developing a biogenic hydrocarbon (BHC) emissions inventory model for the SCOS97-NARSTO domain. The basis for this bottom-up model is SCOS97-NARSTO-specific landuse and landcover maps, leafmass constants, and BHC emission rates. In urban areas, landuse maps developed by the Southern California Association of Governments, San Diego Association of Governments, and other local governments are used, while in natural areas, landcover and plant community databases produced by the GAP Analysis Project (GAP) are employed. Plant identities and canopy volumes for species in each landuse and landcover category are based on the most recent botanical field survey data. Where possible, experimentally determined leafmass constant and BHC emission rate measurements reported in the literature are used or, for those species where experimental data are not available, values are assigned based on taxonomic methods. A geographic information system is being used to integrate these databases, as well as the most recent environmental correction algorithms and canopy shading factors, to produce a spatially- and temporally-resolved BHC emission inventory suitable for input into the Urban Airshed Model.
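As an arithmetic illustration of the bottom-up calculation such an inventory performs for a single grid cell (the species, leafmass constants, emission rates, cover fractions, and correction factor below are all placeholders, not CARB's values):

    # emission = cell area x cover fraction x leafmass constant x emission rate
    #            x environmental correction (light/temperature), per species.
    species = [
        # (name, leafmass constant [g leaf / m^2], emission rate [ug C / g leaf / h])
        ("coast live oak", 350.0, 35.0),
        ("orange",         550.0,  0.1),
    ]

    cell_area_m2 = 4.0e6   # hypothetical 2 km x 2 km grid cell
    gamma = 0.8            # hypothetical combined light/temperature correction

    total_ug_per_h = 0.0
    for name, leafmass, rate in species:
        cover = 0.10       # fraction of cell covered (from landuse/GAP databases)
        emission = cell_area_m2 * cover * leafmass * rate * gamma
        total_ug_per_h += emission
        print(f"{name}: {emission / 1e9:.2f} kg C/h")

    print(f"cell total: {total_ug_per_h / 1e9:.2f} kg C/h")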
NASA Astrophysics Data System (ADS)
Spansky, M. C.; Hyndman, D. W.; Long, D. T.; Pijanowski, B. C.
2004-05-01
Regional inputs of non-point source pollutants to groundwater, such as agriculturally-derived nitrate, have typically proven difficult to model due to sparse concentration data and complex system dynamics. We present an approach to evaluate the relative contribution of various land use types to groundwater nitrate across a regional Michigan watershed using groundwater flow and transport models. The models were parameterized based on land use data, and calibrated to a 20 year database of nitrate measured in drinking water wells. The database spans 1983-2003 and contains approximately 27,000 nitrate records for the five major counties encompassed by the watershed. The Grand Traverse Bay Watershed (GTBW), located in the northwest Lower Peninsula of Michigan, was chosen for this research. Groundwater flow and nitrate transport models were developed for the GTBW using MODFLOW2000 and RT3D, respectively. In a preliminary transport model, agricultural land uses were defined as the sole source of groundwater nitrate. Nitrate inputs were then refined to reflect variations in nitrogen loading rates for different agriculture types, including orchards, row crops, and pastureland. The calibration dataset was created by assigning spatial coordinates to each water well sample using address matching from a geographic information system (GIS). Preliminary results show that there is a significant link between agricultural sources and measured groundwater nitrate concentrations. In cases where observed concentrations remain significantly higher than simulated values, other sources of nitrate (e.g. septic tanks or abandoned agricultural fields) will be evaluated. This research will eventually incorporate temporal variations in fertilizer application rates and changing land use patterns to better represent fluid and solute fluxes at a regional scale.
SU-E-T-186: Cloud-Based Quality Assurance Application for Linear Accelerator Commissioning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rogers, J
2015-06-15
Purpose: To identify anomalies and safety issues during data collection and modeling for treatment planning systems. Methods: A cloud-based quality assurance system (AQUIRE - Automated QUalIty REassurance) has been developed to allow the uploading and analysis of beam data acquired during the treatment planning system commissioning process. In addition to comparing and aggregating measured data, tools have also been developed to extract dose from the treatment planning system for end-to-end testing. A gamma index analysis is performed on the data to give a dose difference and distance-to-agreement for validation that a beam model is generating plans consistent with the beam data collection. Results: Over 20 linear accelerators have been commissioned using this platform, and a variety of errors and potential safety issues have been caught through the validation process. For example, the gamma index of 2% dose, 2 mm DTA is quite sufficient to see curves not corrected for effective point of measurement. Also, data imported into the database is analyzed against an aggregate of similar linear accelerators to show data points that are outliers. The resulting curves in the database exhibit a very small standard deviation and imply that a preconfigured beam model based on aggregated linear accelerators will be sufficient in most cases. Conclusion: With the use of this new platform for beam data commissioning, errors in beam data collection and treatment planning system modeling are greatly reduced. With the reduction in errors during acquisition, the resulting beam models are quite similar, suggesting that a common beam model may be possible in the future. Development is ongoing to create routine quality assurance tools to compare back to the beam data acquired during commissioning. I am a medical physicist for Alzyen Medical Physics, and perform commissioning services.
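As an illustration of the gamma-index validation step, a minimal one-dimensional global gamma computation with the 2% dose / 2 mm DTA criterion; the depth-dose curves and the global-normalization choice are assumptions, and AQUIRE's actual implementation is surely more elaborate:

    import numpy as np

    def gamma_index_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.02, dta_mm=2.0):
        """1-D global gamma: for each reference point, minimize the combined
        dose-difference / distance-to-agreement metric over evaluated points."""
        d_max = d_ref.max()                      # global normalization
        gammas = np.empty_like(d_ref)
        for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
            dose_term = (d_eval - dr) / (dose_tol * d_max)
            dist_term = (x_eval - xr) / dta_mm
            gammas[i] = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
        return gammas

    # Hypothetical measured vs. planning-system depth-dose curves.
    x = np.linspace(0.0, 100.0, 201)             # depth [mm]
    measured = np.exp(-x / 60.0)
    modeled = 1.005 * np.exp(-(x - 0.4) / 60.0)  # small shift plus dose offset

    g = gamma_index_1d(x, measured, x, modeled)
    print(f"gamma pass rate (gamma <= 1): {100 * (g <= 1).mean():.1f}%")

A systematic depth offset, such as curves not corrected for effective point of measurement, quickly drives points past gamma = 1, which is why the 2%/2 mm criterion catches it.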
Zhang, Liming; Yu, Dongsheng; Shi, Xuezheng; Xu, Shengxiang; Xing, Shihe; Zhao, Yongcong
2014-01-01
Soil organic carbon (SOC) models are often applied to regions with high heterogeneity but limited spatially differentiated soil information and simulation unit resolution. This study, carried out in the Tai-Lake region of China, quantified the uncertainty arising from applying the DeNitrification-DeComposition (DNDC) biogeochemical model in an area with heterogeneous soil properties using different simulation units. Three soil attribute databases of different resolution, a polygonal capture of mapping units at 1∶50,000 (P5), a county-based database at 1∶50,000 (C5), and a county-based database at 1∶14,000,000 (C14), were used as inputs for regional DNDC simulation. The P5 and C5 databases were combined with the 1∶50,000 digital soil map, the most detailed soil database for the Tai-Lake region. The C14 database was combined with the 1∶14,000,000 digital soil map, a coarse database often used for modeling at a national or regional scale in China. The soil polygons of the P5 database and the county boundaries of the C5 and C14 databases were used as basic simulation units. The results project that, from 1982 to 2000, the total SOC change in the top layer (0–30 cm) of the 2.3 M ha of paddy soil in the Tai-Lake region was +1.48 Tg C, −3.99 Tg C, and −15.38 Tg C based on the P5, C5, and C14 databases, respectively. Taking the total SOC change modeled with the P5 inputs as the baseline, which has the advantage of using a detailed, polygon-based soil dataset, the relative deviations of C5 and C14 were 368% and 1126%, respectively. The comparison illustrates that DNDC simulation is strongly influenced by the choice of fundamental geographic resolution as well as by the detail of the input soil attributes. The results also indicate that improving the framework of DNDC is essential for creating accurate models of the soil carbon cycle. PMID:24523922
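As a consistency check, the reported deviations follow from the quoted totals, taking the P5 change as the baseline:

    \[
    \delta_{C5} = \frac{\lvert -3.99 - 1.48 \rvert}{\lvert 1.48 \rvert} \approx 3.70 \;(370\%),
    \qquad
    \delta_{C14} = \frac{\lvert -15.38 - 1.48 \rvert}{\lvert 1.48 \rvert} \approx 11.39 \;(1139\%),
    \]

within about 1% of the reported 368% and 1126%; the small residual is consistent with rounding of the published Tg C totals.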
Kim, Joongheon; Kim, Jong-Kook
2016-01-01
This paper addresses the computation procedures for estimating the impact of interference on 60 GHz IEEE 802.11ad uplink access used to construct a visual big-data database from randomly deployed surveillance camera sensing devices. The large-scale visual information acquired from the surveillance camera devices will be used to organize the big-data database, so this estimation is essential for constructing a centralized cloud-enabled surveillance database. The performance estimation study captures the interference impacts on the target cloud access points from the multiple interference components generated by 60 GHz wireless transmissions from nearby surveillance camera devices to their associated cloud access points. Under this uplink interference scenario, the interference impact on the main wireless transmission from a target surveillance camera device to its associated target cloud access point is measured and estimated for a number of settings, taking into account 60 GHz radiation characteristics and antenna radiation pattern models.
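As an illustration of the kind of uplink interference computation described (not the paper's actual procedure), a sketch estimating the SINR at a target cloud access point under free-space 60 GHz path loss and a crude idealized antenna pattern; every parameter value is hypothetical:

    import numpy as np

    C_LIGHT = 299792458.0
    FREQ_HZ = 60e9

    def fspl_db(d_m):
        """Free-space path loss at 60 GHz, in dB."""
        return 20 * np.log10(4 * np.pi * d_m * FREQ_HZ / C_LIGHT)

    def gain_db(angle_rad, beamwidth_rad=np.radians(15.0)):
        """Idealized pattern: 20 dBi in the main lobe, -10 dBi side-lobe floor."""
        return np.where(np.abs(angle_rad) <= beamwidth_rad / 2, 20.0, -10.0)

    tx_dbm, noise_dbm = 10.0, -65.0

    # Target camera at 20 m on boresight; three interfering cameras off-axis.
    sig_dbm = tx_dbm + float(gain_db(0.0)) - fspl_db(20.0)
    intf_dbm = (tx_dbm + gain_db(np.radians([40.0, 25.0, 60.0]))
                - fspl_db(np.array([15.0, 30.0, 50.0])))

    intf_mw = np.sum(10 ** (intf_dbm / 10))
    sinr_db = sig_dbm - 10 * np.log10(intf_mw + 10 ** (noise_dbm / 10))
    print(f"estimated uplink SINR: {sinr_db:.1f} dB")

Aggregating many such random camera placements would yield interference-impact statistics of the kind the paper estimates.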
Initial Flight Test Evaluation of the F-15 ACTIVE Axisymmetric Vectoring Nozzle Performance
NASA Technical Reports Server (NTRS)
Orme, John S.; Hathaway, Ross; Ferguson, Michael D.
1998-01-01
A full envelope database of thrust-vectoring axisymmetric nozzle performance for the Pratt & Whitney Pitch/Yaw Balance Beam Nozzle (P/YBBN) is being developed using the F-15 Advanced Control Technology for Integrated Vehicles (ACTIVE) aircraft. At this time, flight research has been completed for steady-state pitch vector angles up to 20 degrees at an altitude of 30,000 ft from low power settings to maximum afterburner power. The nozzle performance database includes vector forces, internal nozzle pressures, and temperatures, all of which can be used for regression analysis modeling. The database was used to substantiate a set of nozzle performance data from wind tunnel testing and computational fluid dynamic analyses. Findings from initial flight research at Mach 0.9 and 1.2 are presented in this paper. The results show that vector efficiency is strongly influenced by power setting. A significant discrepancy in nozzle performance has been discovered between predicted and measured results during vectoring.
Very large database of lipids: rationale and design.
Martin, Seth S; Blaha, Michael J; Toth, Peter P; Joshi, Parag H; McEvoy, John W; Ahmed, Haitham M; Elshazly, Mohamed B; Swiger, Kristopher J; Michos, Erin D; Kwiterovich, Peter O; Kulkarni, Krishnaji R; Chimera, Joseph; Cannon, Christopher P; Blumenthal, Roger S; Jones, Steven R
2013-11-01
Blood lipids have major cardiovascular and public health implications. Lipid-lowering drugs are prescribed based in part on categorization of patients into normal or abnormal lipid metabolism, yet relatively little emphasis has been placed on: (1) the accuracy of current lipid measures used in clinical practice, (2) the reliability of current categorizations of dyslipidemia states, and (3) the relationship of advanced lipid characterization to other cardiovascular disease biomarkers. To these ends, we developed the Very Large Database of Lipids (NCT01698489), an ongoing database protocol that harnesses deidentified data from the daily operations of a commercial lipid laboratory. The database includes individuals who were referred for clinical purposes for a Vertical Auto Profile (Atherotech Inc., Birmingham, AL), which directly measures cholesterol concentrations of low-density lipoprotein, very low-density lipoprotein, intermediate-density lipoprotein, high-density lipoprotein, their subclasses, and lipoprotein(a). Individual Very Large Database of Lipids studies, ranging from studies of measurement accuracy, to dyslipidemia categorization, to biomarker associations, to characterization of rare lipid disorders, are investigator-initiated and utilize peer-reviewed statistical analysis plans to address a priori hypotheses/aims. In the first database harvest (Very Large Database of Lipids 1.0) from 2009 to 2011, there were 1,340,614 adult and 10,294 pediatric patients; the adult sample had a median age of 59 years (interquartile range, 49-70 years) with even representation by sex. Lipid distributions closely matched those from the population-representative National Health and Nutrition Examination Survey. The second harvest of the database (Very Large Database of Lipids 2.0) is underway. Overall, the Very Large Database of Lipids database provides an opportunity for collaboration and new knowledge generation through careful examination of granular lipid data on a large scale. © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Khan, A.; Shankland, T. J.
2012-02-01
This paper applies electromagnetic sounding methods for Earth's mantle to constrain its thermal state, chemical composition, and "water" content. We consider long-period inductive response functions in the form of C-responses from four stations distributed across the Earth (Europe, North America, Asia, and Australia), covering a period range from 3.9 to 95.2 days with sensitivity to ~1200 km depth. We invert C-responses directly for thermo-chemical state using a self-consistent thermodynamic method that computes phase equilibria as functions of pressure, temperature, and composition (in the Na2O-CaO-FeO-MgO-Al2O3-SiO2 model system). Computed mineral modes are combined with recent laboratory-based electrical conductivity models from independent experimental research groups (Yoshino (2010) and Karato (2011)) to compute the bulk conductivity structure beneath each of the four stations from which C-responses are estimated. To reliably allocate water between the various mineral phases, we include laboratory-measured water partition coefficients for major upper mantle and transition zone minerals. This scheme is interfaced with a sampling-based algorithm to solve the resulting non-linear inverse problem. The approach has two advantages: (1) it anchors temperatures, compositions, electrical conductivities, and discontinuities in laboratory-based forward models, and (2) at the same time it permits the use of geophysical inverse methods to optimize conductivity profiles to match geophysical data. The results show lateral variations in upper mantle temperatures beneath the four stations that appear to persist throughout the upper mantle and parts of the transition zone. Calculated mantle temperatures at 410 and 660 km depth lie in the ranges 1250-1650 °C and 1500-1750 °C, respectively, and generally agree with the experimentally determined temperatures at which the measured phase reactions olivine → β-spinel and γ-spinel → ferropericlase + perovskite occur. The retrieved conductivity structures beneath the various stations tend to follow the trends observed for temperature, with the strongest lateral variations in the uppermost mantle; for depths > 300 km, conductivities appear to depend less on the particular conductivity database. Conductivities at 410 km and at 660 km depth are found to agree overall with purely geophysically derived global and semi-global one-dimensional conductivity models. Both electrical conductivity databases point to < 0.01 wt.% H2O in the upper mantle. For transition zone minerals, results from the laboratory database of Yoshino (2010) suggest that a much higher water content (up to 2 wt.% H2O) is required than in the other database (Karato, 2011), which favors a relatively "dry" transition zone (< 0.01 wt.% H2O). Incorporating laboratory measurements of hydrous silicate melting relations and available conductivity data allows us to consider the possibility of hydration melting and a high-conductivity melt layer above the 410-km discontinuity. The latter appears to be (1) regionally localized and (2) principally a feature of the Yoshino (2010) database. Further, there is evidence of lateral heterogeneity: the mantle beneath southwestern North America and central China appears "wetter" than that beneath central Europe or Australia.
Gruginskie, Lúcia Adriana Dos Santos; Vaccaro, Guilherme Luís Roehe
2018-01-01
The quality of a country's judicial system can be gauged by the overall duration of lawsuits, or the lead time. When the lead time is excessive, a country's economy can be affected, leading to the adoption of measures such as the creation of the Saturn Center in Europe. Although there are performance indicators to measure the lead time of lawsuits, the analysis and fitting of prediction models are still underdeveloped themes in the literature. To contribute to this subject, this article compares different prediction models according to their accuracy, sensitivity, specificity, precision, and F1 measure. The database used was from TRF4 (the Tribunal Regional Federal da 4a Região), a federal court in southern Brazil, and corresponds to the 2nd Instance civil lawsuits completed in 2016. The models were fitted using support vector machine, naive Bayes, random forest, and neural network approaches with categorical predictor variables. The lead time of the 2nd Instance judgment was selected as the response variable, measured in days and categorized in bands. The comparison among the models showed that the support vector machine and random forest approaches produced results superior to those of the other models. The models were evaluated using k-fold cross-validation.
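As an illustration of the comparison protocol (not the authors' code or the TRF4 data), a sketch fitting the four model families with k-fold cross-validation on synthetic stand-in data; in the real study the categorical predictors would be one-hot encoded and the response is the lead-time band:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC

    # Synthetic stand-in: 3 lead-time bands, 12 (encoded) predictors.
    X, y = make_classification(n_samples=1000, n_features=12, n_informative=6,
                               n_classes=3, random_state=0)

    models = {
        "SVM": SVC(),
        "naive Bayes": GaussianNB(),
        "random forest": RandomForestClassifier(random_state=0),
        "neural network": MLPClassifier(max_iter=2000, random_state=0),
    }

    # 5-fold cross-validation; macro F1 averages the F1 measure across bands.
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5, scoring="f1_macro")
        print(f"{name:15s} F1 = {scores.mean():.3f} +/- {scores.std():.3f}")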
Human Thermal Model Evaluation Using the JSC Human Thermal Database
NASA Technical Reports Server (NTRS)
Bue, Grant; Makinen, Janice; Cognata, Thomas
2012-01-01
Human thermal modeling has considerable long-term utility to human space flight. Such models provide a tool to predict crew survivability in support of vehicle design and to evaluate crew response in untested space environments. It is to the benefit of any such model not only to collect relevant experimental data to correlate it against, but also to maintain an experimental standard or benchmark for future development in a readily and rapidly searchable, software-accessible format. The human thermal database project is intended to do just that: to collect relevant data from literature and experimentation and to store the data in a database structure for immediate and future use as a benchmark against which to judge human thermal models, in identifying model strengths and weaknesses, to support model development and improve correlation, and to statistically quantify a model's predictive quality. The human thermal database developed at the Johnson Space Center (JSC) is intended to evaluate a set of widely used human thermal models. This set includes the Wissler human thermal model, a model that has been widely used to predict the human thermoregulatory response to a variety of cold and hot environments. These models are statistically compared to the current database, which contains experiments on human subjects primarily in air, drawn from a literature survey spanning 1953 to 2004 and from a suited experiment recently performed by the authors, for a quantitative study of the relative strength and predictive quality of the models.
Teaching Database Modeling and Design: Areas of Confusion and Helpful Hints
ERIC Educational Resources Information Center
Philip, George C.
2007-01-01
This paper identifies several areas of database modeling and design that have been problematic for students and even are likely to confuse faculty. Major contributing factors are the lack of clarity and inaccuracies that persist in the presentation of some basic database concepts in textbooks. The paper analyzes the problems and discusses ways to…
MaizeGDB update: New tools, data, and interface for the maize model organism database
USDA-ARS?s Scientific Manuscript database
MaizeGDB is a highly curated, community-oriented database and informatics service to researchers focused on the crop plant and model organism Zea mays ssp. mays. Although some form of the maize community database has existed over the last 25 years, there have only been two major releases. In 1991, ...
Exploring human disease using the Rat Genome Database.
Shimoyama, Mary; Laulederkind, Stanley J F; De Pons, Jeff; Nigam, Rajni; Smith, Jennifer R; Tutaj, Marek; Petri, Victoria; Hayman, G Thomas; Wang, Shur-Jen; Ghiasvand, Omid; Thota, Jyothi; Dwinell, Melinda R
2016-10-01
Rattus norvegicus, the laboratory rat, has been a crucial model for studies of the environmental and genetic factors associated with human diseases for over 150 years. It is the primary model organism for toxicology and pharmacology studies, and has features that make it the model of choice in many complex-disease studies. Since 1999, the Rat Genome Database (RGD; http://rgd.mcw.edu) has been the premier resource for genomic, genetic, phenotype and strain data for the laboratory rat. The primary role of RGD is to curate rat data and validate orthologous relationships with human and mouse genes, and make these data available for incorporation into other major databases such as NCBI, Ensembl and UniProt. RGD also provides official nomenclature for rat genes, quantitative trait loci, strains and genetic markers, as well as unique identifiers. The RGD team adds enormous value to these basic data elements through functional and disease annotations, the analysis and visual presentation of pathways, and the integration of phenotype measurement data for strains used as disease models. Because much of the rat research community focuses on understanding human diseases, RGD provides a number of datasets and software tools that allow users to easily explore and make disease-related connections among these datasets. RGD also provides comprehensive human and mouse data for comparative purposes, illustrating the value of the rat in translational research. This article introduces RGD and its suite of tools and datasets to researchers - within and beyond the rat community - who are particularly interested in leveraging rat-based insights to understand human diseases. © 2016. Published by The Company of Biologists Ltd.
IDEOS: Fitting Infrared Spectra from Dusty Galaxies
NASA Astrophysics Data System (ADS)
Viola, Vincent; Rupke, D.
2014-01-01
We fit models to heavily obscured infrared spectra taken by the Spitzer Space Telescope and prepare them for cataloguing in the Infrared Database of Extragalactic Observables from Spitzer (IDEOS). When completed, IDEOS will contain homogeneously measured mid-infrared spectroscopic observables of more than 4200 galaxies beyond the Local Group. The software we use, QUESTFit, models the spectra using up to three extincted blackbodies (including silicate, water ice, and hydrocarbon absorption) and PAH templates. We present results from a sample of the approximately 200 heavily obscured spectra that will be present in IDEOS.
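As an illustration of one ingredient of such fits (not QUESTFit itself, which combines up to three extincted blackbodies, ice and hydrocarbon absorption, and PAH templates), a sketch fitting a single blackbody screened by a toy Gaussian 9.7 μm silicate opacity profile; all values are invented:

    import numpy as np
    from scipy.optimize import curve_fit

    H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

    def extincted_bb(wave_um, temp_k, scale, tau_sil):
        """Blackbody at temp_k behind a crude 9.7-um silicate absorption feature."""
        lam = wave_um * 1e-6
        bb = 2 * H * C**2 / lam**5 / (np.exp(H * C / (lam * KB * temp_k)) - 1.0)
        tau = tau_sil * np.exp(-0.5 * ((wave_um - 9.7) / 1.0) ** 2)
        return scale * bb * np.exp(-tau)

    wave = np.linspace(5.0, 35.0, 200)    # Spitzer IRS-like wavelength range, um
    rng = np.random.default_rng(2)
    obs = (extincted_bb(wave, 180.0, 1e-12, 2.5)
           * (1.0 + 0.03 * rng.normal(size=wave.size)))

    popt, _ = curve_fit(extincted_bb, wave, obs, p0=(150.0, 1e-12, 1.0))
    print(f"T = {popt[0]:.0f} K, tau_9.7 = {popt[2]:.2f}")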
Carrara, Marta; Carozzi, Luca; Moss, Travis J; de Pasquale, Marco; Cerutti, Sergio; Lake, Douglas E; Moorman, J Randall; Ferrario, Manuela
2015-01-01
Identification of atrial fibrillation (AF) is a clinical imperative. Heartbeat interval time series are increasingly available from personal monitors, allowing new opportunity for AF diagnosis. Previously, we devised numerical algorithms for identification of normal sinus rhythm (NSR), AF, and SR with frequent ectopy using dynamical measures of heart rate. Here, we wished to validate them in the canonical MIT-BIH ECG databases. We tested the algorithms on the NSR, AF, and arrhythmia databases. When the databases were combined, the positive predictive value of the new algorithms exceeded 95% for NSR and AF, and was 40% for SR with ectopy. Further, the dynamical measures did not distinguish atrial from ventricular ectopy. Inspection of individual 24-hour records showed good correlation of observed and predicted rhythms. Heart rate dynamical measures are effective ingredients in numerical algorithms to classify cardiac rhythm from the heartbeat interval time series alone. Copyright © 2015 Elsevier Inc. All rights reserved.
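As an illustration of classifying rhythm from the interval time series alone (not the authors' published algorithms), a sketch using two crude dynamical measures with invented thresholds:

    import numpy as np

    def classify_rhythm(rr_s):
        """Toy rule: AF shows high RR variability with little serial correlation."""
        cv = np.std(rr_s) / np.mean(rr_s)            # coefficient of variation
        r1 = np.corrcoef(rr_s[:-1], rr_s[1:])[0, 1]  # lag-1 autocorrelation
        return "AF" if (cv > 0.15 and r1 < 0.3) else "NSR"

    rng = np.random.default_rng(3)
    nsr = 0.8 + 0.02 * np.sin(np.arange(300) / 10.0) + rng.normal(0, 0.01, 300)
    af = rng.uniform(0.4, 1.2, 300)                  # irregularly irregular intervals

    print(classify_rhythm(nsr), classify_rhythm(af))  # expect: NSR AF

Over a labeled database, the positive predictive value quoted in the abstract is then PPV = true positives / (true positives + false positives) for each rhythm label.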
A modeling approach for aerosol optical depth analysis during forest fire events
NASA Astrophysics Data System (ADS)
Aube, Martin P.; O'Neill, Normand T.; Royer, Alain; Lavoue, David
2004-10-01
Measurements of aerosol optical depth (AOD) are important indicators of aerosol particle behavior. Up to now, the two standard techniques used for retrieving AOD are: (i) sun photometry, which provides measurements of high temporal frequency and sparse spatial frequency, and (ii) satellite-based approaches such as DDV (dense dark vegetation) inversion algorithms, which yield AOD over dark targets in remotely sensed imagery. Although the latter techniques allow AOD retrieval over appreciable spatial domains, the irregular spatial pattern of dark targets and the typically low repeat frequencies of imaging satellites exclude the acquisition of AOD databases on a continuous spatio-temporal basis. We attempt to fill gaps in spatio-temporal AOD measurements using a new assimilation methodology that links AOD measurements to the predictions of a particulate matter transport model. This modelling package (AODSEM V2.0, for Aerosol Optical Depth Spatio-temporal Evolution Model) uses a size- and aerosol-type-segregated semi-Lagrangian trajectory algorithm driven by analysed meteorological data. Its novelty resides in the fact that the model evolution may be tied to both ground-based and satellite-level AOD measurements, and all physical processes have been optimized to track this important and robust parameter. We applied this methodology to a significant smoke event that occurred over the eastern part of North America in July 2002.
An Experimental and Numerical Study of a Supersonic Burner for CFD Model Development
NASA Technical Reports Server (NTRS)
Magnotti, G.; Cutler, A. D.
2008-01-01
A laboratory-scale supersonic burner has been developed for validation of computational fluid dynamics models. Detailed numerical simulations were performed for the flow inside the combustor and coupled with finite element thermal analysis to obtain more accurate outflow conditions. A database of nozzle exit profiles for a wide range of conditions of interest was generated to be used as boundary conditions for simulation of the external jet, or for validation of non-intrusive measurement techniques. A set of experiments was performed to validate the numerical results. In particular, temperature measurements obtained using an infrared camera show that the computed heat transfer was larger than the measured value. Relaminarization in the convergent part of the nozzle was found to be responsible for this discrepancy, and further numerical simulations supported this conclusion.
Modeling Powered Aerodynamics for the Orion Launch Abort Vehicle Aerodynamic Database
NASA Technical Reports Server (NTRS)
Chan, David T.; Walker, Eric L.; Robinson, Philip E.; Wilson, Thomas M.
2011-01-01
Modeling the aerodynamics of the Orion Launch Abort Vehicle (LAV) has presented many technical challenges to the developers of the Orion aerodynamic database. During a launch abort event, the aerodynamic environment around the LAV is very complex as multiple solid rocket plumes interact with each other and the vehicle. It is further complicated by vehicle separation events such as between the LAV and the launch vehicle stack or between the launch abort tower and the crew module. The aerodynamic database for the LAV was developed mainly from wind tunnel tests involving powered jet simulations of the rocket exhaust plumes, supported by computational fluid dynamic simulations. However, limitations in both methods have made it difficult to properly capture the aerodynamics of the LAV in experimental and numerical simulations. These limitations have also influenced decisions regarding the modeling and structure of the aerodynamic database for the LAV and led to compromises and creative solutions. Two database modeling approaches are presented in this paper (incremental aerodynamics and total aerodynamics), with examples showing strengths and weaknesses of each approach. In addition, the unique problems presented to the database developers by the large data space required for modeling a launch abort event illustrate the complexities of working with multi-dimensional data.
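As an illustration of the incremental-aerodynamics approach (a two-variable sketch, not the structure of the actual multi-dimensional LAV database), a power-off baseline table and a plume-increment table are interpolated separately and summed; grids and values are hypothetical:

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    mach = np.linspace(0.3, 1.2, 10)
    alpha = np.linspace(-10.0, 10.0, 9)                # angle of attack, deg
    M, A = np.meshgrid(mach, alpha, indexing="ij")

    ca_base = 0.5 + 0.1 * M + 0.001 * A**2             # hypothetical power-off axial force
    dca_jet = -0.15 * np.exp(-((M - 0.9) / 0.2) ** 2)  # hypothetical plume increment

    base = RegularGridInterpolator((mach, alpha), ca_base)
    incr = RegularGridInterpolator((mach, alpha), dca_jet)

    def ca_total(m, a):
        """Total coefficient = power-off baseline + jet-on increment."""
        pt = np.array([[m, a]])
        return (base(pt) + incr(pt)).item()

    print(f"CA at M=0.95, alpha=4 deg: {ca_total(0.95, 4.0):.3f}")

In the alternative total-aerodynamics approach, the jet-on coefficient would be tabulated directly rather than as a baseline plus increment.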
Aerosol Remote Sensing from AERONET, the Ground-Based Satellite
NASA Technical Reports Server (NTRS)
Holben, Brent N.
2012-01-01
Atmospheric particles, including mineral dust, biomass burning smoke, pollution from carbonaceous aerosols and sulfates, and sea salt, impact air quality and climate. The Aerosol Robotic Network (AERONET) program, established in the early 1990s, is a federation of ground-based remote sensing aerosol networks of Sun/sky radiometers distributed around the world, which provides a long-term, continuous, and readily accessible public domain database of aerosol optical (e.g., aerosol optical depth) and microphysical (e.g., aerosol volume size distribution) properties for aerosol characterization, validation of satellite retrievals, and synergism with Earth science databases. Climatological aerosol properties will be presented at key worldwide locations exhibiting discrete dominant aerosol types. Further, results from AERONET's temporary mesoscale network campaigns (e.g., UAE2, TIGERZ, DRAGON-USA) will be discussed; these campaigns attempt to quantify the spatial and temporal variability of aerosol properties, establish validation of ground-based aerosol retrievals using aircraft profile measurements, and measure aerosol properties on spatial scales compatible with satellite retrievals and aerosol transport models, allowing for more robust validation.
Mineau, Geraldine P; Garibotti, Gilda; Kerber, Richard
2014-01-01
We examine how key early family circumstances affect mortality risks decades later. Early life conditions are measured by parental mortality, parental fertility (e.g., offspring sibship size, parental age at offspring birth), religious upbringing, and parental socioeconomic status. Prior to these early life conditions are familial and genetic factors that affect life-span. Accordingly, we consider the role of parental and familial longevity in adult mortality risks. We analyze the large Utah Population Database, which contains a vast amount of genealogical and other vital/health data comprising full life histories of individuals and hundreds of their relatives. To control for unobserved heterogeneity, we analyze sib-pair data for 12,000 sib-pairs using frailty models. We found modest effects of key childhood conditions (birth order, sibship size, parental religiosity, parental SES, and parental death in childhood). Our measures of familial aggregation of longevity were large and suggest an alternative view of early life conditions. PMID:19278766
Imprecision and Uncertainty in the UFO Database Model.
ERIC Educational Resources Information Center
Van Gyseghem, Nancy; De Caluwe, Rita
1998-01-01
Discusses how imprecision and uncertainty are dealt with in the UFO (Uncertainty and Fuzziness in an Object-oriented) database model. Such information is expressed by means of possibility distributions, and modeled by means of the proposed concept of "role objects." The role objects model uncertain, tentative information about objects,…
On the Reliability of Photovoltaic Short-Circuit Current Temperature Coefficient Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osterwald, Carl R.; Campanelli, Mark; Kelly, George J.
2015-06-14
The changes in short-circuit current of photovoltaic (PV) cells and modules with temperature are routinely modeled through a single parameter, the temperature coefficient (TC). This parameter is vital for the translation equations used in system sizing, yet in practice is very difficult to measure. In this paper, we discuss these inherent problems and demonstrate how they can introduce unacceptably large errors in PV ratings. A method for quantifying the spectral dependence of TCs is derived, and then used to demonstrate that databases of module parameters commonly contain values that are physically unreasonable. Possible ways to reduce measurement errors are also discussed.
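As an illustration of the single-parameter translation the TC feeds (a sketch; the alpha value and currents are invented, and the paper's point is that spectral dependence makes alpha itself uncertain):

    ALPHA = 0.0005   # hypothetical relative Isc temperature coefficient, 1/degC
    T_STC = 25.0

    def isc_at_stc(isc_meas_a, t_meas_c, alpha=ALPHA):
        """Translate a measured Isc to STC via Isc(T) = Isc_STC * (1 + alpha*(T - 25))."""
        return isc_meas_a / (1.0 + alpha * (t_meas_c - T_STC))

    print(f"Isc at STC: {isc_at_stc(9.13, 48.0):.3f} A")

An error in alpha maps directly into the translated rating, which is why physically unreasonable database values matter.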
Turbulent Mixing of Primary and Secondary Flow Streams in a Rocket-Based Combined Cycle Engine
NASA Technical Reports Server (NTRS)
Cramer, J. M.; Greene, M. U.; Pal, S.; Santoro, R. J.; Turner, Jim (Technical Monitor)
2002-01-01
This viewgraph presentation gives an overview of the turbulent mixing of primary and secondary flow streams in a rocket-based combined cycle (RBCC) engine. A significant RBCC ejector mode database has been generated, detailing single and twin thruster configurations and global and local measurements. On-going analysis and correlation efforts include Marshall Space Flight Center computational fluid dynamics modeling and turbulent shear layer analysis. Potential follow-on activities include detailed measurements of air flow static pressure and velocity profiles, investigations into other thruster spacing configurations, performing a fundamental shear layer mixing study, and demonstrating single-shot Raman measurements.
Fujimura, Tomomi; Umemura, Hiroyuki
2018-01-15
The present study describes the development and validation of a facial expression database comprising five different horizontal face angles in dynamic and static presentations. The database includes twelve expression types portrayed by eight Japanese models. This database was inspired by the dimensional and categorical model of emotions: surprise, fear, sadness, anger with open mouth, anger with closed mouth, disgust with open mouth, disgust with closed mouth, excitement, happiness, relaxation, sleepiness, and neutral (static only). The expressions were validated using emotion classification and Affect Grid rating tasks [Russell, Weiss, & Mendelsohn, 1989. Affect Grid: A single-item scale of pleasure and arousal. Journal of Personality and Social Psychology, 57(3), 493-502]. The results indicate that most of the expressions were recognised as the intended emotions and could systematically represent affective valence and arousal. Furthermore, face angle and facial motion information influenced emotion classification as well as valence and arousal ratings. Our database will be available online at the following URL: https://www.dh.aist.go.jp/database/face2017/ .
NASA Astrophysics Data System (ADS)
Kim, Duk-hyun; Lee, Hyoung-Jin
2018-04-01
A study of an efficient aerodynamic database modeling method was conducted. Creating the database using the periodicity and symmetry characteristics of missile aerodynamic coefficients was investigated to minimize the number of wind tunnel test cases. In addition, the study examined how to generate the aerodynamic database when the periodicity changes due to the installation of a protuberance, and how to conduct a zero calibration. Depending on the missile configuration, the required number of test cases changes, and there exist tests that can be omitted. A database of aerodynamic coefficients over control surface deflection angles can be constructed using a phase shift. The validity of the modeling method was demonstrated by confirming that aerodynamic coefficients calculated with it agreed with wind tunnel test results.
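As an illustration of the periodicity idea (a sketch assuming a cruciform, 90-degree-periodic configuration; the synthetic coefficient and angles are invented and do not reproduce the paper's procedure):

    import numpy as np

    PERIOD_DEG = 90.0   # rotational symmetry of an assumed cruciform missile

    phi_tested = np.linspace(0.0, 90.0, 19)                  # tested roll angles
    cn_tested = 0.02 * np.sin(np.radians(4.0 * phi_tested))  # 90-deg periodic by construction

    def cn(phi_deg):
        """Fold any roll angle into the tested sector and interpolate."""
        return np.interp(phi_deg % PERIOD_DEG, phi_tested, cn_tested)

    print(f"CN(250 deg) reuses CN({250 % PERIOD_DEG:.0f} deg): {cn(250.0):.4f}")

A phase shift of the same kind underlies the control surface deflection database mentioned in the abstract.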
Tranchard, Pauline; Samyn, Fabienne; Duquesne, Sophie; Estèbe, Bruno; Bourbigot, Serge
2017-05-04
Thermophysical properties of a carbon-reinforced epoxy composite laminate (T700/M21 composite for aircraft structures) were evaluated using different innovative characterisation methods. Thermogravimetric analysis (TGA), simultaneous thermal analysis (STA), laser flash analysis (LFA), and Fourier transform infrared (FTIR) analysis were used to measure the thermal decomposition, the specific heat capacity, the anisotropic thermal conductivity of the composite, the heats of decomposition, and the specific heat capacity of the released gases. This provides the input data needed to feed a three-dimensional (3D) model that yields the temperature profile and the mass loss during well-defined fire scenarios (the model is presented in Part II of this paper). The measurements were optimised to obtain accurate data. The data also permit the creation of a public database on an aeronautical carbon fibre/epoxy composite for fire safety engineering.
Liu, Zhijian; Li, Hao; Cao, Guoqing
2017-01-01
Indoor airborne culturable bacteria are sometimes harmful to human health. Therefore, a quick estimation of their concentration is particularly necessary. However, measuring indoor microorganism concentrations (e.g., bacteria) usually requires a large amount of time, economic cost, and manpower. In this paper, we aim to provide a quick solution: using knowledge-based machine learning to estimate the concentration of indoor airborne culturable bacteria from several measurable indoor environmental indicators, including indoor particulate matter (PM2.5 and PM10), temperature, relative humidity, and CO2 concentration. Our results show that a general regression neural network (GRNN) model can provide a quick and decent estimation, based on model training and testing using an experimental database with 249 data groups. PMID:28758941
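As an illustration, a minimal GRNN, which amounts to Gaussian-kernel (Nadaraya-Watson) regression over the training set; the synthetic inputs stand in for the five measured indicators, and the smoothing parameter sigma is arbitrary:

    import numpy as np

    def grnn_predict(X_train, y_train, X_query, sigma=0.5):
        """Gaussian-kernel weighted average of training targets."""
        d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        return (w @ y_train) / w.sum(axis=1)

    rng = np.random.default_rng(4)
    X = rng.normal(size=(249, 5))   # 249 standardized samples, 5 indicators
    y = 2.0 + X[:, 0] + 0.5 * X[:, 2] + rng.normal(0.0, 0.1, 249)

    print(grnn_predict(X, y, rng.normal(size=(3, 5))))

Because prediction is a single weighted average over stored samples, a GRNN needs no iterative training, which suits the quick-estimation goal.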
A knowledge based search tool for performance measures in health care systems.
Beyan, Oya D; Baykal, Nazife
2012-02-01
Performance measurement is vital for improving health care systems. However, we are still far from having accepted performance measurement models. Researchers and developers are seeking comparable performance indicators. We developed an intelligent search tool to identify appropriate measures for specific requirements by matching diverse care settings. We reviewed the literature and analyzed 229 performance measurement studies published after 2000. These studies were evaluated with an original theoretical framework and stored in the database. A semantic network was designed for representing domain knowledge and supporting reasoning. We applied knowledge-based decision support techniques to cope with uncertainty problems. As a result, we designed a tool that simplifies the performance indicator search process and provides the most relevant indicators by employing knowledge-based systems.