NASA Astrophysics Data System (ADS)
Zhang, Shuying; Wu, Xuquan; Li, Deshan; Xu, Yadong; Song, Shulin
2017-06-01
Based on the input and output data of sandstone reservoirs in the Xinjiang oilfield, the SBM-Undesirable model is used to study the technical efficiency of each block. Results show that using the SBM-Undesirable model to evaluate efficiency avoids the defects caused by the radial and angular assumptions of traditional DEA models, improving the accuracy of the efficiency evaluation. By analyzing the projections of the oil blocks, we find that each block suffers from the negative external effects of input redundancy, desirable-output deficiency, and undesirable output, and that there are large differences in production efficiency across blocks. The way to improve the input-output efficiency of the oilfield is to optimize the allocation of resources, reduce the undesirable outputs, and increase the desirable outputs.
Obs4MIPS: Satellite Observations for Model Evaluation
NASA Astrophysics Data System (ADS)
Ferraro, R.; Waliser, D. E.; Gleckler, P. J.
2017-12-01
This poster will review the current status of the obs4MIPs project, whose purpose is to provide a limited collection of well-established and documented datasets for comparison with Earth system models (https://www.earthsystemcog.org/projects/obs4mips/). These datasets have been reformatted to correspond with the CMIP5 model output requirements, and include technical documentation specifically targeted for their use in model output evaluation. The project holdings now exceed 120 datasets with observations that directly correspond to CMIP5 model output variables, with new additions in response to the CMIP6 experiments. With the growth in climate model output data volume, it is increasingly difficult to bring the model output and the observations together to do evaluations. The positioning of the obs4MIPs datasets within the Earth System Grid Federation (ESGF) allows for the use of currently available and planned online tools within the ESGF to perform analysis using model output and observational datasets without necessarily downloading everything to a local workstation. This past year, obs4MIPs has updated its submission guidelines to closely align with changes in the CMIP6 experiments, and is implementing additional indicators and ancillary data to allow users to more easily determine the efficacy of an obs4MIPs dataset for specific evaluation purposes. This poster will present the new guidelines and indicators, and update the list of current obs4MIPs holdings and their connection to the ESGF evaluation and analysis tools currently available and being developed for the CMIP6 experiments.
Alpha1 LASSO data bundles Lamont, OK
Gustafson, William Jr; Vogelmann, Andrew; Endo, Satoshi; Toto, Tami; Xiao, Heng; Li, Zhijin; Cheng, Xiaoping; Krishna, Bhargavi (ORCID: 0000-0001-8828-528X)
2016-08-03
A data bundle is a unified package consisting of LASSO LES input and output, observations, evaluation diagnostics, and model skill scores. LES input includes model configuration information and forcing data. LES output includes profile statistics and full domain fields of cloud and environmental variables. Model evaluation data consists of LES output and ARM observations co-registered on the same grid and sampling frequency. Model performance is quantified by skill scores and diagnostics in terms of cloud and environmental variables.
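The specific skill-score formulation used in the LASSO bundles is not given here; a common generic choice is an RMSE-based score measured relative to a reference prediction. A minimal sketch under that assumption (the function names and the reference-based formulation are illustrative, not the LASSO definitions):

```python
import numpy as np

def rmse(pred, obs):
    """Root-mean-square error between two co-registered series."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def skill_score(model, obs, reference):
    """1 - RMSE(model)/RMSE(reference): 1 = perfect, 0 = no better than reference."""
    return 1.0 - rmse(model, obs) / rmse(reference, obs)
```

A score below zero then flags a simulation that performs worse than the chosen reference (e.g., persistence or climatology).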
Evaluation of Supply Chain Efficiency Based on a Novel Network of Data Envelopment Analysis Model
NASA Astrophysics Data System (ADS)
Fu, Li Fang; Meng, Jun; Liu, Ying
2015-12-01
Performance evaluation of supply chains (SCs) is a vital topic in SC management and an inherently complex problem, with multilayered internal linkages and activities of multiple entities. Recently, various Network Data Envelopment Analysis (NDEA) models, which opened the “black box” of conventional DEA, were developed and applied to evaluate complex SCs with a multilayer network structure. However, most of them are input- or output-oriented models which cannot take into consideration nonproportional changes of inputs and outputs simultaneously. This paper extends the slacks-based measure (SBM) model to a nonradial, nonoriented network model, named U-NSBM, with the presence of undesirable outputs in the SC. A numerical example is presented to demonstrate the applicability of the model in quantifying the efficiency and ranking supply chain performance. By comparing with the CCR and U-SBM models, it is shown that the proposed model has higher distinguishing ability and gives feasible solutions in the presence of undesirable outputs. Meanwhile, it provides more insights for decision makers about the sources of inefficiency, as well as guidance to improve SC performance.
A two-stage DEA approach for environmental efficiency measurement.
Song, Malin; Wang, Shuhong; Liu, Wei
2014-05-01
The slacks-based measure (SBM) model based on constant returns to scale has achieved some good results in addressing undesirable outputs, such as waste water and waste gas, in measuring environmental efficiency. However, the traditional SBM model cannot deal with the scenario in which desirable outputs are constant. Based on the axiomatic theory of productivity, this paper carries out a systematic study of the SBM model considering undesirable outputs, and further expands the SBM model from the perspective of network analysis. The new model can not only perform efficiency evaluation considering undesirable outputs, but also treat desirable and undesirable outputs separately. The latter advantage resolves the "dependence" problem of outputs, that is, that we cannot increase the desirable outputs without producing any undesirable outputs. The following illustration shows that the efficiency values obtained by the two-stage approach are smaller than those obtained by the traditional SBM model. Our approach provides a more profound analysis of how to improve the environmental efficiency of the decision making units.
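For readers unfamiliar with the underlying optimization, Tone-style SBM efficiency with undesirable outputs can be computed as a linear program after the Charnes-Cooper transformation of the fractional objective. The sketch below is a generic single-stage illustration under constant returns to scale, not the authors' two-stage formulation, and the toy data are hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

def sbm_undesirable(X, Yg, Yb, k):
    """SBM efficiency of DMU k with undesirable outputs (CRS, non-oriented).

    X: (m, n) inputs; Yg: (s1, n) desirable outputs; Yb: (s2, n) undesirable
    outputs, columns indexing the n DMUs. Uses the Charnes-Cooper
    linearization of the fractional SBM program.
    """
    m, n = X.shape
    s1, s2 = Yg.shape[0], Yb.shape[0]
    x0, yg0, yb0 = X[:, k], Yg[:, k], Yb[:, k]
    nv = 1 + n + m + s1 + s2            # variables z = [t, lambda, s-, sg, sb]

    c = np.zeros(nv)
    c[0] = 1.0                           # objective: t - (1/m) sum s-_i / x0_i
    c[1 + n:1 + n + m] = -1.0 / (m * x0)

    A_eq, b_eq = [], []
    # normalization: t + (1/(s1+s2)) * (sum sg_r/yg0_r + sum sb_r/yb0_r) = 1
    row = np.zeros(nv); row[0] = 1.0
    row[1 + n + m:1 + n + m + s1] = 1.0 / ((s1 + s2) * yg0)
    row[1 + n + m + s1:] = 1.0 / ((s1 + s2) * yb0)
    A_eq.append(row); b_eq.append(1.0)
    # inputs: X lambda + s- = t * x0
    for i in range(m):
        row = np.zeros(nv); row[0] = -x0[i]
        row[1:1 + n] = X[i]; row[1 + n + i] = 1.0
        A_eq.append(row); b_eq.append(0.0)
    # desirable outputs: Yg lambda - sg = t * yg0
    for r in range(s1):
        row = np.zeros(nv); row[0] = -yg0[r]
        row[1:1 + n] = Yg[r]; row[1 + n + m + r] = -1.0
        A_eq.append(row); b_eq.append(0.0)
    # undesirable outputs: Yb lambda + sb = t * yb0
    for r in range(s2):
        row = np.zeros(nv); row[0] = -yb0[r]
        row[1:1 + n] = Yb[r]; row[1 + n + m + s1 + r] = 1.0
        A_eq.append(row); b_eq.append(0.0)

    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * nv, method="highs")
    return res.fun                       # rho in (0, 1]; 1 = efficient
```

With two hypothetical DMUs producing identical outputs from inputs of 1 and 2 units, the frugal DMU scores 1 (efficient) and the wasteful one scores below 1.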
A Bayesian Approach to Evaluating Consistency between Climate Model Output and Observations
NASA Astrophysics Data System (ADS)
Braverman, A. J.; Cressie, N.; Teixeira, J.
2010-12-01
Like other scientific and engineering problems that involve physical modeling of complex systems, climate models can be evaluated and diagnosed by comparing their output to observations of similar quantities. Though the global remote sensing data record is relatively short by climate research standards, these data offer opportunities to evaluate model predictions in new ways. For example, remote sensing data are spatially and temporally dense enough to provide distributional information that goes beyond simple moments, allowing quantification of temporal and spatial dependence structures. In this talk, we propose a new method for exploiting these rich data sets using a Bayesian paradigm. For a collection of climate models, we calculate posterior probabilities that each of its members best represents the physical system it seeks to reproduce. The posterior probability is based on the likelihood that a chosen summary statistic, computed from observations, would be obtained when the model's output is considered as a realization from a stochastic process. By exploring how posterior probabilities change with different statistics, we may paint a more quantitative and complete picture of the strengths and weaknesses of the models relative to the observations. We demonstrate our method using model output from the CMIP archive, and observations from NASA's Atmospheric Infrared Sounder.
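The posterior computation described above can be sketched generically: estimate each model's likelihood of producing the observed summary statistic from an ensemble of simulated statistics (here via a Gaussian kernel density estimate, one plausible choice, not necessarily the authors'), then apply Bayes' rule across models. All names and data are illustrative:

```python
import numpy as np
from scipy.stats import gaussian_kde

def model_posteriors(model_stat_samples, observed_stat, priors=None):
    """Posterior probability that each model best matches the observed statistic.

    model_stat_samples: list of 1-D arrays, each holding the summary statistic
    computed from many realizations of one model's output. The likelihood of
    the observed statistic under each model is estimated with a Gaussian KDE
    fitted to that model's samples; priors default to uniform.
    """
    k = len(model_stat_samples)
    priors = np.full(k, 1.0 / k) if priors is None else np.asarray(priors, float)
    likelihoods = np.array([gaussian_kde(s)(observed_stat)[0]
                            for s in model_stat_samples])
    post = priors * likelihoods          # Bayes' rule, unnormalized
    return post / post.sum()
```

Re-running this with different summary statistics (means, quantiles, dependence measures) is what lets the comparison probe different aspects of model fidelity.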
Interval Predictor Models with a Formal Characterization of Uncertainty and Reliability
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2014-01-01
This paper develops techniques for constructing empirical predictor models based on observations. By contrast to standard models, which yield a single predicted output at each value of the model's inputs, Interval Predictor Models (IPMs) yield an interval into which the unobserved output is predicted to fall. The IPMs proposed prescribe the output as an interval-valued function of the model's inputs and render a formal description of both the uncertainty in the model's parameters and the spread in the predicted output. Uncertainty is prescribed as a hyper-rectangular set in the space of the model's parameters. The propagation of this set through the empirical model yields a range of outputs of minimal spread containing all (or, depending on the formulation, most) of the observations. Optimization-based strategies for calculating IPMs and eliminating the effects of outliers are proposed. Outliers are identified by evaluating the extent to which they degrade the tightness of the prediction. This evaluation can be carried out while the IPM is calculated. When the data satisfy mild stochastic assumptions, and the optimization program used for calculating the IPM is convex (or when its solution coincides with the solution to an auxiliary convex program), the model's reliability (that is, the probability that a future observation will fall within the predicted range of outputs) can be bounded rigorously by a non-asymptotic formula.
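A minimal convex instance of the interval-predictor idea is to fit two affine bounds of minimal average spread that contain all observations, which is a linear program. This sketch assumes a plain linear parameterization and no outlier elimination, a simplification of the IPM formulations described above:

```python
import numpy as np
from scipy.optimize import linprog

def fit_ipm(X, y):
    """Fit a minimal-spread interval predictor: affine bounds l(x) <= y <= u(x).

    Minimizes the average interval width subject to every observation lying
    inside the interval (a convex LP). X is the design matrix; include a
    column of ones for an intercept. Returns (theta_lo, theta_hi).
    """
    n, p = X.shape
    xbar = X.mean(axis=0)
    # variables z = [theta_lo, theta_hi]; minimize xbar.(theta_hi - theta_lo)
    c = np.concatenate([-xbar, xbar])
    # X theta_lo <= y   and   X theta_hi >= y  (written as -X theta_hi <= -y)
    A_ub = np.block([[X, np.zeros((n, p))],
                     [np.zeros((n, p)), -X]])
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (2 * p), method="highs")
    return res.x[:p], res.x[p:]
```

Dropping the worst-fitting observations before refitting would mimic the paper's outlier-elimination step; the reliability bound itself requires the scenario-optimization machinery the abstract alludes to.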
Using Optimization to Improve Test Planning
2017-09-01
With modifications to make the input more user-friendly and to display the output differently, the test and evaluation test schedule optimization model would be a good tool for test and evaluation schedulers. Subject terms: schedule optimization, test planning.
Evaluation of Data-Driven Models for Predicting Solar Photovoltaics Power Output
Moslehi, Salim; Reddy, T. Agami; Katipamula, Srinivas
2017-09-10
This research was undertaken to evaluate different inverse models for predicting power output of solar photovoltaic (PV) systems under different practical scenarios. In particular, we have investigated whether PV power output prediction accuracy can be improved if module/cell temperature is measured in addition to climatic variables, and also the extent to which prediction accuracy degrades if solar irradiation is not measured on the plane of array but only on a horizontal surface. We have also investigated the significance of different independent or regressor variables, such as wind velocity and incident angle modifier, in predicting PV power output and cell temperature. The inverse regression model forms have been evaluated both in terms of their goodness-of-fit, and in terms of their accuracy and robustness in predictive performance. Given the accuracy of the measurements, expected CV-RMSE of hourly power output prediction over the year varies between 3.2% and 8.6% when only climatic data are used. Depending on what type of measured climatic and PV performance data is available, different scenarios have been identified and the corresponding appropriate modeling pathways have been proposed. The corresponding models are to be implemented on a controller platform for optimum operational planning of microgrids and integrated energy systems.
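The CV-RMSE figure quoted above (3.2% to 8.6%) is conventionally the RMSE normalized by the mean of the measured values and expressed in percent; a small sketch under that assumption:

```python
import numpy as np

def cv_rmse(predicted, measured):
    """Coefficient of variation of the RMSE, in percent of the mean measured value."""
    predicted = np.asarray(predicted, float)
    measured = np.asarray(measured, float)
    rmse = np.sqrt(np.mean((predicted - measured) ** 2))
    return 100.0 * rmse / measured.mean()
```

For hourly PV output the normalization is usually taken over the same evaluation period as the RMSE, which is what the function above does.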
Gustafson, William Jr; Vogelmann, Andrew; Endo, Satoshi; Toto, Tami; Xiao, Heng; Li, Zhijin; Cheng, Xiaoping; Kim, Jinwon; Krishna, Bhargavi
2015-08-31
The Alpha 2 release is the second release from the LASSO Pilot Phase and builds upon the Alpha 1 release. Alpha 2 contains additional diagnostics in the data bundles and focuses on cases from spring-summer 2016. A data bundle is a unified package consisting of LASSO LES input and output, observations, evaluation diagnostics, and model skill scores. LES input includes model configuration information and forcing data. LES output includes profile statistics and full domain fields of cloud and environmental variables. Model evaluation data consists of LES output and ARM observations co-registered on the same grid and sampling frequency. Model performance is quantified by skill scores and diagnostics in terms of cloud and environmental variables.
An analytical framework to assist decision makers in the use of forest ecosystem model predictions
Larocque, Guy R.; Bhatti, Jagtar S.; Ascough, J.C.; Liu, J.; Luckai, N.; Mailly, D.; Archambault, L.; Gordon, Andrew M.
2011-01-01
The predictions from most forest ecosystem models originate from deterministic simulations. However, few evaluation exercises for model outputs are performed by either model developers or users. This issue has important consequences for decision makers using these models to develop natural resource management policies, as they cannot evaluate the extent to which predictions stemming from the simulation of alternative management scenarios may result in significant environmental or economic differences. Various numerical methods, such as sensitivity/uncertainty analyses, or bootstrap methods, may be used to evaluate models and the errors associated with their outputs. However, the application of each of these methods carries unique challenges which decision makers do not necessarily understand; guidance is required when interpreting the output generated from each model. This paper proposes a decision flow chart in the form of an analytical framework to help decision makers apply, in an orderly fashion, different steps involved in examining the model outputs. The analytical framework is discussed with regard to the definition of problems and objectives and includes the following topics: model selection, identification of alternatives, modelling tasks and selecting alternatives for developing policy or implementing management scenarios. Its application is illustrated using an on-going exercise in developing silvicultural guidelines for a forest management enterprise in Ontario, Canada.
Hay, Lauren E.; LaFontaine, Jacob H.; Markstrom, Steven
2014-01-01
The accuracy of statistically downscaled general circulation model (GCM) simulations of daily surface climate for historical conditions (1961–99) and the implications when they are used to drive hydrologic and stream temperature models were assessed for the Apalachicola–Chattahoochee–Flint River basin (ACFB). The ACFB is a 50 000 km2 basin located in the southeastern United States. Three GCMs were statistically downscaled, using an asynchronous regional regression model (ARRM), to ⅛° grids of daily precipitation and minimum and maximum air temperature. These ARRM-based climate datasets were used as input to the Precipitation-Runoff Modeling System (PRMS), a deterministic, distributed-parameter, physical-process watershed model used to simulate and evaluate the effects of various combinations of climate and land use on watershed response. The ACFB was divided into 258 hydrologic response units (HRUs) in which the components of flow (groundwater, subsurface, and surface) are computed in response to climate, land surface, and subsurface characteristics of the basin. Daily simulations of flow components from PRMS were used with the climate to simulate in-stream water temperatures using the Stream Network Temperature (SNTemp) model, a mechanistic, one-dimensional heat transport model for branched stream networks. The climate, hydrology, and stream temperature for historical conditions were evaluated by comparing model outputs produced from historical climate forcings developed from gridded station data (GSD) versus those produced from the three statistically downscaled GCMs using the ARRM methodology. The PRMS and SNTemp models were forced with the GSD and the outputs produced were treated as “truth.” This allowed for a spatial comparison by HRU of the GSD-based output with ARRM-based output.
Distributional similarities between GSD- and ARRM-based model outputs were compared using the two-sample Kolmogorov–Smirnov (KS) test in combination with descriptive metrics such as the mean and variance and an evaluation of rare and sustained events. In general, precipitation and streamflow quantities were negatively biased in the downscaled GCM outputs, and results indicate that the downscaled GCM simulations consistently underestimate the largest precipitation events relative to the GSD. The KS test results indicate that ARRM-based air temperatures are similar to GSD at the daily time step for the majority of the ACFB, with perhaps subweekly averaging needed for stream temperature. Depending on GCM and spatial location, ARRM-based precipitation and streamflow require averaging of up to 30 days to become similar to the GSD-based output. Evaluation of the model skill for historical conditions suggests some guidelines for use of future projections; while it seems correct to place greater confidence in evaluation metrics which perform well historically, this does not necessarily mean those metrics will accurately reflect model outputs for future climatic conditions. Results from this study indicate no “best” overall model, but the breadth of analysis can be used to give product users an indication of the applicability of the results to their particular problem. Because model outputs for historical conditions can carry significant biases, it may be more appropriate to examine the range in future projections in terms of change relative to historical conditions for each individual GCM.
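The averaging-then-KS comparison can be sketched as follows: block-average both daily series to a chosen window and apply the two-sample Kolmogorov-Smirnov test (scipy's `ks_2samp` here; the synthetic series are illustrative, not ACFB data):

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_after_averaging(a, b, window):
    """Two-sample KS statistic and p-value after block-averaging both series.

    window=1 compares the raw daily distributions; window=30 compares
    ~monthly means, mimicking the averaging used in the study.
    """
    def block_mean(x):
        n = (len(x) // window) * window           # trim to a whole number of blocks
        return np.asarray(x[:n], float).reshape(-1, window).mean(axis=1)
    stat, pvalue = ks_2samp(block_mean(a), block_mean(b))
    return stat, pvalue
```

A series with large daily noise around the right seasonal cycle fails the daily KS comparison but looks far more similar once 30-day means are compared, the same qualitative behavior reported for downscaled precipitation and streamflow.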
Evaluation of simulated ocean carbon in the CMIP5 earth system models
NASA Astrophysics Data System (ADS)
Orr, James; Brockmann, Patrick; Seferian, Roland; Servonnat, Jérôme; Bopp, Laurent
2013-04-01
We maintain a centralized model output archive containing output from the previous generation of Earth System Models (ESMs), the 7 models used in the IPCC AR4 assessment. Output is in a common format, located on a centralized server, and is publicly available through a web interface. Through the same interface, LSCE/IPSL has also made available output from the Coupled Model Intercomparison Project (CMIP5), the foundation for the ongoing IPCC AR5 assessment. The latter includes ocean biogeochemical fields from more than 13 ESMs. Modeling partners across 3 EU projects refer to the combined AR4-AR5 archive and comparison as OCMIP5, building on previous phases of OCMIP (Ocean Carbon Cycle Intercomparison Project) and making a clear link to IPCC AR5 (CMIP5). While now focusing on assessing the latest generation of results (AR5, CMIP5), this effort is also able to put them in context (AR4). For model comparison and evaluation, we have also stored computed derived variables (e.g., those needed to assess ocean acidification) and key fields regridded to a common 1°x1° grid, thus complementing the standard CMIP5 archive. The combined AR4-AR5 output (OCMIP5) has been used to compute standard quantitative metrics, both global and regional, and those have been synthesized with summary diagrams. In addition, for key biogeochemical fields we have deconvolved spatiotemporal components of the mean square error in order to identify which models go wrong, and where. Here we will detail results from these evaluations, which have exploited gridded climatological data. The archive, interface, and centralized evaluation provide a solid technical foundation upon which collaboration and communication are being broadened in the ocean biogeochemical modeling community. Ultimately we aim to encourage wider use of the OCMIP5 archive.
Quantitative methods to direct exploration based on hydrogeologic information
Graettinger, A.J.; Lee, J.; Reeves, H.W.; Dethan, D.
2006-01-01
Quantitatively Directed Exploration (QDE) approaches based on information such as model sensitivity, input data covariance, and model output covariance are presented. Seven approaches for directing exploration are developed, applied, and evaluated on a synthetic hydrogeologic site. The QDE approaches evaluate input information uncertainty, subsurface model sensitivity and, most importantly, output covariance to identify the next location to sample. Spatial input parameter values and covariances are calculated with the multivariate conditional probability calculation from a limited number of samples. A variogram structure is used during data extrapolation to describe the spatial continuity, or correlation, of subsurface information. Model sensitivity can be determined by perturbing input data and evaluating output response or, as in this work, sensitivities can be programmed directly into an analysis model. Output covariance is calculated by the First-Order Second Moment (FOSM) method, which combines the covariance of input information with model sensitivity. A groundwater flow example, modeled in MODFLOW-2000, is chosen to demonstrate the seven QDE approaches. MODFLOW-2000 is used to obtain the piezometric head and the model sensitivity simultaneously. The seven QDE approaches are evaluated based on the accuracy of the modeled piezometric head after information from a QDE sample is added. For the synthetic site used in this study, the QDE approach that identifies the location of hydraulic conductivity that contributes the most to the overall piezometric head variance proved to be the best method to quantitatively direct exploration. © IWA Publishing 2006.
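The FOSM step combines model sensitivity with input covariance as Sigma_y = J Sigma_x J^T, where J is the Jacobian of outputs with respect to inputs. A generic numerical sketch, with the Jacobian estimated by central finite differences rather than programmed into the model as in the paper:

```python
import numpy as np

def fosm_output_cov(model, x0, cov_x, eps=1e-6):
    """First-Order Second Moment propagation: Sigma_y ~= J Sigma_x J^T.

    model maps an input vector to an output vector; the Jacobian J is
    estimated at x0 by central finite differences with step eps.
    """
    x0 = np.asarray(x0, float)
    y0 = np.atleast_1d(model(x0))
    J = np.zeros((y0.size, x0.size))
    for j in range(x0.size):
        dx = np.zeros_like(x0)
        dx[j] = eps
        J[:, j] = (np.atleast_1d(model(x0 + dx))
                   - np.atleast_1d(model(x0 - dx))) / (2.0 * eps)
    return J @ np.asarray(cov_x, float) @ J.T
```

For a linear model the propagation is exact, which makes a convenient check; for a nonlinear groundwater model it is a first-order approximation valid near x0.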
Using multi-criteria analysis of simulation models to understand complex biological systems
Maureen C. Kennedy; E. David Ford
2011-01-01
Scientists frequently use computer-simulation models to help solve complex biological problems. Typically, such models are highly integrated, they produce multiple outputs, and standard methods of model analysis are ill suited for evaluating them. We show how multi-criteria optimization with Pareto optimality allows for model outputs to be compared to multiple system...
Sang-Kyun Han; Han-Sup Han; William J. Elliot; Edward M. Bilek
2017-01-01
We developed a spreadsheet-based model, named ThinTool, to evaluate the cost of mechanical fuel reduction thinning including biomass removal, to predict net energy output, and to assess nutrient impacts from thinning treatments in northern California and southern Oregon. A combination of literature reviews, field-based studies, and contractor surveys was used to...
User assessment of smoke-dispersion models for wildland biomass burning.
Steve Breyfogle; Sue A. Ferguson
1996-01-01
Several smoke-dispersion models, which currently are available for modeling smoke from biomass burns, were evaluated for ease of use, availability of input data, and output data format. The input and output components of all models are listed, and differences in model physics are discussed. Each model was installed and run on a personal computer with a simple-case...
A spectral method for spatial downscaling
Complex computer models play a crucial role in air quality research. These models are used to evaluate potential regulatory impacts of emission control strategies and to estimate air quality in areas without monitoring data. For both of these purposes, it is important to calibrate model output with monitoring data to adjust for model biases and improve spatial prediction. In this paper, we propose a new spectral method to study and exploit complex relationships between model output and monitoring data. Spectral methods allow us to estimate the relationship between model output and monitoring data separately at different spatial scales, and to use model output for prediction only at the appropriate scales. The proposed method is computationally efficient and can be implemented using standard software. We apply the method to compare Community Multiscale Air Quality (CMAQ) model output with ozone measurements in the United States in July 2005. We find that CMAQ captures large-scale spatial trends, but has low correlation with the monitoring data at small spatial scales. The National Exposure Research Laboratory's (NERL's) Atmospheric Modeling Division (AMAD) conducts research in support of EPA's mission to protect human health and the environment. AMAD's research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting the Nation's air quality and for assessing changes in air quality.
NASA Astrophysics Data System (ADS)
Heinze, Rieke; Moseley, Christopher; Böske, Lennart Nils; Muppa, Shravan Kumar; Maurer, Vera; Raasch, Siegfried; Stevens, Bjorn
2017-06-01
Large-eddy simulations (LESs) of a multi-week period during the HD(CP)2 (High-Definition Clouds and Precipitation for advancing Climate Prediction) Observational Prototype Experiment (HOPE) conducted in Germany are evaluated with respect to mean boundary layer quantities and turbulence statistics. Two LES models are used in a semi-idealized setup through forcing with mesoscale model output to account for the synoptic-scale conditions. Evaluation is performed based on the HOPE observations. The mean boundary layer characteristics, like the boundary layer depth, are in general agreement with observations. Simulating shallow-cumulus layers in agreement with the measurements poses a challenge for both LES models. Variance profiles agree satisfactorily with lidar measurements. The results depend on how the forcing data stemming from mesoscale model output are constructed. The mean boundary layer characteristics become less sensitive if the averaging domain for the forcing is large enough to filter out mesoscale fluctuations.
Semi-Automated Processing of Trajectory Simulator Output Files for Model Evaluation
2018-01-01
ARL-TR-8284 ● JAN 2018 ● US Army Research Laboratory ● Semi-Automated Processing of Trajectory Simulator Output Files for Model Evaluation
The Use of AMET and Automated Scripts for Model Evaluation
The Atmospheric Model Evaluation Tool (AMET) is a suite of software designed to facilitate the analysis and evaluation of meteorological and air quality models. AMET matches the model output for particular locations to the corresponding observed values from one or more networks ...
A Spectral Method for Spatial Downscaling
Reich, Brian J.; Chang, Howard H.; Foley, Kristen M.
2014-01-01
Complex computer models play a crucial role in air quality research. These models are used to evaluate potential regulatory impacts of emission control strategies and to estimate air quality in areas without monitoring data. For both of these purposes, it is important to calibrate model output with monitoring data to adjust for model biases and improve spatial prediction. In this article, we propose a new spectral method to study and exploit complex relationships between model output and monitoring data. Spectral methods allow us to estimate the relationship between model output and monitoring data separately at different spatial scales, and to use model output for prediction only at the appropriate scales. The proposed method is computationally efficient and can be implemented using standard software. We apply the method to compare Community Multiscale Air Quality (CMAQ) model output with ozone measurements in the United States in July 2005. We find that CMAQ captures large-scale spatial trends, but has low correlation with the monitoring data at small spatial scales. PMID:24965037
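The core idea, estimating the model-observation relationship separately by scale, can be illustrated in one dimension with a plain Fourier split (the paper's actual spatial method is more sophisticated; the series and cutoff below are illustrative):

```python
import numpy as np

def scale_correlations(model, obs, cutoff):
    """Correlate model output with observations separately at large and small
    scales, by splitting both series at a Fourier frequency-index cutoff."""
    def split(x):
        f = np.fft.rfft(np.asarray(x, float))
        low, high = f.copy(), f.copy()
        low[cutoff:] = 0.0      # keep the large-scale (low-frequency) part
        high[:cutoff] = 0.0     # keep the small-scale (high-frequency) part
        return np.fft.irfft(low, len(x)), np.fft.irfft(high, len(x))
    m_low, m_high = split(model)
    o_low, o_high = split(obs)
    corr = lambda a, b: float(np.corrcoef(a, b)[0, 1])
    return corr(m_low, o_low), corr(m_high, o_high)
```

A model that reproduces the large-scale signal but replaces small-scale structure with noise shows high correlation in the low band and near-zero correlation in the high band, mirroring the CMAQ finding above.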
DOE Office of Scientific and Technical Information (OSTI.GOV)
Breuker, M.S.; Braun, J.E.
This paper presents a detailed evaluation of the performance of a statistical, rule-based fault detection and diagnostic (FDD) technique presented by Rossi and Braun (1997). Steady-state and transient tests were performed on a simple rooftop air conditioner over a range of conditions and fault levels. The steady-state data without faults were used to train models that predict outputs for normal operation. The transient data with faults were used to evaluate FDD performance. The effect of a number of design variables on FDD sensitivity for different faults was evaluated, and two prototype systems were specified for more complete evaluation. Good performance was achieved in detecting and diagnosing five faults using only six temperatures (two input and four output) and linear models. The performance improved by about a factor of two when ten measurements (three input and seven output) and higher-order models were used. This approach for evaluating and optimizing the performance of the statistical, rule-based FDD technique could be used as a design and evaluation tool when applying this FDD method to other packaged air-conditioning systems. Furthermore, the approach could also be modified to evaluate the performance of other FDD methods.
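A stripped-down version of the residual-based detection step, training a linear model on fault-free data and flagging departures beyond k standard deviations, might look like this (the class name, threshold, and data are illustrative, not Rossi and Braun's implementation):

```python
import numpy as np

class ResidualFDD:
    """Toy fault detector: fit a linear map from input to output temperatures
    on fault-free (normal) data, then flag any observation whose residual
    exceeds k standard deviations of the training residuals."""

    def __init__(self, k=3.0):
        self.k = k

    def train(self, x_in, y_out):
        # least-squares fit with an intercept term
        X = np.column_stack([np.ones(len(x_in)), x_in])
        self.coef, *_ = np.linalg.lstsq(X, y_out, rcond=None)
        self.sigma = np.std(y_out - X @ self.coef)
        return self

    def is_fault(self, x_in, y_out):
        pred = np.concatenate([[1.0], np.atleast_1d(x_in)]) @ self.coef
        return abs(y_out - pred) > self.k * self.sigma
```

The paper's technique layers diagnosis rules on top of such residual patterns; this sketch covers only the detection half.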
Optimum systems design with random input and output applied to solar water heating
NASA Astrophysics Data System (ADS)
Abdel-Malek, L. L.
1980-03-01
Solar water heating systems are evaluated. Models were developed to estimate the percentage of energy supplied from the Sun to a household. Since solar water heating systems have random input and output, queueing theory and birth-and-death processes were the major tools in developing the evaluation models. Microeconomic methods help in determining the optimum size of the solar water heating system design parameters, i.e., the water tank volume and the collector area.
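The birth-and-death machinery referred to above has a simple steady-state solution obtained from detailed balance; a generic sketch (the truncated M/M/1 wrapper is an illustrative special case, not the paper's solar model):

```python
import numpy as np

def birth_death_steady_state(birth, death):
    """Steady-state probabilities of a finite birth-death chain.

    birth[n] is the rate from state n to n+1; death[n] is the rate from
    state n+1 back to n. Detailed balance gives
    p[n+1]/p[n] = birth[n]/death[n]; normalize so probabilities sum to 1.
    """
    ratios = np.concatenate([[1.0],
                             np.cumprod(np.asarray(birth, float)
                                        / np.asarray(death, float))])
    return ratios / ratios.sum()

def mm1_truncated(lam, mu, capacity):
    """M/M/1 queue truncated at `capacity` jobs, as a birth-death chain."""
    return birth_death_steady_state([lam] * capacity, [mu] * capacity)
```

With constant rates this recovers the familiar geometric distribution p_n = (1 - rho) rho^n for rho = lam/mu < 1, up to truncation.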
Evaluating digital libraries in the health sector. Part 1: measuring inputs and outputs.
Cullen, Rowena
2003-12-01
This is the first part of a two-part paper which explores methods that can be used to evaluate digital libraries in the health sector. In this first part, some approaches to evaluation that have been proposed for mainstream digital information services are examined for their suitability to provide models for the health sector. The paper summarizes some major national and collaborative initiatives to develop measures for digital libraries, and analyses these approaches in terms of their relationship to traditional measures of library performance, which are focused on inputs and outputs, and their relevance to current debates among health information specialists. The second part looks more specifically at evaluative models based on outcomes, and models being developed in the health sector.
Application of Wavelet Filters in an Evaluation of ...
Air quality model evaluation can be enhanced with time-scale-specific comparisons of outputs and observations. For example, high-frequency (hours to one day) time-scale information in observed ozone is not well captured by deterministic models, and its incorporation into model performance metrics leads one to devote resources to stochastic variations in model outputs. In this analysis, observations are compared with model outputs at seasonal, weekly, diurnal, and intra-day time scales. Filters provide frequency-specific information that can be used to compare the strength (amplitude) and timing (phase) of observations and model estimates. The National Exposure Research Laboratory's (NERL's) Atmospheric Modeling and Analysis Division (AMAD) conducts research in support of EPA's mission to protect human health and the environment. AMAD's research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting the Nation's air quality and for assessing changes in air quality and air pollutant exposures, as affected by changes in ecosystem management and regulatory decisions. AMAD is responsible for providing a sound scientific and technical basis for regulatory policies based on air quality models to improve ambient air quality. The models developed by AMAD are being used by EPA, NOAA, and the air pollution community in understanding and forecasting not only the magnitude of the air pollution...
NASA Technical Reports Server (NTRS)
Rockey, D. E.
1979-01-01
A general approach is developed for predicting the power output of a concentrator enhanced photovoltaic space array. A ray trace routine determines the concentrator intensity arriving at each solar cell. An iterative calculation determines the cell's operating temperature since cell temperature and cell efficiency are functions of one another. The end result of the iterative calculation is that the individual cell's power output is determined as a function of temperature and intensity. Circuit output is predicted by combining the individual cell outputs using the single diode model of a solar cell. Concentrated array characteristics such as uniformity of intensity and operating temperature at various points across the array are examined using computer modeling techniques. An illustrative example is given showing how the output of an array can be enhanced using solar concentration techniques.
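The single diode model mentioned above gives cell current as I = I_L - I_0 (exp(V / (n Vt)) - 1), with thermal voltage Vt = kT/q; a sketch with illustrative parameter values (the saturation current, ideality factor, and cell temperature defaults are assumptions, not values from the paper):

```python
import numpy as np

def single_diode_iv(v, i_light, i_sat=1e-9, n=1.3, t_cell=318.15):
    """Cell current from the ideal single diode model:
    I = I_L - I_0 * (exp(V / (n * Vt)) - 1), with Vt = k*T/q."""
    k_b, q = 1.380649e-23, 1.602176634e-19   # Boltzmann constant, electron charge
    vt = k_b * t_cell / q
    return i_light - i_sat * (np.exp(np.asarray(v, float) / (n * vt)) - 1.0)

def max_power_point(i_light, v_max=0.8, **kwargs):
    """Grid-search the maximum power point of one cell on [0, v_max] volts."""
    v = np.linspace(0.0, v_max, 2001)
    p = v * single_diode_iv(v, i_light, **kwargs)
    j = int(np.argmax(p))
    return v[j], p[j]
```

In an array model like the one described, i_light for each cell would come from the ray-traced concentrator intensity, and t_cell from the iterative temperature-efficiency calculation, before the cell outputs are combined at the circuit level.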
Quantitative Decision Support Requires Quantitative User Guidance
NASA Astrophysics Data System (ADS)
Smith, L. A.
2009-12-01
Is it conceivable that models run on 2007 computer hardware could provide robust and credible probabilistic information for decision support and user guidance at the ZIP code level for sub-daily meteorological events in 2060? In 2090? Retrospectively, how informative would output from today’s models have proven in 2003? Or in the 1930s? Consultancies in the United Kingdom, including the Met Office, are offering services to “future-proof” their customers from climate change. How is a US or European based user or policy maker to determine the extent to which exciting new Bayesian methods are relevant here? Or when a commercial supplier is vastly overselling the insights of today’s climate science? How are policy makers and academic economists to make the closely related decisions facing them? How can we communicate deep uncertainty in the future at small length-scales without undermining the firm foundation established by climate science regarding global trends? Three distinct aspects of the communication of the uses of climate model output targeting users and policy makers, as well as other specialist adaptation scientists, are discussed. First, a brief scientific evaluation is provided of the length and time scales at which climate model output is likely to become uninformative, including a note on the applicability of the latest Bayesian methodology to the output of current state-of-the-art general circulation models. Second, a critical evaluation is given of the language often employed in communicating climate model output, a language which accurately states that models are “better”, have “improved” and now “include” and “simulate” relevant meteorological processes, without clearly identifying where the current information is thought to be uninformative or misleading, both for the current climate and as a function of the state of each climate simulation.
And thirdly, a general approach for evaluating the relevance of quantitative climate model output for a given problem is presented. Based on climate science, meteorology, and the details of the question at hand, this approach identifies necessary (never sufficient) conditions required for the rational use of climate model output in quantitative decision support tools. Inasmuch as climate forecasting is a problem of extrapolation, there will always be harsh limits on our ability to establish where a model is fit for purpose; this does not, however, prevent us from identifying model noise as such, and thereby avoiding some cases of the misapplication and over-interpretation of model output. It is suggested that failure to clearly communicate the limits of today’s climate models in providing quantitative decision-relevant climate information to today’s users of climate information would risk the credibility of tomorrow’s climate science, and of science-based policy more generally.
Pohjola, Mikko V; Pohjola, Pasi; Tainio, Marko; Tuomisto, Jouni T
2013-06-26
The calls for knowledge-based policy and policy-relevant research invoke a need to evaluate and manage environment and health assessments and models according to their societal outcomes. This review explores how well the existing approaches to assessment and model performance serve this need. The perspectives to assessment and model performance in the scientific literature can be called: (1) quality assurance/control, (2) uncertainty analysis, (3) technical assessment of models, (4) effectiveness and (5) other perspectives, according to what is primarily seen to constitute the goodness of assessments and models. The categorization is not strict and methods, tools and frameworks in different perspectives may overlap. However, altogether it seems that most approaches to assessment and model performance are relatively narrow in their scope. The focus in most approaches is on the outputs and making of assessments and models. Practical application of the outputs and the consequential outcomes are often left unaddressed. It appears that more comprehensive approaches that combine the essential characteristics of different perspectives are needed. This necessitates a better account of the mechanisms of collective knowledge creation and the relations between knowledge and practical action. Some new approaches to assessment, modeling and their evaluation and management span the chain from knowledge creation to societal outcomes, but the complexity of evaluating societal outcomes remains a challenge.
Hepatic function imaging using dynamic Gd-EOB-DTPA enhanced MRI and pharmacokinetic modeling.
Ning, Jia; Yang, Zhiying; Xie, Sheng; Sun, Yongliang; Yuan, Chun; Chen, Huijun
2017-10-01
To determine whether pharmacokinetic modeling parameters with different output assumptions of dynamic contrast-enhanced MRI (DCE-MRI) using Gd-EOB-DTPA correlate with serum-based liver function tests, and to compare the goodness of fit of the different output assumptions. A 6-min DCE-MRI protocol was performed in 38 patients. Four dual-input two-compartment models with different output assumptions and a published one-compartment model were used to calculate hepatic function parameters. The Akaike information criterion fitting error was used to evaluate the goodness of fit. Imaging-based hepatic function parameters were compared with blood chemistry using correlation with multiple-comparison correction. The dual-input two-compartment model assuming that venous flow equals arterial flow plus portal venous flow, with no bile duct output, better described the liver tissue enhancement, with low fitting error and high correlation with blood chemistry. The relative uptake rate Kir derived from this model was found to be significantly correlated with direct bilirubin (r = -0.52, P = 0.015), prealbumin concentration (r = 0.58, P = 0.015), and prothrombin time (r = -0.51, P = 0.026). It is feasible to evaluate hepatic function with proper output assumptions. The relative uptake rate has the potential to serve as a biomarker of hepatic function. Magn Reson Med 78:1488-1495, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Application of Wavelet Filters in an Evaluation of Photochemical Model Performance
Air quality model evaluation can be enhanced with time-scale specific comparisons of outputs and observations. For example, high-frequency (hours to one day) time scale information in observed ozone is not well captured by deterministic models and its incorporation into model pe...
Impact of device level faults in a digital avionic processor
NASA Technical Reports Server (NTRS)
Suk, Ho Kim
1989-01-01
This study describes an experimental analysis of the impact of gate- and device-level faults in the processor of a Bendix BDX-930 flight control system. Via mixed-mode simulation, faults were injected at the gate (stuck-at) and transistor levels, and their propagation through the chip to the output pins was measured. The results show that there is little correspondence between a stuck-at and a device-level fault model as far as error activity or detection within a functional unit is concerned. Insofar as error activity outside the injected unit and at the output pins is concerned, the stuck-at and device models track each other. The stuck-at model, however, overestimates, by over 100 percent, the probability of fault propagation to the output pins. An evaluation of the Mean Error Durations and the Mean Time Between Errors at the output pins shows that the stuck-at model significantly underestimates (by 62 percent) the impact of an internal chip fault on the output pins. Finally, the study also quantifies the impact of device faults by location, both internally and at the output pins.
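The stuck-at fault model mentioned above can be illustrated with a tiny gate-level simulator. The three-gate netlist below is invented for illustration (it is nothing like the BDX-930 processor); a stuck-at fault pins one net to 0 or 1, and the fault is "detected" by an input vector when the faulty circuit's output pin differs from the fault-free one.

```python
from itertools import product

GATES = [                       # (output net, op, input nets)
    ("n1", "AND", ("a", "b")),
    ("n2", "OR",  ("b", "c")),
    ("out", "XOR", ("n1", "n2")),
]
OPS = {"AND": lambda x, y: x & y, "OR": lambda x, y: x | y, "XOR": lambda x, y: x ^ y}

def simulate(inputs, fault=None):
    """Evaluate the netlist; fault=(net, value) forces a stuck-at on that net."""
    nets = dict(inputs)
    if fault and fault[0] in nets:          # fault on a primary input
        nets[fault[0]] = fault[1]
    for out, op, ins in GATES:
        nets[out] = OPS[op](*(nets[i] for i in ins))
        if fault and out == fault[0]:       # stuck-at overrides the gate output
            nets[out] = fault[1]
    return nets["out"]

# Does a stuck-at-0 on internal net n1 propagate to the output pin?
vectors = [dict(zip("abc", v)) for v in product((0, 1), repeat=3)]
detected = sum(simulate(v) != simulate(v, fault=("n1", 0)) for v in vectors)
print(f"stuck-at-0 on n1 detected by {detected}/8 input vectors")
```

Exhaustively comparing faulty and fault-free outputs over all input vectors is the same propagation question the study measured, only at trivial scale.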
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moslehi, Salim; Reddy, T. Agami; Katipamula, Srinivas
This research was undertaken to evaluate different inverse models for predicting power output of solar photovoltaic (PV) systems under different practical scenarios. In particular, we have investigated whether PV power output prediction accuracy can be improved if module/cell temperature was measured in addition to climatic variables, and also the extent to which prediction accuracy degrades if solar irradiation is not measured on the plane of array but only on a horizontal surface. We have also investigated the significance of different independent or regressor variables, such as wind velocity and incident angle modifier, in predicting PV power output and cell temperature. The inverse regression model forms have been evaluated both in terms of their goodness-of-fit, and their accuracy and robustness in terms of their predictive performance. Given the accuracy of the measurements, expected CV-RMSE of hourly power output prediction over the year varies between 3.2% and 8.6% when only climatic data are used. Depending on what type of measured climatic and PV performance data is available, different scenarios have been identified and the corresponding appropriate modeling pathways have been proposed. The corresponding models are to be implemented on a controller platform for optimum operational planning of microgrids and integrated energy systems.
SENSITIVE PARAMETER EVALUATION FOR A VADOSE ZONE FATE AND TRANSPORT MODEL
This report presents information pertaining to quantitative evaluation of the potential impact of selected parameters on output of vadose zone transport and fate models used to describe the behavior of hazardous chemicals in soil. The Vadose 2one Interactive Processes (VIP) model...
London, Michael; Larkum, Matthew E; Häusser, Michael
2008-11-01
Synaptic information efficacy (SIE) is a statistical measure to quantify the efficacy of a synapse. It measures how much information is gained, on average, about the output spike train of a postsynaptic neuron if the input spike train is known. It is a particularly appropriate measure for assessing the input-output relationship of neurons receiving dynamic stimuli. Here, we compare the SIE of simulated synaptic inputs measured experimentally in layer 5 cortical pyramidal neurons in vitro with the SIE computed from a minimal model constructed to fit the recorded data. We show that even with a simple model that is far from perfect in predicting the precise timing of the output spikes of the real neuron, the SIE can still be accurately predicted. This arises from the ability of the model to predict output spikes influenced by the input more accurately than those driven by the background current. This indicates that in this context, some spikes may be more important than others. Lastly, we demonstrate another aspect in which mutual information could be beneficial for evaluating the quality of a model: measuring the mutual information between the model's output and the neuron's output. The SIE could thus be a useful tool for assessing the quality of single-neuron models in preserving the input-output relationship, a property that becomes crucial when we start connecting these reduced models to construct complex, realistic neuronal networks.
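The idea of scoring a model by the mutual information between its output and the neuron's output can be sketched with a plug-in estimator on binned (binary) spike trains. This is a simplified stand-in, not the SIE estimation procedure of the paper; the agreement probability and train length are invented.

```python
import numpy as np

def mutual_information_bits(x, y):
    """Plug-in mutual information estimate between two binary sequences, in bits."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in (0, 1):
        for yv in (0, 1):
            p_xy = np.mean((x == xv) & (y == yv))
            p_x, p_y = np.mean(x == xv), np.mean(y == yv)
            if p_xy > 0:
                mi += p_xy * np.log2(p_xy / (p_x * p_y))
    return mi

rng = np.random.default_rng(0)
model_out = rng.integers(0, 2, 10_000)           # model's binned output spike train
neuron_out = np.where(rng.random(10_000) < 0.8,  # neuron "agrees" 80% of the time
                      model_out, 1 - model_out)
mi = mutual_information_bits(model_out, neuron_out)
print(f"MI(model, neuron) = {mi:.3f} bits")
```

For this 80%-agreement toy channel the true value is 1 - H(0.2) ≈ 0.28 bits; a model whose output shared no information with the neuron would score near zero.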
Hydrologic Implications of Dynamical and Statistical Approaches to Downscaling Climate Model Outputs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, Andrew W; Leung, Lai R; Sridhar, V
Six approaches for downscaling climate model outputs for use in hydrologic simulation were evaluated, with particular emphasis on each method's ability to produce precipitation and other variables used to drive a macroscale hydrology model applied at much higher spatial resolution than the climate model. Comparisons were made on the basis of a twenty-year retrospective (1975–1995) climate simulation produced by the NCAR-DOE Parallel Climate Model (PCM), and the implications of the comparison for a future (2040–2060) PCM climate scenario were also explored. The six approaches were made up of three relatively simple statistical downscaling methods – linear interpolation (LI), spatial disaggregation (SD), and bias-correction and spatial disaggregation (BCSD) – each applied to both PCM output directly (at T42 spatial resolution), and after dynamical downscaling via a Regional Climate Model (RCM – at ½-degree spatial resolution), for downscaling the climate model outputs to the 1/8-degree spatial resolution of the hydrological model. For the retrospective climate simulation, results were compared to an observed gridded climatology of temperature and precipitation, and gridded hydrologic variables resulting from forcing the hydrologic model with observations. The most significant findings are that the BCSD method was successful in reproducing the main features of the observed hydrometeorology from the retrospective climate simulation, when applied to both PCM and RCM outputs. Linear interpolation produced better results using RCM output than PCM output, but both methods (PCM-LI and RCM-LI) lead to unacceptably biased hydrologic simulations. Spatial disaggregation of the PCM output produced results similar to those achieved with the RCM interpolated output; nonetheless, neither PCM nor RCM output was useful for hydrologic simulation purposes without a bias-correction step.
For the future climate scenario, only the BCSD method (using PCM or RCM) was able to produce hydrologically plausible results. With the BCSD method, the RCM-derived hydrology was more sensitive to climate change than the PCM-derived hydrology.
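The bias-correction step at the heart of BCSD is essentially quantile mapping: each model value is replaced by the observed value at the same quantile of the historical distributions. The sketch below is a much-simplified illustration on synthetic "precipitation" data, not the Wood et al. implementation (which operates on monthly CDFs per grid cell).

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile mapping: map each model value through the historical
    model CDF, then through the inverse of the observed CDF."""
    quantiles = np.interp(model_future,
                          np.sort(model_hist),
                          np.linspace(0, 1, len(model_hist)))
    return np.quantile(obs_hist, quantiles)

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 3.0, 5000)          # "observed" precipitation
model = rng.gamma(2.0, 4.0, 5000)        # model is biased wet
corrected = quantile_map(model, obs, model)
print(f"means: obs={obs.mean():.2f}  model={model.mean():.2f}  corrected={corrected.mean():.2f}")
```

After mapping, the corrected series matches the observed distribution while preserving the rank ordering (and hence the sequencing) of the model's values, which is why BCSD can still transmit a climate-change signal.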
Khmyrova, Irina; Watanabe, Norikazu; Kholopova, Julia; Kovalchuk, Anatoly; Shapoval, Sergei
2014-07-20
We develop an analytical and numerical model for simulating light extraction through the planar output interface of light-emitting diodes (LEDs) with nonuniform current injection. Spatial nonuniformity of the injected current is a peculiar feature of LEDs in which the top metal electrode is patterned as a mesh in order to enhance the output power of light extracted through the top surface. Basic features of the model are a bi-plane computation domain with related areas of numerical-grid (NG) cells in the two planes, representation of the light-generating layer by an ensemble of point light sources, numerical "collection" of light photons from the area limited by an acceptance circle, and adjustment of NG-cell areas in the computation procedure by an angle-tuned aperture function. The developed model and procedure are used to simulate spatial distributions of the output optical power, as well as the total output power, at different mesh pitches. The proposed model and simulation strategy can be very efficient in evaluating the output optical performance of LEDs with periodic or symmetric electrode configurations.
More than ten state-of-the-art regional air quality models have been applied as part of the Air Quality Model Evaluation International Initiative (AQMEII). These models were run by twenty independent groups in Europe and North America. Standardised modelling outputs over a full y...
The Impact of Spatial Correlation and Incommensurability on Model Evaluation
Standard evaluations of air quality models rely heavily on a direct comparison of monitoring data matched with the model output for the grid cell containing the monitor’s location. While such techniques may be adequate for some applications, conclusions are limited by such facto...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartolac, S; Letourneau, D; University of Toronto, Toronto, Ontario
Purpose: Application of process control theory in quality assurance programs promises to allow earlier identification of problems and potentially better quality in delivery than traditional paradigms based primarily on tolerances and action levels. The purpose of this project was to characterize underlying seasonal variations in linear accelerator output that can be used to improve performance or trigger preemptive maintenance. Methods: Review of runtime plots of daily (6 MV) output data acquired using in house ion chamber based devices over three years and for fifteen linear accelerators of varying make and model were evaluated. Shifts in output due to known interventions with the machines were subtracted from the data to model an uncorrected scenario for each linear accelerator. Observable linear trends were also removed from the data prior to evaluation of periodic variations. Results: Runtime plots of output revealed sinusoidal, seasonal variations that were consistent across all units, irrespective of manufacturer, model or age of machine. The average amplitude of the variation was on the order of 1%. Peak and minimum variations were found to correspond to early April and September, respectively. Approximately 48% of output adjustments made over the period examined were potentially avoidable if baseline levels had corresponded to the mean output, rather than to points near a peak or valley. Linear trends were observed for three of the fifteen units, with annual increases in output ranging from 2–3%. Conclusion: Characterization of cyclical seasonal trends allows for better separation of potentially innate accelerator behaviour from other behaviours (e.g. linear trends) that may be better described as true out of control states (i.e. non-stochastic deviations from otherwise expected behavior) and could indicate service requirements.
Results also pointed to an optimal setpoint for accelerators such that output of machines is maintained within set tolerances and interventions are required less frequently.
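Extracting the amplitude of a seasonal sinusoid from a daily-output record like the one described can be done by linear least squares, since a sinusoid of known period is linear in its sine and cosine coefficients. The sketch below uses a synthetic three-year record with a ~1% seasonal cycle; the numbers are invented, not the study's data.

```python
import numpy as np

# Synthetic daily-output record: ~1% seasonal sinusoid plus measurement noise.
rng = np.random.default_rng(2)
days = np.arange(3 * 365)
output = 100.0 * (1 + 0.01 * np.sin(2 * np.pi * days / 365.25)) \
         + rng.normal(0, 0.3, days.size)

# Fit output = c0 + c1*sin(w*t) + c2*cos(w*t) with w fixed at one cycle/year.
w = 2 * np.pi / 365.25
A = np.column_stack([np.ones(days.size), np.sin(w * days), np.cos(w * days)])
c0, c1, c2 = np.linalg.lstsq(A, output, rcond=None)[0]
amplitude_pct = 100 * np.hypot(c1, c2) / c0
print(f"seasonal amplitude: {amplitude_pct:.2f}% of mean output")
```

The phase, atan2(c2, c1), would locate the peak month; detrending first (as the study did) keeps a slow drift from leaking into the seasonal coefficients.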
Use of Advanced Meteorological Model Output for Coastal Ocean Modeling in Puget Sound
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Zhaoqing; Khangaonkar, Tarang; Wang, Taiping
2011-06-01
It is a great challenge to specify meteorological forcing in estuarine and coastal circulation modeling using observed data because of the lack of complete datasets. As a result of this limitation, water temperature is often not simulated in estuarine and coastal modeling, with the assumption that density-induced currents are generally dominated by salinity gradients. However, in many situations, temperature gradients could be sufficiently large to influence the baroclinic motion. In this paper, we present an approach to simulate water temperature using outputs from advanced meteorological models. This modeling approach was applied to simulate annual variations of water temperatures of Puget Sound, a fjordal estuary in the Pacific Northwest of USA. Meteorological parameters from North American Region Re-analysis (NARR) model outputs were evaluated with comparisons to observed data at real-time meteorological stations. Model results demonstrated that NARR outputs can be used to drive coastal ocean models for realistic simulations of long-term water-temperature distributions in Puget Sound. Model results indicated that the net flux from NARR can be further improved with the additional information from real-time observations.
Evaluation of a Postdischarge Call System Using the Logic Model.
Frye, Timothy C; Poe, Terri L; Wilson, Marisa L; Milligan, Gary
2018-02-01
This mixed-method study was conducted to evaluate a postdischarge call program for congestive heart failure patients at a major teaching hospital in the southeastern United States. The program was implemented based on the premise that it would improve patient outcomes and overall quality of life, but it had never been evaluated for effectiveness. The Logic Model was used to evaluate the input of key staff members to determine whether the outputs and results of the program matched the expectations of the organization. Interviews, online surveys, reviews of existing patient outcome data, and reviews of publicly available program marketing materials were used to ascertain current program output. After analyzing both qualitative and quantitative data from the evaluation, recommendations were made to the organization to improve the effectiveness of the program.
Evaluating significance in linear mixed-effects models in R.
Luke, Steven G
2017-08-01
Mixed-effects models are being used ever more frequently in the analysis of experimental data. However, in the lme4 package in R the standards for evaluating significance of fixed effects in these models (i.e., obtaining p-values) are somewhat vague. There are good reasons for this, but as researchers who are using these models are required in many cases to report p-values, some method for evaluating the significance of the model output is needed. This paper reports the results of simulations showing that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes. Other methods for evaluating significance, including parametric bootstrapping and the Kenward-Roger and Satterthwaite approximations for degrees of freedom, were also evaluated. The results of these simulations suggest that Type 1 error rates are closest to .05 when models are fitted using REML and p-values are derived using the Kenward-Roger or Satterthwaite approximations, as these approximations both produced acceptable Type 1 error rates even for smaller samples.
NASA Technical Reports Server (NTRS)
Chang, H.
1976-01-01
A computer program using Lemke, Salkin and Spielberg's Set Covering Algorithm (SCA) to optimize a traffic model problem in the Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE) was documented. SCA forms a submodule of SAMPLE and provides for input and output, subroutines, and an interactive feature for performing the optimization and arranging the results in a readily understandable form for output.
Predicting High-Power Performance in Professional Cyclists.
Sanders, Dajo; Heijboer, Mathieu; Akubat, Ibrahim; Meijer, Kenneth; Hesselink, Matthijs K
2017-03-01
To assess if short-duration (5 to ~300 s) high-power performance can accurately be predicted using the anaerobic power reserve (APR) model in professional cyclists. Data from 4 professional cyclists from a World Tour cycling team were used. Using the maximal aerobic power, sprint peak power output, and an exponential constant describing the decrement in power over time, a power-duration relationship was established for each participant. To test the predictive accuracy of the model, several all-out field trials of different durations were performed by each cyclist. The power output achieved during the all-out trials was compared with the predicted power output by the APR model. The power output predicted by the model showed very large to nearly perfect correlations to the actual power output obtained during the all-out trials for each cyclist (r = .88 ± .21, .92 ± .17, .95 ± .13, and .97 ± .09). Power output during the all-out trials remained within an average of 6.6% (53 W) of the predicted power output by the model. This preliminary pilot study presents 4 case studies on the applicability of the APR model in professional cyclists using a field-based approach. The decrement in all-out performance during high-intensity exercise seems to conform to a general relationship with a single exponential-decay model describing the decrement in power vs increasing duration. These results are in line with previous studies using the APR model to predict performance during brief all-out trials. Future research should evaluate the APR model with a larger sample size of elite cyclists.
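One common formulation of the APR power-duration relationship described above is an exponential decay from sprint peak power toward maximal aerobic power (MAP). The sketch below uses that form with invented rider parameters, purely to illustrate the shape of the prediction; it is not the study's fitted values.

```python
import math

def apr_power(t, map_w, peak_w, k):
    """Anaerobic power reserve model: predicted mean maximal power (W) for an
    all-out effort of duration t (s), decaying exponentially from sprint peak
    power toward maximal aerobic power (MAP)."""
    return map_w + (peak_w - map_w) * math.exp(-k * t)

# Illustrative values only (not the study's riders).
MAP_W, PEAK_W, K = 450.0, 1200.0, 0.026
for t in (5, 30, 120, 300):
    print(f"{t:>3d} s -> {apr_power(t, MAP_W, PEAK_W, K):.0f} W")
```

With only three parameters per rider (MAP, peak power, decay constant), the model predicts the whole 5-300 s power-duration curve, which is what makes the field validation in the abstract feasible.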
Synthetic Proxy Infrastructure for Task Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Junghans, Christoph; Pavel, Robert
The Synthetic Proxy Infrastructure for Task Evaluation is a proxy application designed to help application developers gauge the performance of various task granularities when determining how best to utilize task-based programming models. The infrastructure provides examples of common communication patterns with a synthetic workload, intended to yield performance data for evaluating programming-model and platform overheads for the purpose of determining task granularity for task decomposition. It is presented as a reference implementation of a proxy application with run-time-configurable input and output task dependencies, ranging from an embarrassingly parallel scenario to patterns with stencil-like dependencies on their nearest neighbors. Once all inputs, if any, are satisfied, each task executes a synthetic workload (a simple DGEMM in this case) of varying size and passes all outputs, if any, to the next tasks. The intent is for this reference implementation to be reimplemented as a proxy app in different programming models, so as to provide the same infrastructure and allow application developers to simulate their own communication needs when evaluating task decomposition under various models on a given platform.
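The dependency patterns described (embarrassingly parallel feeding a stencil-like stage, each task running a DGEMM payload) can be sketched with futures. This is a toy stand-in using Python's thread pool, not the actual proxy infrastructure; sizes and stage shapes are invented.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def task(size, inputs):
    """Synthetic workload: wait on input dependencies, then run a DGEMM."""
    for f in inputs:            # satisfy input dependencies first
        f.result()
    a = np.random.rand(size, size)
    return a @ a                # the DGEMM payload

with ThreadPoolExecutor(max_workers=4) as pool:
    # Stage 1: embarrassingly parallel (no input dependencies).
    stage1 = [pool.submit(task, 64, []) for _ in range(8)]
    # Stage 2: stencil-like 1D dependencies on nearest neighbors in stage 1.
    stage2 = [pool.submit(task, 64, stage1[max(0, i - 1):i + 2]) for i in range(8)]
    results = [f.result() for f in stage2]
print(f"{len(results)} tasks completed, each {results[0].shape}")
```

Varying the DGEMM size against a fixed dependency pattern is exactly the granularity experiment the proxy is meant to support: at small sizes the scheduling overhead dominates, at large sizes the payload does.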
NASA Astrophysics Data System (ADS)
Weng Siew, Lam; Kah Fai, Liew; Weng Hoe, Lam
2018-04-01
Financial ratios and risk are important financial indicators for evaluating the financial performance or efficiency of companies. Therefore, financial ratios and a risk factor need to be taken into consideration when evaluating the efficiency of companies with a Data Envelopment Analysis (DEA) model. In a DEA model, the efficiency of a company is measured as the ratio of its sum-weighted outputs to its sum-weighted inputs. The objective of this paper is to propose a DEA model that incorporates financial ratios and a risk factor in evaluating and comparing the efficiency of the financial companies in Malaysia. In this study, the listed financial companies in Malaysia from 2004 until 2015 are investigated. The results of this study show that AFFIN, ALLIANZ, APEX, BURSA, HLCAP, HLFG, INSAS, LPI, MNRB, OSK, PBBANK, RCECAP and TA are ranked as efficient companies. This implies that these efficient companies have utilized their resources or inputs optimally to generate the maximum outputs. This study is significant because it helps to identify the efficient financial companies as well as determine the optimal input and output weights that maximize the efficiency of financial companies in Malaysia.
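The weighted output-to-input ratio in DEA is usually computed via its linear-programming dual. The sketch below solves the standard input-oriented CCR model for a hypothetical four-company, two-input, one-output dataset; the data and the CCR variant are illustrative assumptions, not the paper's model or Malaysian data.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0 (envelopment form):
    minimize theta  s.t.  X @ lam <= theta * x0,  Y @ lam >= y0,  lam >= 0."""
    m, n = X.shape          # m inputs, n units
    s = Y.shape[0]          # s outputs
    c = np.zeros(1 + n)     # variables: [theta, lam_1..lam_n]
    c[0] = 1.0
    A_ub = np.block([
        [-X[:, [j0]], X],            # X @ lam - theta * x0 <= 0
        [np.zeros((s, 1)), -Y],      # -Y @ lam <= -y0
    ])
    b_ub = np.concatenate([np.zeros(m), -Y[:, j0]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n))
    return res.x[0]

# Hypothetical data: inputs = (cost, risk proxy), output = revenue, 4 companies.
X = np.array([[2.0, 4.0, 3.0, 5.0],
              [1.0, 2.0, 3.0, 1.0]])
Y = np.array([[4.0, 6.0, 5.0, 5.0]])
for j in range(4):
    print(f"company {j}: efficiency = {ccr_efficiency(X, Y, j):.3f}")
```

An efficiency of 1 means no convex combination of the other units produces at least the same output from proportionally fewer inputs, matching the abstract's reading that efficient companies use their inputs optimally.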
Nabavi-Pelesaraei, Ashkan; Rafiee, Shahin; Mohtasebi, Seyed Saeid; Hosseinzadeh-Bandbafha, Homa; Chau, Kwok-Wing
2018-08-01
Prediction of agricultural energy output and environmental impacts plays an important role in energy management and conservation of the environment, as it can help us evaluate agricultural energy efficiency, conduct crop production system commissioning, and detect and diagnose faults in crop production systems. Agricultural energy output and environmental impacts can be readily predicted by artificial intelligence (AI), owing to its ease of use and adaptability in seeking optimal solutions rapidly, as well as its use of historical data to predict future agricultural energy use patterns under constraints. This paper conducts energy output and environmental impact prediction of paddy production in Guilan province, Iran, based on two AI methods: artificial neural networks (ANNs) and an adaptive neuro-fuzzy inference system (ANFIS). The amounts of energy input and output are 51,585.61 MJ kg⁻¹ and 66,112.94 MJ kg⁻¹, respectively, in paddy production. Life Cycle Assessment (LCA) is used to evaluate the environmental impacts of paddy production. Results show that, in paddy production, in-farm emission is a hotspot in the global warming, acidification and eutrophication impact categories. An ANN model with a 12-6-8-1 structure is selected as the best one for predicting energy output. The correlation coefficient (R) varies from 0.524 to 0.999 in training for energy input and environmental impacts in the ANN models. The ANFIS model is developed based on a hybrid learning algorithm, with R for predicting output energy being 0.860 and, for environmental impacts, varying from 0.944 to 0.997. Results indicate that the multi-level ANFIS is a useful tool for managers for large-scale planning in forecasting energy output and environmental indices of agricultural production systems, owing to its higher computation speed compared to the ANN model, despite the ANN's higher accuracy. Copyright © 2018 Elsevier B.V. All rights reserved.
Thermal and optical performance of encapsulation systems for flat-plate photovoltaic modules
NASA Technical Reports Server (NTRS)
Minning, C. P.; Coakley, J. F.; Perrygo, C. M.; Garcia, A., III; Cuddihy, E. F.
1981-01-01
The electrical power output from a photovoltaic module is strongly influenced by the thermal and optical characteristics of the module encapsulation system. Described are the methodology and computer model for performing fast and accurate thermal and optical evaluations of different encapsulation systems. The computer model is used to evaluate cell temperature, solar energy transmittance through the encapsulation system, and electric power output for operation in a terrestrial environment. Extensive results are presented for both superstrate-module and substrate-module design schemes which include different types of silicon cell materials, pottants, and antireflection coatings.
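A first-order version of the thermal-optical coupling described above treats encapsulation transmittance as a scale factor on the optical input, estimates cell temperature from irradiance, and derates efficiency linearly with temperature. The sketch below uses the standard NOCT approximation and invented module parameters as a stand-in for the paper's detailed computer model.

```python
def module_power(g, t_ambient, tau=0.95, eta_ref=0.13,
                 beta=0.0045, noct=45.0, area=1.0):
    """First-order flat-plate module model (illustrative parameters):
    - tau: encapsulation-system transmittance (scales the optical input)
    - NOCT approximation gives cell temperature from irradiance g (W/m^2)
    - efficiency derates linearly above 25 C."""
    t_cell = t_ambient + (noct - 20.0) / 800.0 * g      # NOCT approximation
    eta = eta_ref * (1.0 - beta * (t_cell - 25.0))
    return tau * g * area * eta, t_cell

p, tc = module_power(g=1000.0, t_ambient=25.0)
print(f"P = {p:.1f} W, T_cell = {tc:.1f} C")
```

Even this crude model reproduces the abstract's central point: a better encapsulation system can raise output twice over, by transmitting more light (higher tau) and by running the cell cooler (higher eta).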
DOT National Transportation Integrated Search
2016-07-13
The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...
NASA Technical Reports Server (NTRS)
Trachta, G.
1976-01-01
A model of Univac 1108 work flow has been developed to assist in performance evaluation studies and configuration planning. Workload profiles and system configurations are parameterized for ease of experimental modification. Outputs include capacity estimates and performance evaluation functions. The U1108 system is conceptualized as a service network; classical queueing theory is used to evaluate network dynamics.
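The classical queueing-theory treatment mentioned above reduces, for a single service station, to a handful of closed-form M/M/1 results. The sketch below computes them for a hypothetical service rate and arrival rate; the numbers are invented, not measurements of the U1108.

```python
def mm1_metrics(arrival_rate, service_rate):
    """Classical M/M/1 queue: utilization, mean jobs in system, mean response time."""
    rho = arrival_rate / service_rate
    if rho >= 1:
        raise ValueError("queue is unstable (utilization >= 1)")
    n_mean = rho / (1 - rho)                    # mean number of jobs in system
    t_mean = 1 / (service_rate - arrival_rate)  # mean time in system (Little's law)
    return rho, n_mean, t_mean

# Hypothetical service center: 8 jobs/min arriving, capacity 10 jobs/min.
rho, n, t = mm1_metrics(8.0, 10.0)
print(f"utilization={rho:.0%}, mean jobs in system={n:.1f}, mean response={t:.2f} min")
```

The sharp growth of n_mean as rho approaches 1 is what makes such a model useful for capacity estimation: it shows how close a configuration can run to saturation before response times blow up.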
DOT National Transportation Integrated Search
2017-02-02
The primary objective of this project is to develop multiple simulation testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...
Evaluating Multi-Input/Multi-Output Digital Control Systems
NASA Technical Reports Server (NTRS)
Pototzky, Anthony S.; Wieseman, Carol D.; Hoadley, Sherwood T.; Mukhopadhyay, Vivek
1994-01-01
A controller-performance-evaluation (CPE) methodology for multi-input/multi-output (MIMO) digital control systems was developed. The procedures identify potentially destabilizing controllers and confirm the satisfactory performance of stabilizing ones. The methodology is generic and can be used in many types of multi-loop digital-controller applications, including digital flight-control systems, digitally controlled spacecraft structures, and actively controlled wind-tunnel models. It is also applicable to other complex, highly dynamic digital controllers, such as those in high-performance robot systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reisenauer, A.E.
1979-12-01
A system of computer codes to aid in the preparation and evaluation of ground-water model input and output, together with the auxiliary programs developed and adapted for use in modeling major ground-water aquifers, is described. The ground-water model is interactive, rather than a batch-type model. Interactive models have been demonstrated to be superior to batch models in the ground-water field: for example, looking through reams of numerical listings can be avoided with the much superior graphical output forms or summary-type numerical output. The system of computer codes provides the flexibility to develop rapidly the model-required data files from engineering data and geologic maps, as well as to manipulate efficiently the voluminous data generated. Central to these codes is the Ground-water Model, which, given the boundary value problem, produces either the steady-state or transient time-plane solutions. A sizeable part of the available codes provides rapid evaluation of the results. Besides contouring the new water potentials, the model allows graphical review of streamlines of flow, travel times, and detailed comparisons of surfaces or points at designated wells. The graphics scopes provide immediate but temporary displays, which can be used for evaluation of input and output and can be reproduced easily on hard-copy devices such as a line printer, Calcomp plotter, and image photographs.
An Evaluation Concept for Audiovisual Activities in the Department of Defense.
ERIC Educational Resources Information Center
Main, Robert G.
The DAVA (Directorate for Audiovisual Activities) evaluation model was developed for the U.S. Department of Defense to generate studies, decision models, standards, and directives, with outputs coordinated by the military departments that implement the decisions through the major commands and down to the installation level. The 3-level model is…
Logic Models: A Tool for Designing and Monitoring Program Evaluations. REL 2014-007
ERIC Educational Resources Information Center
Lawton, Brian; Brandon, Paul R.; Cicchinelli, Louis; Kekahio, Wendy
2014-01-01
An introduction to logic models as a tool for designing program evaluations defines the major components of education programs--resources, activities, outputs, and short-, mid-, and long-term outcomes--and uses an example to demonstrate the relationships among them. This quick…
Al Shaafi, Mm; Maawadh, Am; Al Qahtani, Mq
2011-01-01
The purpose of this study was to evaluate the light intensity output of quartz-tungsten-halogen (QTH) and light-emitting diode (LED) curing devices located at governmental health institutions in Riyadh, Saudi Arabia. Eight governmental institutions were involved in the study. The total number of evaluated curing devices was 210 (120 QTH and 90 LED). The light intensity output of each curing unit was read using a digital spectrometer (Model USB4000 Spectrometer, Ocean Optics Inc, Dunedin, FL, USA). The reading procedure was performed by a single investigator; any recording of light intensity below 300 mW/cm2 was considered unsatisfactory. The recorded mean values of light intensity output for QTH and LED devices were 260 mW/cm2 and 598 mW/cm2, respectively. The percentages of QTH and LED devices considered unsatisfactory were 67.5% and 15.6%, respectively. Overall, regular assessment of light curing devices using light meters is recommended to assure adequate output for clinical use.
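The pass/fail screening in the study reduces to counting units below the 300 mW/cm2 cutoff; a minimal sketch, using hypothetical readings rather than the study's data:

```python
# Sketch: percentage of curing units reading below the intensity threshold.
# The 300 mW/cm^2 cutoff is from the study; the readings are made up.

THRESHOLD_MW_CM2 = 300.0

def percent_unsatisfactory(readings):
    """Percentage of readings (mW/cm^2) below the satisfactory threshold."""
    below = sum(1 for r in readings if r < THRESHOLD_MW_CM2)
    return 100.0 * below / len(readings)

qth_readings = [260.0, 310.0, 180.0, 420.0]   # hypothetical values
pct = percent_unsatisfactory(qth_readings)
```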
NASA Astrophysics Data System (ADS)
Thomas, Zahra; Rousseau-Gueutin, Pauline; Kolbe, Tamara; Abbott, Ben; Marcais, Jean; Peiffer, Stefan; Frei, Sven; Bishop, Kevin; Le Henaff, Geneviève; Squividant, Hervé; Pichelin, Pascal; Pinay, Gilles; de Dreuzy, Jean-Raynald
2017-04-01
The distribution of groundwater residence time in a catchment provides synoptic information about catchment functioning (e.g. nutrient retention and removal, hydrograph flashiness). In contrast with interpreted model results, which are often not directly comparable between studies, residence time distribution is a general output that could be used to compare catchment behaviors and test hypotheses about landscape controls on catchment functioning. To this end, we created a virtual observatory platform called the Catchment Virtual Observatory for Sharing Flow and Transport Model Outputs (COnSOrT). The main goal of COnSOrT is to collect outputs from calibrated groundwater models from a wide range of environments. By comparing a wide variety of catchments from different climatic, topographic and hydrogeological contexts, we expect to enhance understanding of catchment connectivity, resilience to anthropogenic disturbance, and overall functioning. The web-based observatory will also provide software tools to analyze model outputs. The observatory will enable modelers to test their models in a wide range of catchment environments to evaluate the generality of their findings and the robustness of their post-processing methods. Researchers with calibrated numerical models can benefit from the observatory by using its post-processing methods to implement new approaches to analyzing their data. Field scientists interested in contributing data could invite modelers associated with the observatory to test their models against observed catchment behavior. COnSOrT will allow meta-analyses with community contributions to generate new understanding and identify promising pathways toward moving beyond single-catchment ecohydrology. Keywords: Residence time distribution, Model outputs, Catchment hydrology, Inter-catchment comparison
van der Krieke, Lian; Emerencia, Ando C; Bos, Elisabeth H; Rosmalen, Judith Gm; Riese, Harriëtte; Aiello, Marco; Sytema, Sjoerd; de Jonge, Peter
2015-08-07
Health promotion can be tailored by combining ecological momentary assessments (EMA) with time series analysis. This combined method allows for studying the temporal order of dynamic relationships among variables, which may provide concrete indications for intervention. However, application of this method in health care practice is hampered because analyses are conducted manually and advanced statistical expertise is required. This study aims to show how this limitation can be overcome by introducing automated vector autoregressive modeling (VAR) of EMA data and to evaluate its feasibility through comparisons with results of previously published manual analyses. We developed a Web-based open source application, called AutoVAR, which automates time series analyses of EMA data and provides output that is intended to be interpretable by nonexperts. The statistical technique we used was VAR. AutoVAR tests and evaluates all possible VAR models within a given combinatorial search space and summarizes their results, thereby replacing the researcher's tasks of conducting the analysis, making an informed selection of models, and choosing the best model. We compared the output of AutoVAR to the output of a previously published manual analysis (n=4). An illustrative example consisting of 4 analyses was provided. Compared to the manual output, the AutoVAR output presents similar model characteristics and statistical results in terms of the Akaike information criterion, the Bayesian information criterion, and the test statistic of the Granger causality test. Results suggest that automated analysis and interpretation of time series is feasible. Compared to a manual procedure, the automated procedure is more robust and can save days of time. These findings may pave the way for using time series analysis for health promotion on a larger scale. AutoVAR was evaluated using the results of a previously conducted manual analysis.
Analysis of additional datasets is needed in order to validate and refine the application for general use.
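The model-selection loop that AutoVAR automates can be sketched in miniature: fit autoregressive models of increasing order and keep the order that minimizes an information criterion (AIC or BIC). A univariate AR model stands in for the full VAR here for brevity, and the data are simulated, not from the study.

```python
import numpy as np

# Sketch: order selection for an AR(p) model by least squares + AIC/BIC,
# a simplified stand-in for AutoVAR's exhaustive VAR model search.

def fit_ar(y, p):
    """Least-squares fit of AR(p); returns residual variance and sample size."""
    X = np.column_stack([y[p - k - 1 : len(y) - k - 1] for k in range(p)])
    X = np.column_stack([np.ones(len(y) - p), X])  # intercept + p lag columns
    target = y[p:]
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ beta
    return resid.var(), len(target)

def best_order(y, max_p, criterion="bic"):
    """Order in 1..max_p with the lowest information criterion."""
    scores = {}
    for p in range(1, max_p + 1):
        sigma2, n = fit_ar(y, p)
        k = p + 1                                   # coefficients incl. intercept
        penalty = 2 * k if criterion == "aic" else k * np.log(n)
        scores[p] = n * np.log(sigma2) + penalty
    return min(scores, key=scores.get)

rng = np.random.default_rng(0)
y = np.zeros(500)
for t in range(1, 500):                             # simulate an AR(1) process
    y[t] = 0.8 * y[t - 1] + rng.normal()
order = best_order(y, max_p=5)
```

AutoVAR additionally handles multivariate series, exogenous variables, and Granger-causality testing; the point of the sketch is only the "fit all candidates, rank by criterion" pattern.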
Software Validation via Model Animation
NASA Technical Reports Server (NTRS)
Dutle, Aaron M.; Munoz, Cesar A.; Narkawicz, Anthony J.; Butler, Ricky W.
2015-01-01
This paper explores a new approach to validating software implementations that have been produced from formally-verified algorithms. Although visual inspection gives some confidence that the implementations faithfully reflect the formal models, it does not provide complete assurance that the software is correct. The proposed approach, which is based on animation of formal specifications, compares the outputs computed by the software implementations on a given suite of input values to the outputs computed by the formal models on the same inputs, and determines if they are equal up to a given tolerance. The approach is illustrated on a prototype air traffic management system that computes simple kinematic trajectories for aircraft. Proofs for the mathematical models of the system's algorithms are carried out in the Prototype Verification System (PVS). The animation tool PVSio is used to evaluate the formal models on a set of randomly generated test cases. Output values computed by PVSio are compared against output values computed by the actual software. This comparison improves the assurance that the translation from formal models to code is faithful and that, for example, floating point errors do not greatly affect correctness and safety properties.
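The core comparison step described above can be sketched as follows; the function name and tolerance are illustrative, not from the paper's PVS/PVSio tooling.

```python
# Sketch: check that software outputs match formal-model outputs on the same
# inputs, equal up to a given tolerance (e.g. to absorb floating point error).

def outputs_agree(model_outputs, software_outputs, tol=1e-9):
    """True iff the two output sequences match pairwise within tol."""
    if len(model_outputs) != len(software_outputs):
        return False
    return all(abs(m - s) <= tol
               for m, s in zip(model_outputs, software_outputs))
```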
Currently used dispersion models, such as the AMS/EPA Regulatory Model (AERMOD), process routinely available meteorological observations to construct model inputs. Thus, model estimates of concentrations depend on the availability and quality of Meteorological observations, as we...
NASA Technical Reports Server (NTRS)
Pototzky, Anthony; Wieseman, Carol; Hoadley, Sherwood Tiffany; Mukhopadhyay, Vivek
1991-01-01
Described here is the development and implementation of on-line, near real time controller performance evaluation (CPE) methods capability. Briefly discussed are the structure of data flow, the signal processing methods used to process the data, and the software developed to generate the transfer functions. This methodology is generic in nature and can be used in any type of multi-input/multi-output (MIMO) digital controller application, including digital flight control systems, digitally controlled spacecraft structures, and actively controlled wind tunnel models. Results of applying the CPE methodology to evaluate (in near real time) MIMO digital flutter suppression systems being tested on the Rockwell Active Flexible Wing (AFW) wind tunnel model are presented to demonstrate the CPE capability.
Direct model reference adaptive control with application to flexible robots
NASA Technical Reports Server (NTRS)
Steinvorth, Rodrigo; Kaufman, Howard; Neat, Gregory W.
1992-01-01
A modification to a direct command generator tracker-based model reference adaptive control (MRAC) system is suggested in this paper. This modification incorporates a feedforward into the reference model's output as well as the plant's output. Its purpose is to eliminate the bounded model following error present in steady state when previous MRAC systems were used. The algorithm was evaluated using the dynamics for a single-link flexible-joint arm. The results of these simulations show a response with zero steady state model following error. These results encourage further use of MRAC for various types of nonlinear plants.
NASA Astrophysics Data System (ADS)
Elsayed, Ayman; Shabaan Khalil, Nabil
2017-10-01
The competition among maritime ports is increasing continuously; the main purpose of Safaga port is to become the best option for companies to carry out their trading activities, particularly importing and exporting. The main objective of this research is to evaluate and analyze factors that may significantly affect the levels of Safaga port efficiency in Egypt (particularly the infrastructural capacity). The assessment of such efficiency is a task that must play an important role in the management of Safaga port in order to improve the possibility of development and success in commercial activities. Drawing on Data Envelopment Analysis (DEA) models, this paper develops a manner of assessing the comparative efficiency of Safaga port in Egypt during the study period 2004-2013. Previous research on port efficiency measurement usually used radial DEA models (DEA-CCR, DEA-BCC) rather than non-radial DEA models. This research applies the radial output-oriented DEA-CCR and DEA-BCC models and the non-radial DEA-SBM model with ten inputs and four outputs. The results were obtained from the analysis of input and output variables based on the DEA-CCR, DEA-BCC and SBM models, using the software MaxDEA Pro 6.3. DP World Sokhna port showed higher efficiency for all outputs compared with Safaga port. DP World Sokhna's position just below the southern entrance to the Suez Canal, on the Red Sea, Egypt, makes it strategically located to handle cargo transiting through one of the world's busiest commercial waterways.
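The radial CCR model contrasted with SBM above can be written as a small linear program (the input-oriented envelopment form): minimize the radial contraction factor theta subject to a convex-cone combination of peer units dominating the evaluated unit. This is a generic sketch with made-up input/output matrices, not the paper's ten-input, four-output port data.

```python
import numpy as np
from scipy.optimize import linprog

# Sketch: input-oriented CCR (radial DEA) efficiency of one decision-making
# unit via the envelopment LP. Variables: [theta, lambda_1..lambda_n].

def ccr_efficiency(X, Y, j0):
    """X: inputs (m x n), Y: outputs (s x n); efficiency of unit j0 in (0, 1]."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(1 + n)
    c[0] = 1.0                          # minimize theta
    A_ub = np.zeros((m + s, 1 + n))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[:, j0]             # X @ lam <= theta * x_j0  (inputs)
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y                   # Y @ lam >= y_j0          (outputs)
    b_ub[m:] = -Y[:, j0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n))
    return res.fun

X = np.array([[2.0, 4.0, 4.0],          # 2 inputs, 3 units (toy data)
              [3.0, 2.0, 4.0]])
Y = np.array([[1.0, 1.0, 1.0]])         # 1 output
eff = ccr_efficiency(X, Y, 2)           # unit 2 is dominated, so eff < 1
```

The SBM model used in the paper differs in being non-radial: it penalizes input and output slacks directly instead of a single proportional contraction.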
NASA Astrophysics Data System (ADS)
Sun, Lianming; Sano, Akira
An output over-sampling based closed-loop identification algorithm is investigated in this paper. Some intrinsic properties of the continuous stochastic noise and of the plant input and output in the over-sampling approach are analyzed, and they are used to demonstrate identifiability in the over-sampling approach and to evaluate its identification performance. Furthermore, the selection of plant model order, the asymptotic variance of the estimated parameters and the asymptotic variance of the frequency response of the estimated model are also explored. The results show that the over-sampling approach can guarantee identifiability and greatly improve the performance of closed-loop identification.
Input-output characterization of fiber reinforced composites by P waves
NASA Technical Reports Server (NTRS)
Renneisen, John D.; Williams, James H., Jr.
1990-01-01
Input-output characterization of fiber composites is studied theoretically by tracing P waves in the media. A new path notation is developed to aid in tracing P-wave and reflection-generated SV-wave paths in the continuum plate. A theoretical output voltage from the receiving transducer is calculated for a tone burst. The study enhances the quantitative and qualitative understanding of the nondestructive evaluation of fiber composites which can be modeled as transversely isotropic media.
The Design and the Formative Evaluation of a Web-Based Course for Simulation Analysis Experiences
ERIC Educational Resources Information Center
Tao, Yu-Hui; Guo, Shin-Ming; Lu, Ya-Hui
2006-01-01
Simulation output analysis has received little attention compared with modeling and programming in real-world simulation applications. This is further evidenced by our observation that students and beginners acquire neither adequate knowledge nor relevant experience of simulation output analysis in traditional classroom learning. With…
USEEIO: a New and Transparent United States ...
National-scope environmental life cycle models of goods and services may be used for many purposes, including quantifying impacts of production and consumption of nations, assessing organization-wide impacts, identifying purchasing hot spots, analyzing environmental impacts of policies, and performing streamlined life cycle assessment. USEEIO is a new environmentally extended input-output model of the United States fit for such purposes and other sustainable materials management applications. USEEIO melds data on economic transactions between 389 industry sectors with environmental data for these sectors covering land, water, energy and mineral usage and emissions of greenhouse gases, criteria air pollutants, nutrients and toxics, to build a life cycle model of 385 US goods and services. In comparison with existing US input-output models, USEEIO is more current, with most data representing year 2013; more extensive in its coverage of resources and emissions; more deliberate and detailed in its interpretation and combination of data sources; and includes formal data quality evaluation and description. USEEIO was assembled with a new Python module called the IO Model Builder, capable of assembling and calculating results of user-defined input-output models and exporting the models into LCA software. The model and data quality evaluation capabilities are demonstrated with an analysis of the environmental performance of an average hospital in the US. All USEEIO f
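The calculation at the heart of any environmentally extended input-output model like USEEIO is the Leontief inverse: total industry output required to satisfy a final demand, multiplied by per-dollar environmental intensities. A two-sector toy sketch (the matrices below are made-up illustrations, not USEEIO's 389-sector data):

```python
import numpy as np

# Sketch: Leontief EEIO calculation. x = (I - A)^-1 y gives total industry
# output for final demand y; e = B x gives the associated emissions.

A = np.array([[0.1, 0.2],     # inter-industry requirements (per $ of output)
              [0.3, 0.1]])
B = np.array([[0.5, 1.2]])    # emissions per $ of industry output (one flow)
y = np.array([100.0, 50.0])   # final demand by sector ($)

x = np.linalg.solve(np.eye(2) - A, y)   # total industry output
e = B @ x                                # total emissions for that demand
```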
Land surface Verification Toolkit (LVT)
NASA Technical Reports Server (NTRS)
Kumar, Sujay V.
2017-01-01
LVT is a framework developed to provide an automated, consolidated environment for systematic land surface model evaluation. It includes support for a range of in-situ, remote-sensing, and other model and reanalysis products, and supports the analysis of outputs from various LIS subsystems, including LIS-DA, LIS-OPT, and LIS-UE. Note: The Land Information System Verification Toolkit (LVT) is a NASA software tool designed to enable the evaluation, analysis and comparison of outputs generated by the Land Information System (LIS). The LVT software is released under the terms and conditions of the NASA Open Source Agreement (NOSA) Version 1.1 or later.
Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation
NASA Astrophysics Data System (ADS)
Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.
2012-12-01
This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as non-uniqueness in selecting different AI methods. Using one single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs Bayesian model averaging (BMA) technique to address the issue of using one single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC) that follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural network (ANN) and neurofuzzy (NF) to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined three AI models and produced better fitting than individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model is nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored by using one AI model.
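The BIC-based weighting step in a scheme like BAIMA can be sketched directly: each model's weight is proportional to exp(-ΔBIC/2) relative to the best model, and the averaged estimate is the weight-sum of the individual model outputs. The BIC values and estimates below are hypothetical.

```python
import math

# Sketch: Bayesian model averaging weights from BIC scores, and the
# resulting weighted-average estimate across candidate models.

def bma_weights(bics):
    """Model weights proportional to exp(-0.5 * delta_BIC); sum to 1."""
    b_min = min(bics)
    raw = [math.exp(-0.5 * (b - b_min)) for b in bics]
    total = sum(raw)
    return [r / total for r in raw]

def bma_average(estimates, bics):
    """Weighted average of model outputs under the BIC-based weights."""
    w = bma_weights(bics)
    return sum(wi * ei for wi, ei in zip(w, estimates))
```

With equal BICs every model gets equal weight; a model whose BIC is much lower than the rest dominates the average, which is how the parsimony principle nearly discarded the NF model in the study.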
Solid rocket booster performance evaluation model. Volume 2: Users manual
NASA Technical Reports Server (NTRS)
1974-01-01
This users manual for the solid rocket booster performance evaluation model (SRB-II) contains descriptions of the model, the program options, the required program inputs, the program output format and the program error messages. SRB-II is written in FORTRAN and is operational on both the IBM 370/155 and the MSFC UNIVAC 1108 computers.
Tsushima, Yoko; Brient, Florent; Klein, Stephen A.; ...
2017-11-27
The CFMIP Diagnostic Codes Catalogue assembles cloud metrics, diagnostics and methodologies, together with programs to diagnose them from general circulation model (GCM) outputs, written by various members of the CFMIP community. This aims to facilitate use of the diagnostics by the wider community studying climate and climate change. Here, this paper describes the diagnostics and metrics which are currently in the catalogue, together with examples of their application to model evaluation studies and a summary of some of the insights these diagnostics have provided into the main shortcomings in current GCMs. Analysis of outputs from CFMIP and CMIP6 experiments will also be facilitated by the sharing of diagnostic codes via this catalogue. Any code which implements diagnostics relevant to analysing clouds – including cloud–circulation interactions and the contribution of clouds to estimates of climate sensitivity in models – and which is documented in peer-reviewed studies, can be included in the catalogue. We very much welcome additional contributions to further support community analysis of CMIP6 outputs.
Nonlinear Modeling of Causal Interrelationships in Neuronal Ensembles
Zanos, Theodoros P.; Courellis, Spiros H.; Berger, Theodore W.; Hampson, Robert E.; Deadwyler, Sam A.; Marmarelis, Vasilis Z.
2009-01-01
The increasing availability of multiunit recordings gives new urgency to the need for effective analysis of “multidimensional” time-series data that are derived from the recorded activity of neuronal ensembles in the form of multiple sequences of action potentials—treated mathematically as point-processes and computationally as spike-trains. Whether in conditions of spontaneous activity or under conditions of external stimulation, the objective is the identification and quantification of possible causal links among the neurons generating the observed binary signals. A multiple-input/multiple-output (MIMO) modeling methodology is presented that can be used to quantify the neuronal dynamics of causal interrelationships in neuronal ensembles using spike-train data recorded from individual neurons. These causal interrelationships are modeled as transformations of spike-trains recorded from a set of neurons designated as the “inputs” into spike-trains recorded from another set of neurons designated as the “outputs.” The MIMO model is composed of a set of multi-input/single-output (MISO) modules, one for each output. Each module is the cascade of a MISO Volterra model and a threshold operator generating the output spikes. The Laguerre expansion approach is used to estimate the Volterra kernels of each MISO module from the respective input–output data using the least-squares method. The predictive performance of the model is evaluated with the use of the receiver operating characteristic (ROC) curve, from which the optimum threshold is also selected. The Mann–Whitney statistic is used to select the significant inputs for each output by examining the statistical significance of improvements in the predictive accuracy of the model when the respective input is included. Illustrative examples are presented for a simulated system and for an actual application using multiunit data recordings from the hippocampus of a behaving rat. PMID:18701382
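The ROC-based threshold selection mentioned above can be sketched as a search for the threshold maximizing Youden's J (sensitivity + specificity − 1), a common choice of "optimum" point on the ROC curve; whether the paper uses exactly this criterion is an assumption, and the scores and labels below are made up.

```python
# Sketch: pick a spike-prediction threshold from an ROC sweep by maximizing
# Youden's J = sensitivity + specificity - 1. Data are illustrative.

def youden_threshold(scores, labels, candidates):
    """Return the candidate threshold maximizing sensitivity + specificity."""
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j = None, -1.0
    for t in candidates:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        j = tp / pos + (1.0 - fp / neg) - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t
```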
Analytic model for academic research productivity having factors, interactions and implications
2011-01-01
Financial support is dear in academia and will tighten further. How can the research mission be accomplished within new restraints? A model is presented for evaluating source components of academic research productivity. It comprises six factors: funding; investigator quality; efficiency of the research institution; the research mix of novelty, incremental advancement, and confirmatory studies; analytic accuracy; and passion. Their interactions produce output and patterned influences between factors. Strategies for optimizing output are enabled. PMID:22130145
Quantification of downscaled precipitation uncertainties via Bayesian inference
NASA Astrophysics Data System (ADS)
Nury, A. H.; Sharma, A.; Marshall, L. A.
2017-12-01
Prediction of precipitation from global climate model (GCM) outputs remains critical to decision-making in water-stressed regions. In this regard, downscaling of GCM output has been a useful tool for analysing future hydro-climatological states. Several downscaling approaches have been developed for precipitation, including dynamical and statistical downscaling methods. Frequently, outputs from dynamical downscaling are not readily transferable across regions because of significant methodological and computational difficulties. Statistical downscaling approaches provide a flexible and efficient alternative, producing hydro-climatological outputs across multiple temporal and spatial scales in many locations. However, these approaches are subject to significant uncertainty, arising from uncertainty in the downscaled model parameters and from the use of different reanalysis products for inferring appropriate model parameters. Consequently, these uncertainties affect the performance of simulations at the catchment scale. This study develops a Bayesian framework for modelling downscaled daily precipitation from GCM outputs and aims to characterize downscaling uncertainties by evaluating reanalysis datasets against observational rainfall data over Australia. A consistent technique for quantifying downscaling uncertainties by means of a Bayesian downscaling framework is proposed. The results suggest that there are differences in downscaled precipitation occurrences and extremes.
NASA Astrophysics Data System (ADS)
Hsu, Kuo-Lin; Gupta, Hoshin V.; Gao, Xiaogang; Sorooshian, Soroosh; Imam, Bisher
2002-12-01
Artificial neural networks (ANNs) can be useful in the prediction of hydrologic variables, such as streamflow, particularly when the underlying processes have complex nonlinear interrelationships. However, conventional ANN structures suffer from network training issues that significantly limit their widespread application. This paper presents a multivariate ANN procedure entitled self-organizing linear output map (SOLO), whose structure has been designed for rapid, precise, and inexpensive estimation of network structure/parameters and system outputs. More important, SOLO provides features that facilitate insight into the underlying processes, thereby extending its usefulness beyond forecast applications as a tool for scientific investigations. These characteristics are demonstrated using a classic rainfall-runoff forecasting problem. Various aspects of model performance are evaluated in comparison with other commonly used modeling approaches, including multilayer feedforward ANNs, linear time series modeling, and conceptual rainfall-runoff modeling.
Study of Regional Downscaled Climate and Air Quality in the United States
NASA Astrophysics Data System (ADS)
Gao, Y.; Fu, J. S.; Drake, J.; Lamarque, J.; Lam, Y.; Huang, K.
2011-12-01
Due to increasing anthropogenic greenhouse gas emissions, global and regional climate patterns have changed significantly. Climate change has exerted a strong impact on ecosystems, air quality and human life. The global Community Earth System Model (CESM v1.0) was used to predict future climate and chemistry under projected emission scenarios. Two new emission scenarios, Representative Concentration Pathways (RCP) 4.5 and RCP 8.5, were used in this study for climate and chemistry simulations. The projected global mean temperature will increase by 1.2 and 1.7 degrees Celsius for the RCP 4.5 and RCP 8.5 scenarios in the 2050s, respectively. In order to take advantage of detailed local topography and land use data and to assess local climate impacts on air quality, we downscaled CESM outputs to a 4 km by 4 km Eastern US domain using the Weather Research and Forecasting (WRF) Model and the Community Multi-scale Air Quality modeling system (CMAQ). Evaluations of regional model outputs against global model outputs, and of regional model outputs against observational data, were conducted to verify the downscaling methodology. Future climate change and air quality impacts were also examined at a 4 km by 4 km high-resolution scale.
Richard S. Holthausen; Michael J. Wisdom; John Pierce; Daniel K. Edwards; Mary M. Rowland
1994-01-01
We used expert opinion to evaluate the predictive reliability of a habitat effectiveness model for elk in western Oregon and Washington. Twenty-five experts in elk ecology were asked to rate habitat quality for 16 example landscapes. Rankings and ratings of 21 experts were significantly correlated with model output. Expert opinion and model predictions differed for 4...
ERIC Educational Resources Information Center
Ozbek, Cigdem; Comoglu, Irem; Baran, Bahar
2017-01-01
This study aims to design two activities, "introducing an innovation" and "role playing," in Second Life (SL) and to qualitatively evaluate Turkish foreign language learners' roles and outputs before, during, and after the implementation of the activities. The study used a community of inquiry model consisting of cognitive…
The Correlation of Human Capital on Costs of Air Force Acquisition Programs
2009-03-01
6.78, so our model does not exhibit multi-collinearity. We empirically tested for heteroskedasticity using the Breusch-Pagan-Godfrey… inputs to outputs. The output in this study is the average cost overrun of Aeronautical Systems Center research, development, test, and evaluation…
Integrated Model Reduction and Control of Aircraft with Flexible Wings
NASA Technical Reports Server (NTRS)
Swei, Sean Shan-Min; Zhu, Guoming G.; Nguyen, Nhan T.
2013-01-01
This paper presents an integrated approach to the modeling and control of aircraft with flexible wings. The coupled aircraft rigid body dynamics with a high-order elastic wing model can be represented in a finite dimensional state-space form. Given a set of desired output covariances, a model reduction process is performed by using the weighted Modal Cost Analysis (MCA). A dynamic output feedback controller, designed on the basis of the reduced-order model, is developed by utilizing the output covariance constraint (OCC) algorithm, and the resulting OCC design weighting matrix is used for the next iteration of the weighted cost analysis. This controller is then validated against the full-order evaluation model to ensure that the aircraft's handling qualities are met and the fluttering motion of the wings is suppressed. An iterative algorithm is developed in the CONDUIT environment to realize the integration of model reduction and controller design. The proposed integrated approach is applied to the NASA Generic Transport Model (GTM) for demonstration.
Evaluation of the power consumption of a high-speed parallel robot
NASA Astrophysics Data System (ADS)
Han, Gang; Xie, Fugui; Liu, Xin-Jun
2018-06-01
An inverse dynamic model of a high-speed parallel robot is established based on the virtual work principle. With this dynamic model, a new evaluation method is proposed to measure the power consumption of the robot during pick-and-place tasks. The power vector is extended in this method and used to represent the collinear velocity and acceleration of the moving platform. Afterward, several dynamic performance indices, which are homogeneous and possess obvious physical meanings, are proposed. These indices can evaluate the power input and output transmissibility of the robot in a workspace. The distributions of the power input and output transmissibility of the high-speed parallel robot are derived with these indices and clearly illustrated in atlases. Further, a low-power-consumption workspace is selected for the robot.
Designing an evaluation framework for WFME basic standards for medical education.
Tackett, Sean; Grant, Janet; Mmari, Kristin
2016-01-01
To create an evaluation plan for the World Federation for Medical Education (WFME) accreditation standards for basic medical education. We conceptualized the 100 basic standards from "Basic Medical Education: WFME Global Standards for Quality Improvement: The 2012 Revision" as medical education program objectives. Standards were simplified into evaluable items, which were then categorized as inputs, processes, outputs and/or outcomes to generate a logic model and corresponding plan for data collection. WFME standards posed significant challenges to evaluation due to complex wording, inconsistent formatting and lack of existing assessment tools. Our resulting logic model contained 244 items. Standard B 5.1.1 separated into 24 items, the most for any single standard. A large proportion of items (40%) required evaluation of more than one input, process, output and/or outcome. Only one standard (B 3.2.2) was interpreted as requiring evaluation of a program outcome. Current WFME standards are difficult to use for evaluation planning. Our analysis may guide adaptation and revision of standards to make them more evaluable. Our logic model and data collection plan may be useful to medical schools planning an institutional self-review and to accrediting authorities wanting to provide guidance to schools under their purview.
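The categorization step described above — simplifying standards into evaluable items and sorting them as inputs, processes, outputs and/or outcomes — can be sketched as a small data structure. The item wordings and counts below are invented for illustration; only the standard IDs B 5.1.1 and B 3.2.2 come from the abstract.

```python
from dataclasses import dataclass

@dataclass
class EvaluableItem:
    standard_id: str
    text: str
    categories: tuple   # an item may map to more than one category

LOGIC_CATEGORIES = ("input", "process", "output", "outcome")

def build_logic_model(items):
    """Group evaluable items by logic-model category; an item with
    multiple categories appears once under each of them."""
    model = {c: [] for c in LOGIC_CATEGORIES}
    for item in items:
        for c in item.categories:
            model[c].append(item)
    return model

# illustrative items only; the wording does not quote the WFME standards
items = [
    EvaluableItem("B 5.1.1", "sufficient academic staff recruited", ("input",)),
    EvaluableItem("B 5.1.1", "staff appraised regularly", ("input", "process")),
    EvaluableItem("B 3.2.2", "graduates attain intended outcomes", ("outcome",)),
]

logic_model = build_logic_model(items)
counts = {c: len(v) for c, v in logic_model.items()}
multi = sum(1 for i in items if len(i.categories) > 1)
print(counts, multi)
```

A real plan would attach a data-collection method to each item, which is where the evaluation challenges described above (complex wording, no existing assessment tools) surface.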
DairyWise, a whole-farm dairy model.
Schils, R L M; de Haan, M H A; Hemmer, J G A; van den Pol-van Dasselaar, A; de Boer, J A; Evers, A G; Holshof, G; van Middelkoop, J C; Zom, R L G
2007-11-01
A whole-farm dairy model was developed and evaluated. The DairyWise model is an empirical model that simulated technical, environmental, and financial processes on a dairy farm. The central component is the FeedSupply model that balanced the herd requirements, as generated by the DairyHerd model, and the supply of homegrown feeds, as generated by the crop models for grassland and corn silage. The output of the FeedSupply model was used as input for several technical, environmental, and economic submodels. The submodels simulated a range of farm aspects such as nitrogen and phosphorus cycling, nitrate leaching, ammonia emissions, greenhouse gas emissions, energy use, and a financial farm budget. The final output was a farm plan describing all material and nutrient flows and the consequences for the environment and economy. Evaluation of DairyWise was performed with 2 data sets consisting of 29 dairy farms. The evaluation showed that DairyWise was able to simulate gross margin, concentrate intake, nitrogen surplus, nitrate concentration in ground water, and crop yields. The variance accounted for ranged from 37 to 84%, and the mean differences between modeled and observed values varied from -5% to +3% per set of farms. We conclude that DairyWise is a powerful tool for integrated scenario development and evaluation for scientists, policy makers, extension workers, teachers, and farmers.
Farmer, Adrian H.; Cade, Brian S.; Terrell, James W.; Henriksen, Jim H.; Runge, Jeffery T.
2005-01-01
The primary objectives of this evaluation were to improve the performance of the Whooping Crane Habitat Suitability model (C4R) used by the U.S. Fish and Wildlife Service (Service) for defining the relationship between river discharge and habitat availability, and to assist the Service in implementing improved model(s) with existing hydraulic files. The C4R habitat model is applied at the scale of individual river cross-sections, but the model outputs are scaled up to larger reaches of the river using a decision support “model” comprised of other data and procedures. Hence, the validity of the habitat model depends at least partially on how its outputs are incorporated into this larger context. For that reason, we also evaluated other procedures including the PHABSIM data files, the FORTRAN computer programs used to implement the model, and other parameters used to simulate the relationship between river flows and the availability of Whooping Crane roosting habitat along more than 100 miles of heterogeneous river channels. An equally important objective of this report was to fully document these related procedures as well as the model and evaluation results so that interested parties could readily understand the technical basis for the Service’s recommendations.
DOT National Transportation Integrated Search
2016-04-20
The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...
DOT National Transportation Integrated Search
2016-10-01
The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...
In ecotoxicological testing, there are few studies that report on reproductive output (egg production) of marine or estuarine fish. Cunner (Tautogolabrus adspersus) were studied as a potential model species to evaluate the impact of pollutants with estrogenic activity on reprodu...
DOT National Transportation Integrated Search
2016-10-01
The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...
DOT National Transportation Integrated Search
2016-10-01
The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...
DOT National Transportation Integrated Search
2016-06-30
The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...
DOT National Transportation Integrated Search
2016-10-01
The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...
DOT National Transportation Integrated Search
2016-06-16
The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...
DOT National Transportation Integrated Search
2016-06-16
The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (mo...
Boerboom, L E; Kinney, T E; Olinger, G N; Hoffmann, R G
1993-10-01
Evaluation of patients with acute tricuspid insufficiency may include assessment of cardiac output by the thermodilution method. The accuracy of estimates of thermodilution-derived cardiac output in the presence of tricuspid insufficiency has been questioned. This study was designed to determine the validity of the thermodilution technique in a canine model of acute reversible tricuspid insufficiency. Cardiac output as measured by thermodilution and electromagnetic flowmeter was compared at two grades of regurgitation. The relationship between these two methods (thermodilution/electromagnetic) changed significantly from a regression slope of 1.01 +/- 0.18 (mean +/- standard deviation) during control conditions to a slope of 0.86 +/- 0.23 (p < 0.02) during severe regurgitation. No significant change was observed between control and mild regurgitation or between the initial control value and a control measurement repeated after tricuspid insufficiency was reversed at the termination of the study. This study shows that in a canine model of severe acute tricuspid regurgitation the thermodilution method underestimates cardiac output by an amount that is proportional to the level of cardiac output and to the grade of regurgitation.
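The slope comparison reported above can be illustrated with ordinary least squares: regress the thermodilution readings on a reference measurement and inspect how far the slope falls below unity. This is a generic sketch with fabricated flow values, not the study's canine data.

```python
def ls_slope(x, y):
    """Ordinary least-squares slope of y on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# fabricated cardiac-output pairs in L/min (not the study's data):
# electromagnetic flowmeter (reference) vs thermodilution readings
em = [2.0, 3.0, 4.0, 5.0, 6.0]
td_control = [2.1, 3.0, 4.0, 5.1, 6.0]   # agreement: slope near 1
td_severe = [1.8, 2.6, 3.4, 4.2, 5.0]    # underestimation grows with output

print(round(ls_slope(em, td_control), 2), round(ls_slope(em, td_severe), 2))
```

A slope below 1 in the second series mirrors the study's finding that the underestimation is proportional to the level of cardiac output.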
Hostetler, S.W.; Giorgi, F.
1993-01-01
In this paper we investigate the feasibility of coupling regional climate models (RCMs) with landscape-scale hydrologic models (LSHMs) for studies of the effects of climate on hydrologic systems. The RCM used is the National Center for Atmospheric Research/Pennsylvania State University mesoscale model (MM4). Output from two year-round simulations (1983 and 1988) over the western United States is used to drive a lake model for Pyramid Lake in Nevada and a streamflow model for Steamboat Creek in Oregon. Comparisons with observed data indicate that MM4 is able to produce meteorologic data sets that can be used to drive hydrologic models. Results from the lake model simulations indicate that the use of MM4 output produces reasonably good predictions of surface temperature and evaporation. Results from the streamflow simulations indicate that the use of MM4 output results in good simulations of the seasonal cycle of streamflow, but deficiencies in simulated wintertime precipitation resulted in underestimates of streamflow and soil moisture. Further work with climate (multiyear) simulations is necessary to achieve a complete analysis, but the results from this study indicate that coupling of LSHMs and RCMs may be a useful approach for evaluating the effects of climate change on hydrologic systems.
Team performance in the Italian NHS: the role of reflexivity.
Urbini, Flavio; Callea, Antonino; Chirumbolo, Antonio; Talamo, Alessandra; Ingusci, Emanuela; Ciavolino, Enrico
2018-04-09
Purpose The purpose of this paper is twofold: first, to investigate the goodness of the input-process-output (IPO) model in order to evaluate work team performance within the Italian National Health Care System (NHS); and second, to test the mediating role of reflexivity as an overarching process factor between input and output. Design/methodology/approach The Italian version of the Aston Team Performance Inventory was administered to 351 employees working in teams in the Italian NHS. Mediation analyses with latent variables were performed via structural equation modeling (SEM); the significance of total, direct, and indirect effect was tested via bootstrapping. Findings Underpinned by the IPO framework, the results of SEM supported mediational hypotheses. First, the application of the IPO model in the Italian NHS showed adequate fit indices, showing that the process mediates the relationship between input and output factors. Second, reflexivity mediated the relationship between input and output, influencing some aspects of team performance. Practical implications The results provide useful information for HRM policies improving process dimensions of the IPO model via the mediating role of reflexivity as a key role in team performance. Originality/value This study is one of a limited number of studies that applied the IPO model in the Italian NHS. Moreover, no study has yet examined the role of reflexivity as a mediator between input and output factors in the IPO model.
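The bootstrapped indirect effect at the heart of the mediation analysis above can be sketched in a few lines. This is a deliberate simplification of a full SEM with latent variables: the paths are estimated by simple regression (the b path here ignores the direct input-to-output control), and the data are synthetic.

```python
import random

def slope(x, y):
    """Ordinary least-squares slope of y on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

random.seed(1)
x = [random.gauss(0, 1) for _ in range(200)]       # input factor
m = [0.6 * xi + random.gauss(0, 1) for xi in x]    # mediator (reflexivity)
y = [0.7 * mi + random.gauss(0, 1) for mi in m]    # output (team performance)

def indirect(idx):
    """Indirect effect a*b on the subsample given by the index list."""
    xs = [x[i] for i in idx]
    ms = [m[i] for i in idx]
    ys = [y[i] for i in idx]
    return slope(xs, ms) * slope(ms, ys)

n = len(x)
point = indirect(range(n))
boots = sorted(indirect([random.randrange(n) for _ in range(n)])
               for _ in range(1000))
ci = (boots[25], boots[974])   # 95% percentile bootstrap interval
print(round(point, 2), ci[0] > 0)
```

A confidence interval excluding zero is the usual bootstrap criterion for a significant indirect (mediated) effect.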
Balancing the stochastic description of uncertainties as a function of hydrologic model complexity
NASA Astrophysics Data System (ADS)
Del Giudice, D.; Reichert, P.; Albert, C.; Kalcic, M.; Logsdon Muenich, R.; Scavia, D.; Bosch, N. S.; Michalak, A. M.
2016-12-01
Uncertainty analysis is becoming an important component of forecasting water and pollutant fluxes in urban and rural environments. Properly accounting for errors in the modeling process can help to robustly assess the uncertainties associated with the inputs (e.g. precipitation) and outputs (e.g. runoff) of hydrological models. In recent years we have investigated several Bayesian methods to infer the parameters of a mechanistic hydrological model along with those of the stochastic error component. The latter describes the uncertainties of model outputs and possibly inputs. We have adapted our framework to a variety of applications, ranging from predicting floods in small stormwater systems to nutrient loads in large agricultural watersheds. Given practical constraints, we discuss how in general the number of quantities to infer probabilistically varies inversely with the complexity of the mechanistic model. Most often, when evaluating a hydrological model of intermediate complexity, we can infer the parameters of the model as well as of the output error model. Describing the output errors as a first order autoregressive process can realistically capture the "downstream" effect of inaccurate inputs and structure. With simpler runoff models we can additionally quantify input uncertainty by using a stochastic rainfall process. For complex hydrologic transport models, instead, we show that keeping model parameters fixed and just estimating time-dependent output uncertainties could be a viable option. The common goal across all these applications is to create time-dependent prediction intervals which are both reliable (cover the nominal amount of validation data) and precise (are as narrow as possible). In conclusion, we recommend focusing both on the choice of the hydrological model and of the probabilistic error description. 
The latter can include output uncertainty only, if the model is computationally expensive, or, with simpler models, it can separately account for different sources of errors, such as those in the inputs and in the structure of the model.
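The first-order autoregressive description of output errors discussed above can be sketched as follows. Here phi and sigma are assumed known, whereas in the Bayesian framework they would be inferred, and the "hydrological model" is reduced to a fixed deterministic curve; the point is only how an AR(1) error process yields time-dependent prediction intervals whose reliability can be checked against data.

```python
import random

random.seed(0)
phi, sigma = 0.8, 0.5                     # AR(1) coefficient, innovation sd
T = 100
det = [5 + 0.02 * t for t in range(T)]    # deterministic model output

def ar1_path():
    """One realization of the AR(1) error process."""
    eta, path = 0.0, []
    for _ in range(T):
        eta = phi * eta + random.gauss(0, sigma)
        path.append(eta)
    return path

# Monte Carlo predictive ensemble: deterministic output plus AR(1) error
ens = [[d + e for d, e in zip(det, ar1_path())] for _ in range(500)]

# pointwise 95% prediction interval at each time step
intervals = []
for t in range(T):
    vals = sorted(member[t] for member in ens)
    intervals.append((vals[12], vals[487]))

# synthetic "observations" drawn from the same process; a reliable
# interval should cover roughly the nominal 95% of them
obs = [d + e for d, e in zip(det, ar1_path())]
coverage = sum(lo <= o <= hi for o, (lo, hi) in zip(obs, intervals)) / T
print(coverage)
```

The reliability/precision trade-off mentioned above corresponds to checking this coverage while keeping the intervals as narrow as possible.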
Fast-axial turbulent flow CO2 laser output characteristics and scaling parameters
NASA Astrophysics Data System (ADS)
Dembovetsky, V. V.; Zavalova, Valentina Y.; Zavalov, Yuri N.
1996-04-01
The paper presents the experimental results of evaluating the output characteristics of the TLA-600 carbon-dioxide laser with axial turbulent gas flow, as well as the results of numerical modeling. The output characteristics and spatial distribution of the laser beam were measured with respect to specific energy input, working mixture pressure, active medium length, and output mirror reflection. The paper also presents the results of experimental and theoretical study and design decisions on a succession of similar-type industrial carbon-dioxide lasers with fast-axial gas flow and dc discharge excitation of the active medium developed at NICTL RAN. As an illustration, characteristics of the TLA-600 laser are cited.
Estimating the Fully Burdened Cost of Fuel Using an Input-Output Model - A Micro-Level Analysis
2011-09-01
The multilocation distribution model used by Lu and Rencheng to evaluate an international supply chain (Lu & Rencheng, 2007)… an IO model to evaluate an international supply chain specifically for a multilocation production system. Figure 2 illustrates such a system.
NASA Technical Reports Server (NTRS)
Griffin, Brian Joseph; Burken, John J.; Xargay, Enric
2010-01-01
This paper presents an L(sub 1) adaptive control augmentation system design for multi-input multi-output nonlinear systems in the presence of unmatched uncertainties which may exhibit significant cross-coupling effects. A piecewise continuous adaptive law is adopted and extended for applicability to multi-input multi-output systems that explicitly compensates for dynamic cross-coupling. In addition, explicit use of high-fidelity actuator models is added to the L(sub 1) architecture to reduce uncertainties in the system. The L(sub 1) multi-input multi-output adaptive control architecture is applied to the X-29 lateral/directional dynamics and results are evaluated against a similar single-input single-output design approach.
An Evaluation of Output Quality of Machine Translation (Padideh Software vs. Google Translate)
ERIC Educational Resources Information Center
Azer, Haniyeh Sadeghi; Aghayi, Mohammad Bagher
2015-01-01
This study aims to evaluate the translation quality of two machine translation systems in translating six different text-types, from English to Persian. The evaluation was based on criteria proposed by Van Slype (1979). The proposed model for evaluation is a black-box type, comparative and adequacy-oriented evaluation. To conduct the evaluation, a…
An analytical framework to assist decision makers in the use of forest ecosystem model predictions
USDA-ARS?s Scientific Manuscript database
The predictions of most terrestrial ecosystem models originate from deterministic simulations. Relatively few uncertainty evaluation exercises in model outputs are performed by either model developers or users. This issue has important consequences for decision makers who rely on models to develop n...
Sensitivity Analysis of the Land Surface Model NOAH-MP for Different Model Fluxes
NASA Astrophysics Data System (ADS)
Mai, Juliane; Thober, Stephan; Samaniego, Luis; Branch, Oliver; Wulfmeyer, Volker; Clark, Martyn; Attinger, Sabine; Kumar, Rohini; Cuntz, Matthias
2015-04-01
Land Surface Models (LSMs) use a plenitude of process descriptions to represent the carbon, energy, and water cycles. They are highly complex and computationally expensive. Practitioners, however, are often only interested in specific outputs of the model such as latent heat or surface runoff. In model applications like parameter estimation, the most important parameters are then chosen by experience or expert knowledge. Hydrologists interested in surface runoff therefore choose mostly soil parameters, while biogeochemists interested in carbon fluxes focus on vegetation parameters. However, this might lead to the omission of parameters that are important, for example, through strong interactions with the parameters chosen. It also happens during model development that some process descriptions contain fixed values, which are supposedly unimportant parameters. These hidden parameters normally remain undetected although they might be highly relevant during model calibration. Sensitivity analyses are used to identify informative model parameters for a specific model output. Standard methods for sensitivity analysis, such as Sobol indexes, require large numbers of model evaluations, especially in the case of many model parameters. We hence propose to first use a recently developed, inexpensive sequential screening method based on Elementary Effects that has proven to identify the relevant informative parameters. This reduces the number of parameters, and therefore the number of model evaluations, for subsequent analyses such as sensitivity analysis or model calibration. In this study, we quantify parametric sensitivities of the land surface model NOAH-MP, a state-of-the-art LSM used at regional scale as the land surface scheme of the atmospheric Weather Research and Forecasting Model (WRF). NOAH-MP contains multiple process parameterizations, yielding a considerable number of parameters (˜ 100).
Sensitivities for the three model outputs (a) surface runoff, (b) soil drainage, and (c) latent heat are calculated on twelve Model Parameter Estimation Experiment (MOPEX) catchments ranging in size from 1020 to 4421 km2. This allows investigation of parametric sensitivities for distinct hydro-climatic characteristics, emphasizing different land-surface processes. The sequential screening identifies the most informative parameters of NOAH-MP for different model output variables. The number of parameters is reduced substantially for all three model outputs, to approximately 25. The subsequent Sobol method quantifies the sensitivities of these informative parameters. The study demonstrates the existence of sensitive, important parameters in almost all parts of the model, irrespective of the considered output. Soil parameters, for example, are informative for all three output variables, whereas plant parameters are informative not only for latent heat but also for soil drainage, because soil drainage is strongly coupled to transpiration through the soil water balance. These results contrast with the choice of only soil parameters in hydrological studies and only plant parameters in biogeochemical ones. The sequential screening identified several important hidden parameters that carry large sensitivities and hence have to be included during model calibration.
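The Elementary Effects screening used above can be sketched on a toy function: perturb one parameter at a time from random base points, and rank parameters by the mean absolute effect. The response function and all numbers below are illustrative; a real application would use the trajectory or radial designs of the Morris method over the ~100 NOAH-MP parameters.

```python
import random

random.seed(42)

def model(p):
    """Toy response on the unit cube: p[0] and p[2] matter, p[1] barely."""
    return 3.0 * p[0] + 0.01 * p[1] + 2.0 * p[2] ** 2

k, delta, r = 3, 0.1, 50       # parameters, step size, number of base points
ee_abs = [[] for _ in range(k)]

for _ in range(r):
    base = [random.random() for _ in range(k)]
    f0 = model(base)
    for i in range(k):                       # one-at-a-time perturbation
        pert = list(base)
        pert[i] = min(pert[i] + delta, 1.0)  # stay inside the unit cube
        step = pert[i] - base[i]
        ee_abs[i].append(abs(model(pert) - f0) / step)

mu_star = [sum(v) / len(v) for v in ee_abs]  # mean |EE| per parameter
ranking = sorted(range(k), key=lambda i: -mu_star[i])
print(ranking)
```

Parameters at the bottom of the ranking are candidates for exclusion before the (much more expensive) Sobol analysis.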
Evaluation of Statistical Downscaling Skill at Reproducing Extreme Events
NASA Astrophysics Data System (ADS)
McGinnis, S. A.; Tye, M. R.; Nychka, D. W.; Mearns, L. O.
2015-12-01
Climate model outputs usually have much coarser spatial resolution than is needed by impacts models. Although higher resolution can be achieved using regional climate models for dynamical downscaling, further downscaling is often required. The final resolution gap is often closed with a combination of spatial interpolation and bias correction, which constitutes a form of statistical downscaling. We use this technique to downscale regional climate model data and evaluate its skill in reproducing extreme events. We downscale output from the North American Regional Climate Change Assessment Program (NARCCAP) dataset from its native 50-km spatial resolution to the 4-km resolution of University of Idaho's METDATA gridded surface meteorological dataset, which derives from the PRISM and NLDAS-2 observational datasets. We operate on the major variables used in impacts analysis at a daily timescale: daily minimum and maximum temperature, precipitation, humidity, pressure, solar radiation, and winds. To interpolate the data, we use the patch recovery method from the Earth System Modeling Framework (ESMF) regridding package. We then bias correct the data using Kernel Density Distribution Mapping (KDDM), which has been shown to exhibit superior overall performance across multiple metrics. Finally, we evaluate the skill of this technique in reproducing extreme events by comparing raw and downscaled output with meteorological station data in different bioclimatic regions according to the skill scores defined by Perkins et al. (2013) for evaluation of AR4 climate models. We also investigate techniques for improving bias correction of values in the tails of the distributions. These techniques include binned kernel density estimation, logspline kernel density estimation, and transfer functions constructed by fitting the tails with a generalized Pareto distribution.
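The bias-correction step can be illustrated with plain empirical quantile mapping, a simpler member of the same distribution-mapping family as KDDM (which replaces the empirical CDFs with kernel density estimates). The data below are synthetic.

```python
def quantile_map(model_vals, obs_vals, x):
    """Map x through the model's empirical CDF, then through the
    inverse empirical CDF of the observations."""
    ms = sorted(model_vals)
    os_ = sorted(obs_vals)
    rank = sum(m <= x for m in ms)               # model CDF as a rank
    q = max(0, min(len(os_) - 1, round(rank / len(ms) * (len(os_) - 1))))
    return os_[q]

# synthetic data: model output is too warm by ~3 and too spread out
obs = [10 + 0.5 * i for i in range(21)]          # 10.0 .. 20.0
mod = [13 + 0.7 * i for i in range(21)]          # 13.0 .. 27.0

corrected = [quantile_map(mod, obs, v) for v in mod]
print(min(corrected), max(corrected))
```

Mapping each model value to the observation at the same quantile removes both the warm bias and the excess spread; the tail-fitting techniques mentioned above refine exactly this mapping where the empirical CDFs are data-poor.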
Towards improved and more routine Earth system model evaluation in CMIP
Eyring, Veronika; Gleckler, Peter J.; Heinze, Christoph; ...
2016-11-01
The Coupled Model Intercomparison Project (CMIP) has successfully provided the climate community with a rich collection of simulation output from Earth system models (ESMs) that can be used to understand past climate changes and make projections and uncertainty estimates of the future. Confidence in ESMs can be gained because the models are based on physical principles and reproduce many important aspects of observed climate. More research is required to identify the processes that are most responsible for systematic biases and the magnitude and uncertainty of future projections so that more relevant performance tests can be developed. At the same time, there are many aspects of ESM evaluation that are well established and considered an essential part of systematic evaluation but have been implemented ad hoc with little community coordination. Given the diversity and complexity of ESM analysis, we argue that the CMIP community has reached a critical juncture at which many baseline aspects of model evaluation need to be performed much more efficiently and consistently. We provide a perspective and viewpoint on how a more systematic, open, and rapid performance assessment of the large and diverse number of models that will participate in current and future phases of CMIP can be achieved, and announce our intention to implement such a system for CMIP6. Accomplishing this could also free up valuable resources as many scientists are frequently "re-inventing the wheel" by re-writing analysis routines for well-established analysis methods. A more systematic approach for the community would be to develop and apply evaluation tools that are based on the latest scientific knowledge and observational reference, are well suited for routine use, and provide a wide range of diagnostics and performance metrics that comprehensively characterize model behaviour as soon as the output is published to the Earth System Grid Federation (ESGF).
The CMIP infrastructure enforces data standards and conventions for model output and documentation accessible via the ESGF, additionally publishing observations (obs4MIPs) and reanalyses (ana4MIPs) for model intercomparison projects using the same data structure and organization as the ESM output. This largely facilitates routine evaluation of the ESMs, but to be able to process the data automatically alongside the ESGF, the infrastructure needs to be extended with processing capabilities at the ESGF data nodes where the evaluation tools can be executed on a routine basis. Efforts are already underway to develop community-based evaluation tools, and we encourage experts to provide additional diagnostic codes that would enhance this capability for CMIP. At the same time, we encourage the community to contribute observations and reanalyses for model evaluation to the obs4MIPs and ana4MIPs archives. The intention is to produce through the ESGF a widely accepted quasi-operational evaluation framework for CMIP6 that would routinely execute a series of standardized evaluation tasks. Over time, as this capability matures, we expect to produce an increasingly systematic characterization of models which, compared with early phases of CMIP, will more quickly and openly identify the strengths and weaknesses of the simulations. This will also reveal whether long-standing model errors remain evident in newer models and will assist modelling groups in improving their models. Finally, this framework will be designed to readily incorporate updates, including new observations and additional diagnostics and metrics as they become available from the research community.
Probabilistic Evaluation of Competing Climate Models
NASA Astrophysics Data System (ADS)
Braverman, A. J.; Chatterjee, S.; Heyman, M.; Cressie, N.
2017-12-01
A standard paradigm for assessing the quality of climate model simulations is to compare what these models produce for past and present time periods, to observations of the past and present. Many of these comparisons are based on simple summary statistics called metrics. Here, we propose an alternative: evaluation of competing climate models through probabilities derived from tests of the hypothesis that climate-model-simulated and observed time sequences share common climate-scale signals. The probabilities are based on the behavior of summary statistics of climate model output and observational data, over ensembles of pseudo-realizations. These are obtained by partitioning the original time sequences into signal and noise components, and using a parametric bootstrap to create pseudo-realizations of the noise sequences. The statistics we choose come from working in the space of decorrelated and dimension-reduced wavelet coefficients. We compare monthly sequences of CMIP5 model output of average global near-surface temperature anomalies to similar sequences obtained from the well-known HadCRUT4 data set, as an illustration.
Large-area sheet task advanced dendritic web growth development
NASA Technical Reports Server (NTRS)
Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Hopkins, R. H.; Meier, D.; Schruben, J.
1982-01-01
The computer code for calculating web temperature distribution was expanded to provide a graphics output in addition to numerical and punch card output. The new code was used to examine various modifications of the J419 configuration and, on the basis of the results, a new growth geometry was designed. Additionally, several mathematically defined temperature profiles were evaluated for the effects of the free boundary (growth front) on the thermal stress generation. Experimental growth runs were made with modified J419 configurations to complement the modeling work. A modified J435 configuration was evaluated.
To publish or not to publish? On the aggregation and drivers of research performance
De Witte, Kristof
2010-01-01
This paper presents a methodology to aggregate multidimensional research output. Using a tailored version of the non-parametric Data Envelopment Analysis model, we account for the large heterogeneity in research output and the individual researcher preferences by endogenously weighting the various output dimensions. The approach offers three important advantages compared to traditional approaches: (1) flexibility in the aggregation of different research outputs into an overall evaluation score; (2) a reduction of the impact of measurement errors and atypical observations; and (3) a correction for the influences of a wide variety of factors outside the evaluated researcher’s control. As a result, research evaluations are more effective representations of actual research performance. The methodology is illustrated on a data set of all faculty members at a large polytechnic university in Belgium. The sample includes questionnaire items on the motivation and perception of the researcher. This allows us to explore whether motivation and background characteristics (such as age, gender, retention, etc.) of the researchers explain variations in measured research performance. PMID:21057573
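The endogenous-weighting idea builds on standard DEA machinery. As a hedged illustration of that machinery (the classic input-oriented CCR envelopment model, not the paper's tailored variant), the following sketch scores a handful of hypothetical researchers with scipy's linear programming routine; all numbers are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Five hypothetical researchers, one input (full-time equivalent) and two
# outputs (publications, teaching score). All numbers are invented.
X = np.ones((5, 1))                                       # inputs (n x m)
Y = np.array([[4, 7], [6, 4], [8, 8], [3, 3], [5, 6.5]])  # outputs (n x s)

def ccr_efficiency(k, X, Y):
    """Input-oriented CCR efficiency of unit k: minimize theta subject to
    X'lam <= theta * x_k, Y'lam >= y_k, lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                   # variables: [theta, lam]
    A_in = np.hstack([-X[k].reshape(m, 1), X.T])  # X'lam - theta*x_k <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])   # -Y'lam <= -y_k
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[k]],
                  bounds=[(0, None)] * (1 + n),
                  method="highs")
    return res.fun

scores = np.array([ccr_efficiency(k, X, Y) for k in range(len(X))])
```

Unit 2 dominates all others on both outputs for the same input and scores 1; the endogenous weights implicit in the LP let each researcher be evaluated under the output mix most favorable to them.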
DOT National Transportation Integrated Search
2016-08-22
The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...
DOT National Transportation Integrated Search
2016-10-01
The primary objective of this project is to develop multiple simulation testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...
DOT National Transportation Integrated Search
2016-10-01
The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...
DOT National Transportation Integrated Search
2017-08-01
The primary objective of AMS Testbed project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. Throug...
DOT National Transportation Integrated Search
2016-06-29
The primary objective of this project is to develop multiple simulation testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...
Life cycle assessment modelling of waste-to-energy incineration in Spain and Portugal.
Margallo, M; Aldaco, R; Irabien, A; Carrillo, V; Fischer, M; Bala, A; Fullana, P
2014-06-01
In recent years, waste management systems have been evaluated using a life cycle assessment (LCA) approach. A main shortcoming of prior studies was the focus on a mixture of waste with different characteristics. The estimation of emissions and consumptions associated with each waste fraction in these studies presented allocation problems. Waste-to-energy (WTE) incineration is a clear example in which municipal solid waste (MSW), comprising many types of materials, is processed to produce several outputs. This paper investigates an approach to better understand incineration processes in Spain and Portugal by applying a multi-input/output allocation model. The application of this model enabled predictions of WTE inputs and outputs, including the consumption of ancillary materials and combustibles, air emissions, solid wastes, and the energy produced during the combustion of each waste fraction. © The Author(s) 2014.
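A minimal numeric sketch of the multi-input/output allocation idea follows; the waste fractions, heating values, and plant parameters are assumed placeholder values, not figures from the study.

```python
import numpy as np

# Illustrative waste fractions fed to a WTE incinerator; masses, heating
# values, and plant figures are placeholders, not data from the study.
mass = np.array([0.4, 0.2, 0.4])        # paper, plastics, organics (kg/kg MSW)
lhv = np.array([13.0, 35.0, 4.0])       # lower heating value, MJ/kg
fossil_co2 = np.array([0.0, 2.7, 0.0])  # fossil CO2, kg per kg fraction

# Plant-level flows per kg of mixed MSW burned.
heat_in = mass @ lhv                    # MJ fed with the waste
electricity = 0.22 * heat_in            # MJ_e at an assumed 22% net efficiency
co2_direct = mass @ fossil_co2          # directly attributable fossil CO2, kg

# Multi-input/output allocation: a shared burden (ancillary lime for flue-gas
# cleaning) is split by mass, while the energy credit is split by each
# fraction's contribution to the heat input.
lime_total = 0.012                      # kg lime per kg MSW (assumed)
lime_share = lime_total * mass / mass.sum()
elec_share = electricity * (mass * lhv) / heat_in
```

Per-fraction burdens and credits then sum back exactly to the plant totals, which is the property that resolves the allocation problem described above.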
Improved system integration for integrated gasification combined cycle (IGCC) systems.
Frey, H Christopher; Zhu, Yunhua
2006-03-01
Integrated gasification combined cycle (IGCC) systems are a promising technology for power generation. They include an air separation unit (ASU), a gasification system, and a gas turbine combined cycle power block, and feature competitive efficiency and lower emissions compared to conventional power generation technology. IGCC systems are not yet in widespread commercial use and opportunities remain to improve system feasibility via improved process integration. A process simulation model was developed for IGCC systems with alternative types of ASU and gas turbine integration. The model is applied to evaluate integration schemes involving nitrogen injection, air extraction, and combinations of both, as well as different ASU pressure levels. The optimal nitrogen injection only case in combination with an elevated pressure ASU had the highest efficiency and power output and approximately the lowest emissions per unit output of all cases considered, and thus is a recommended design option. The optimal combination of air extraction coupled with nitrogen injection had slightly worse efficiency, power output, and emissions than the optimal nitrogen injection only case. Air extraction alone typically produced lower efficiency, lower power output, and higher emissions than all other cases. The recommended nitrogen injection only case is estimated to provide annualized cost savings compared to a nonintegrated design. Process simulation modeling is shown to be a useful tool for evaluation and screening of technology options.
COMPARISON OF SPATIAL PATTERNS OF POLLUTANT DISTRIBUTION WITH CMAQ PREDICTIONS
To evaluate the Models-3/Community Multiscale Air Quality (CMAQ) modeling system in reproducing the spatial patterns of aerosol concentrations over the country on timescales of months and years, the spatial patterns of model output are compared with those derived from observation...
Walter, Alexander I; Helgenberger, Sebastian; Wiek, Arnim; Scholz, Roland W
2007-11-01
Most Transdisciplinary Research (TdR) projects combine scientific research with the building of decision making capacity for the involved stakeholders. These projects usually deal with complex, societally relevant, real-world problems. This paper focuses on TdR projects, which integrate the knowledge of researchers and stakeholders in a collaborative transdisciplinary process through structured methods of mutual learning. Previous research on the evaluation of TdR has insufficiently explored the intended effects of transdisciplinary processes on the real world (societal effects). We developed an evaluation framework for assessing the societal effects of transdisciplinary processes. Outputs (measured as procedural and product-related involvement of the stakeholders), impacts (intermediate effects connecting outputs and outcomes) and outcomes (enhanced decision making capacity) are distinguished as three types of societal effects. Our model links outputs and outcomes of transdisciplinary processes via the impacts using a mediating variables approach. We applied this model in an ex post evaluation of a transdisciplinary process. 84 out of 188 agents participated in a survey. The results show significant mediation effects of the two impacts "network building" and "transformation knowledge". These results indicate an influence of a transdisciplinary process on the decision making capacity of stakeholders, especially through social network building and the generation of knowledge relevant for action.
ERIC Educational Resources Information Center
Wiley, Caroline H.; Good, Thomas L.; McCaslin, Mary
2008-01-01
Background/Context: The achievement effects of Comprehensive School Reform (CSR) programs have been studied through the use of input-output models, in which the type of CSR program is the input and student achievement is the output. Although specific programs have been found to be more effective, and have been evaluated more often, than others, teaching practices in…
A one-model approach based on relaxed combinations of inputs for evaluating input congestion in DEA
NASA Astrophysics Data System (ADS)
Khodabakhshi, Mohammad
2009-08-01
This paper provides a one-model approach to input congestion based on the input relaxation model developed in data envelopment analysis (e.g. [G.R. Jahanshahloo, M. Khodabakhshi, Suitable combination of inputs for improving outputs in DEA with determining input congestion -- Considering textile industry of China, Applied Mathematics and Computation 151 (1) (2004) 263-273; G.R. Jahanshahloo, M. Khodabakhshi, Determining assurance interval for non-Archimedean element in the improving outputs model in DEA, Applied Mathematics and Computation 151 (2) (2004) 501-506; M. Khodabakhshi, A super-efficiency model based on improved outputs in data envelopment analysis, Applied Mathematics and Computation 184 (2) (2007) 695-703; M. Khodabakhshi, M. Asgharian, An input relaxation measure of efficiency in stochastic data envelopment analysis, Applied Mathematical Modelling 33 (2009) 2010-2023]). This approach reduces the three problems to be solved under the two-model approach, introduced in the first of the above-mentioned references, to two problems, which is certainly important from a computational point of view. The model is applied to a set of data extracted from the ISI database to estimate the input congestion of 12 Canadian business schools.
Application of uncertainty and sensitivity analysis to the air quality SHERPA modelling tool
NASA Astrophysics Data System (ADS)
Pisoni, E.; Albrecht, D.; Mara, T. A.; Rosati, R.; Tarantola, S.; Thunis, P.
2018-06-01
Air quality has significantly improved in Europe over the past few decades. Nonetheless, high concentrations are still measured, mainly in specific regions or cities. This dimensional shift, from EU-wide to hot-spot exceedances, calls for a novel approach to regional air quality management (to complement existing EU-wide policies). The SHERPA (Screening for High Emission Reduction Potentials on Air quality) modelling tool was developed in this context. It provides an additional tool to be used in support of regional/local decision makers responsible for the design of air quality plans. It is therefore important to evaluate the quality of the SHERPA model and its behavior in the face of various kinds of uncertainty. Uncertainty and sensitivity analysis techniques can be used for this purpose. They both reveal the links between assumptions and forecasts, help in model simplification, and may highlight unexpected relationships between inputs and outputs. Thus, a policy-steered SHERPA module - predicting air quality improvement linked to emission reduction scenarios - was evaluated by means of (1) uncertainty analysis (UA), to quantify uncertainty in the model output, and (2) sensitivity analysis (SA), to identify the most influential input sources of this uncertainty. The results of this study provide relevant information about the key variables driving the SHERPA output uncertainty, and advise policy-makers and modellers where to place their efforts for an improved decision-making process.
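First-order Sobol' indices of the kind used in such a sensitivity analysis can be estimated with a pick-freeze (Saltelli-type) scheme. The sketch below applies it to a toy response standing in for the SHERPA module; the input names and functional form are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the module's response: a pollutant-concentration change
# as a nonlinear function of three uncertain inputs. The names and the
# functional form are invented for illustration only.
def model(x):
    e1, e2, met = x[:, 0], x[:, 1], x[:, 2]
    return 40 * e1 + 15 * e2 + 10 * e1 * met + 2 * met**2

d, n = 3, 40000
A = rng.uniform(0, 1, (n, d))
B = rng.uniform(0, 1, (n, d))
yA, yB = model(A), model(B)

# Uncertainty analysis: the spread of the output.
var_y = np.var(np.r_[yA, yB])

# Sensitivity analysis: first-order Sobol' indices by pick-freeze.
S = np.empty(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]          # replace column i with B's values
    S[i] = np.mean(yB * (model(ABi) - yA)) / var_y
```

For this toy function the first input dominates the output variance, which is the kind of ranking that tells modellers where to place their efforts.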
The link provided access to all the datasets and metadata used in this manuscript for the model development and evaluation per Geoscientific Model Development's publication guidelines with the exception of the model output due to its size. This dataset is associated with the following publication: Bash, J., K. Baker, and M. Beaver. Evaluation of improved land use and canopy representation in BEIS v3.61 with biogenic VOC measurements in California. Geoscientific Model Development. Copernicus Publications, Katlenburg-Lindau, GERMANY, 9: 2191-2207, (2016).
Nagata, Motoki; Hirata, Yoshito; Fujiwara, Naoya; Tanaka, Gouhei; Suzuki, Hideyuki; Aihara, Kazuyuki
2017-03-01
In this paper, we show that spatial correlation of renewable energy outputs greatly influences the robustness of the power grids against large fluctuations of the effective power. First, we evaluate the spatial correlation among renewable energy outputs. We find that the spatial correlation of renewable energy outputs depends on the locations, while the influence of the spatial correlation of renewable energy outputs on power grids is not well known. Thus, second, by employing the topology of the power grid in eastern Japan, we analyze the robustness of the power grid with spatial correlation of renewable energy outputs. The analysis is performed by using a realistic differential-algebraic equations model. The results show that the spatial correlation of the energy resources strongly degrades the robustness of the power grid. Our results suggest that we should consider the spatial correlation of the renewable energy outputs when estimating the stability of power grids.
Chan, Caroline; Heinbokel, John F; Myers, John A; Jacobs, Robert R
2012-10-01
A complex interplay of factors determines the degree of bioaccumulation of Hg in fish in any particular basin. Although certain watershed characteristics have been associated with higher or lower bioaccumulation rates, the relationships between these characteristics are poorly understood. To add to this understanding, a dynamic model was built to examine these relationships in stream systems. The model follows Hg from the water column, through microbial conversion and subsequent concentration, through the food web to piscivorous fish. The model was calibrated to 7 basins in Kentucky and further evaluated by comparing output to 7 sites in, or proximal to, the Ohio River Valley, an underrepresented region in the bioaccumulation literature. Water quality and basin characteristics were inputs into the model, with tissue concentrations of Hg of generic trophic level 3, 3.5, and 4 fish the output. Regulatory and monitoring data were used to calibrate and evaluate the model. Mean average prediction error for Kentucky sites was 26%, whereas mean error for evaluation sites was 51%. Variability within natural systems can be substantial and was quantified for fish tissue by analysis of the US Geological Survey National Fish Database. This analysis pointed to the need for more systematic sampling of fish tissue. Analysis of model output indicated that parameters that had the greatest impact on bioaccumulation influenced the system at several points. These parameters included forested and wetlands coverage and nutrient levels. Factors that were less sensitive modified the system at only 1 point and included the unfiltered total Hg input and the portion of the basin that is developed. Copyright © 2012 SETAC.
Performance and Simulation of a Stand-alone Parabolic Trough Solar Thermal Power Plant
NASA Astrophysics Data System (ADS)
Mohammad, S. T.; Al-Kayiem, H. H.; Assadi, M. K.; Gilani, S. I. U. H.; Khlief, A. K.
2018-05-01
In this paper, a Simulink® Thermolib model is established for the simulation-based performance evaluation of a stand-alone parabolic trough solar thermal power plant at Universiti Teknologi PETRONAS, Malaysia. This paper proposes a design for a 1.2 kW parabolic trough power plant. The model is capable of predicting temperatures at any system outlet in the plant, as well as the power output produced. The conditions taken into account as input to the model are the local solar radiation and ambient temperatures, which were measured during the year. Other parameters input to the model are the collector dimensions and the location in terms of latitude and altitude. Lastly, the results are presented graphically to describe the analysed variations of the various solar field outputs and to help predict the performance of the plant. The developed model allows an initial evaluation of the viability and technical feasibility of any similar solar thermal power plant.
NASA Astrophysics Data System (ADS)
Pohjoranta, Antti; Halinen, Matias; Pennanen, Jari; Kiviaho, Jari
2015-03-01
Generalized predictive control (GPC) is applied to control the maximum temperature in a solid oxide fuel cell (SOFC) stack and the temperature difference over the stack. GPC is a model predictive control method, and the models utilized in this work are ARX-type (autoregressive with exogenous input), multiple-input multiple-output, polynomial models that were identified from experimental data obtained from experiments with a complete SOFC system. The proposed control is evaluated by simulation with various input-output combinations, with and without constraints. A comparison with conventional proportional-integral-derivative (PID) control is also made. It is shown that if only the stack maximum temperature is controlled, a standard PID controller can be used to obtain output performance comparable to that obtained with the significantly more complex model predictive controller. However, in order to control the temperature difference over the stack, both the stack minimum and maximum temperatures need to be controlled, and this cannot be done with a single PID controller. In such a case the model predictive controller provides a feasible and effective solution.
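An ARX model of the kind used here can be identified from data by ordinary least squares. The following single-input single-output sketch (a simplification of the multiple-input multiple-output case, with invented dynamics rather than SOFC data) shows the identification step that would feed a GPC design.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated first-order "stack temperature" response to an input signal u.
# The dynamics (a = 0.9, b = 0.5) and noise level are invented, not SOFC data.
n = 500
u = rng.uniform(-1.0, 1.0, n)
y = np.zeros(n)
for k in range(1, n):
    y[k] = 0.9 * y[k - 1] + 0.5 * u[k - 1] + 0.02 * rng.normal()

# ARX(1, 1) identification: least-squares regression of y[k] on the
# regressors [y[k-1], u[k-1]].
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta

# One-step-ahead predictions of the kind a GPC controller iterates on.
y_pred = Phi @ theta
```

A GPC controller would propagate this predictor over a receding horizon and minimize a quadratic cost in tracking error and control effort.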
CMAQ-UCD (formerly known as CMAQ-AIM), is a fully dynamic, sectional aerosol model which has been coupled to the Community Multiscale Air Quality (CMAQ) host air quality model. Aerosol sulfate, nitrate, ammonium, sodium, and chloride model outputs are compared against MOUDI data...
Theoretical evaluation of a continues-wave Ho3+:BaY2F8 laser with mid-infrared emission
NASA Astrophysics Data System (ADS)
Rong, Kepeng; Cai, He; An, Guofei; Han, Juhong; Yu, Hang; Wang, Shunyan; Yu, Qiang; Wu, Peng; Zhang, Wei; Wang, Hongyuan; Wang, You
2018-01-01
In this paper, we build a theoretical model to study a continues-wave (CW) Ho3+:BaY2F8 laser by considering both energy transfer up-conversion (ETU) and cross relaxation (CR) processes. The influences of the pump power, reflectance of an output coupler (OC), and crystal length on the output features are systematically analyzed for an end-pumped configuration, respectively. We also investigate how the processes of ETU and CR in the energy-level system affect the output of a Ho3+:BaY2F8 laser by use of the kinetic evaluation. The simulation results show that the optical-to-optical efficiency can be promoted by adjusting the parameters such as the reflectance of an output coupler, crystal length, and pump power. It has been theoretically demonstrated that the threshold of a Ho3+:BaY2F8 laser is very high for the lasing operation in a CW mode.
NASA Astrophysics Data System (ADS)
Elshahaby, Fatma E. A.; Ghaly, Michael; Jha, Abhinav K.; Frey, Eric C.
2015-03-01
Model observers are widely used in medical imaging for the optimization and evaluation of instrumentation, acquisition parameters, and image reconstruction and processing methods. The channelized Hotelling observer (CHO) is a commonly used model observer in nuclear medicine and has seen increasing use in other modalities. An anthropomorphic CHO consists of a set of channels that model some aspects of the human visual system, followed by the Hotelling observer, which is the optimal linear discriminant. The optimality of the CHO is based on the assumption that the channel outputs for data with and without the signal present have a multivariate normal distribution with equal class covariance matrices. The channel outputs result from the dot product of channel templates with input images and are thus each the sum of a large number of random variables. The central limit theorem is thus often used to justify the assumption that the channel outputs are normally distributed. In this work, we aim to examine this assumption for realistically simulated nuclear medicine images when various types of signal variability are present.
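The channel-output computation, and a quick check of the normality assumption via skewness, can be sketched as follows; the image model, channel template, and all parameters are simplified stand-ins, not the simulation setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 32x32 "images": Poisson counts (as in nuclear medicine) on a flat
# background, with and without a faint Gaussian signal at the center.
size, n_img = 32, 2000
yy, xx = np.mgrid[:size, :size]
signal = 2.0 * np.exp(-((xx - 16) ** 2 + (yy - 16) ** 2) / (2 * 3.0**2))
bg = 20.0

imgs_absent = rng.poisson(bg, (n_img, size, size)).reshape(n_img, -1)
imgs_present = rng.poisson(bg + signal, (n_img, size, size)).reshape(n_img, -1)

# A crude difference-of-Gaussians radial channel template (a stand-in for
# the channel sets actually used with the CHO).
r2 = ((xx - 16) ** 2 + (yy - 16) ** 2).ravel()
channel = np.exp(-r2 / 72.0) - np.exp(-r2 / 18.0)

# Channel output = dot product of the channel template with each image,
# i.e. a sum of ~1000 pixel-level random variables (hence the CLT argument).
t_absent = imgs_absent @ channel
t_present = imgs_present @ channel

def skewness(x):
    x = x - x.mean()
    return np.mean(x**3) / np.std(x) ** 3

# Near-zero skewness of the output sample is consistent with normality here.
sk = skewness(t_absent)
```

With signal variability added (the paper's focus), the signal-present output becomes a mixture and such normality checks can fail even when the CLT holds per image.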
Determining Reduced Order Models for Optimal Stochastic Reduced Order Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonney, Matthew S.; Brake, Matthew R.W.
2015-08-01
The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against each other and against the truth model for the example of the Brake-Reuss beam. The models are: a Taylor series using finite difference; a proper orthogonal decomposition of the output; a Craig-Bampton representation of the model; a method that uses hyper-dual numbers to determine the sensitivities; and a meta-model method that uses the hyper-dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation where the first four statistical moments are used as a comparison. Each method produces very accurate results, with the Craig-Bampton reduction being the least accurate. The models are also compared based on the time required to evaluate each model, where the meta-model requires the least computation time by a significant amount. Each of the five models provided accurate results in a reasonable time frame. The determination of which model to use depends on the availability of the high-fidelity model and how many evaluations can be performed. The output distribution is examined using a large Monte Carlo simulation along with a reduced simulation using Latin hypercube sampling and the stochastic reduced order model sampling technique. Both techniques produced accurate results. The stochastic reduced order modeling technique produced less error when compared to an exhaustive sampling for the majority of methods.
USDA-ARS?s Scientific Manuscript database
Geographical information systems (GIS) software packages have been used for nearly three decades as analytical tools in natural resource management for geospatial data assembly, processing, storage, and visualization of input data and model output. However, with increasing availability and use of fu...
Yarkoni, Tal
2012-01-01
Traditional pre-publication peer review of scientific output is a slow, inefficient, and unreliable process. Efforts to replace or supplement traditional evaluation models with open evaluation platforms that leverage advances in information technology are slowly gaining traction, but remain in the early stages of design and implementation. Here I discuss a number of considerations relevant to the development of such platforms. I focus particular attention on three core elements that next-generation evaluation platforms should strive to emphasize, including (1) open and transparent access to accumulated evaluation data, (2) personalized and highly customizable performance metrics, and (3) appropriate short-term incentivization of the userbase. Because all of these elements have already been successfully implemented on a large scale in hundreds of existing social web applications, I argue that development of new scientific evaluation platforms should proceed largely by adapting existing techniques rather than engineering entirely new evaluation mechanisms. Successful implementation of open evaluation platforms has the potential to substantially advance both the pace and the quality of scientific publication and evaluation, and the scientific community has a vested interest in shifting toward such models as soon as possible. PMID:23060783
Global and regional ecosystem modeling: comparison of model outputs and field measurements
NASA Astrophysics Data System (ADS)
Olson, R. J.; Hibbard, K.
2003-04-01
The Ecosystem Model-Data Intercomparison (EMDI) Workshops provide a venue for global ecosystem modeling groups to compare model outputs against measurements of net primary productivity (NPP). The objective of EMDI Workshops is to evaluate model performance relative to observations in order to improve confidence in global model projections of terrestrial carbon cycling. The questions addressed by EMDI include: How does the simulated NPP compare with the field data across biome and environmental gradients? How sensitive are models to site-specific climate? Does additional mechanistic detail in models result in a better match with field measurements? How useful are the measures of NPP for evaluating model predictions? How well do models represent regional patterns of NPP? Initial EMDI results showed general agreement between model predictions and field measurements, but with obvious differences that indicated areas for potential data and model improvement. The effort was built on the development and compilation of complete and consistent databases for model initialization and comparison. Database development improves the data as well as the models; however, there is a need to incorporate additional observations and model outputs (LAI, hydrology, etc.) for comprehensive analyses of biogeochemical processes and their relationships to ecosystem structure and function. EMDI initialization and NPP data sets are available from the Oak Ridge National Laboratory Distributed Active Archive Center http://www.daac.ornl.gov/. Acknowledgements: This work was partially supported by the International Geosphere-Biosphere Programme - Data and Information System (IGBP-DIS); the IGBP-Global Analysis, Interpretation and Modelling Task Force (GAIM); the National Center for Ecological Analysis and Synthesis (NCEAS); and the National Aeronautics and Space Administration (NASA) Terrestrial Ecosystem Program. Oak Ridge National Laboratory is managed by UT-Battelle LLC for the U.S.
Department of Energy under contract DE-AC05-00OR22725
Peppas, Kostas P; Lazarakis, Fotis; Alexandridis, Antonis; Dangakis, Kostas
2012-08-01
In this Letter we investigate the error performance of multiple-input multiple-output free-space optical communication systems employing intensity modulation/direct detection and operating over strong atmospheric turbulence channels. Atmospheric-induced strong turbulence fading is modeled using the negative exponential distribution. For the considered system, an approximate yet accurate analytical expression for the average bit error probability is derived and an efficient method for its numerical evaluation is proposed. Numerically evaluated and computer simulation results are further provided to demonstrate the validity of the proposed mathematical analysis.
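The numerical evaluation of such an average bit error probability can be illustrated for the simpler single-input single-output case: a conditional Q-function BER is averaged over the negative exponential irradiance density by quadrature. The expression below is a generic textbook form under assumed normalizations, not the closed-form result derived in the Letter.

```python
import math
import numpy as np

def q_func(x):
    """Gaussian Q-function via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def avg_bep(gamma_bar_db, n=20001, i_max=20.0):
    """Average BEP E[Q(sqrt(gamma_bar) * I)] over negative-exponential
    irradiance fading, f(I) = exp(-I) for I >= 0, by trapezoidal quadrature.
    A generic SISO OOK form, not the Letter's MIMO expression."""
    g = 10.0 ** (gamma_bar_db / 10.0)
    I = np.linspace(0.0, i_max, n)
    f = np.array([q_func(math.sqrt(g) * x) for x in I]) * np.exp(-I)
    return float(np.sum((f[1:] + f[:-1]) * np.diff(I)) / 2.0)

p20 = avg_bep(20.0)   # average SNR of 20 dB
p30 = avg_bep(30.0)   # deep fades dominate, so the BEP falls slowly with SNR
```

The slow decay with SNR is the signature of strong turbulence that motivates the multiple-input multiple-output diversity studied in the Letter.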
A Method for Evaluation of Model-Generated Vertical Profiles of Meteorological Variables
2016-03-01
[Extraction residue from the report's front matter: a contents listing (2.1 RAOB Soundings and WRF Output for Profile Generation; 2.2 Height-Based Profiles; 2.3 Pressure-Based Profiles; 3. Comparisons); a figure caption noting that blue lines represent sublayers with sublayer means indicated by red triangles, and circles indicate the observations or WRF output; and Table 3, a sample of differences in listed variables derived from WRF and RAOB data.]
Public–nonprofit partnership performance in a disaster context: the case of Haiti.
Nolte, Isabella M; Boenigk, Silke
2011-01-01
During disasters, partnerships between public and nonprofit organizations are vital to provide fast relief to affected communities. In this article, we develop a process model to support a performance evaluation of such intersectoral partnerships. The model includes input factors, organizational structures, outputs and the long-term outcomes of public–nonprofit partnerships. These factors derive from theory and a systematic literature review of emergency, public, nonprofit, and network research. To adapt the model to a disaster context, we conducted a case study that examines public and nonprofit organizations that partnered during the 2010 Haiti earthquake. The case study results show that communication, trust, and experience are the most important partnership inputs; the most prevalent governance structure of public–nonprofit partnerships is a lead organization network. Time and quality measures should be considered to assess partnership outputs, and community, network, and organizational actor perspectives must be taken into account when evaluating partnership outcomes.
NASA Astrophysics Data System (ADS)
Girard, Sylvain; Mallet, Vivien; Korsakissok, Irène; Mathieu, Anne
2016-04-01
Simulations of the atmospheric dispersion of radionuclides involve large uncertainties originating from the limited knowledge of meteorological input data, composition, amount and timing of emissions, and some model parameters. The estimation of these uncertainties is an essential complement to modeling for decision making in case of an accidental release. We have studied the relative influence of a set of uncertain inputs on several outputs from the Eulerian model Polyphemus/Polair3D on the Fukushima case. We chose to use the variance-based sensitivity analysis method of Sobol'. This method requires a large number of model evaluations which was not achievable directly due to the high computational cost of Polyphemus/Polair3D. To circumvent this issue, we built a mathematical approximation of the model using Gaussian process emulation. We observed that aggregated outputs are mainly driven by the amount of emitted radionuclides, while local outputs are mostly sensitive to wind perturbations. The release height is notably influential, but only in the vicinity of the source. Finally, averaging either spatially or temporally tends to cancel out interactions between uncertain inputs.
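A Gaussian process emulator of the sort used to make the Sobol' analysis affordable can be sketched with a plain RBF-kernel interpolator; the "expensive model" and its two inputs below are invented stand-ins for Polyphemus/Polair3D.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in for an expensive dispersion-model output as a function of two
# uncertain inputs (e.g. a source-term scaling and a wind perturbation;
# both the function and the names are invented for illustration).
def expensive_model(x):
    return np.sin(3 * x[..., 0]) + 0.5 * x[..., 1] ** 2

# A small design of training runs: the only evaluations of the costly model.
X_train = rng.uniform(-1, 1, (40, 2))
y_train = expensive_model(X_train)

def rbf(A, B, length=0.5):
    """Squared-exponential covariance between two point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

# GP regression weights, with a small nugget for numerical stability.
K = rbf(X_train, X_train) + 1e-6 * np.eye(len(X_train))
alpha = np.linalg.solve(K, y_train)

def emulator(X_new):
    """Cheap surrogate: posterior mean of the GP fit."""
    return rbf(X_new, X_train) @ alpha

# The emulator can now replace the model in a Sobol'-style analysis.
X_test = rng.uniform(-1, 1, (500, 2))
train_err = np.max(np.abs(emulator(X_train) - y_train))
rmse = np.sqrt(np.mean((emulator(X_test) - expensive_model(X_test)) ** 2))
```

Forty training runs buy hundreds of thousands of near-free emulator evaluations, which is what makes variance-based sensitivity analysis tractable for costly dispersion models.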
Preliminary results and assessment of the MAR outputs over High Mountain Asia
NASA Astrophysics Data System (ADS)
Linares, M.; Tedesco, M.; Margulis, S. A.; Cortés, G.; Fettweis, X.
2017-12-01
Lack of ground measurements has made the use of regional climate models (RCMs) over High Mountain Asia (HMA) pivotal for understanding the impact of climate change on the hydrological cycle and on the cryosphere. Here, we show an analysis of the assessment of the outputs of the Modèle Atmosphérique Régionale (MAR) RCM over the HMA region as part of the NASA-funded project 'Understanding and forecasting changes in High Mountain Asia snow hydrology via a novel Bayesian reanalysis and modeling approach'. The first step was to evaluate the impact of the different forcings on MAR outputs. To this aim, we performed simulations for the 2007 - 2008 and 2014 - 2015 years, forcing MAR at its boundaries with reanalysis data either from the European Centre for Medium-Range Weather Forecasts (ECMWF) or from the Modern-Era Retrospective Analysis for Research and Applications, version 2 (MERRA-2). The comparison between the outputs obtained with the two forcings indicates that the impact on MAR simulations depends on the specific parameter. For example, in the case of surface pressure the maximum percentage error is 0.09%, while the 2-m air temperature has a maximum percentage error of 103.7%. Next, we compared the MAR outputs with reanalysis data fields over the region of interest. In particular, we evaluated the following parameters: surface pressure, snow depth, total cloud cover, 2-m temperature, horizontal wind speed, vertical wind speed, wind speed, surface net solar radiation, skin temperature, surface sensible heat flux, and surface latent heat flux. Lastly, we report results concerning the assessment of MAR surface albedo and surface temperature over the region through MODIS remote sensing products. Next steps are to determine whether RCMs and reanalysis datasets are effective at capturing snow and snowmelt runoff processes in the HMA region through a comparison with in situ datasets. This will help determine what refinements are necessary to improve RCM outputs.
NASA Astrophysics Data System (ADS)
Aghakhani, Amirreza; Basdogan, Ipek; Erturk, Alper
2016-04-01
Plate-like components are widely used in numerous automotive, marine, and aerospace applications, where they can be employed as host structures for vibration-based energy harvesting. Piezoelectric patch harvesters can be easily attached to these structures to convert vibrational energy into electrical energy. Power output investigations of these harvesters require accurate models for energy harvesting performance evaluation and optimization. Equivalent circuit modeling of cantilever-based vibration energy harvesters for estimation of the electrical response has been proposed in recent years. However, equivalent circuit formulation and analytical modeling of multiple piezo-patch energy harvesters integrated into thin plates, including nonlinear circuits, has not been studied. In this study, an equivalent circuit model of multiple parallel piezoelectric patch harvesters together with a resistive load is built in the electronic circuit simulation software SPICE, and voltage frequency response functions (FRFs) are validated using the analytical distributed-parameter model. An analytical formulation of the piezoelectric patches in parallel configuration for the DC voltage output is derived while the patches are connected to a standard AC-DC circuit. The analytic model is based on the equivalent load impedance approach for the piezoelectric capacitance and AC-DC circuit elements. The analytic results are validated numerically via SPICE simulations. Finally, DC power outputs of the harvesters are computed and compared with the peak power amplitudes in the AC output case.
Evaluation of a Mysis bioenergetics model
Chipps, S.R.; Bennett, D.H.
2002-01-01
Direct approaches for estimating the feeding rate of the opossum shrimp Mysis relicta can be hampered by variable gut residence time (evacuation rate models) and non-linear functional responses (clearance rate models). Bioenergetics modeling provides an alternative method, but the reliability of this approach needs to be evaluated using independent measures of growth and food consumption. In this study, we measured growth and food consumption for M. relicta and compared experimental results with those predicted from a Mysis bioenergetics model. For Mysis reared at 10 °C, model predictions were not significantly different from observed values. Moreover, decomposition of mean square error indicated that 70% of the variation between model predictions and observed values was attributable to random error. On average, model predictions were within 12% of observed values. A sensitivity analysis revealed that Mysis respiration and prey energy density were the most sensitive parameters affecting model output. By accounting for uncertainty (95% CLs) in Mysis respiration, we observed a significant improvement in the accuracy of model output (within 5% of observed values), illustrating the importance of sensitive input parameters for model performance. These findings help corroborate the Mysis bioenergetics model and demonstrate the usefulness of this approach for estimating Mysis feeding rate.
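The mean-square-error decomposition mentioned above can be sketched with a standard bias/variance/covariance split (this is one common decomposition, not necessarily the exact one the authors used; the data are invented):

```python
import numpy as np

def mse_decomposition(pred, obs):
    """Split mean squared error into squared mean bias, a variance
    mismatch term, and a random (decorrelation) term; the three parts
    sum exactly to the MSE."""
    pred = np.asarray(pred, float)
    obs = np.asarray(obs, float)
    bias2 = (pred.mean() - obs.mean()) ** 2
    sp, so = pred.std(), obs.std()
    r = np.corrcoef(pred, obs)[0, 1]
    return bias2, (sp - so) ** 2, 2.0 * sp * so * (1.0 - r)

# Invented growth predictions vs. observations.
pred = np.array([1.0, 2.0, 3.0, 4.0])
obs = np.array([1.1, 1.9, 3.2, 3.9])
parts = mse_decomposition(pred, obs)
random_share = parts[2] / sum(parts)   # fraction due to random error
```

A large random share, as reported in the abstract, indicates that systematic bias and variance mismatch contribute little to the model-observation disagreement.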
A Reliability Estimation in Modeling Watershed Runoff With Uncertainties
NASA Astrophysics Data System (ADS)
Melching, Charles S.; Yen, Ben Chie; Wenzel, Harry G., Jr.
1990-10-01
The reliability of simulation results produced by watershed runoff models is a function of uncertainties in nature, data, model parameters, and model structure. A framework is presented here for using a reliability analysis method (such as first-order second-moment techniques or Monte Carlo simulation) to evaluate the combined effect of the uncertainties on the reliability of output hydrographs from hydrologic models. For a given event the prediction reliability can be expressed in terms of the probability distribution of the estimated hydrologic variable. The peak discharge probability for a watershed in Illinois using the HEC-1 watershed model is given as an example. The study of the reliability of predictions from watershed models provides useful information on the stochastic nature of output from deterministic models subject to uncertainties and identifies the relative contribution of the various uncertainties to unreliability of model predictions.
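A minimal Monte Carlo sketch of the framework: sample the uncertain inputs, push them through the runoff model, and read the prediction reliability off the output distribution. The rational-method formula stands in for a full watershed model such as HEC-1, and all distributions here are assumed:

```python
import numpy as np

rng = np.random.default_rng(0)

def peak_discharge(c, i, a):
    """Rational-method peak discharge Q = C*i*A, a stand-in for a full
    watershed model such as HEC-1."""
    return c * i * a

# Uncertain inputs: runoff coefficient and rainfall intensity.
n = 10_000
c = rng.normal(0.6, 0.05, n).clip(0.0, 1.0)     # runoff coefficient
i = rng.lognormal(np.log(2.0), 0.3, n)          # rainfall intensity
q = peak_discharge(c, i, 100.0)                 # drainage area = 100

# Prediction reliability, expressed as the probability that the
# estimated peak discharge exceeds a design value.
p_exceed = float((q > 150.0).mean())
```

The resulting probability distribution of the hydrologic variable is exactly the kind of reliability statement the framework produces for a given event.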
Neural network uncertainty assessment using Bayesian statistics: a remote sensing application
NASA Technical Reports Server (NTRS)
Aires, F.; Prigent, C.; Rossow, W. B.
2004-01-01
Neural network (NN) techniques have proved successful for many regression problems, in particular for remote sensing; however, uncertainty estimates are rarely provided. In this article, a Bayesian technique to evaluate uncertainties of the NN parameters (i.e., synaptic weights) is first presented. In contrast to more traditional approaches based on point estimation of the NN weights, we assess uncertainties on such estimates to monitor the robustness of the NN model. These theoretical developments are illustrated by applying them to the problem of retrieving surface skin temperature, microwave surface emissivities, and integrated water vapor content from a combined analysis of satellite microwave and infrared observations over land. The weight uncertainty estimates are then used to compute analytically the uncertainties in the network outputs (i.e., error bars and correlation structure of these errors). Such quantities are very important for evaluating any application of an NN model. The uncertainties on the NN Jacobians are then considered in the third part of this article. Used for regression fitting, NN models can be used effectively to represent highly nonlinear, multivariate functions. In this situation, most emphasis is put on estimating the output errors, but almost no attention has been given to errors associated with the internal structure of the regression model. The complex structure of dependency inside the NN is the essence of the model, and assessing its quality, coherency, and physical character makes all the difference between a blackbox model with small output errors and a reliable, robust, and physically coherent model. Such dependency structures are described to the first order by the NN Jacobians: they indicate the sensitivity of one output with respect to the inputs of the model for given input data. We use a Monte Carlo integration procedure to estimate the robustness of the NN Jacobians. 
A regularization strategy based on principal component analysis is proposed to suppress the multicollinearities in order to make these Jacobians robust and physically meaningful.
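Propagating weight uncertainty to output error bars can be sketched by Monte Carlo sampling from an assumed Gaussian posterior over the weights; the toy one-neuron "network" below is only a stand-in for the retrieval NN:

```python
import numpy as np

rng = np.random.default_rng(1)

def nn_output(w, x):
    """Toy one-neuron 'network' tanh(w0*x + w1), a stand-in for the
    retrieval NN mapping satellite observations to surface variables."""
    return np.tanh(w[0] * x + w[1])

# Assumed Gaussian posterior over the two weights.
w_mean = np.array([0.8, 0.1])
w_cov = np.diag([0.01, 0.01])

x = 0.5
samples = rng.multivariate_normal(w_mean, w_cov, size=5000)
outputs = np.array([nn_output(w, x) for w in samples])
error_bar = outputs.std()   # output uncertainty induced by the weights
```

The same sampling loop, applied to finite-difference Jacobians of each sample, gives the Monte Carlo estimate of Jacobian robustness described in the abstract.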
ERIC Educational Resources Information Center
McCall, James P.
2011-01-01
The evaluation, improvement, and accountability of teachers has been the topic of the nation throughout the era of No Child Left Behind. Where some critics point to a business model of measuring outputs (i.e., student achievement scores on standardized tests) to evaluate teacher performance, others will advocate for a fair evaluation system that…
ERIC Educational Resources Information Center
Neuman, Yrsa; Laakso, Mikael
2017-01-01
Introduction: Open access, the notion that research output, such as journal articles, should be freely accessible to readers on the Web, is arguably in the best interest of science. In this article, we (1) describe in-depth how a society-owned philosophy journal, "Nordic Wittgenstein Review," evaluated various publishing models and made…
Analytical solutions to trade-offs between size of protected areas and land-use intensity.
Butsic, Van; Radeloff, Volker C; Kuemmerle, Tobias; Pidgeon, Anna M
2012-10-01
Land-use change is affecting Earth's capacity to support both wild species and a growing human population. The question is how best to manage landscapes for both species conservation and economic output. If large areas are protected to conserve species richness, then the unprotected areas must be used more intensively. Likewise, low-intensity use leaves less area protected but may allow wild species to persist in areas that are used for market purposes. This dilemma is present in policy debates on agriculture, housing, and forestry. Our goal was to develop a theoretical model to evaluate which land-use strategy maximizes economic output while maintaining species richness. Our theoretical model extends previous analytical models by allowing land-use intensity on unprotected land to influence species richness in protected areas. We devised general models in which species richness (with modified species-area curves) and economic output (a Cobb-Douglas production function) are a function of land-use intensity and the proportion of land protected. Economic output increased as land-use intensity and extent increased, and species richness responded to increased intensity either negatively or following the intermediate disturbance hypothesis. We solved the model analytically to identify the combination of land-use intensity and protected area that provided the maximum amount of economic output, given a target level of species richness. The land-use strategy that maximized economic output while maintaining species richness depended jointly on the response of species richness to land-use intensity and protection and the effect of land use outside protected areas on species richness within protected areas. Regardless of the land-use strategy, species richness tended to respond to changing land-use intensity and extent in a highly nonlinear fashion. ©2012 Society for Conservation Biology.
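The analytical model can be caricatured numerically: with an assumed species-area curve and a Cobb-Douglas production function, a grid search finds the (protection, intensity) pair that maximizes output subject to a richness target. Functional forms and constants below are illustrative, not the paper's:

```python
import numpy as np

def species_richness(p, m, z=0.25, alpha=0.5):
    """Modified species-area relation: richness from protected fraction
    p, with land-use intensity m degrading habitat outside reserves."""
    effective_area = p + (1.0 - p) * (1.0 - alpha * m)
    return effective_area ** z

def economic_output(p, m, beta=0.3):
    """Cobb-Douglas production from unprotected land and intensity."""
    return ((1.0 - p) ** beta) * (m ** (1.0 - beta))

# Maximize output subject to a species-richness target by grid search.
target = 0.9
candidates = [(p, m)
              for p in np.linspace(0.0, 1.0, 201)
              for m in np.linspace(0.01, 1.0, 200)
              if species_richness(p, m) >= target]
best_p, best_m = max(candidates, key=lambda pm: economic_output(*pm))
```

The paper solves the analogous problem analytically; the numerical version makes the trade-off between reserve size and off-reserve intensity easy to explore.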
Andrianakis, I; Vernon, I; McCreesh, N; McKinley, T J; Oakley, J E; Nsubuga, R N; Goldstein, M; White, R G
2017-08-01
Complex stochastic models are commonplace in epidemiology, but their utility depends on their calibration to empirical data. History matching is a (pre)calibration method that has been applied successfully to complex deterministic models. In this work, we adapt history matching to stochastic models, by emulating the variance in the model outputs, and therefore accounting for its dependence on the model's input values. The method proposed is applied to a real complex epidemiological model of human immunodeficiency virus in Uganda with 22 inputs and 18 outputs, and is found to increase the efficiency of history matching, requiring 70% of the time and 43% fewer simulator evaluations compared with a previous variant of the method. The insight gained into the structure of the human immunodeficiency virus model, and the constraints placed on it, are then discussed.
ERIC Educational Resources Information Center
Weissman, Evan; O'Connell, Jesse
2016-01-01
"Aid Like A Paycheck" is a large-scale pilot evaluation of whether an innovative approach to disbursing financial aid can improve academic and financial outcomes for low-income community college students. Lessons from the pilot evaluation were used to create and fine-tune a logic model depicting activities, outputs, mediators, and…
Andrianakis, Ioannis; Vernon, Ian R.; McCreesh, Nicky; McKinley, Trevelyan J.; Oakley, Jeremy E.; Nsubuga, Rebecca N.; Goldstein, Michael; White, Richard G.
2015-01-01
Advances in scientific computing have allowed the development of complex models that are being routinely applied to problems in disease epidemiology, public health and decision making. The utility of these models depends in part on how well they can reproduce empirical data. However, fitting such models to real world data is greatly hindered both by large numbers of input and output parameters, and by long run times, such that many modelling studies lack a formal calibration methodology. We present a novel method that has the potential to improve the calibration of complex infectious disease models (hereafter called simulators). We present this in the form of a tutorial and a case study where we history match a dynamic, event-driven, individual-based stochastic HIV simulator, using extensive demographic, behavioural and epidemiological data available from Uganda. The tutorial describes history matching and emulation. History matching is an iterative procedure that reduces the simulator's input space by identifying and discarding areas that are unlikely to provide a good match to the empirical data. History matching relies on the computational efficiency of a Bayesian representation of the simulator, known as an emulator. Emulators mimic the simulator's behaviour, but are often several orders of magnitude faster to evaluate. In the case study, we use a 22 input simulator, fitting its 18 outputs simultaneously. After 9 iterations of history matching, a non-implausible region of the simulator input space was identified that was times smaller than the original input space. Simulator evaluations made within this region were found to have a 65% probability of fitting all 18 outputs. History matching and emulation are useful additions to the toolbox of infectious disease modellers. Further research is required to explicitly address the stochastic nature of the simulator as well as to account for correlations between outputs. PMID:25569850
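The core of history matching is an implausibility measure that discards simulator inputs whose emulated outputs sit too far from the data. A minimal sketch (the variance values and the conventional cutoff of 3 are assumptions, not the paper's code):

```python
import numpy as np

def implausibility(em_mean, em_var, obs, obs_var, disc_var):
    """History-matching implausibility: standardized distance between
    the emulator prediction and the observed value, with the emulator,
    observation-error, and model-discrepancy variances summed."""
    return np.abs(em_mean - obs) / np.sqrt(em_var + obs_var + disc_var)

# One emulated output against an empirical target; inputs whose
# implausibility exceeds the conventional cutoff of 3 are discarded.
I = implausibility(em_mean=0.12, em_var=4e-4, obs=0.10,
                   obs_var=1e-4, disc_var=1e-4)
non_implausible = I <= 3.0
```

Iterating this test over many emulated outputs and refitting emulators in the surviving region is what shrinks the input space across the waves described above.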
NASA Astrophysics Data System (ADS)
Zhou, S.; Tao, W. K.; Li, X.; Matsui, T.; Sun, X. H.; Yang, X.
2015-12-01
A cloud-resolving model (CRM) is an atmospheric numerical model that can numerically resolve clouds and cloud systems at 0.25-5 km horizontal grid spacings. The main advantage of the CRM is that it allows explicit interactive processes between microphysics, radiation, turbulence, the surface, and aerosols without subgrid cloud fraction, overlap, and convective parameterization. Because of their fine resolution and complex physical processes, it is challenging for the CRM community to i) visualize and inter-compare CRM simulations, ii) diagnose key processes for cloud-precipitation formation and intensity, and iii) evaluate against NASA's field campaign data and L1/L2 satellite data products, due to the large data volume (~10 TB) and the complexity of the CRM's physical processes. We have been building the Super Cloud Library (SCL) upon a Hadoop framework, capable of CRM database management, distribution, visualization, subsetting, and evaluation in a scalable way. The current SCL capability includes: (1) an SCL data model that enables various CRM simulation outputs in NetCDF, including the NASA-Unified Weather Research and Forecasting (NU-WRF) and Goddard Cumulus Ensemble (GCE) models, to be accessed and processed by Hadoop; (2) a parallel NetCDF-to-CSV converter that supports NU-WRF and GCE model outputs; (3) a technique to visualize Hadoop-resident data with IDL; (4) a technique to subset Hadoop-resident data, compliant with the SCL data model, with HIVE or Impala via HUE's Web interface; (5) a prototype that enables a Hadoop MapReduce application to dynamically access and process data residing in a parallel file system, PVFS2 or CephFS, where high performance computing (HPC) simulation outputs such as NU-WRF's and GCE's are located.
We are testing Apache Spark to speed up SCL data processing and analysis. With the SCL capabilities, SCL users can conduct large-domain on-demand tasks without downloading voluminous CRM datasets and various observations from NASA field campaigns and satellites to a local computer, and can inter-compare CRM output and data with GCE and NU-WRF.
Abeyta, Cynthia G.; Frenzel, Peter F.
1999-01-01
This report contains listings of model input and output files for the simulation of the time of arrival of landfill leachate at the water table from the Municipal Solid Waste Landfill Facility (MSWLF), about 10 miles northeast of downtown El Paso, Texas. This simulation was done by the U.S. Geological Survey in cooperation with the U.S. Department of the Army, U.S. Army Air Defense Artillery Center and Fort Bliss, El Paso, Texas. The U.S. Environmental Protection Agency-developed Hydrologic Evaluation of Landfill Performance (HELP) and Multimedia Exposure Assessment (MULTIMED) computer models were used to simulate the production of leachate by a landfill and transport of landfill leachate to the water table. Model input data files used with and output files generated by the HELP and MULTIMED models are provided in ASCII format on a 3.5-inch 1.44-megabyte IBM-PC compatible floppy disk.
NASA Technical Reports Server (NTRS)
Menga, G.
1975-01-01
An approach is proposed for the design of approximate, fixed-order, discrete-time realizations of stochastic processes from the output covariance over a finite time interval. No restrictive assumptions are imposed on the process; it can be nonstationary and lead to a high-dimension realization. Classes of fixed-order models are defined, having the joint covariance matrix of the combined vector of the outputs in the interval of definition greater than or equal to the process covariance (the difference matrix is nonnegative definite). The design is achieved by minimizing, within one of those classes, a measure of the approximation between the model and the process, evaluated by the trace of the difference of the respective covariance matrices. Models belonging to these classes have the notable property that, under the same measurement system and estimator structure, the output estimation error covariance matrix computed on the model is an upper bound of the corresponding covariance on the real process. An application of the approach is illustrated by the modeling of random meteorological wind profiles from the statistical analysis of historical data.
Impact of Parameter Uncertainty Assessment of Critical SWAT Output Simulations
USDA-ARS?s Scientific Manuscript database
Watershed models are increasingly being utilized to evaluate alternate management scenarios for improving water quality. The concern for using these tools in extensive programs such as the National Total Maximum Daily Load (TMDL) program is that the certainty of model results and efficacy of managem...
The Impact of Parametric Uncertainties on Biogeochemistry in the E3SM Land Model
NASA Astrophysics Data System (ADS)
Ricciuto, Daniel; Sargsyan, Khachik; Thornton, Peter
2018-02-01
We conduct a global sensitivity analysis (GSA) of the Energy Exascale Earth System Model (E3SM) land model (ELM) to calculate the sensitivity of five key carbon cycle outputs to 68 model parameters. This GSA is conducted by first constructing a Polynomial Chaos (PC) surrogate via a new Weighted Iterative Bayesian Compressive Sensing (WIBCS) algorithm for adaptive basis growth, leading to a sparse, high-dimensional PC surrogate built from 3,000 model evaluations. The PC surrogate allows efficient extraction of GSA information, leading to further dimensionality reduction. The GSA is performed at 96 FLUXNET sites covering multiple plant functional types (PFTs) and climate conditions. About 20 of the model parameters are identified as sensitive, with the rest being relatively insensitive across all outputs and PFTs. These sensitivities are dependent on PFT, and are relatively consistent among sites within the same PFT. The five model outputs have a majority of their highly sensitive parameters in common. A common subset of sensitive parameters is also shared among PFTs, but some parameters are specific to certain types (e.g., deciduous phenology). The relative importance of these parameters shifts significantly among PFTs and with climatic variables such as mean annual temperature.
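A toy version of surrogate-based GSA: fit a linear polynomial-chaos-style surrogate by least squares and read first-order variance contributions off the coefficients (for independent uniform inputs on [-1, 1], Var[x_i] = 1/3). This sketches variance-based sensitivity, not the WIBCS algorithm itself, and the model is invented:

```python
import numpy as np

rng = np.random.default_rng(2)

def model(x):
    """Toy stand-in for an ELM carbon-cycle output; columns of x are
    parameters scaled to [-1, 1]."""
    return 2.0 * x[:, 0] + 0.5 * x[:, 1] + 0.1 * x[:, 2] ** 2

# Fit a degree-1 surrogate by least squares; the first-order variance
# contribution of input i is coef_i^2 / 3 for uniform inputs.
X = rng.uniform(-1.0, 1.0, size=(2000, 3))
y = model(X)
design = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
var_contrib = coef[1:] ** 2 / 3.0
sensitivity = var_contrib / var_contrib.sum()   # normalized sensitivities
```

A sparse, higher-order basis plays the same role in the actual study, ranking the 68 parameters by their contribution to output variance.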
Stone, Vathsala I; Lane, Joseph P
2012-05-16
Government-sponsored science, technology, and innovation (STI) programs support the socioeconomic aspects of public policies, in addition to expanding the knowledge base. For example, beneficial healthcare services and devices are expected to result from investments in research and development (R&D) programs, which assume a causal link to commercial innovation. Such programs are increasingly held accountable for evidence of impact-that is, innovative goods and services resulting from R&D activity. However, the absence of comprehensive models and metrics skews evidence gathering toward bibliometrics about research outputs (published discoveries), with less focus on transfer metrics about development outputs (patented prototypes) and almost none on econometrics related to production outputs (commercial innovations). This disparity is particularly problematic for the expressed intent of such programs, as most measurable socioeconomic benefits result from the last category of outputs. This paper proposes a conceptual framework integrating all three knowledge-generating methods into a logic model, useful for planning, obtaining, and measuring the intended beneficial impacts through the implementation of knowledge in practice. Additionally, the integration of the Context-Input-Process-Product (CIPP) model of evaluation proactively builds relevance into STI policies and programs while sustaining rigor. The resulting logic model framework explicitly traces the progress of knowledge from inputs, following it through the three knowledge-generating processes and their respective knowledge outputs (discovery, invention, innovation), as it generates the intended socio-beneficial impacts. It is a hybrid model for generating technology-based innovations, where best practices in new product development merge with a widely accepted knowledge-translation approach. 
Given the emphasis on evidence-based practice in the medical and health fields and "bench to bedside" expectations for knowledge transfer, sponsors and grantees alike should find the model useful for planning, implementing, and evaluating innovation processes. High-cost/high-risk industries like healthcare require the market deployment of technology-based innovations to improve domestic society in a global economy. An appropriate balance of relevance and rigor in research, development, and production is crucial to optimize the return on public investment in such programs. The technology-innovation process needs a comprehensive operational model to effectively allocate public funds and thereby deliberately and systematically accomplish socioeconomic benefits.
NASA Astrophysics Data System (ADS)
Irimoto, Hiroshi; Shibusawa, Hiroyuki; Miyata, Yuzuru
2017-10-01
Damage to transportation networks as a result of natural disasters can lead to economic losses due to lost trade along those links, in addition to the costs of damage to the infrastructure itself. This study evaluates the economic damages of disruptions to transport infrastructure such as highways, tunnels, bridges, and ports using a transnational and interregional input-output model that divides the world into 23 regions: 9 regions in Japan, 7 regions in China, 4 regions in Korea, and one each for Taiwan, ASEAN5, and the USA, allowing us to focus on Japan's regional and international links. In our simulation, economic ripple effects of both international and interregional transport disruptions are measured by changes in the trade coefficients in the input-output model. The simulation showed that, in the case of regional links in Japan, a transport disruption in the Kanmon Straits causes the greatest damage across the modeled regions, resulting in economic damage of approximately 36.3 billion. In the case of international links among Japan, China, and Korea, damage to the link between Kanto in Japan and Huabei in China causes economic losses of approximately 31.1 billion. Our results highlight the importance of disaster prevention in the Kanmon Straits, Kanto, and Huabei to help ensure economic resilience.
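Representing a transport disruption as a change in trade coefficients can be sketched with a two-region Leontief model; the coefficients below are illustrative, not the 23-region tables used in the study:

```python
import numpy as np

def total_output(A, final_demand):
    """Leontief quantity model: x = (I - A)^-1 f."""
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - A, final_demand)

# Two illustrative regions; A holds technical/trade coefficients.
A = np.array([[0.20, 0.10],
              [0.30, 0.25]])
f = np.array([100.0, 80.0])
x_base = total_output(A, f)

# A transport disruption is represented as reduced trade coefficients:
# here, region 2's purchases from region 1 are halved.
A_disrupted = A.copy()
A_disrupted[0, 1] *= 0.5
x_disrupted = total_output(A_disrupted, f)
loss = x_base - x_disrupted       # output loss from the disruption
```

Scaling this to 23 regions with empirically estimated coefficients yields ripple-effect losses of the kind the study reports for the Kanmon Straits and the Kanto-Huabei link.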
Multiregional input-output model for the evaluation of Spanish water flows.
Cazcarro, Ignacio; Duarte, Rosa; Sánchez Chóliz, Julio
2013-01-01
We construct a multiregional input-output model for Spain in order to evaluate the pressures on water resources, virtual water flows, and water footprints of the regions, and the water impact of trade relationships within Spain and abroad. The study is framed within the tradition of interregional input-output models constructed to study the water flows and impacts of regions in China, Australia, Mexico, and the UK. To build our database, we reconcile regional IO tables, national and regional accounts of Spain, and trade and water data. Results show an important imbalance between the origin of water resources and their final destination, with significant water pressures in the South, Mediterranean, and some central regions. The most populated and dynamic regions of Madrid and Barcelona are important drivers of water consumption in Spain. The main virtual water exporters are the South and Central agrarian regions: Andalusia, Castile-La Mancha, Castile-Leon, Aragon, and Extremadura, while the main virtual water importers are the industrialized regions of Madrid, the Basque Country, and the Mediterranean coast. The paper shows the different locations of direct and indirect consumers of water in Spain and how the trade and consumption patterns of certain areas have significant impacts on the availability of water resources in other, often drier, regions.
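The water-footprint accounting rests on an environmentally extended Leontief model: water embodied in final demand is w (I - A)^-1 f. A two-region numerical sketch with invented coefficients, not the Spanish regional tables:

```python
import numpy as np

# Environmentally extended input-output sketch: the water embodied in
# each region's final demand is w @ L @ diag(f), with L the Leontief
# inverse.
A = np.array([[0.15, 0.05],
              [0.20, 0.10]])      # interregional technical coefficients
w = np.array([5.0, 1.0])          # direct water use per unit output
f = np.array([50.0, 120.0])       # final demand by region

L = np.linalg.inv(np.eye(2) - A)  # Leontief inverse (I - A)^-1
footprint = w @ L @ np.diag(f)    # water footprint of each region's demand
```

Comparing each region's footprint with its direct water use is what separates the indirect (virtual) water importers from the exporters in the analysis above.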
Robust DEA under discrete uncertain data: a case study of Iranian electricity distribution companies
NASA Astrophysics Data System (ADS)
Hafezalkotob, Ashkan; Haji-Sami, Elham; Omrani, Hashem
2015-06-01
Crisp input and output data are fundamentally indispensable in traditional data envelopment analysis (DEA). However, real-world problems often deal with imprecise or ambiguous data. In this paper, we propose a novel robust DEA (RDEA) model to investigate the efficiencies of decision-making units (DMUs) when there are discrete uncertain input and output data. The method is based upon the discrete robust optimization approaches proposed by Mulvey et al. (1995), which utilize probable scenarios to capture the effect of ambiguous data. Our primary concern in this research is evaluating electricity distribution companies under uncertainty about input/output data. To illustrate the ability of the proposed model, a numerical example of 38 Iranian electricity distribution companies is investigated. There is a large amount of ambiguous data about these companies: some electricity distribution companies may not report clear and accurate statistics to the government, so a robust approach is needed to deal with this uncertainty. The results reveal that the RDEA model is suitable and reliable for target setting based on decision makers' (DMs') preferences when there are uncertain input/output data.
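For reference, the deterministic input-oriented CCR model that robust DEA variants build on can be written as a small linear program; this sketch uses scipy and invented data, and the robust model would wrap it over discrete scenarios:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o (envelopment form):
    minimize theta subject to X @ lam <= theta * X[:, o] and
    Y @ lam >= Y[:, o], lam >= 0. X is (inputs x DMUs), Y is
    (outputs x DMUs)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                   # minimize theta
    A_ub = np.block([[-X[:, [o]], X],             # X lam - theta*x_o <= 0
                     [np.zeros((s, 1)), -Y]])     # -Y lam <= -y_o
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

# One input, one output, two DMUs; DMU 1 uses twice the input of DMU 0
# for the same output, so its efficiency is 0.5.
X = np.array([[2.0, 4.0]])
Y = np.array([[1.0, 1.0]])
eff = [ccr_efficiency(X, Y, o) for o in range(X.shape[1])]
```

In a scenario-based robust formulation, each discrete data scenario contributes its own copy of these constraints, weighted by scenario probability.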
The temporal representation of speech in a nonlinear model of the guinea pig cochlea
NASA Astrophysics Data System (ADS)
Holmes, Stephen D.; Sumner, Christian J.; O'Mard, Lowel P.; Meddis, Ray
2004-12-01
The temporal representation of speechlike stimuli in the auditory-nerve output of a guinea pig cochlea model is described. The model consists of a bank of dual resonance nonlinear filters that simulate the vibratory response of the basilar membrane, followed by a model of the inner hair cell/auditory nerve complex. The model is evaluated by comparing its output with published physiological auditory nerve data in response to single and double vowels. The evaluation includes analyses of individual fibers, as well as ensemble responses over a wide range of best frequencies. In all cases the model response closely follows the patterns in the physiological data, particularly the tendency for the temporal firing pattern of each fiber to represent the frequency of a nearby formant of the speech sound. In the model this behavior is largely a consequence of filter shapes; nonlinear filtering has only a small contribution at low frequencies. The guinea pig cochlear model produces a useful simulation of the measured physiological response to simple speech sounds and is therefore suitable for use in more advanced applications, including attempts to generalize these principles to the response of the human auditory system, both normal and impaired.
NASA Astrophysics Data System (ADS)
Li, Jiangtao; Zhao, Zheng; Li, Longjie; He, Jiaxin; Li, Chenjie; Wang, Yifeng; Su, Can
2017-09-01
A transmission line transformer has potential advantages for nanosecond pulse generation, including excellent frequency response and no leakage inductance. The wave propagation process in the secondary mode line cannot be neglected, owing to the pronounced transient electromagnetic transition inside the transformer in this scenario. The equivalent model of the transmission line transformer is crucial for predicting the output waveform and evaluating the effects of magnetic cores on output performance. However, traditional lumped parameter models are not sufficient for nanosecond pulse generation due to their natural neglect of wave propagation in secondary mode lines under the lumped parameter assumption. In this paper, a distributed parameter model of the transmission line transformer was established to investigate wave propagation in the secondary mode line and its influential factors through theoretical analysis and experimental verification. The wave propagation discontinuity in the secondary mode line induced by magnetic cores is emphasized. Characteristics of the magnetic core under a nanosecond pulse were obtained by experiments. Distribution and formation of the secondary mode current were determined to reveal the essential wave propagation processes in secondary mode lines. The output waveform and efficiency were found to be affected dramatically by wave propagation discontinuity in secondary mode lines induced by magnetic cores. The proposed distributed parameter model was proved more suitable for nanosecond pulse generation in the aspects of secondary mode current, output efficiency, and output waveform. In-depth comprehension of the underlying mechanisms and a broader view of the working principle of the transmission line transformer for nanosecond pulse generation can be obtained through this research.
Evaluation of Regression Models of Balance Calibration Data Using an Empirical Criterion
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert; Volden, Thomas R.
2012-01-01
An empirical criterion for assessing the significance of individual terms of regression models of wind tunnel strain-gage balance outputs is evaluated. The criterion is based on the percent contribution of a regression model term: a term is considered significant if its percent contribution exceeds the empirical threshold of 0.05%. The criterion has the advantage that it can easily be computed from the regression coefficients of the gage outputs and the load capacities of the balance. First, a definition of the empirical criterion is provided. Then, it is compared with an alternative statistical criterion that is widely used in regression analysis. Finally, calibration data sets from a variety of balances are used to illustrate the connection between the empirical and the statistical criterion. A review of these results indicated that the empirical criterion is suitable only for a crude assessment of the significance of a regression model term, because the boundary between a significant and an insignificant term cannot be defined very well. Therefore, regression model term reduction should only be performed using the more universally applicable statistical criterion.
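The screening rule described in this abstract (keep a term if its percent contribution exceeds 0.05%) can be sketched in a few lines. The contribution definition and all numbers below are illustrative assumptions, not the paper's exact formulation:

```python
# Hypothetical sketch of the percent-contribution screening described above.
# Assumption: a term's contribution is its coefficient times the term value at
# the balance load capacities, expressed as a percentage of the output range.

def percent_contribution(coeff, term_value_at_capacity, output_range):
    """Percent contribution of one regression term (assumed definition)."""
    return 100.0 * abs(coeff * term_value_at_capacity) / output_range

def significant_terms(terms, output_range, threshold=0.05):
    """Keep terms whose percent contribution exceeds the empirical threshold."""
    return [name for name, (c, v) in terms.items()
            if percent_contribution(c, v, output_range) > threshold]

# Example: three terms of a gage-output model (values are illustrative only)
terms = {
    "F1":    (2.5,    1000.0),   # linear term: (coefficient, value at capacity)
    "F1*F2": (1.0e-4, 5000.0),   # interaction term
    "F2^2":  (1.0e-9, 1.0e6),    # quadratic term, negligible
}
kept = significant_terms(terms, output_range=4000.0)
```

In practice the contributions would be computed from the fitted regression coefficients and the balance load capacities, as the abstract describes; the point of the sketch is only the thresholding step.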
Evaluation of the performance of a passive-active vibration isolation system
NASA Astrophysics Data System (ADS)
Sun, L. L.; Hansen, C. H.; Doolan, C.
2015-01-01
The behavior of a feedforward active isolation system subjected to actuator output constraints is investigated. Distributed parameter models are developed to analyze the system response, and to produce a transfer matrix for the design of an integrated passive-active isolation system. Cost functions considered here comprise a combination of the vibration transmission energy and the sum of the squared control forces. The example system considered is a rigid body connected to a simply supported plate via two isolation mounts. The overall isolation performance is evaluated by numerical simulation. The results show that the control strategies which rely on unconstrained actuator outputs may give substantial power transmission reductions over a wide frequency range, but also require large control force amplitudes to control excited vibration modes of the system. Expected power transmission reductions for modified control strategies that incorporate constrained actuator outputs are considerably less than typical reductions with unconstrained actuator outputs. The active system with constrained control force outputs is shown to be more effective at the resonance frequencies of the supporting plate. However, in the frequency range in which rigid body modes are present, the control strategies employed using constrained actuator outputs can only achieve 5-10 dB power transmission reduction, while at off-resonance frequencies, little or no power transmission reduction can be obtained with realistic control forces. Analysis of the wave effects in the passive mounts is also presented.
Surrogate modeling of deformable joint contact using artificial neural networks.
Eskinazi, Ilan; Fregly, Benjamin J
2015-09-01
Deformable joint contact models can be used to estimate loading conditions for cartilage-cartilage, implant-implant, human-orthotic, and foot-ground interactions. However, contact evaluations are often so expensive computationally that they can be prohibitive for simulations or optimizations requiring thousands or even millions of contact evaluations. To overcome this limitation, we developed a novel surrogate contact modeling method based on artificial neural networks (ANNs). The method uses special sampling techniques to gather input-output data points from an original (slow) contact model in multiple domains of input space, where each domain represents a different physical situation likely to be encountered. For each contact force and torque output by the original contact model, a multi-layer feed-forward ANN is defined, trained, and incorporated into a surrogate contact model. As an evaluation problem, we created an ANN-based surrogate contact model of an artificial tibiofemoral joint using over 75,000 evaluations of a fine-grid elastic foundation (EF) contact model. The surrogate contact model computed contact forces and torques about 1000 times faster than a less accurate coarse grid EF contact model. Furthermore, the surrogate contact model was seven times more accurate than the coarse grid EF contact model within the input domain of a walking motion. For larger input domains, the surrogate contact model showed the expected trend of increasing error with increasing domain size. In addition, the surrogate contact model was able to identify out-of-contact situations with high accuracy. Computational contact models created using our proposed ANN approach may remove an important computational bottleneck from musculoskeletal simulations or optimizations incorporating deformable joint contact models. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
Surrogate Modeling of Deformable Joint Contact using Artificial Neural Networks
Eskinazi, Ilan; Fregly, Benjamin J.
2016-01-01
Deformable joint contact models can be used to estimate loading conditions for cartilage-cartilage, implant-implant, human-orthotic, and foot-ground interactions. However, contact evaluations are often so expensive computationally that they can be prohibitive for simulations or optimizations requiring thousands or even millions of contact evaluations. To overcome this limitation, we developed a novel surrogate contact modeling method based on artificial neural networks (ANNs). The method uses special sampling techniques to gather input-output data points from an original (slow) contact model in multiple domains of input space, where each domain represents a different physical situation likely to be encountered. For each contact force and torque output by the original contact model, a multi-layer feed-forward ANN is defined, trained, and incorporated into a surrogate contact model. As an evaluation problem, we created an ANN-based surrogate contact model of an artificial tibiofemoral joint using over 75,000 evaluations of a fine-grid elastic foundation (EF) contact model. The surrogate contact model computed contact forces and torques about 1000 times faster than a less accurate coarse grid EF contact model. Furthermore, the surrogate contact model was seven times more accurate than the coarse grid EF contact model within the input domain of a walking motion. For larger input domains, the surrogate contact model showed the expected trend of increasing error with increasing domain size. In addition, the surrogate contact model was able to identify out-of-contact situations with high accuracy. Computational contact models created using our proposed ANN approach may remove an important computational bottleneck from musculoskeletal simulations or optimizations incorporating deformable joint contact models. PMID:26220591
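As a loose illustration of the surrogate idea (not the authors' trained multi-layer feed-forward ANN), the sketch below samples a made-up "slow" contact model and fits a one-hidden-layer network with fixed random weights and a least-squares output layer, an extreme-learning-machine-style shortcut. The force law, layer sizes, and sampling are all invented assumptions:

```python
import numpy as np

# Minimal surrogate-modeling sketch: sample an (assumed) expensive contact
# model, then fit a cheap one-hidden-layer approximation to its input-output
# pairs. This is an illustration of the concept, not the paper's method.

rng = np.random.default_rng(0)

def slow_contact_model(x):
    """Hypothetical expensive contact model: penetration -> contact force."""
    return np.maximum(x, 0.0) ** 1.5  # Hertz-like force law (illustrative)

# Gather input-output data points from the "slow" model
X = np.linspace(-1.0, 1.0, 200)[:, None]
y = slow_contact_model(X[:, 0])

# Hidden layer: fixed random projection followed by tanh activation
W = rng.normal(0.0, 3.0, size=(1, 50))
b = rng.normal(0.0, 3.0, size=50)
H = np.tanh(X @ W + b)

# Output layer solved in closed form by least squares
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

def surrogate(x):
    """Fast surrogate evaluation: hidden features times learned weights."""
    return np.tanh(np.atleast_2d(x) @ W + b) @ beta

err = np.max(np.abs(surrogate(X) - y))  # training-domain accuracy check
```

The abstract's central trade-off shows up even here: the surrogate is cheap to evaluate but only trustworthy inside the input domain it was sampled on, which is why the authors sample multiple domains of input space.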
Modeling and control of non-square MIMO system using relay feedback.
Kalpana, D; Thyagarajan, T; Gokulraj, N
2015-11-01
This paper proposes a systematic approach for the modeling and control of non-square MIMO systems in the time domain using relay feedback. Conventionally, modeling, selection of the control configuration, and controller design for non-square MIMO systems are performed using input/output information from the direct loops, while the undesired responses, which carry valuable information about interaction among the loops, are not considered. In this paper, however, the undesired responses obtained from the relay feedback test are also used to extract information about the interaction between the loops. The studies are performed on an Air Path Scheme of Turbocharged Diesel Engine (APSTDE) model, a typical non-square MIMO system with three input and two output variables. From the relay test response, generalized analytical expressions are derived and used to estimate unknown system parameters and to evaluate interaction measures. The interaction is analyzed using the Block Relative Gain (BRG) method. The identified model is then used to design an appropriate controller for closed-loop studies. Closed-loop simulations were performed for both servo and regulatory operations, and the Integral of Squared Error (ISE) criterion was employed to quantitatively evaluate the performance of the proposed scheme. The usefulness of the proposed method is demonstrated on a lab-scale Two-Tank Cylindrical Interacting System (TTCIS), configured as a non-square system. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
USDA-ARS?s Scientific Manuscript database
Models are often used to quantify how land use change and management impact soil organic carbon (SOC) stocks because it is often not feasible to use direct measuring methods. Because models are simplifications of reality, it is essential to compare model outputs with measured values to evaluate mode...
Evaluation of Data Used for Modelling the Stratosphere of Saturn
NASA Astrophysics Data System (ADS)
Armstrong, Eleanor Sophie; Irwin, Patrick G. J.; Moses, Julianne I.
2015-11-01
Planetary atmospheres are modelled using a photochemical and kinetic reaction scheme constructed from experimentally and theoretically determined rate coefficients, photoabsorption cross sections, and branching ratios for the molecules described within them. The KINETICS architecture, previously developed to model planetary atmospheres, is applied here to Saturn's stratosphere. We consider the pathways that comprise the reaction scheme of a current model and update the scheme according to the findings of a literature investigation. We evaluate contemporary photochemical literature, studying recent data sets of cross sections and branching ratios for a number of hydrocarbons used in the photochemical scheme of Model C of KINETICS. In particular, new photodissociation branching ratios for CH4, C2H2, C2H4, C3H3, C3H5 and C4H2, and new cross-sectional data for C2H2, C2H4, C2H6, C3H3, C4H2, C6H2 and C8H2 are considered. By evaluating the techniques used and the data sets obtained, a new reaction scheme selection was drawn up. These data are then used within the preferred reaction scheme of the thesis and applied to the KINETICS atmospheric model to produce a steady-state model of the stratosphere of Saturn. The total output of the preferred reaction scheme is presented, and the data are compared both with the previous reaction scheme and with data from the Cassini spacecraft in orbit around Saturn. One of the key findings of this work is that the model's output changes significantly as a result of temperature-dependent data determination. Although this is only shown for changes to the photochemical portion of the preferred reaction scheme, it is suggested that an equally important temperature dependence will be exhibited in the kinetic section. The photochemical model output is shown to be highly dependent on the preferred reaction scheme used. The importance of correct and temperature-appropriate photochemical and kinetic data for the atmosphere under examination is emphasised as a consequence.
NASA Technical Reports Server (NTRS)
Migneault, G. E.
1979-01-01
Emulation techniques are proposed as a solution to a difficulty arising in the analysis of the reliability of highly reliable computer systems for future commercial aircraft. The difficulty, viz., the lack of credible precision in reliability estimates obtained by analytical modeling techniques, is established. This difficulty is shown to be an unavoidable consequence of: (1) a reliability requirement so demanding as to make system evaluation by use testing infeasible, (2) a complex system design technique, fault tolerance, (3) system reliability dominated by errors due to flaws in the system definition, and (4) elaborate analytical modeling techniques whose outputs are quite sensitive to errors of approximation in their input data. The technique of emulation is described, indicating how its input is a simple description of the logical structure of a system and its output is the consequent behavior. The use of emulation techniques is discussed for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques.
NASA Astrophysics Data System (ADS)
Radhakrishnan, A.; Balaji, V.; Schweitzer, R.; Nikonov, S.; O'Brien, K.; Vahlenkamp, H.; Burger, E. F.
2016-12-01
There are distinct phases in the development cycle of an Earth system model. During the model development phase, scientists make changes to code and parameters and require rapid access to results for evaluation. During the production phase, scientists may make an ensemble of runs with different settings and produce large quantities of output that must be further analyzed and quality controlled for scientific papers and for submission to international projects such as the Climate Model Intercomparison Project (CMIP). During this phase, provenance is a key concern: being able to track back from outputs to inputs. We will discuss one of the paths taken at GFDL in delivering tools across this lifecycle, offering on-demand analysis of data by integrating GFDL's in-house FRE-Curator, Unidata's THREDDS, and NOAA PMEL's Live Access Servers (LAS). Experience over this lifecycle suggests that the difficulty in developing analysis capabilities lies only partially in the scientific content; much of the effort is devoted to answering the questions "where is the data?" and "how do I get to it?". FRE-Curator is the name of a database-centric paradigm used at NOAA GFDL to ingest information about model runs into an RDBMS (the Curator database). The components of FRE-Curator are integrated into the Flexible Runtime Environment workflow and can be invoked during climate model simulation. The front end to FRE-Curator, known as the Model Development Database Interface (MDBI), provides in-house web-based access to GFDL experiments: metadata, analysis output, and more. In order to provide on-demand visualization, MDBI uses Live Access Servers, a highly configurable web server designed to provide flexible access to geo-referenced scientific data that makes use of OPeNDAP. Model output saved in GFDL's tape archive, the size of the database and experiments, and continuous model development with more dynamic configurations all add complexity and challenges to providing an on-demand visualization experience for GFDL users.
FEES: design of a Fire Economics Evaluation System
Thomas J. Mills; Frederick W. Bratten
1982-01-01
The Fire Economics Evaluation System (FEES)--a simulation model--is being designed for long-term planning application by all public agencies with wildland fire management responsibilities. A fully operational version of FEES will be capable of estimating the economic efficiency, fire-induced changes in resource outputs, and risk characteristics of a range of fire...
NASA Astrophysics Data System (ADS)
Krishnan, Anath Rau; Hamzah, Ahmad Aizuddin
2017-08-01
It is crucial for a zakat institution to evaluate and understand how efficiently it has operated in the past so that ideal strategies can be developed for future improvement. However, evaluating the efficiency of a zakat institution is a challenging process because it involves multiple inputs and/or outputs. This paper proposes a step-by-step procedure comprising two data envelopment analysis models, namely the dual Charnes-Cooper-Rhodes model and the slack-based model, to quantitatively measure the overall efficiency of a zakat institution over a period of time. The applicability of the proposed procedure was demonstrated by evaluating the efficiency of Pusat Zakat Sabah, Malaysia from 2007 to 2015, treating each year as a decision making unit. Two inputs (number of staff and number of branches) and two outputs (total collection and total distribution) were used to measure the overall efficiency achieved each year. The causes of inefficiency and strategies for future improvement are discussed based on the results.
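DEA itself solves a linear program per decision making unit; as a deliberately simplified, fixed-weight illustration of the efficiency ratio behind the CCR idea (DEA would optimize the weights per unit, and the figures below are invented, not Pusat Zakat Sabah data):

```python
# Fixed-weight efficiency-ratio sketch: each year (DMU) gets a virtual output
# over virtual input ratio, normalized by the best-performing year. The dual
# CCR and slack-based models in the paper optimize these weights per DMU via
# linear programming; here they are assumed constants for illustration.

years = ["2013", "2014", "2015"]
inputs  = {"2013": {"staff": 40, "branches": 5},
           "2014": {"staff": 45, "branches": 6},
           "2015": {"staff": 50, "branches": 6}}
outputs = {"2013": {"collection": 30.0, "distribution": 25.0},
           "2014": {"collection": 42.0, "distribution": 36.0},
           "2015": {"collection": 44.0, "distribution": 40.0}}

IN_W  = {"staff": 1.0, "branches": 4.0}          # assumed fixed input weights
OUT_W = {"collection": 1.0, "distribution": 1.0}  # assumed fixed output weights

def ratio(year):
    """Virtual output divided by virtual input for one DMU."""
    virtual_out = sum(OUT_W[k] * v for k, v in outputs[year].items())
    virtual_in  = sum(IN_W[k] * v for k, v in inputs[year].items())
    return virtual_out / virtual_in

best = max(ratio(y) for y in years)
efficiency = {y: ratio(y) / best for y in years}  # 1.0 = best-practice year
```

The normalization step mirrors how DEA scores each unit relative to the efficient frontier formed by its peers.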
SunLine Expands Horizons with Fuel Cell Bus Demo
DOT National Transportation Integrated Search
2006-05-01
The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...
Analysis of the sensitivity properties of a model of vector-borne bubonic plague.
Buzby, Megan; Neckels, David; Antolin, Michael F; Estep, Donald
2008-09-06
Model sensitivity is a key to evaluation of mathematical models in ecology and evolution, especially in complex models with numerous parameters. In this paper, we use some recently developed methods for sensitivity analysis to study the parameter sensitivity of a model of vector-borne bubonic plague in a rodent population proposed by Keeling & Gilligan. The new sensitivity tools are based on a variational analysis involving the adjoint equation. The new approach provides a relatively inexpensive way to obtain derivative information about model output with respect to parameters. We use this approach to determine the sensitivity of a quantity of interest (the force of infection from rats and their fleas to humans) to various model parameters, determine a region over which linearization at a specific parameter reference point is valid, develop a global picture of the output surface, and search for maxima and minima in a given region in the parameter space.
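The adjoint approach delivers derivatives of model output with respect to parameters at low cost; the simplest point of comparison is direct finite differencing of the output, sketched below on a toy compartmental model with invented dynamics and parameters (not the Keeling & Gilligan plague model):

```python
# Finite-difference sensitivity of a toy model output w.r.t. a parameter.
# The paper's adjoint method obtains such derivatives far more cheaply when
# there are many parameters; this sketch is only a conceptual stand-in.

def force_of_infection(beta, gamma, steps=1000, dt=0.01):
    """Euler-integrate a toy SIS model; return beta * I at the final time
    as a stand-in 'force of infection' output quantity."""
    s, i = 0.99, 0.01
    for _ in range(steps):
        new_inf = beta * s * i   # new infections
        rec = gamma * i          # recoveries back to susceptible
        s += dt * (-new_inf + rec)
        i += dt * (new_inf - rec)
    return beta * i

def sensitivity(f, p, h=1e-6):
    """Central-difference derivative of scalar output f at parameter value p."""
    return (f(p + h) - f(p - h)) / (2 * h)

# Sensitivity of the output to the transmission parameter beta at beta = 2
dFdbeta = sensitivity(lambda b: force_of_infection(b, gamma=1.0), 2.0)
```

Each central difference costs two full model runs per parameter, which is exactly the expense the adjoint formulation avoids by computing all parameter sensitivities from one forward and one adjoint solve.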
Solid rocket booster performance evaluation model. Volume 1: Engineering description
NASA Technical Reports Server (NTRS)
1974-01-01
The space shuttle solid rocket booster performance evaluation model (SRB-II) is made up of analytical and functional simulation techniques linked together so that a single pass through the model will predict the performance of the propulsion elements of a space shuttle solid rocket booster. The available options allow the user to predict static test performance, predict nominal and off nominal flight performance, and reconstruct actual flight and static test performance. Options selected by the user are dependent on the data available. These can include data derived from theoretical analysis, small scale motor test data, large motor test data and motor configuration data. The user has several options for output format that include print, cards, tape and plots. Output includes all major performance parameters (Isp, thrust, flowrate, mass accounting and operating pressures) as a function of time as well as calculated single point performance data. The engineering description of SRB-II discusses the engineering and programming fundamentals used, the function of each module, and the limitations of each module.
Control design methods for floating wind turbines for optimal disturbance rejection
NASA Astrophysics Data System (ADS)
Lemmer, Frank; Schlipf, David; Cheng, Po Wen
2016-09-01
An analysis of the floating wind turbine as a multi-input-multi-output system investigating the effect of the control inputs on the system outputs is shown. These effects are compared to the ones of the disturbances from wind and waves in order to give insights for the selection of the control layout. The frequencies with the largest impact on the outputs due to limited effect of the controlled variables are identified. Finally, an optimal controller is designed as a benchmark and compared to a conventional PI-controller using only the rotor speed as input. Here, the previously found system properties, especially the difficulties to damp responses to wave excitation, are confirmed and verified through a spectral analysis with realistic environmental conditions. This comparison also assesses the quality of the employed simplified linear simulation model compared to the nonlinear model and shows that such an efficient frequency-domain evaluation for control design is feasible.
Spear, Timothy T; Nishimura, Michael I; Simms, Patricia E
2017-08-01
Advancement in flow cytometry reagents and instrumentation has allowed for simultaneous analysis of large numbers of lineage/functional immune cell markers. Highly complex datasets generated by polychromatic flow cytometry require proper analytical software to answer investigators' questions. A problem among many investigators and flow cytometry Shared Resource Laboratories (SRLs), including our own, is a lack of access to a flow cytometry-knowledgeable bioinformatics team, making it difficult to learn and choose appropriate analysis tool(s). Here, we comparatively assess various multidimensional flow cytometry software packages for their ability to answer a specific biologic question and provide graphical representation output suitable for publication, as well as their ease of use and cost. We assessed polyfunctional potential of TCR-transduced T cells, serving as a model evaluation, using multidimensional flow cytometry to analyze 6 intracellular cytokines and degranulation on a per-cell basis. Analysis of 7 parameters resulted in 128 possible combinations of positivity/negativity, far too complex for basic flow cytometry software to analyze fully. Various software packages were used, analysis methods used in each described, and representative output displayed. Of the tools investigated, automated classification of cellular expression by nonlinear stochastic embedding (ACCENSE) and coupled analysis in Pestle/simplified presentation of incredibly complex evaluations (SPICE) provided the most user-friendly manipulations and readable output, evaluating effects of altered antigen-specific stimulation on T cell polyfunctionality. This detailed approach may serve as a model for other investigators/SRLs in selecting the most appropriate software to analyze complex flow cytometry datasets. Further development and awareness of available tools will help guide proper data analysis to answer difficult biologic questions arising from incredibly complex datasets. 
© Society for Leukocyte Biology.
Jordan, D; McEwen, S A; Lammerding, A M; McNab, W B; Wilson, J B
1999-06-29
A Monte Carlo simulation model was constructed for assessing the quantity of microbial hazards deposited on cattle carcasses under different pre-slaughter management regimens. The model permits comparison of industry-wide and abattoir-based mitigation strategies and is suitable for studying pathogens such as Escherichia coli O157:H7 and Salmonella spp. Simulations are based on a hierarchical model structure that mimics important aspects of the cattle population prior to slaughter. Stochastic inputs were included so that uncertainty about important input assumptions (such as prevalence of a human pathogen in the live cattle-population) would be reflected in model output. Control options were built into the model to assess the benefit of having prior knowledge of animal or herd-of-origin pathogen status (obtained from the use of a diagnostic test). Similarly, a facility was included for assessing the benefit of re-ordering the slaughter sequence based on the extent of external faecal contamination. Model outputs were designed to evaluate the performance of an abattoir in a 1-day period and included outcomes such as the proportion of carcasses contaminated with a pathogen, the daily mean and selected percentiles of pathogen counts per carcass, and the position of the first infected animal in the slaughter run. A measure of the time rate of introduction of pathogen into the abattoir was provided by assessing the median, 5th percentile, and 95th percentile cumulative pathogen counts at 10 equidistant points within the slaughter run. Outputs can be graphically displayed as frequency distributions, probability densities, cumulative distributions or x-y plots. The model shows promise as an inexpensive method for evaluating pathogen control strategies such as those forming part of a Hazard Analysis and Critical Control Point (HACCP) system.
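A stripped-down sketch of such a hierarchical Monte Carlo structure (uncertain herd-level prevalence feeding per-carcass pathogen counts, summarized for one slaughter day) might look as follows; every distribution and parameter here is an invented placeholder, not the authors' calibration:

```python
import random

# Hierarchical Monte Carlo sketch: herds -> animals -> carcass counts.
# Stochastic inputs (herd prevalence) propagate uncertainty to the outputs,
# mirroring the model structure described in the abstract.

random.seed(42)

def simulate_day(n_herds=20, animals_per_herd=30):
    """Simulate per-carcass pathogen counts for one slaughter day."""
    counts = []
    for _ in range(n_herds):
        # Uncertain herd-level prevalence (assumed Beta prior, mean ~0.10)
        prev = random.betavariate(2, 18)
        for _ in range(animals_per_herd):
            if random.random() < prev:
                # Contaminated carcass: assumed log-normal pathogen count
                counts.append(random.lognormvariate(4.0, 1.0))
            else:
                counts.append(0.0)
    return counts

counts = simulate_day()
# Example daily outputs analogous to those in the abstract
prop_contaminated = sum(c > 0 for c in counts) / len(counts)
daily_mean = sum(counts) / len(counts)
```

Mitigation strategies (pre-slaughter testing, re-ordering the slaughter sequence) would be compared by re-running such a simulation with the corresponding control option switched on and contrasting the output distributions.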
Evaluation of MM5 model resolution when applied to prediction of national fire danger rating indexes
Jeanne L. Hoadley; Miriam L. Rorig; Larry Bradshaw; Sue A. Ferguson; Kenneth J. Westrick; Scott L. Goodrick; Paul Werth
2006-01-01
Weather predictions from the MM5 mesoscale model were used to compute gridded predictions of National Fire Danger Rating System (NFDRS) indexes. The model output was applied to a case study of the 2000 fire season in Northern Idaho and Western Montana to simulate an extreme event. To determine the preferred resolution for automating NFDRS predictions, model...
Gain compression and its dependence on output power in quantum dot lasers
NASA Astrophysics Data System (ADS)
Zhukov, A. E.; Maximov, M. V.; Savelyev, A. V.; Shernyakov, Yu. M.; Zubov, F. I.; Korenev, V. V.; Martinez, A.; Ramdane, A.; Provost, J.-G.; Livshits, D. A.
2013-06-01
The gain compression coefficient was evaluated by applying the frequency modulation/amplitude modulation technique in a distributed feedback InAs/InGaAs quantum dot laser. A strong dependence of the gain compression coefficient on the output power was found. Our analysis of the gain compression within the frame of the modified well-barrier hole burning model reveals that the gain compression coefficient decreases beyond the lasing threshold, in good agreement with the experimental observations.
Erin K. Noonan-Wright; Nicole M. Vaillant; Alicia L. Reiner
2014-01-01
Fuel treatment effectiveness is often evaluated with fire behavior modeling systems that use fuel models to generate fire behavior outputs. How surface fuels are assigned, either using one of the 53 stylized fuel models or developing custom fuel models, can affect predicted fire behavior. We collected surface and canopy fuels data before and 1, 2, 5, and 8 years after...
NASA Astrophysics Data System (ADS)
Hancock, G. R.; Webb, A. A.; Turner, L.
2017-11-01
Sediment transport and soil erosion can be determined by a variety of field and modelling approaches. Computer based soil erosion and landscape evolution models (LEMs) offer the potential to be reliable assessment and prediction tools. An advantage of such models is that they provide both erosion and deposition patterns as well as total catchment sediment output. However, before use, like all models they require calibration and validation. In recent years LEMs have been used for a variety of both natural and disturbed landscape assessment. However, these models have not been evaluated for their reliability in steep forested catchments. Here, the SIBERIA LEM is calibrated and evaluated for its reliability for two steep forested catchments in south-eastern Australia. The model is independently calibrated using two methods. Firstly, hydrology and sediment transport parameters are inferred from catchment geomorphology and soil properties and secondly from catchment sediment transport and discharge data. The results demonstrate that both calibration methods provide similar parameters and reliable modelled sediment transport output. A sensitivity study of the input parameters demonstrates the model's sensitivity to correct parameterisation and also how the model could be used to assess potential timber harvesting as well as the removal of vegetation by fire.
NASA Astrophysics Data System (ADS)
Gallego, C.; Costa, A.; Cuerva, A.
2010-09-01
Since wind energy currently can be neither scheduled nor stored at large scale, wind power forecasting has been useful to minimize the impact of wind fluctuations. In particular, short-term forecasting (characterised by prediction horizons from minutes to a few days) is currently required by energy producers (in a daily electricity market context) and by TSOs (in order to keep the stability/balance of an electrical system). Within the short-term context, time-series based models (i.e., statistical models) have shown better performance than NWP models for horizons up to a few hours. These models try to learn and replicate the dynamics shown by the time series of a certain variable. When considering the power output of wind farms, ramp events are usually observed, characterized by a large positive gradient in the time series (ramp-up) or a large negative one (ramp-down) during relatively short time periods (a few hours). Ramp events may have many different causes, generally involving several spatial scales, from the large scale (fronts, low-pressure systems) down to the local scale (wind turbine shut-down due to high wind speed, yaw misalignment due to fast changes of wind direction). Hence, the output power may show unexpected dynamics during ramp events depending on the underlying processes; consequently, traditional statistical models that assume a single dynamic for the whole power time series may be inappropriate. This work proposes a Regime-Switching (RS) model based on Artificial Neural Networks (ANNs). The RS-ANN model comprises as many ANNs as regimes (dynamics) considered; a particular ANN is selected to predict the output power depending on the current regime. The current regime is updated on-line based on a gradient criterion applied to the past two values of the output power. Three regimes are established with respect to ramp events: ramp-up, ramp-down, and no-ramp.
In order to assess the skill of the proposed RS-ANN model, a single-ANN model (without regime classification) is adopted as a reference. Both models are evaluated in terms of Improvement over Persistence on a Mean Square Error basis (IoP%) for prediction horizons from 1 to 5 time steps. The case of a wind farm located in the complex terrain of Alaiz (northern Spain) has been considered. Three years of power output data with hourly resolution were employed: two years for training and validation of the model and the last year for assessing accuracy. Results showed that the RS-ANN outperformed the single-ANN model for one-step-ahead forecasts: the overall IoP% was up to 8.66% for the RS-ANN model (depending on the gradient criterion used to trigger the ramp regime) and 6.16% for the single-ANN. However, both models showed similar accuracy for longer horizons. A locally weighted evaluation during ramp events for one-step-ahead forecasts was also performed: the IoP% during ramp-up events increased from 17.60% (single-ANN) to 22.25% (RS-ANN), while during ramp-down events it increased only from 18.55% to 19.55%. Three main conclusions are derived from this case study. First, statistical models capable of differentiating the several regimes shown by the power output time series can improve forecasting during extreme events such as ramps. Second, on-line regime classification based on available power output data alone did not appear to improve forecasts for horizons beyond one step ahead; taking into account other explanatory variables (local wind measurements, NWP outputs) could lead to a better understanding of ramp events and improve the regime assessment for further horizons as well. Third, the RS-ANN model only slightly outperformed the single-ANN during ramp-down events; if further research reinforces this finding, special attention should be paid to understanding the underlying processes during ramp-down events.
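The on-line regime classification described above, a gradient criterion applied to the past two power values, can be sketched as follows; the threshold and the power series are illustrative assumptions, not the paper's calibrated criterion:

```python
# Gradient-based regime classification for a regime-switching forecast model.
# Each regime would select its own ANN in the RS-ANN model; this sketch only
# performs the labeling step.

RAMP_THRESHOLD = 0.15  # assumed gradient threshold (fraction of rated power)

def classify_regime(p_prev, p_curr, threshold=RAMP_THRESHOLD):
    """Label the current regime from the last two power observations."""
    grad = p_curr - p_prev
    if grad > threshold:
        return "ramp-up"
    if grad < -threshold:
        return "ramp-down"
    return "no-ramp"

# Normalized power series containing one ramp-up and one ramp-down event
power = [0.20, 0.25, 0.55, 0.60, 0.30, 0.28]
regimes = [classify_regime(a, b) for a, b in zip(power, power[1:])]
```

In the full RS-ANN model, the label returned here would dispatch the next prediction to the ANN trained on that regime's dynamics.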
The user's guide to STEMS (Stand and Tree Evaluation and Modeling System).
David M. Belcher
1981-01-01
Presents the structure of STEMS, a computer program for projecting growth of individual trees within the Lake States Region, and discusses its input, processing, major subsystems, and output. Includes an example projection.
Evaluating Economic Impacts of Expanded Global Wood Energy Consumption with the USFPM/GFPM Model
Peter J. Ince; Andrew Kramp; Kenneth E. Skog
2012-01-01
A U.S. forest sector market module was developed within the general Global Forest Products Model. The U.S. module tracks regional timber markets, timber harvests by species group, and timber product outputs in greater detail than does the global model. This hybrid approach provides detailed regional market analysis for the United States while retaining the...
A Digital Computer Simulation of Cardiovascular and Renal Physiology.
ERIC Educational Resources Information Center
Tidball, Charles S.
1979-01-01
Presents the physiological MACPEE, one of a family of digital computer simulations used in Canada and Great Britain. A general description of the model is provided, along with a sample of computer output format, options for making interventions, advanced capabilities, an evaluation, and technical information for running a MAC model. (MA)
NASA Astrophysics Data System (ADS)
Razak, Jeefferie Abd; Ahmad, Sahrim Haji; Ratnam, Chantara Thevy; Mahamood, Mazlin Aida; Yaakub, Juliana; Mohamad, Noraiham
2014-09-01
A fractional 2^5 two-level factorial design of experiment (DOE) was applied to systematically prepare the NR/EPDM blend using a Haake internal mixer set-up. A process model of rubber blend preparation was developed that correlates the mixer process input parameters with the output response of blend compatibility. Model analysis of variance (ANOVA) and model fitting through curve evaluation yielded an R2 of 99.60% with a proposed parametric combination of A = 30/70 NR/EPDM blend ratio; B = 70°C mixing temperature; C = 70 rpm rotor speed; D = 5 minutes mixing period; and E = 1.30 phr EPDM-g-MAH compatibilizer addition, with an overall desirability of 0.966. Model validation, with a small deviation of +2.09%, confirmed the repeatability of the mixing strategy, with the maximum tensile strength output taken as a valid measure of blend miscibility. A theoretical calculation of NR/EPDM blend compatibility is also included and compared. In short, this study provides a brief insight into the utilization of DOE for experimental simplification and parameter inter-correlation studies, especially when dealing with multiple variables during elastomeric rubber blend preparation.
Dynamic Modeling and Very Short-term Prediction of Wind Power Output Using Box-Cox Transformation
NASA Astrophysics Data System (ADS)
Urata, Kengo; Inoue, Masaki; Murayama, Dai; Adachi, Shuichi
2016-09-01
We propose a statistical modeling method for wind power output for very short-term prediction. The nonlinear model has a cascade structure composed of two parts. One is a linear dynamic part that is driven by Gaussian white noise and described by an autoregressive model. The other is a nonlinear static part that is driven by the output of the linear part. This nonlinear part is designed for output distribution matching: we shape the distribution of the model output to match that of the wind power output. The constructed model is utilized for one-step-ahead prediction of the wind power output. Furthermore, we study the relation between prediction accuracy and prediction horizon.
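The cascade structure described above, an autoregressive model in a Gaussian domain followed by a static nonlinearity that matches the output distribution, can be sketched on synthetic data (an illustrative toy assuming NumPy and SciPy, not the authors' implementation):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Synthetic positive, skewed "wind power" series (stand-in for real data).
n = 2000
latent = np.convolve(rng.normal(size=n), np.ones(10) / 10, mode="same")
power = np.exp(latent)  # non-Gaussian target distribution

# Nonlinear static part (inverted): Gaussianize the observed series by
# rank-transforming the power values to standard normal scores.
z = norm.ppf((power.argsort().argsort() + 0.5) / n)

# Linear dynamic part: fit an AR(p) model by least squares.
p = 3
X = np.column_stack([z[p - k - 1:n - k - 1] for k in range(p)])
a = np.linalg.lstsq(X, z[p:], rcond=None)[0]
z_hat = X @ a  # one-step-ahead prediction in the Gaussian domain

# Map predictions back through the empirical quantiles of the power
# series, so the model output distribution matches the observed one.
pred = np.quantile(power, norm.cdf(z_hat))
```

The rank/quantile transforms play the role of the static nonlinearity; the paper's Box-Cox transformation is one parametric way to achieve the same distribution matching.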
Semi-Markov adjunction to the Computer-Aided Markov Evaluator (CAME)
NASA Technical Reports Server (NTRS)
Rosch, Gene; Hutchins, Monica A.; Leong, Frank J.; Babcock, Philip S., IV
1988-01-01
The rule-based Computer-Aided Markov Evaluator (CAME) program was expanded in its ability to incorporate the effect of fault-handling processes into the construction of a reliability model. The fault-handling processes are modeled as semi-Markov events, and CAME constructs an appropriate semi-Markov model. To solve the model, the program outputs it in a form which can be directly solved with the Semi-Markov Unreliability Range Evaluator (SURE) program. As a means of evaluating the alterations made to the CAME program, the program is used to model the reliability of portions of the Integrated Airframe/Propulsion Control System Architecture (IAPSA 2) reference configuration. The reliability predictions are compared with a previous analysis. The results bear out the feasibility of utilizing CAME to generate appropriate semi-Markov models of fault-handling processes.
Skugareva, O A; Kaplan, M A; Malygina, A I; Mikhailovskaya, A A
2009-11-01
Antitumor efficiency of interstitial photodynamic therapy was evaluated in experiments on outbred albino rats with implanted M-1 sarcoma. Interstitial photodynamic therapy was carried out using one diffusor at different output power and duration of exposure. The percentage of complete regression of the tumors increased with increasing exposure parameters.
The Impact of Parametric Uncertainties on Biogeochemistry in the E3SM Land Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ricciuto, Daniel; Sargsyan, Khachik; Thornton, Peter
2018-02-27
We conduct a global sensitivity analysis (GSA) of the Energy Exascale Earth System Model (E3SM) land model (ELM) to calculate the sensitivity of five key carbon cycle outputs to 68 model parameters. This GSA is conducted by first constructing a Polynomial Chaos (PC) surrogate via a new Weighted Iterative Bayesian Compressive Sensing (WIBCS) algorithm for adaptive basis growth, leading to a sparse, high-dimensional PC surrogate built from 3,000 model evaluations. The PC surrogate allows efficient extraction of GSA information, leading to further dimensionality reduction. The GSA is performed at 96 FLUXNET sites covering multiple plant functional types (PFTs) and climate conditions. About 20 of the model parameters are identified as sensitive, with the rest being relatively insensitive across all outputs and PFTs. These sensitivities are dependent on PFT and are relatively consistent among sites within the same PFT. The five model outputs have a majority of their highly sensitive parameters in common. A common subset of sensitive parameters is also shared among PFTs, but some parameters are specific to certain types (e.g., deciduous phenology). In conclusion, the relative importance of these parameters shifts significantly among PFTs and with climatic variables such as mean annual temperature.
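The variance-based sensitivity indices at the heart of such a GSA can be illustrated without the surrogate machinery; the following toy pick-freeze estimator of first-order Sobol indices (a generic sketch, not the WIBCS/PC-surrogate algorithm of the study) shows the quantity being computed:

```python
import numpy as np

def first_order_sobol(model, n_params, n_samples=100_000, seed=0):
    """Estimate first-order Sobol indices S_i = V[E(Y|X_i)] / V(Y) with a
    pick-freeze Monte Carlo scheme: compare runs that share only input X_i."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n_samples, n_params))
    B = rng.uniform(size=(n_samples, n_params))
    yA = model(A)
    var_y = yA.var()
    indices = []
    for i in range(n_params):
        BAi = B.copy()
        BAi[:, i] = A[:, i]          # resample every input except X_i
        yBAi = model(BAi)
        # E[(yA - yBAi)^2] / 2 = V(Y) - V[E(Y|X_i)]
        indices.append(1.0 - 0.5 * np.mean((yA - yBAi) ** 2) / var_y)
    return np.array(indices)

# Additive toy model Y = 4*X0 + 2*X1 (X2 inert): analytic S = (0.8, 0.2, 0.0)
model = lambda X: 4 * X[:, 0] + 2 * X[:, 1] + 0 * X[:, 2]
S = first_order_sobol(model, 3)   # approx. [0.8, 0.2, 0.0] up to MC error
```

In the study the expensive land-model runs are replaced by a cheap PC surrogate precisely so that estimators like this become affordable for 68 parameters.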
Projecting climate change impacts on hydrology: the potential role of daily GCM output
NASA Astrophysics Data System (ADS)
Maurer, E. P.; Hidalgo, H. G.; Das, T.; Dettinger, M. D.; Cayan, D.
2008-12-01
A primary challenge facing resource managers in accommodating climate change is determining the range and uncertainty in regional and local climate projections. This is especially important for assessing changes in extreme events, which will drive many of the more severe impacts of a changed climate. Since global climate models (GCMs) produce output at a spatial scale incompatible with local impact assessment, different techniques have evolved to downscale GCM output so locally important climate features are expressed in the projections. We compared skill and hydrologic projections using two statistical downscaling methods and a distributed hydrology model. The downscaling methods are the constructed analogues (CA) and the bias correction and spatial downscaling (BCSD). CA uses daily GCM output, and can thus capture GCM projections for changing extreme event occurrence, while BCSD uses monthly output and statistically generates historical daily sequences. We evaluate the hydrologic impacts projected using downscaled climate (from the NCEP/NCAR reanalysis as a surrogate GCM) for the late 20th century with both methods, comparing skill in projecting soil moisture, snow pack, and streamflow at key locations in the Western United States. We include an assessment of a new method for correcting for GCM biases in a hybrid method combining the most important characteristics of both methods.
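The bias-correction step underlying BCSD-style statistical downscaling is empirical quantile mapping; a minimal generic sketch (not the published BCSD code) is:

```python
import numpy as np

def quantile_map(gcm_hist, obs_hist, gcm_future):
    """Empirical quantile mapping: each future GCM value is assigned its
    quantile within the historical GCM distribution, then mapped to the
    same quantile of the historical observations."""
    gcm_sorted = np.sort(gcm_hist)
    q = np.searchsorted(gcm_sorted, gcm_future) / len(gcm_sorted)
    q = np.clip(q, 0.0, 1.0)
    return np.quantile(obs_hist, q)

# Toy example: the "GCM" runs 2 units too warm relative to observations.
rng = np.random.default_rng(1)
obs = rng.normal(15.0, 3.0, size=5000)
gcm = obs + 2.0
corrected = quantile_map(gcm, obs, gcm)   # systematic bias is removed
```

The mapping corrects the full distribution, not just the mean, which is why it also adjusts the tails that matter for extreme-event assessment.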
Evaluation of coral reef carbonate production models at a global scale
NASA Astrophysics Data System (ADS)
Jones, N. S.; Ridgwell, A.; Hendy, E. J.
2014-09-01
Calcification by coral reef communities is estimated to account for half of all carbonate produced in shallow water environments and more than 25% of the total carbonate buried in marine sediments globally. Production of calcium carbonate by coral reefs is therefore an important component of the global carbon cycle. It is also threatened by future global warming and other global change pressures. Numerical models of reefal carbonate production are essential for understanding how carbonate deposition responds to environmental conditions including future atmospheric CO2 concentrations, but these models must first be evaluated in terms of their skill in recreating present day calcification rates. Here we evaluate four published model descriptions of reef carbonate production in terms of their predictive power, at both local and global scales, by comparing carbonate budget outputs with independent estimates. We also compile available global data on reef calcification to produce an observation-based dataset for the model evaluation. The four calcification models are based on functions sensitive to combinations of light availability, aragonite saturation (Ωa) and temperature and were implemented within a specifically-developed global framework, the Global Reef Accretion Model (GRAM). None of the four models correlated with independent rate estimates of whole reef calcification. The temperature-only based approach was the only model output to significantly correlate with coral-calcification rate observations. The absence of any predictive power for whole reef systems, even when consistent at the scale of individual corals, points to the overriding importance of coral cover estimates in the calculations. Our work highlights the need for an ecosystem modeling approach, accounting for population dynamics in terms of mortality and recruitment and hence coral cover, in estimating global reef carbonate budgets. 
In addition, validation of reef carbonate budgets is severely hampered by limited and inconsistent methodology in reef-scale observations.
Precipitation-runoff modeling system; user's manual
Leavesley, G.H.; Lichty, R.W.; Troutman, B.M.; Saindon, L.G.
1983-01-01
The concepts, structure, theoretical development, and data requirements of the precipitation-runoff modeling system (PRMS) are described. The precipitation-runoff modeling system is a modular-design, deterministic, distributed-parameter modeling system developed to evaluate the impacts of various combinations of precipitation, climate, and land use on streamflow, sediment yields, and general basin hydrology. Basin response to normal and extreme rainfall and snowmelt can be simulated to evaluate changes in water balance relationships, flow regimes, flood peaks and volumes, soil-water relationships, sediment yields, and groundwater recharge. Parameter-optimization and sensitivity analysis capabilities are provided to fit selected model parameters and evaluate their individual and joint effects on model output. The modular design provides a flexible framework for continued model system enhancement and hydrologic modeling research and development. (Author's abstract)
Impact of nonzero boresight pointing error on ergodic capacity of MIMO FSO communication systems.
Boluda-Ruiz, Rubén; García-Zambrana, Antonio; Castillo-Vázquez, Beatriz; Castillo-Vázquez, Carmen
2016-02-22
A thorough investigation of the impact of nonzero boresight pointing errors on the ergodic capacity of multiple-input/multiple-output (MIMO) free-space optical (FSO) systems with equal gain combining (EGC) reception is presented for different turbulence models, which are taken to be statistically independent but not necessarily identically distributed (i.n.i.d.). Novel closed-form asymptotic expressions at high signal-to-noise ratio (SNR) for the ergodic capacity of MIMO FSO systems are derived when different geometric arrangements of the receive apertures are considered in order to reduce the effect of the nonzero inherent boresight displacement that is inevitably present when more than one receive aperture is used. The asymptotic ergodic capacity of MIMO FSO systems is then evaluated over log-normal (LN), gamma-gamma (GG) and exponentiated Weibull (EW) atmospheric turbulence in order to study different turbulence conditions, receive aperture sizes, and aperture averaging conditions. It is concluded that the use of single-input/multiple-output (SIMO) and MIMO techniques can significantly increase the ergodic capacity with respect to the direct path link when the inherent boresight displacement takes small values, i.e. when the spacing among receive apertures is not too large. The effect of nonzero additional boresight errors, due to the thermal expansion of the building, is evaluated in multiple-input/single-output (MISO) and single-input/single-output (SISO) FSO systems. Simulation results are further included to confirm the analytical results.
Application of Independent Component Analysis to System Intrusion Analysis
NASA Astrophysics Data System (ADS)
Ishii, Yoshikazu; Takagi, Tarou; Nakai, Kouji
In order to analyze the output of the intrusion detection system and the firewall, we evaluated the applicability of ICA (independent component analysis). We developed a simulator for evaluating intrusion analysis methods. The simulator consists of a network model of an information system, a service model and a vulnerability model for each server, and an action model for the clients and the intruder. We applied ICA to analyze the audit trail of the simulated information system and report the evaluation results. In the simulated case, ICA separated two attacks correctly and related an attack to the anomalies that the attack induced in normal applications.
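The blind source separation that ICA performs on such audit trails can be illustrated with a minimal FastICA implementation (a generic NumPy sketch on synthetic signals, not the authors' system):

```python
import numpy as np

def fast_ica(X, n_iter=200, seed=0):
    """Minimal symmetric FastICA with a tanh nonlinearity: centre and
    whiten the mixtures, then iterate the fixed-point update with
    symmetric decorrelation. Returns the estimated source signals."""
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))
    Xw = E @ np.diag(d ** -0.5) @ E.T @ X      # whitened observations
    n_comp, n_obs = Xw.shape
    W = np.random.default_rng(seed).normal(size=(n_comp, n_comp))
    for _ in range(n_iter):
        G = np.tanh(W @ Xw)
        W = (G @ Xw.T) / n_obs - np.diag((1 - G ** 2).mean(axis=1)) @ W
        U, _, Vt = np.linalg.svd(W)            # symmetric decorrelation
        W = U @ Vt
    return W @ Xw

# Two independent sources (a sine and a square wave) mixed linearly.
t = np.linspace(0, 8, 2000)
S = np.vstack([np.sin(2 * t), np.sign(np.sin(3 * t))])
A = np.array([[1.0, 0.5], [0.6, 1.0]])
recovered = fast_ica(A @ S)   # sources up to permutation, sign and scale
```

In the intrusion setting the "mixtures" are correlated audit-trail measurements and the recovered components correspond to distinct underlying activities, such as separate attacks.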
Input-Output Modeling and Control of the Departure Process of Congested Airports
NASA Technical Reports Server (NTRS)
Pujet, Nicolas; Delcaire, Bertrand; Feron, Eric
2003-01-01
A simple queueing model of busy airport departure operations is proposed. This model is calibrated and validated using available runway configuration and traffic data. The model is then used to evaluate preliminary control schemes aimed at alleviating departure traffic congestion on the airport surface. The potential impact of these control strategies on direct operating costs, environmental costs and overall delay is quantified and discussed.
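The kind of input-output queueing model described can be sketched as a discrete-time runway queue (an illustrative toy, not the calibrated model from the paper):

```python
import numpy as np

def simulate_departures(pushbacks, runway_rate, n_steps):
    """Discrete-time runway departure queue: aircraft push back into the
    queue each minute; the runway serves at most runway_rate per minute."""
    queue, queue_hist, takeoffs = 0, [], 0
    for t in range(n_steps):
        queue += pushbacks[t]             # arrivals into the departure queue
        served = min(queue, runway_rate)  # runway capacity constraint
        queue -= served
        takeoffs += served
        queue_hist.append(queue)
    return np.array(queue_hist), takeoffs

# Demand of ~1.5 pushbacks/min against a 1/min runway builds congestion;
# a metering control scheme would hold that excess at the gate instead.
rng = np.random.default_rng(2)
demand = rng.poisson(1.5, size=120)
hist, total = simulate_departures(demand, runway_rate=1, n_steps=120)
```

Control strategies of the kind evaluated in the paper amount to shaping the `pushbacks` input so that queueing (and hence taxi-out delay and emissions) is reduced without losing runway throughput.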
Scientific Benefits of Space Science Models Archiving at Community Coordinated Modeling Center
NASA Technical Reports Server (NTRS)
Kuznetsova, Maria M.; Berrios, David; Chulaki, Anna; Hesse, Michael; MacNeice, Peter J.; Maddox, Marlo M.; Pulkkinen, Antti; Rastaetter, Lutz; Taktakishvili, Aleksandre
2009-01-01
The Community Coordinated Modeling Center (CCMC) hosts a set of state-of-the-art space science models ranging from the solar atmosphere to the Earth's upper atmosphere. CCMC provides a web-based Run-on-Request system, by which interested scientists can request simulations for a broad range of space science problems. To allow the models to be driven by data relevant to particular events, CCMC developed a tool that automatically downloads data from data archives and transforms them into the required formats. CCMC also provides a tailored web-based visualization interface for the model output, as well as the capability to download the simulation output in a portable format. CCMC offers a variety of visualization and output analysis tools to aid scientists in the interpretation of simulation results. In the eight years since the Run-on-Request system became available, CCMC has archived the results of almost 3000 runs covering significant space weather events and time intervals of interest identified by the community. The simulation results archived at CCMC also include a library of general-purpose runs with modeled conditions that are used for education and research. Archiving the results of simulations performed in support of several Modeling Challenges helps to evaluate the progress in space weather modeling over time. We will highlight the scientific benefits of the CCMC space science model archive and discuss plans for further development of advanced methods to interact with simulation results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feijoo, M.; Mestre, F.; Castagnaro, A.
This study evaluates the potential effect of climate change on dry-bean production in Argentina, combining climate models, a crop productivity model, and a yield response model estimating the effect of climate variables on crop yields. The study was carried out in the northern agricultural regions of Jujuy, Salta, Santiago del Estero and Tucuman, which include the largest areas of Argentina where dry beans are grown as a high-input crop. The paper combines the output from a crop model with different techniques of analysis. The scenarios used in this study were generated from the output of two General Circulation Models (GCMs): the Goddard Institute for Space Studies model (GISS) and the Canadian Climate Change Model (CCCM). The study also includes a preliminary evaluation of the potential changes in monetary returns taking into account the possible variability of yields and prices, using mean-Gini stochastic dominance (MGSD). The results suggest that large climate change may have a negative impact on the Argentine agricultural sector, given the high relevance of this product in the export sector. The magnitude of the negative effect depends on the variety of dry bean and on the General Circulation Model scenario considered for doubled levels of atmospheric carbon dioxide.
The NBS Energy Model Assessment project: Summary and overview
NASA Astrophysics Data System (ADS)
Gass, S. I.; Hoffman, K. L.; Jackson, R. H. F.; Joel, L. S.; Saunders, P. B.
1980-09-01
The activities and technical reports for the project are summarized. The reports cover: assessment of the documentation of the Midterm Oil and Gas Supply Modeling System; analysis of the model methodology; characteristics of the input and other supporting data; and statistical procedures undergirding construction of the model and sensitivity of the outputs to variations in input, as well as guidelines and recommendations for the role of these in model building and procedures for their evaluation.
General Circulation Model Output for Forest Climate Change Research and Applications
Ellen J. Cooter; Brian K. Eder; Sharon K. LeDuc; Lawrence Truppi
1993-01-01
This report reviews technical aspects of and summarizes output from four climate models. Recommendations concerning the use of these outputs in forest impact assessments are made.
Tiwari, Vikram; Kumar, Avinash B
2018-01-01
The current system of summative multi-rater evaluations and standardized tests to determine readiness to graduate from critical care fellowships has limitations. We sought to pilot the use of data envelopment analysis (DEA) to assess what aspects of the fellowship program contribute the most to an individual fellow's success. DEA is a nonparametric, operations research technique that uses linear programming to determine the technical efficiency of an entity based on its relative usage of resources in producing the outcome. Retrospective cohort study. Critical care fellows (n = 15) in an Accreditation Council for Graduate Medical Education (ACGME) accredited fellowship at a major academic medical center in the United States. After obtaining institutional review board approval for this retrospective study, we analyzed the data of 15 anesthesiology critical care fellows from academic years 2013-2015. The input-oriented DEA model develops a composite score for each fellow based on multiple inputs and outputs. The inputs included the didactic sessions attended, the ratio of clinical duty works hours to the procedures performed (work intensity index), and the outputs were the Multidisciplinary Critical Care Knowledge Assessment Program (MCCKAP) score and summative evaluations of fellows. A DEA efficiency score that ranged from 0 to 1 was generated for each of the fellows. Five fellows were rated as DEA efficient, and 10 fellows were characterized in the DEA inefficient group. The model was able to forecast the level of effort needed for each inefficient fellow, to achieve similar outputs as their best performing peers. The model also identified the work intensity index as the key element that characterized the best performers in our fellowship. DEA is a feasible method of objectively evaluating peer performance in a critical care fellowship beyond summative evaluations alone and can potentially be a powerful tool to guide individual performance during the fellowship.
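The input-oriented DEA model described above solves one linear program per decision-making unit; a generic sketch using SciPy's `linprog` (with toy data, not the fellows' records) is:

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y):
    """Input-oriented CCR DEA: for each decision-making unit (DMU) k,
    minimise theta such that a nonnegative combination of peers uses at
    most theta * inputs_k while producing at least outputs_k."""
    n, m = X.shape          # n DMUs, m inputs
    s = Y.shape[1]          # s outputs
    scores = []
    for k in range(n):
        c = np.r_[1.0, np.zeros(n)]                   # minimise theta
        A_in = np.hstack([-X[k].reshape(m, 1), X.T])  # lam@X <= theta*x_k
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])   # lam@Y >= y_k
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[k]],
                      bounds=[(0, None)] * (n + 1))
        scores.append(res.fun)
    return np.array(scores)

# One input, one output: DMU 0 has the best output/input ratio (2.0),
# so it defines the frontier and the others score relative to it.
X = np.array([[2.0], [4.0], [5.0]])
Y = np.array([[4.0], [4.0], [5.0]])
eff = dea_efficiency(X, Y)   # approx. [1.0, 0.5, 0.5]
```

A score of 1 marks a DEA-efficient unit; a score of 0.5 says the unit could, in principle, produce its outputs with half its inputs relative to its best-performing peers, which is the "level of effort" forecast described in the abstract.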
Design and evaluation of excitation light source device for fluorescence endoscope
NASA Astrophysics Data System (ADS)
Lim, Hyun Soo
2009-06-01
This study aims at designing and evaluating light source devices that can stably generate light at various wavelengths in order to enable PDD using a photosensitizer as well as diagnosis using autofluorescence. The light source consisted of a Xenon lamp and a filter wheel, with the optical output controlled through an iris and filters for several wavelength bands. It also enables the induction of autofluorescence, as it is designed to generate wavelength bands of 380-420 nm, 430-480 nm, and 480-560 nm. The transmission part of the light source was developed to enhance the efficiency of light transmission. To evaluate this light source, the characteristics of the light output and wavelength bands were verified. To validate the capability of this device for PDD, the detection of autofluorescence using mouse models was performed.
NASA Astrophysics Data System (ADS)
Rodehacke, C. B.; Mottram, R.; Boberg, F.
2017-12-01
The Devon Ice Cap is an example of a relatively well monitored small ice cap in the Canadian Arctic. Close to Greenland, it shows a surface mass balance signal similar to glaciers in western Greenland. Here we use various boundary conditions, ranging from ERA-Interim reanalysis data and global climate model output to high-resolution (5 km) output from the regional climate model HIRHAM5, to determine the surface mass balance (SMB) of the Devon Ice Cap. These SMB estimates are used to drive the PISM glacier model in order to simulate the present day state and future prospects of this small Arctic ice cap. Observational data from the Devon Ice Cap in Arctic Canada are used to evaluate the SMB output from the HIRHAM5 model for simulations forced with the ERA-Interim climate reanalysis and the historical emissions scenario run by the EC-Earth global climate model. The RCP8.5 scenario simulated by EC-Earth is also downscaled by HIRHAM5, and this output is used to force the PISM model to simulate the likely future evolution of the Devon Ice Cap under a warming climate. We find that the Devon Ice Cap is likely to continue its present day retreat, though in the future increased precipitation partly offsets the enhanced melt rates caused by climate change.
Future year emissions depend highly on economic, technological, societal and regulatory drivers. A scenario framework was adopted to analyze technology development pathways and changes in consumer preferences, and evaluate resulting emissions growth patterns while considering fut...
Forecasting timber, biomass, and tree carbon pools with the output of state and transition models
Xiaoping Zhou; Miles A. Hemstrom
2012-01-01
The Integrated Landscape Assessment Project (ILAP) uses spatial vegetation data and state and transition models (STM) to forecast future vegetation conditions and the interacting effects of natural disturbances and management activities. Results from ILAP will help land managers, planners, and policymakers evaluate management strategies that reduce fire risk, improve...
Speckle noise in satellite based lidar systems
NASA Technical Reports Server (NTRS)
Gardner, C. S.
1977-01-01
The lidar system model was described, and the statistics of the signal and noise at the receiver output were derived. Scattering media effects were discussed along with polarization and atmospheric turbulence. The major equations were summarized and evaluated for some typical parameters.
DOT National Transportation Integrated Search
2011-12-20
The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...
Stochastic Simulation Tool for Aerospace Structural Analysis
NASA Technical Reports Server (NTRS)
Knight, Norman F.; Moore, David F.
2006-01-01
Stochastic simulation refers to incorporating the effects of design tolerances and uncertainties into the design analysis model and then determining their influence on the design. A high-level evaluation of one such stochastic simulation tool, the MSC.Robust Design tool by MSC.Software Corporation, has been conducted. This stochastic simulation tool provides structural analysts with a tool to interrogate their structural design based on their mathematical description of the design problem using finite element analysis methods. This tool leverages the analyst's prior investment in finite element model development of a particular design. The original finite element model is treated as the baseline structural analysis model for the stochastic simulations that are to be performed. A Monte Carlo approach is used by MSC.Robust Design to determine the effects of scatter in design input variables on response output parameters. The tool was not designed to provide a probabilistic assessment, but to assist engineers in understanding cause and effect. It is driven by a graphical-user interface and retains the engineer-in-the-loop strategy for design evaluation and improvement. The application problem for the evaluation is chosen to be a two-dimensional shell finite element model of a Space Shuttle wing leading-edge panel under re-entry aerodynamic loading. MSC.Robust Design adds value to the analysis effort by rapidly being able to identify design input variables whose variability causes the most influence in response output parameters.
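The Monte Carlo approach described, propagating scatter in design input variables and ranking their influence on a response output, can be sketched generically (a hypothetical response function, not MSC.Robust Design):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000

# Hypothetical panel response: stress = load / (width * thickness),
# with normally distributed scatter on each design input variable.
load      = rng.normal(1000.0, 50.0, n)    # ~5% scatter
width     = rng.normal(0.20, 0.002, n)     # ~1% scatter (tight tolerance)
thickness = rng.normal(0.005, 0.0005, n)   # ~10% scatter (loose tolerance)
stress = load / (width * thickness)

# Rank design inputs by correlation with the response output parameter.
corrs = {name: np.corrcoef(x, stress)[0, 1]
         for name, x in [("load", load), ("width", width),
                         ("thickness", thickness)]}
for name, r in corrs.items():
    print(f"{name:9s} r = {r:+.2f}")   # thickness dominates the scatter
```

This is the cause-and-effect view the abstract emphasizes: the input whose tolerance contributes the most response scatter (here the loosely toleranced thickness) is the one worth tightening first.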
Su, Jingjun; Du, Xinzhong; Li, Xuyong
2018-05-16
Uncertainty analysis is an important prerequisite for model application. However, existing phosphorus (P) loss indexes or indicators have rarely been evaluated. This study applied the generalized likelihood uncertainty estimation (GLUE) method to assess the uncertainty of parameters and modeling outputs of a non-point source (NPS) P indicator constructed in R language. The influences of the subjective choices of likelihood formulation and acceptability threshold in GLUE on model outputs were also examined. The results indicated the following. (1) Parameters RegR2, RegSDR2, PlossDPfer, PlossDPman, DPDR, and DPR were highly sensitive to the overall TP simulation, and their value ranges could be reduced by GLUE. (2) The Nash efficiency likelihood (L1) appeared better at accentuating high-likelihood simulations than the exponential function (L2). (3) The combined likelihood, integrating the criteria of multiple outputs, performed better than a single likelihood in model uncertainty assessment in terms of reducing the uncertainty band widths while assuring the goodness of fit of the whole set of model outputs. (4) A value of 0.55 appeared to be a modest choice of threshold to balance high modeling efficiency against high bracketing efficiency. Results of this study could provide (1) an option for conducting NPS modeling under one single computing platform, (2) important references for parameter setting in NPS model development in similar regions, (3) useful suggestions for the application of the GLUE method in studies with different emphases according to research interests, and (4) important insights into watershed P management in similar regions.
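The GLUE procedure applied in this study, sampling parameter sets, retaining "behavioural" ones above a likelihood threshold, and forming likelihood-weighted uncertainty bounds, can be sketched on a toy model (generic code, not the R-language P indicator):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency, an L1-style GLUE likelihood measure."""
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def weighted_quantile(values, weights, q):
    order = np.argsort(values)
    cdf = np.cumsum(weights[order])
    return values[order][np.searchsorted(cdf, q)]

def glue(model, obs, n_draws=5000, threshold=0.55, seed=4):
    """Bare-bones GLUE: sample parameter sets, keep the 'behavioural' ones
    whose likelihood exceeds the acceptability threshold, and return
    likelihood-weighted 5-95% uncertainty bounds on the model output."""
    rng = np.random.default_rng(seed)
    params = rng.uniform(0.0, 2.0, size=(n_draws, 2))
    sims = np.array([model(p) for p in params])
    like = np.array([nse(obs, s) for s in sims])
    keep = like > threshold                    # behavioural simulations
    w = like[keep] / like[keep].sum()          # likelihood weights
    bounds = [np.array([weighted_quantile(sims[keep][:, t], w, q)
                        for t in range(obs.size)]) for q in (0.05, 0.95)]
    return params[keep], bounds[0], bounds[1]

# Toy "watershed": output = a*x + b; observations use a=1.2, b=0.3 plus noise.
x = np.linspace(0, 1, 25)
obs = 1.2 * x + 0.3 + np.random.default_rng(5).normal(0, 0.02, x.size)
behavioural, lo, hi = glue(lambda p: p[0] * x + p[1], obs)
```

Raising the acceptability threshold narrows the uncertainty band but brackets fewer observations, which is exactly the trade-off behind the study's choice of 0.55.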
NASA Astrophysics Data System (ADS)
Zaherpour, Jamal; Gosling, Simon N.; Mount, Nick; Müller Schmied, Hannes; Veldkamp, Ted I. E.; Dankers, Rutger; Eisner, Stephanie; Gerten, Dieter; Gudmundsson, Lukas; Haddeland, Ingjerd; Hanasaki, Naota; Kim, Hyungjun; Leng, Guoyong; Liu, Junguo; Masaki, Yoshimitsu; Oki, Taikan; Pokhrel, Yadu; Satoh, Yusuke; Schewe, Jacob; Wada, Yoshihide
2018-06-01
Global-scale hydrological models are routinely used to assess water scarcity, flood hazards and droughts worldwide. Recent efforts to incorporate anthropogenic activities in these models have enabled more realistic comparisons with observations. Here we evaluate simulations from an ensemble of six models participating in the second phase of the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP2a). We simulate monthly runoff in 40 catchments, spatially distributed across eight global hydrobelts. The performance of each model and the ensemble mean is examined with respect to their ability to replicate observed mean and extreme runoff under human-influenced conditions. Application of a novel integrated evaluation metric to quantify the models' ability to simulate time series of monthly runoff suggests that the models generally perform better in the wetter equatorial and northern hydrobelts than in drier southern hydrobelts. When model outputs are temporally aggregated to assess mean annual and extreme runoff, the models perform better. Nevertheless, we find a general trend in the majority of models towards the overestimation of mean annual runoff and all indicators of upper and lower extreme runoff. The models struggle to capture the timing of the seasonal cycle, particularly in northern hydrobelts, while in southern hydrobelts the models struggle to reproduce the magnitude of the seasonal cycle. It is noteworthy that over all hydrological indicators, the ensemble mean fails to perform better than any individual model, a finding that challenges the commonly held perception that model ensemble estimates deliver superior performance over individual models. The study highlights the need for continued model development and improvement. It also suggests that caution should be taken when summarising the simulations from a model ensemble based upon its mean output.
Shuttle cryogenic supply system optimization study. Volume 1: Management supply, sections 1 - 3
NASA Technical Reports Server (NTRS)
1973-01-01
An analysis of the cryogenic supply system for use on space shuttle vehicles was conducted. The major outputs of the analysis are: (1) evaluations of subsystem and integrated system concepts, (2) selection of representative designs, (3) parametric data and sensitivity studies, (4) evaluation of cryogenic cooling in environmental control subsystems, and (5) development of a mathematical model.
Watershed scale response to climate change--Trout Lake Basin, Wisconsin
Walker, John F.; Hunt, Randall J.; Hay, Lauren E.; Markstrom, Steven L.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Trout River Basin at Trout Lake in northern Wisconsin.
Watershed scale response to climate change--Clear Creek Basin, Iowa
Christiansen, Daniel E.; Hay, Lauren E.; Markstrom, Steven L.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Clear Creek Basin, near Coralville, Iowa.
Watershed scale response to climate change--Feather River Basin, California
Koczot, Kathryn M.; Markstrom, Steven L.; Hay, Lauren E.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Feather River Basin, California.
Watershed scale response to climate change--South Fork Flathead River Basin, Montana
Chase, Katherine J.; Hay, Lauren E.; Markstrom, Steven L.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the South Fork Flathead River Basin, Montana.
Watershed scale response to climate change--Cathance Stream Basin, Maine
Dudley, Robert W.; Hay, Lauren E.; Markstrom, Steven L.; Hodgkins, Glenn A.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Cathance Stream Basin, Maine.
Watershed scale response to climate change--Pomperaug River Watershed, Connecticut
Bjerklie, David M.; Hay, Lauren E.; Markstrom, Steven L.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Pomperaug River Basin at Southbury, Connecticut.
Watershed scale response to climate change--Starkweather Coulee Basin, North Dakota
Vining, Kevin C.; Hay, Lauren E.; Markstrom, Steven L.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Starkweather Coulee Basin near Webster, North Dakota.
Watershed scale response to climate change--Sagehen Creek Basin, California
Markstrom, Steven L.; Hay, Lauren E.; Regan, R. Steven
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Sagehen Creek Basin near Truckee, California.
Watershed scale response to climate change--Sprague River Basin, Oregon
Risley, John; Hay, Lauren E.; Markstrom, Steven L.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Sprague River Basin near Chiloquin, Oregon.
Watershed scale response to climate change--Black Earth Creek Basin, Wisconsin
Hunt, Randall J.; Walker, John F.; Westenbroek, Steven M.; Hay, Lauren E.; Markstrom, Steven L.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Black Earth Creek Basin, Wisconsin.
Watershed scale response to climate change--East River Basin, Colorado
Battaglin, William A.; Hay, Lauren E.; Markstrom, Steven L.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the East River Basin, Colorado.
Watershed scale response to climate change--Naches River Basin, Washington
Mastin, Mark C.; Hay, Lauren E.; Markstrom, Steven L.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Naches River Basin below Tieton River in Washington.
Watershed scale response to climate change--Flint River Basin, Georgia
Hay, Lauren E.; Markstrom, Steven L.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Flint River Basin at Montezuma, Georgia.
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Moes, Timothy R.
1994-01-01
Presented is a feasibility and error analysis for a hypersonic flush airdata system on a hypersonic flight experiment (HYFLITE). HYFLITE heating loads make intrusive airdata measurement impractical. Although this analysis is specifically for the HYFLITE vehicle and trajectory, the problems analyzed are generally applicable to hypersonic vehicles. A layout of the flush-port matrix is shown. Surface pressures are related to airdata parameters using a simple aerodynamic model. The model is linearized using small perturbations and inverted using nonlinear least-squares. Effects of various error sources on the overall uncertainty are evaluated using an error simulation. Error sources modeled include boundary-layer/viscous interactions, pneumatic lag, thermal transpiration in the sensor pressure tubing, misalignment in the matrix layout, thermal warping of the vehicle nose, sampling resolution, and transducer error. Using simulated pressure data as input to the estimation algorithm, the effects of the various error sources are analyzed by comparing estimator outputs with the original trajectory. To obtain ensemble averages, the simulation is run repeatedly and output statistics are compiled. Output errors resulting from the various error sources are presented as a function of Mach number. Final uncertainties with all modeled error sources included are also presented as a function of Mach number.
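A flush airdata estimator of this kind inverts a surface-pressure model by nonlinear least squares. The sketch below assumes a commonly used modified-Newtonian-style pressure model and hypothetical port angles, calibration parameter, and state values; it is not the HYFLITE implementation:

```python
import numpy as np
from scipy.optimize import least_squares

EPS = 0.15                                       # calibration parameter (hypothetical)
LAM = np.radians([0.0, 10.0, 20.0, 30.0, 40.0])  # port cone angles on one meridian

def port_pressures(qc, p_inf, alpha):
    """Modified-Newtonian style pressure at each flush port (hypothetical model)."""
    theta = LAM - alpha                          # local flow incidence at each port
    return qc * (np.cos(theta) ** 2 + EPS * np.sin(theta) ** 2) + p_inf

# "True" state and simulated noisy port measurements
true_state = (2000.0, 500.0, np.radians(4.0))    # qc [Pa], p_inf [Pa], alpha [rad]
rng = np.random.default_rng(1)
meas = port_pressures(*true_state) + rng.normal(0.0, 2.0, LAM.size)

# Nonlinear least-squares inversion of the pressure model
fit = least_squares(lambda x: port_pressures(*x) - meas, x0=(1500.0, 400.0, 0.0))
qc_est, p_inf_est, alpha_est = fit.x
```

Running the inversion repeatedly with fresh noise, as the abstract describes, yields the ensemble statistics of the estimator output.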
Phase 1 Free Air CO2 Enrichment Model-Data Synthesis (FACE-MDS): Model Output Data (2015)
Walker, A. P.; De Kauwe, M. G.; Medlyn, B. E.; Zaehle, S.; Asao, S.; Dietze, M.; El-Masri, B.; Hanson, P. J.; Hickler, T.; Jain, A.; Luo, Y.; Parton, W. J.; Prentice, I. C.; Ricciuto, D. M.; Thornton, P. E.; Wang, S.; Wang, Y -P; Warlind, D.; Weng, E.; Oren, R.; Norby, R. J.
2015-01-01
These datasets comprise the model output from phase 1 of the FACE-MDS. They include simulations of the Duke and Oak Ridge experiments and also idealised long-term (300 year) simulations at both sites (please see the modelling protocol for details). Modelling and output protocols are included as part of this dataset, and the model datasets are formatted according to the output protocols. The phase 1 datasets are reproduced here for posterity and reproducibility, although the model output for the experimental period has been somewhat superseded by the Phase 2 datasets.
Simulated Performance of the Wisconsin Superconducting Electron Gun
DOE Office of Scientific and Technical Information (OSTI.GOV)
R.A. Bosch, K.J. Kleman, R.A. Legg
2012-07-01
The Wisconsin superconducting electron gun is modeled with multiparticle tracking simulations using the ASTRA and GPT codes. To specify the construction of the emittance-compensation solenoid, we studied the dependence of the output bunch's emittance upon the solenoid's strength and field errors. We also evaluated the dependence of the output bunch's emittance upon the bunch's initial emittance and the size of the laser spot on the photocathode. The results suggest that a 200-pC bunch with an emittance of about one mm-mrad can be produced for a free-electron laser.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tobias, Benjamin John
A series approximation has been derived for the transport of optical photons within a cylindrically symmetric light pipe and applied to the task of evaluating both the origin and angular distribution of light reaching the output plane. This analytic expression finds particular utility in first-pass photonic design applications, since it may be evaluated at a very modest computational cost and is readily parameterized for relevant design constraints. It has been applied toward quantitative exploration of various scintillation crystal preparations and their impact on both quantum efficiency and noise, reproducing sensible dependencies and providing physical justification for certain gamma ray camera design choices.
Towards systematic evaluation of crop model outputs for global land-use models
NASA Astrophysics Data System (ADS)
Leclere, David; Azevedo, Ligia B.; Skalský, Rastislav; Balkovič, Juraj; Havlík, Petr
2016-04-01
Land provides vital socioeconomic resources to society, though at the cost of substantial environmental degradation. Global integrated models combining high-resolution global gridded crop models (GGCMs) and global economic models (GEMs) are increasingly being used to inform sustainable solutions for agricultural land use. However, little effort has yet been made to evaluate and compare the accuracy of GGCM outputs. In addition, GGCM datasets require a large number of parameters whose values, and their variability across space, are weakly constrained: increasing the accuracy of such datasets carries a very high computing cost. Innovative evaluation methods are required both to lend credibility to the global integrated models and to allow efficient parameter specification of GGCMs. We propose an evaluation strategy for GGCM datasets from the perspective of their use in GEMs, illustrated with preliminary results from a novel dataset (the Hypercube) generated by the EPIC GGCM and used in the GLOBIOM land-use GEM to inform on present-day crop yield, water and nutrient input needs for 16 crops x 15 management intensities, at a spatial resolution of 5 arc-minutes. We adopt the following principle: evaluation should provide a transparent diagnosis of model adequacy for its intended use. We briefly describe how the Hypercube data are generated and how they articulate with GLOBIOM, in order to transparently identify the performances to be evaluated as well as the main assumptions and data processing involved. Expected performances include adequately representing the sub-national heterogeneity in crop yield and input needs: i) in space, ii) across crop species, and iii) across management intensities. We will present and discuss measures of these expected performances and weigh the relative contributions of the crop model, input data, and data-processing steps. We will also compare the obtained yield gaps and main yield-limiting factors against the M3 dataset.
Next steps include iterative improvement of the parameter assumptions and evaluation of the implications of GGCM performance for the intended use in the IIASA EPIC-GLOBIOM model cluster. Our approach helps target future efforts at improving GGCM accuracy and would be most efficient if combined with traditional field-scale evaluation and sensitivity analysis.
Xueri Dang; Chun-Ta Lai; David Y. Hollinger; Andrew J. Schauer; Jingfeng Xiao; J. William Munger; Clenton Owensby; James R. Ehleringer
2011-01-01
We evaluated an idealized boundary layer (BL) model with simple parameterizations using vertical transport information from community model outputs (NCAR/NCEP Reanalysis and ECMWF Interim Analysis) to estimate regional-scale net CO2 fluxes from 2002 to 2007 at three forest and one grassland flux sites in the United States. The BL modeling...
NASA Astrophysics Data System (ADS)
ElSaadani, M.; Quintero, F.; Goska, R.; Krajewski, W. F.; Lahmers, T.; Small, S.; Gochis, D. J.
2015-12-01
This study examines the performance of different hydrologic models in estimating peak flows over the state of Iowa. I will compare the output of the Iowa Flood Center (IFC) hydrologic model and WRF-Hydro (NFIE configuration) to observed flows at USGS stream gauges. During the National Flood Interoperability Experiment, I explored the performance of WRF-Hydro over the state of Iowa using different rainfall products; the resulting hydrographs showed a "flashy" behavior in the model output due to lack of calibration and poor initial flows caused by a short model spin-up period. I will expand this study by including a second, well-established hydrologic model and more direct rain gauge versus radar-rainfall comparisons. The IFC model is expected to outperform WRF-Hydro's out-of-the-box results; however, I will test different calibration options for both the Noah-MP land surface model and RAPID, the routing component of the NFIE-Hydro configuration, to see if this improves the model results. This study will explore the statistical structure of model output uncertainties across scales (as a function of drainage area and/or stream order). I will also evaluate the performance of different radar-based Quantitative Precipitation Estimation (QPE) products (e.g., Stage IV, MRMS, and IFC's NEXRAD-based radar-rainfall product). Different basins will be evaluated in this study, selected based on size, amount of rainfall received over the basin area, and location. Basin location will be an important factor because of our prior knowledge of the performance of the different NEXRAD radars that cover the region; this will help in observing the effect of rainfall biases on streamflows. Another possible addition to this study is to apply controlled spatial error fields to the rainfall inputs and observe the propagation of these errors through the stream network.
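Evaluating peak flows against gauge observations typically reduces to peak-magnitude and peak-timing errors per event. A minimal sketch on a synthetic hourly event hydrograph (illustrative values only, not the Iowa data):

```python
import numpy as np

def peak_errors(obs, sim, dt_hours=1.0):
    """Relative peak-magnitude error and peak-timing error for one event."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    mag_err = (sim.max() - obs.max()) / obs.max()                  # > 0: peak too high
    time_err = (int(sim.argmax()) - int(obs.argmax())) * dt_hours  # > 0: peak too late
    return mag_err, time_err

# Synthetic hourly hydrographs: observed event vs. a "flashy" uncalibrated run
t = np.arange(100.0)
obs = 50.0 * np.exp(-0.5 * ((t - 40.0) / 8.0) ** 2) + 5.0
sim = 65.0 * np.exp(-0.5 * ((t - 36.0) / 6.0) ** 2) + 5.0

mag_err, time_err = peak_errors(obs, sim)   # peak overestimated and 4 hours early
```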
ACIRF user's guide: Theory and examples
NASA Astrophysics Data System (ADS)
Dana, Roger A.
1989-12-01
Design and evaluation of radio-frequency systems that must operate through ionospheric disturbances resulting from high-altitude nuclear detonations require an accurate channel model. This model must include the effects of the high-gain antennas that may be used to receive the signals. Such a model can then be used to construct realizations of the received signal for use in digital simulations of trans-ionospheric links or in hardware channel simulators. This user's guide describes the FORTRAN program ACIRF (Antenna Channel Impulse Response Function, version 2.0), which generates random realizations of channel impulse response functions at the outputs of multiple antennas with arbitrary beamwidths, pointing angles, and relative positions. The channel model is valid under strong scattering conditions, when Rayleigh fading statistics apply. Both frozen-in and turbulent models for the temporal fluctuations are included in this version of ACIRF. The theory of the channel model is described and several examples are given.
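Under strong scattering, each impulse-response sample is a zero-mean complex Gaussian variate whose envelope is Rayleigh distributed. A minimal sketch of generating such unit-power tap realizations (without ACIRF's antenna filtering or temporal-fluctuation models, which shape these raw samples):

```python
import numpy as np

def rayleigh_taps(n_delay, n_time, rng):
    """Unit-power zero-mean complex Gaussian taps; |h| is Rayleigh distributed."""
    shape = (n_delay, n_time)
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2.0)

rng = np.random.default_rng(7)
h = rayleigh_taps(32, 10000, rng)      # 32 delay taps, 10000 time samples
mean_power = np.mean(np.abs(h) ** 2)   # close to 1 for unit-power taps
```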
Real-time quality monitoring in debutanizer column with regression tree and ANFIS
NASA Astrophysics Data System (ADS)
Siddharth, Kumar; Pathak, Amey; Pani, Ajaya Kumar
2018-05-01
A debutanizer column is an integral part of any petroleum refinery. Online composition monitoring of the debutanizer column outlet streams is highly desirable in order to maximize the production of liquefied petroleum gas. In this article, data-driven models of a debutanizer column are developed for real-time composition monitoring. The dataset used has seven process variables as inputs; the output is the butane concentration in the debutanizer column bottom product. The input-output dataset is divided equally into a training (calibration) set and a validation (testing) set. The training data were used to develop fuzzy inference, adaptive neuro-fuzzy inference system (ANFIS), and regression tree models of the debutanizer column. The accuracy of the developed models was evaluated by simulating them with the validation dataset. The ANFIS model shows better estimation accuracy than the other models developed in this work and than many data-driven models proposed so far in the literature for the debutanizer column.
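The regression-tree branch of such a soft sensor can be illustrated with a single-split (stump) regressor and the equal calibration/validation split described above. This toy stands in for a full tree and uses synthetic data, not the refinery dataset:

```python
import numpy as np

def fit_stump(X, y):
    """Best single split (feature, threshold) minimizing the squared error."""
    best = (np.inf, 0, 0.0, y.mean(), y.mean())
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:
            left = X[:, j] <= t
            yl, yr = y[left], y[~left]
            sse = ((yl - yl.mean()) ** 2).sum() + ((yr - yr.mean()) ** 2).sum()
            if sse < best[0]:
                best = (sse, j, t, yl.mean(), yr.mean())
    return best[1:]                       # (feature, threshold, left_mean, right_mean)

def predict_stump(model, X):
    j, t, vl, vr = model
    return np.where(X[:, j] <= t, vl, vr)

# Synthetic data: seven process variables in, butane concentration out
rng = np.random.default_rng(3)
X = rng.uniform(0.0, 1.0, (200, 7))
y = np.where(X[:, 0] <= 0.5, 0.2, 0.8) + rng.normal(0.0, 0.02, 200)

X_tr, y_tr, X_va, y_va = X[:100], y[:100], X[100:], y[100:]   # equal split
model = fit_stump(X_tr, y_tr)
rmse = np.sqrt(np.mean((predict_stump(model, X_va) - y_va) ** 2))
```

A real regression tree applies this split search recursively to each partition; validation RMSE on held-out data is the accuracy measure the paper uses to rank its models.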
CMUTs with high-K atomic layer deposition dielectric material insulation layer.
Xu, Toby; Tekes, Coskun; Degertekin, F
2014-12-01
Use of high-κ dielectric, atomic layer deposition (ALD) materials as an insulation layer for capacitive micromachined ultrasonic transducers (CMUTs) is investigated. The effect of insulation layer material and thickness on CMUT performance is evaluated using a simple parallel-plate model. The model shows that both a high dielectric constant and high electrical breakdown strength are important for the dielectric material, and that significant performance improvement can be achieved, especially as the vacuum gap thickness is reduced. In particular, ALD hafnium oxide (HfO2) is evaluated and used as an improvement over plasma-enhanced chemical vapor deposition (PECVD) silicon nitride (SixNy) for CMUTs fabricated by a low-temperature, complementary metal oxide semiconductor transistor-compatible, sacrificial release method. Relevant properties of ALD HfO2, such as dielectric constant and breakdown strength, are characterized to further guide CMUT design. Experiments are performed on test CMUTs fabricated in parallel, with a 50-nm gap and 16.5-MHz center frequency, to measure and compare pressure output and receive sensitivity for 200-nm PECVD SixNy and 100-nm HfO2 insulation layers. Results for this particular design show a 6-dB improvement in receiver output with the collapse voltage reduced by one-half, while in transmit mode half the input voltage is needed to achieve the same maximum output pressure.
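In the parallel-plate picture, the insulation layer adds t/εr to the electrical gap, so a thinner, higher-κ layer lowers the pull-in (collapse) voltage. A sketch with toy membrane parameters and assumed dielectric constants (the halving reported in the abstract applies to its specific design, not to these numbers):

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def collapse_voltage(k, area, gap, t_ins, eps_r):
    """Parallel-plate pull-in voltage; the insulator adds t_ins/eps_r to the gap."""
    g_eff = gap + t_ins / eps_r
    return np.sqrt(8.0 * k * g_eff ** 3 / (27.0 * EPS0 * area))

# Toy membrane: same spring constant, area, and 50-nm vacuum gap for both stacks
k, area, gap = 50.0, (20e-6) ** 2, 50e-9

v_sin = collapse_voltage(k, area, gap, 200e-9, 7.0)    # PECVD SixNy, assumed eps_r
v_hfo2 = collapse_voltage(k, area, gap, 100e-9, 17.0)  # ALD HfO2, assumed eps_r
ratio = v_hfo2 / v_sin                                 # ~0.6 with these toy numbers
```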
NASA Technical Reports Server (NTRS)
Polotzky, Anthony S.; Wieseman, Carol; Hoadley, Sherwood Tiffany; Mukhopadhyay, Vivek
1990-01-01
The development of a controller performance evaluation (CPE) methodology for multi-input/multi-output (MIMO) digital control systems is described. The equations used to obtain the open-loop plant, controller transfer matrices, and return-difference matrices are given. Results of applying the CPE methodology to evaluate MIMO digital flutter suppression systems being tested on an active flexible wing wind-tunnel model are presented to demonstrate the CPE capability.
Real-time motion compensation for EM bronchoscope tracking with smooth output - ex-vivo validation
NASA Astrophysics Data System (ADS)
Reichl, Tobias; Gergel, Ingmar; Menzel, Manuela; Hautmann, Hubert; Wegner, Ingmar; Meinzer, Hans-Peter; Navab, Nassir
2012-02-01
Navigated bronchoscopy provides benefits for endoscopists and patients, but accurate tracking information is needed. We present a novel real-time approach for bronchoscope tracking combining electromagnetic (EM) tracking, airway segmentation, and a continuous model of output. We augment a previously published approach by including segmentation information in the tracking optimization instead of image similarity. Thus, the new approach is feasible in real-time. Since the true bronchoscope trajectory is continuous, the output is modeled using splines and the control points are optimized with respect to displacement from EM tracking measurements and spatial relation to segmented airways. Accuracy of the proposed method and its components is evaluated on a ventilated porcine ex-vivo lung with respect to ground truth data acquired from a human expert. We demonstrate the robustness of the output of the proposed method against added artificial noise in the input data. Smoothness in terms of inter-frame distance is shown to remain below 2 mm, even when up to 5 mm of Gaussian noise are added to the input. The approach is shown to be easily extensible to include other measures like image similarity.
ERIC Educational Resources Information Center
Ahl, David H.
1985-01-01
The "College Explorer" is a software package (for the 64K Apple II, IBM PC, TRS-80 model III and 4 microcomputers) which aids in choosing a college. The major features of this package (manufactured by The College Board) are described and evaluated. Sample input/output is included. (JN)
COULD ETHINYL ESTRADIOL AFFECT THE POPULATION BIOLOGY OF CUNNER, TAUTOGOLABRUS ADSPERSUS
Endocrine disrupting chemicals in the environment may disturb the population dynamics of wildlife by affecting the reproductive output and embryonic development of organisms. This study used a population model to evaluate whether ethinyl estradiol (EE2) could affect cunner Tautogolabr...
Evaluation of Flood Forecast and Warning in Elbe river basin - Impact of Forecaster's Strategy
NASA Astrophysics Data System (ADS)
Danhelka, Jan; Vlasak, Tomas
2010-05-01
The Czech Hydrometeorological Institute (CHMI) is responsible for flood forecasting and warning in the Czech Republic. To this end, CHMI operates hydrological forecasting systems and publishes flow forecasts for selected profiles. A flood forecast and warning is the output of a system that links observation (flow and atmosphere), data processing, weather forecasting (especially NWP QPF), hydrological modeling, and the evaluation and interpretation of modeled outputs by a forecaster. Forecast users are interested in the final output, without separating the uncertainties of the individual steps of the described process. Therefore, an evaluation of the final operational forecasts produced by the AquaLog forecasting system during the period 2002 to 2008 was done for profiles within the Elbe river basin. The effects of uncertainties in observation, data processing, and especially meteorological forecasts were not accounted for separately. Forecasting the exceedance of flood levels (peak over threshold) during the forecast period was the main criterion, as forecasting the rise in flow is of the highest importance. Other evaluation criteria included peak flow and volume differences. In addition, the Nash-Sutcliffe efficiency was computed separately for each time step (1 to 48 h) of the forecast period to identify its change with lead time. Textual flood warnings are issued for administrative regions to initiate flood protection actions when a flood threatens. The flood warning hit rate was evaluated at the regional and national levels. The evaluation found significant differences in model forecast skill between forecasting profiles; in particular, lower skill was found at small headwater basins, where QPF uncertainty dominates. The average hit rate was 0.34 (miss rate = 0.33, false alarm rate = 0.32). However, the observed spatial differences are likely influenced also by the different fit of the parameter sets (due to different basin characteristics) and, importantly, by the differing impact of the human factor.
The results suggest that the practice of interactive model operation, experience, and forecasting strategy differ between the responsible forecasting offices. Warning is based on the interpretation of model outputs by a hydrologist-forecaster. The warning hit rate reached 0.60 for a threshold set to the lowest flood stage, of which 0.11 was underestimation of the flood degree (miss 0.22, false alarm 0.28). The critical success index of the model forecast was 0.34, while the same criterion for the warnings reached 0.55. We assume that the increase is attributable not only to the change of scale from a single forecasting point to a region for warning, but partly also to the forecaster's added value. There is no officially preferred warning strategy in the Czech Republic (e.g., tolerance of a higher false alarm rate); therefore, the forecaster's decision and personal strategy are of great importance. The results show quite successful warning for 1st flood level exceedance, over-warning for the 2nd flood level, but under-warning for the 3rd (highest) flood level. This suggests a general forecaster preference for medium-level warnings (the 2nd flood level is legally determined to be the start of a flood and of flood protection activities). In conclusion, a human forecaster's experience and analytical skill increase flood warning performance notably. However, societal preferences should be specifically addressed in the definition of the warning strategy to support the forecaster's decision making.
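The scores quoted above are standard categorical verification metrics computed from a 2x2 warning contingency table. A sketch with hypothetical event counts, chosen only so that the critical success index lands near the reported ~0.55 (the paper reports rates over all events, not raw counts):

```python
def warning_scores(hits, misses, false_alarms):
    """Categorical verification scores from a 2x2 warning contingency table."""
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false alarm ratio
    csi = hits / (hits + misses + false_alarms)  # critical success index
    return pod, far, csi

# Hypothetical counts, not the paper's data
pod, far, csi = warning_scores(hits=60, misses=22, false_alarms=28)
```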
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sikora, R.; Chady, T.; Baniukiewicz, P.
2010-02-22
Nondestructive testing and evaluation are under continuous development. Current research concentrates on three main topics: advancement of existing methods, introduction of novel methods, and development of artificial intelligence systems for automatic defect recognition (ADR). An automatic defect classification algorithm comprises two main tasks: creating a defect database and preparing a defect classifier. Here, the database was built using defect features that describe all geometrical and texture properties of a defect. Almost twenty carefully selected features, calculated for flaws extracted from real radiograms, were used. The radiograms were obtained from the shipbuilding industry and were verified by a qualified operator. Two weld defect classifiers based on artificial neural networks were proposed and compared. The first model consisted of one neural network, where each output neuron corresponded to a different defect group. The second model contained five neural networks; each had one output neuron and was responsible for detecting defects from one group. In order to evaluate the effectiveness of the neural network classifiers, mean square errors were calculated for test radiograms and compared.
NASA Astrophysics Data System (ADS)
Sikora, R.; Chady, T.; Baniukiewicz, P.; Caryk, M.; Piekarczyk, B.
2010-02-01
Nondestructive testing and evaluation are under continuous development. Current research concentrates on three main topics: advancement of existing methods, introduction of novel methods, and development of artificial intelligence systems for automatic defect recognition (ADR). An automatic defect classification algorithm comprises two main tasks: creating a defect database and preparing a defect classifier. Here, the database was built using defect features that describe all geometrical and texture properties of a defect. Almost twenty carefully selected features, calculated for flaws extracted from real radiograms, were used. The radiograms were obtained from the shipbuilding industry and were verified by a qualified operator. Two weld defect classifiers based on artificial neural networks were proposed and compared. The first model consisted of one neural network, where each output neuron corresponded to a different defect group. The second model contained five neural networks; each had one output neuron and was responsible for detecting defects from one group. In order to evaluate the effectiveness of the neural network classifiers, mean square errors were calculated for test radiograms and compared.
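The evaluation criterion named in both records above — mean square error between desired group labels and classifier outputs — can be sketched as follows (the five-group one-hot encoding matches the description; the numeric outputs are illustrative, not the study's data):

```python
# Mean square error over one-hot targets and network outputs, the criterion
# used to compare the two weld-defect classifier architectures.
def mse(targets, outputs):
    n = sum(len(t) for t in targets)
    return sum((ti - oi) ** 2
               for t, o in zip(targets, outputs)
               for ti, oi in zip(t, o)) / n

# Five defect groups -> one-hot targets; outputs from a (hypothetical) network.
targets = [[1, 0, 0, 0, 0], [0, 0, 1, 0, 0]]
outputs = [[0.9, 0.05, 0.0, 0.03, 0.02], [0.1, 0.1, 0.7, 0.05, 0.05]]
err = mse(targets, outputs)
```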
Evaluation of extra virgin olive oil stability by artificial neural network.
Silva, Simone Faria; Anjos, Carlos Alberto Rodrigues; Cavalcanti, Rodrigo Nunes; Celeghini, Renata Maria dos Santos
2015-07-15
The stability of extra virgin olive oil in polyethylene terephthalate (PET) bottles and tinplate cans stored for 6 months under dark and light conditions was evaluated. The following analyses were carried out: free fatty acids, peroxide value, specific extinction at 232 and 270 nm, chlorophyll, L*C*h color, total phenolic compounds, tocopherols and squalene. The physicochemical changes were evaluated by artificial neural network (ANN) modeling with respect to light exposure conditions and packaging material. The optimized ANN structure consists of 11 input neurons, 18 hidden neurons and 5 output neurons, using hyperbolic tangent and softmax activation functions in the hidden and output layers, respectively. The five output neurons correspond to five possible classifications according to packaging material (PET amber, PET transparent and tinplate can) and light exposure (dark and light storage). The predicted physicochemical changes agreed very well with the experimental data, showing high classification accuracy for the test (>90%) and training (>85%) sets. Sensitivity analysis showed that free fatty acid content, peroxide value, L*C*h color parameters, tocopherol and chlorophyll contents were the physicochemical attributes with the most discriminative power. Copyright © 2015 Elsevier Ltd. All rights reserved.
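The network topology described above (11 inputs, 18 tanh hidden units, 5 softmax outputs) can be sketched as a forward pass. The weights below are random placeholders, not the trained model:

```python
import numpy as np

# Forward pass matching the reported ANN topology:
# 11 inputs -> 18 hidden (hyperbolic tangent) -> 5 outputs (softmax).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(18, 11)), np.zeros(18)   # placeholder weights
W2, b2 = rng.normal(size=(5, 18)), np.zeros(5)

def softmax(z):
    e = np.exp(z - z.max())        # shift for numerical stability
    return e / e.sum()

def classify(x):
    h = np.tanh(W1 @ x + b1)       # hidden layer, tanh activation
    return softmax(W2 @ h + b2)    # probabilities over the 5 classes

p = classify(rng.normal(size=11))  # e.g. 11 physicochemical measurements
```

The softmax output sums to one, so the five values can be read directly as class probabilities for the packaging/light-exposure categories.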
Salmon, P; Williamson, A; Lenné, M; Mitsopoulos-Rubens, E; Rudin-Brown, C M
2010-08-01
Safety-compromising accidents occur regularly in the led outdoor activity domain. Formal accident analysis is an accepted means of understanding such events and improving safety. Despite this, there remains no universally accepted framework for collecting and analysing accident data in the led outdoor activity domain. This article presents an application of Rasmussen's risk management framework to the analysis of the Lyme Bay sea canoeing incident. This involved the development of an Accimap, the outputs of which were used to evaluate seven predictions made by the framework. The Accimap output was also compared to an analysis using an existing model from the led outdoor activity domain. In conclusion, the Accimap output was found to be more comprehensive and supported all seven of the risk management framework's predictions, suggesting that it shows promise as a theoretically underpinned approach for analysing, and learning from, accidents in the led outdoor activity domain. STATEMENT OF RELEVANCE: Accidents represent a significant problem within the led outdoor activity domain. This article presents an evaluation of a risk management framework that can be used to understand such accidents and to inform the development of accident countermeasures and mitigation strategies for the led outdoor activity domain.
Numerical considerations in the development and implementation of constitutive models
NASA Technical Reports Server (NTRS)
Haisler, W. E.; Imbrie, P. K.
1985-01-01
Several unified constitutive models were tested in uniaxial form by specifying input strain histories and comparing output stress histories. The purpose of the tests was to evaluate several time integration methods with regard to accuracy, stability, and computational economy. The sensitivity of the models to slight changes in input constants was also investigated. Results are presented for IN100 at 1350 °F and Hastelloy-X at 1800 °F.
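The stability issue at the heart of such integrator comparisons can be illustrated on a stiff linear test equation dy/dt = -λy (a stand-in, not the actual unified constitutive equations): for the same step size, an explicit scheme can blow up where an implicit one remains stable.

```python
# Explicit vs. implicit (backward) Euler on dy/dt = -lam * y.
def explicit_euler(lam, y0, dt, steps):
    y = y0
    for _ in range(steps):
        y += dt * (-lam * y)          # forward Euler update
    return y

def implicit_euler(lam, y0, dt, steps):
    y = y0
    for _ in range(steps):
        y = y / (1 + lam * dt)        # backward Euler has a closed-form update here
    return y

# With lam*dt = 5 (> 2), the explicit scheme is unstable; the implicit decays.
ye = explicit_euler(lam=100.0, y0=1.0, dt=0.05, steps=50)
yi = implicit_euler(lam=100.0, y0=1.0, dt=0.05, steps=50)
```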
Test and evaluation of the Navy half-watt RTG. [Radioisotope Thermoelectric Generator
NASA Technical Reports Server (NTRS)
Rosell, F. E., Jr.; Lane, S. D.; Eggers, P. E.; Gawthrop, W. E.; Rouklove, P. G.; Truscello, V. C.
1976-01-01
The radioisotope thermoelectric generator (RTG) considered is to provide a continuous minimum power output of 0.5 watt at 6.0 to 8.5 volts for a minimum period of 15 years. The mechanical-electrical evaluation phase involved shock and vibration tests. The thermochemical-physical evaluation phase consisted of an analysis of the materials and the development of a thermal model. The thermoelectric evaluation phase included accelerated testing of the thermoelectric modules.
Henne, Erik; Kesten, Steven; Herth, Felix J F
2013-01-01
A method of achieving endoscopic lung volume reduction for emphysema has been developed that utilizes precise amounts of thermal energy, in the form of water vapor, to ablate lung tissue. This study evaluates the energy output and implications of the commercial InterVapor system and compares it to the clinical trial system. Two methods of evaluating the energy output of the vapor systems were used: a direct energy measurement and a quantification of the resultant thermal profile in a lung model. Direct measurement of total energy and of the component attributable to gas (vapor energy) was performed by condensing vapor in a water bath and measuring the temperature and mass changes. Infrared images of a lung model were taken after vapor delivery, and the images were quantified to characterize the thermal profile. The total energy and vapor energy of the InterVapor system were measured at various dose levels and compared to the clinical trial system at a dose of 10.0 cal/g. An InterVapor dose of 8.5 cal/g was found to have the most similar vapor energy output with the smallest associated reduction in total energy. This was supported by characterization of the thermal profile in the lung model, which demonstrated that the profile of InterVapor at 8.5 cal/g did not exceed that of the clinical trial system. Considering both total energy and vapor energy is important during the development of clinical vapor applications. For InterVapor, a closer study of both energy types justified a reduced target vapor-dosing range for lung volume reduction. The clinical implication is a potential improvement in the risk profile. Copyright © 2013 S. Karger AG, Basel.
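The measurement principle described above — condensing vapor into a water bath and inferring delivered energy from the bath's temperature rise and mass gain — amounts to simple calorimetry. A sketch with wholly hypothetical numbers (not the study's measurements):

```python
# Calorimetric estimate of energy delivered by condensed vapor.
# The bath absorbs both the latent heat and the sensible heat of the vapor,
# so the total shows up as warming of (bath water + condensate).
C_WATER = 1.0  # specific heat of water, cal/(g * degC)

def total_energy_cal(bath_mass_g, d_temp_c, condensed_mass_g):
    """Energy absorbed by the bath, from its temperature rise and mass gain."""
    return (bath_mass_g + condensed_mass_g) * C_WATER * d_temp_c

# Hypothetical run: 500 g bath warms by 1.7 degC after condensing 1 g of vapor.
E = total_energy_cal(bath_mass_g=500.0, d_temp_c=1.7, condensed_mass_g=1.0)
dose_cal_per_g = E / 1.0   # energy per gram of vapor delivered
```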
NASA Astrophysics Data System (ADS)
Kapanen, Mika; Tenhunen, Mikko; Hämäläinen, Tuomo; Sipilä, Petri; Parkkinen, Ritva; Järvinen, Hannu
2006-07-01
Quality control (QC) data of radiotherapy linear accelerators, collected by Helsinki University Central Hospital between the years 2000 and 2004, were analysed. The goal was to provide information for the evaluation and elaboration of QC of accelerator outputs and to propose a method for QC data analysis. Short- and long-term drifts in outputs were quantified by fitting empirical mathematical models to the QC measurements. Normally, long-term drifts were well modelled (<=1%) by either a straight line or a single-exponential function. A drift of 2% occurred in 18 ± 12 months. The shortest drift times, of only 2-3 months, were observed for some new accelerators just after commissioning, but these accelerators stabilized during their first 2-3 years. The short-term reproducibility and the long-term stability of local constancy checks, carried out with a sealed plane-parallel ion chamber, were also estimated by fitting empirical models to the QC measurements. The reproducibility was 0.2-0.5%, depending on the positioning practice of a device. Long-term instabilities of about 0.3%/month were observed for some checking devices. The reproducibility of local absorbed dose measurements was estimated to be about 0.5%. The proposed empirical model fitting of QC data facilitates the recognition of erroneous QC measurements and of abnormal output behaviour caused by malfunctions, offering a tool to improve dose control.
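The linear-drift case of the proposed model fitting can be sketched on synthetic data: fit a straight line to monthly output deviations and read off the time to reach a 2% drift. The drift rate and noise level below are invented for illustration (chosen near the paper's reported mean of a 2% drift in about 18 months):

```python
import numpy as np

# Synthetic monthly QC output deviations (%): ~0.11 %/month drift + 0.2 % noise.
rng = np.random.default_rng(1)
months = np.arange(24.0)
dev = 0.11 * months + rng.normal(0.0, 0.2, months.size)

# Empirical linear drift model fitted to the QC series.
slope, intercept = np.polyfit(months, dev, 1)   # %/month, %
months_to_2pct = 2.0 / slope                    # time until a 2% drift
```

Large residuals from such a fit would flag erroneous QC measurements or abnormal output behaviour, as the abstract proposes.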
Environmental impact analysis with the airspace concept evaluation system
NASA Technical Reports Server (NTRS)
Augustine, Stephen; Capozzi, Brian; DiFelici, John; Graham, Michael; Thompson, Terry; Miraflor, Raymond M. C.
2005-01-01
The National Aeronautics and Space Administration (NASA) Ames Research Center has developed the Airspace Concept Evaluation System (ACES), which is a fast-time simulation tool for evaluating Air Traffic Management (ATM) systems. This paper describes linking to ACES a capability that can analyze the environmental impact of proposed future ATM systems. This provides the ability to quickly evaluate metrics associated with environmental impacts of aviation for inclusion in multi-dimensional cost-benefit analysis of concepts for evolution of the National Airspace System (NAS) over the next several decades. The methodology used here may be summarized as follows: 1) Standard Federal Aviation Administration (FAA) noise and emissions-inventory models, the Noise Impact Routing System (NIRS) and the Emissions and Dispersion Modeling System (EDMS), respectively, are linked to ACES simulation outputs; 2) appropriate modifications are made to ACES outputs to incorporate all information needed by the environmental models (e.g., specific airframe and engine data); 3) noise and emissions calculations are performed for all traffic and airports in the study area for each of several scenarios, as simulated by ACES; and 4) impacts of future scenarios are compared to the current NAS baseline scenario. This paper also provides the results of initial end-to-end, proof-of-concept runs of the integrated ACES and environmental-modeling capability. These preliminary results demonstrate the feasibility of the integrated capability for assessing whether NAS growth is likely to be impeded by significant environmental impacts that could negatively affect communities throughout the nation.
NASA Astrophysics Data System (ADS)
Ireland, Gareth; Petropoulos, George P.; Carlson, Toby N.; Purdy, Sarah
2015-04-01
Sensitivity analysis (SA) is an integral and important validation check of a computer simulation model before it is used to perform any kind of analysis. In the present work, we present results from a SA performed on the SimSphere Soil Vegetation Atmosphere Transfer (SVAT) model, utilising a cutting-edge and robust Global Sensitivity Analysis (GSA) approach based on the Gaussian Emulation Machine for Sensitivity Analysis (GEM-SA) tool. The sensitivity of the following model outputs was evaluated: the ambient CO2 concentration and the rate of CO2 uptake by the plant, the ambient O3 concentration, the flux of O3 from the air to the plant/soil boundary, and the flux of O3 taken up by the plant alone. The most sensitive model inputs for the majority of model outputs were related to the structural properties of vegetation, namely the Leaf Area Index, Fractional Vegetation Cover, Cuticle Resistance and Vegetation Height. The external CO2 concentration in the leaf and the O3 concentration in the air also exhibited significant influence on model outputs. This work presents a very important step towards an all-inclusive evaluation of SimSphere. Indeed, results from this study contribute decisively towards establishing its capability as a useful teaching and research tool in modelling Earth's land surface interactions. This is of considerable importance in light of the rapidly expanding use of this model worldwide, which includes research conducted by various Space Agencies examining its synergistic use with Earth Observation data towards the development of operational products at a global scale. This research was supported by the European Commission Marie Curie Re-Integration Grant "TRANSFORM-EO". SimSphere is currently maintained and freely distributed by the Department of Geography and Earth Sciences at Aberystwyth University (http://www.aber.ac.uk/simsphere).
Keywords: CO2 flux, ambient CO2, O3 flux, SimSphere, Gaussian process emulators, BACCO GEM-SA, TRANSFORM-EO.
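The emulation idea underpinning GEM-SA — replace an expensive simulator with a Gaussian-process surrogate trained on a handful of runs — can be sketched in a few lines. This is a generic, minimal GP (posterior mean only, squared-exponential kernel, noise-free toy data standing in for SVAT model runs), not the BACCO GEM-SA implementation:

```python
import numpy as np

# Minimal Gaussian-process emulator: posterior mean via kernel interpolation.
def k(a, b, ell=1.0):
    """Squared-exponential covariance between 1-D point sets a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

x_train = np.array([0.0, 1.0, 2.0, 3.0])   # design points (model inputs)
y_train = np.sin(x_train)                  # stand-in for simulator outputs

K = k(x_train, x_train) + 1e-10 * np.eye(x_train.size)  # jitter for stability
alpha = np.linalg.solve(K, y_train)

def emulate(x_new):
    """GP posterior mean at new input(s); cheap substitute for the simulator."""
    return k(np.atleast_1d(x_new), x_train) @ alpha

y_hat = emulate(1.5)[0]
```

Because the training data are noise-free, the emulator interpolates the simulator runs exactly (up to the jitter), which is the standard setting for deterministic-model emulation.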
Quang V. Cao
2010-01-01
Individual-tree models are flexible and can perform well in predicting tree survival and diameter growth for a certain growing period. However, the resulting stand-level outputs often suffer from accumulation of errors and subsequently cannot compete with predictions from whole-stand models, especially when the projection period lengthens. Evaluated in this study were...
Evaluation of Three Models for Simulating Pesticide Runoff from Irrigated Agricultural Fields.
Zhang, Xuyang; Goh, Kean S
2015-11-01
Three models were evaluated for their accuracy in simulating pesticide runoff at the edge of agricultural fields: Pesticide Root Zone Model (PRZM), Root Zone Water Quality Model (RZWQM), and OpusCZ. Modeling results on runoff volume, sediment erosion, and pesticide loss were compared with measurements taken from field studies. The models were also compared on their theoretical foundations and ease of use. For runoff events generated by sprinkler irrigation and rainfall, all models performed equally well, with small errors in simulating water, sediment, and pesticide runoff; the mean absolute percentage errors (MAPEs) were between 3 and 161%. For flood irrigation, OpusCZ simulated runoff and pesticide mass with the highest accuracy, followed by RZWQM and PRZM, likely owing to its unique hydrological algorithm for runoff simulation during flood irrigation. Simulation results from cold model runs by OpusCZ and RZWQM, using measured values for model inputs, matched the observed values closely; the MAPE ranged from 28 to 384% and from 42 to 168% for OpusCZ and RZWQM, respectively. These satisfactory model outputs showed the models' ability to mimic reality. Theoretical evaluations indicated that OpusCZ and RZWQM use mechanistic approaches for hydrology simulation, output data on a subdaily time step, and are able to simulate management practices and subsurface flow via tile drainage. In contrast, PRZM operates at a daily time step and simulates surface runoff using the USDA Soil Conservation Service's curve number method. Among the three models, OpusCZ and RZWQM were suitable for simulating pesticide runoff in semiarid areas where agriculture is heavily dependent on irrigation. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
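The comparison metric used throughout the study, mean absolute percentage error, is straightforward to compute. The runoff values below are purely illustrative, not the study's measurements:

```python
# Mean absolute percentage error (MAPE) between observed and simulated values.
def mape(observed, simulated):
    return 100.0 * sum(abs((o - s) / o)
                       for o, s in zip(observed, simulated)) / len(observed)

# Hypothetical per-event runoff volumes (observed vs. model-simulated).
runoff_obs = [12.0, 8.5, 20.0]
runoff_sim = [10.0, 9.0, 26.0]
err = mape(runoff_obs, runoff_sim)
```

Note that MAPE is undefined when an observed value is zero, which is one reason such studies often report it per runoff event rather than per time step.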
NASA Technical Reports Server (NTRS)
Rinehart, Aidan W.; Simon, Donald L.
2015-01-01
This paper presents a model-based architecture for performance trend monitoring and gas path fault diagnostics designed for analyzing streaming transient aircraft engine measurement data. The technique analyzes residuals between sensed engine outputs and model-predicted outputs for fault detection and isolation purposes. Diagnostic results from the application of the approach to test data acquired from an aircraft turbofan engine are presented. The approach is found to avoid false alarms when presented with nominal fault-free data. Additionally, the approach is found to successfully detect and isolate gas path seeded faults under steady-state operating scenarios, although some fault misclassifications are noted during engine transients. Recommendations for follow-on maturation and evaluation of the technique are also presented.
NASA Technical Reports Server (NTRS)
Rinehart, Aidan W.; Simon, Donald L.
2014-01-01
This paper presents a model-based architecture for performance trend monitoring and gas path fault diagnostics designed for analyzing streaming transient aircraft engine measurement data. The technique analyzes residuals between sensed engine outputs and model-predicted outputs for fault detection and isolation purposes. Diagnostic results from the application of the approach to test data acquired from an aircraft turbofan engine are presented. The approach is found to avoid false alarms when presented with nominal fault-free data. Additionally, the approach is found to successfully detect and isolate gas path seeded faults under steady-state operating scenarios, although some fault misclassifications are noted during engine transients. Recommendations for follow-on maturation and evaluation of the technique are also presented.
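The residual-analysis idea described in both records reduces, in its simplest form, to thresholding the difference between sensed and model-predicted outputs. A minimal sketch (signals and threshold are invented for illustration; the actual architecture is more elaborate):

```python
# Flag samples where the sensor-vs-model residual exceeds a threshold.
def detect_fault(sensed, predicted, threshold):
    residuals = [s - p for s, p in zip(sensed, predicted)]
    return [abs(r) > threshold for r in residuals]

# Hypothetical engine output stream (e.g. a spool speed in rpm) vs. model.
sensed    = [1000.0, 1002.0, 1040.0, 1001.0]
predicted = [1001.0, 1001.5, 1002.0, 1000.5]
flags = detect_fault(sensed, predicted, threshold=10.0)
```

Setting the threshold well above the nominal residual scatter is what lets such a detector avoid false alarms on fault-free data, at the cost of sensitivity during transients.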
Estimating the average length of hospitalization due to pneumonia: a fuzzy approach.
Nascimento, L F C; Rizol, P M S R; Peneluppi, A P
2014-08-29
Exposure to air pollutants is associated with hospitalizations due to pneumonia in children. We hypothesized the length of hospitalization due to pneumonia may be dependent on air pollutant concentrations. Therefore, we built a computational model using fuzzy logic tools to predict the mean time of hospitalization due to pneumonia in children living in São José dos Campos, SP, Brazil. The model was built with four inputs related to pollutant concentrations and effective temperature, and the output was related to the mean length of hospitalization. Each input had two membership functions and the output had four membership functions, generating 16 rules. The model was validated against real data, and a receiver operating characteristic (ROC) curve was constructed to evaluate model performance. The values predicted by the model were significantly correlated with real data. Sulfur dioxide and particulate matter significantly predicted the mean length of hospitalization in lags 0, 1, and 2. This model can contribute to the care provided to children with pneumonia.
Estimating the average length of hospitalization due to pneumonia: a fuzzy approach.
Nascimento, L F C; Rizol, P M S R; Peneluppi, A P
2014-11-01
Exposure to air pollutants is associated with hospitalizations due to pneumonia in children. We hypothesized the length of hospitalization due to pneumonia may be dependent on air pollutant concentrations. Therefore, we built a computational model using fuzzy logic tools to predict the mean time of hospitalization due to pneumonia in children living in São José dos Campos, SP, Brazil. The model was built with four inputs related to pollutant concentrations and effective temperature, and the output was related to the mean length of hospitalization. Each input had two membership functions and the output had four membership functions, generating 16 rules. The model was validated against real data, and a receiver operating characteristic (ROC) curve was constructed to evaluate model performance. The values predicted by the model were significantly correlated with real data. Sulfur dioxide and particulate matter significantly predicted the mean length of hospitalization in lags 0, 1, and 2. This model can contribute to the care provided to children with pneumonia.
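The model structure described above (two membership functions per input, rules fired by combining antecedent memberships) can be sketched with a triangular membership function. The shapes, pollutant values and units below are hypothetical, not the fitted model:

```python
# Triangular fuzzy membership function and a single min-combined rule,
# in the spirit of the pneumonia length-of-stay model described above.
def tri(x, a, b, c):
    """Membership rising from a to a peak at b, falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Rule: IF SO2 is "high" AND PM10 is "high" THEN stay is "long".
# Firing strength = min of the antecedent memberships (Mamdani-style).
strength = min(tri(30.0, 10.0, 50.0, 90.0),    # SO2 "high" at 30 ug/m3
               tri(60.0, 20.0, 80.0, 140.0))   # PM10 "high" at 60 ug/m3
```

With two membership functions on each of four inputs, enumerating all antecedent combinations gives the 16 rules mentioned in the abstract.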
Identification of Linear and Nonlinear Sensory Processing Circuits from Spiking Neuron Data.
Florescu, Dorian; Coca, Daniel
2018-03-01
Inferring mathematical models of sensory processing systems directly from input-output observations, while making the fewest assumptions about the model equations and the types of measurements available, is still a major issue in computational neuroscience. This letter introduces two new approaches for identifying sensory circuit models consisting of linear and nonlinear filters in series with spiking neuron models, based only on the sampled analog input to the filter and the recorded spike train output of the spiking neuron. For an ideal integrate-and-fire neuron model, the first algorithm can identify the spiking neuron parameters as well as the structure and parameters of an arbitrary nonlinear filter connected to it. The second algorithm can identify the parameters of the more general leaky integrate-and-fire spiking neuron model, as well as the parameters of an arbitrary linear filter connected to it. Numerical studies involving simulated and real experimental recordings are used to demonstrate the applicability and evaluate the performance of the proposed algorithms.
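The leaky integrate-and-fire model that the second algorithm identifies can be simulated in a few lines (Euler discretisation; the parameters are illustrative defaults, not identified values):

```python
# Leaky integrate-and-fire neuron: membrane voltage integrates input current
# with leak; a spike is emitted and the voltage reset at threshold crossing.
def lif_spike_times(current, dt=0.001, tau=0.02, r=1.0, v_th=1.0, v_reset=0.0):
    v, t, spikes = 0.0, 0.0, []
    for i in current:
        v += dt / tau * (-v + r * i)   # leaky integration step
        t += dt
        if v >= v_th:                  # threshold crossing -> spike + reset
            spikes.append(t)
            v = v_reset
    return spikes

# 200 ms of constant suprathreshold input yields a regular spike train.
spikes = lif_spike_times([2.0] * 200)
```

The identification problem in the letter is the inverse of this simulation: recover tau, r, and the threshold (plus an upstream filter) from the analog input and the recorded spike times alone.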
Updated Model of the Solar Energetic Proton Environment in Space
NASA Astrophysics Data System (ADS)
Jiggens, Piers; Heynderickx, Daniel; Sandberg, Ingmar; Truscott, Pete; Raukunen, Osku; Vainio, Rami
2018-05-01
The Solar Accumulated and Peak Proton and Heavy Ion Radiation Environment (SAPPHIRE) model provides environment specification outputs for all aspects of the Solar Energetic Particle (SEP) environment. The model is based upon a thoroughly cleaned and carefully processed data set. Herein the evolution of the solar proton model is discussed with comparisons to other models and data. This paper discusses the construction of the underlying data set, the modelling methodology, optimisation of fitted flux distributions and extrapolation of model outputs to cover a range of proton energies from 0.1 MeV to 1 GeV. The model provides outputs in terms of mission cumulative fluence, maximum event fluence and peak flux for both solar maximum and solar minimum periods. A new method for describing maximum event fluence and peak flux outputs in terms of 1-in-x-year SPEs is also described. SAPPHIRE proton model outputs are compared with previous models including CREME96, ESP-PSYCHIC and the JPL model. Low energy outputs are compared to SEP data from ACE/EPAM whilst high energy outputs are compared to a new model based on GLEs detected by Neutron Monitors (NMs).
Hodson, Nicholas A; Dunne, Stephen M; Pankhurst, Caroline L
2005-04-01
Dental curing lights are vulnerable to contamination with oral fluids during routine intra-oral use. This controlled study aimed to evaluate whether disposable transparent barriers placed over the light-guide tip would affect light output intensity or the subsequent depth of cure of a composite restoration. The impact on light intensity emitted from high-, medium- and low-output light-cure units in the presence of two commercially available disposable infection-control barriers was evaluated against a no-barrier control. Power density measurements from the three light-cure units were recorded with a radiometer, converted to a digital image using an intra-oral camera, and values were determined using a commercial computer program. For each curing unit, the measurements were repeated on ten separate occasions with each barrier and the control. Depth of cure was evaluated using a scrape test in a natural tooth model. At each level of light output, the two disposable barriers produced a significant reduction in the mean power density readings compared to the no-barrier control (P<0.005). The cure sleeve inhibited light output to a greater extent than either the cling film or the control (P<0.005). Only composite restorations light-activated by the high-output unit demonstrated a small but significant decrease in depth of cure compared to the control (P<0.05). Placing disposable barriers over the light-guide tip reduced the light intensity from all three curing lights. There was no impact on depth of cure except for the high-output light, where a small decrease in cure depth was noted, but this was not considered clinically significant. Disposable barriers can be recommended for use with light-curing units.
Assessment of the Uniqueness of Wind Tunnel Strain-Gage Balance Load Predictions
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2016-01-01
A new test was developed to assess the uniqueness of wind tunnel strain-gage balance load predictions that are obtained from regression models of calibration data. The test helps balance users to gain confidence in load predictions of non-traditional balance designs. It also makes it possible to better evaluate load predictions of traditional balances that are not used as originally intended. The test works for both the Iterative and Non-Iterative Methods that are used in the aerospace testing community for the prediction of balance loads. It is based on the hypothesis that the total number of independently applied balance load components must always match the total number of independently measured bridge outputs or bridge output combinations. This hypothesis is supported by a control volume analysis of the inputs and outputs of a strain-gage balance. It is concluded from the control volume analysis that the loads and bridge outputs of a balance calibration data set must separately be tested for linear independence because it cannot always be guaranteed that a linearly independent load component set will result in linearly independent bridge output measurements. Simple linear math models for the loads and bridge outputs in combination with the variance inflation factor are used to test for linear independence. A highly unique and reversible mapping between the applied load component set and the measured bridge output set is guaranteed to exist if the maximum variance inflation factor of both sets is less than the literature-recommended threshold of five. Data from the calibration of a six-component force balance is used to illustrate the application of the new test to real-world data.
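The variance inflation factor used in the independence test above is computed by regressing each column of a data set on the others: VIF_j = 1/(1 - R_j^2). A generic sketch on synthetic columns (not balance calibration data):

```python
import numpy as np

# Variance inflation factor for each column of X, via OLS of column j
# on the remaining columns (with an intercept).
def vif(X):
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        A = np.column_stack([np.delete(X, j, axis=1), np.ones(len(y))])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        out.append(1.0 / (1.0 - r2))
    return out

# Three nearly independent synthetic "load components".
rng = np.random.default_rng(2)
loads = rng.normal(size=(100, 3))
vifs = vif(loads)
```

For independent columns the VIFs sit near 1; by the criterion quoted above, a maximum VIF below five (for both the load set and the bridge-output set) supports a unique, reversible load/output mapping.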
Transient multivariable sensor evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vilim, Richard B.; Heifetz, Alexander
A method and system for performing transient multivariable sensor evaluation. The method and system include a computer system for identifying a model form, providing training measurement data, generating a basis vector, monitoring system data from a sensor, loading the system data into non-transient memory, performing an estimation to provide desired data, comparing the system data to the desired data, and outputting an alarm for a defective sensor.
Modelling and simulation of fuel cell dynamics for electrical energy usage of Hercules airplanes.
Radmanesh, Hamid; Heidari Yazdi, Seyed Saeid; Gharehpetian, G B; Fathi, S H
2014-01-01
The dynamics of proton exchange membrane fuel cells (PEMFC) with a hydrogen storage system for generating part of a Hercules airplane's electrical energy are presented. The feasibility of using a fuel cell (FC) for this airplane is evaluated by means of simulations. Temperature change and double-layer capacitance effects are considered in all simulations. Using a three-level, 3-phase inverter, the FC's output voltage is connected to the essential bus of the airplane; alternatively, it can be connected to the airplane's DC bus. A PID controller is presented to control the flow of hydrogen and oxygen to the FC and to improve the transient and steady-state responses of the output voltage to load disturbances. The FC's output voltage is regulated via an ultracapacitor. Simulations are carried out in MATLAB/SIMULINK, and results show that the load tracking and output voltage regulation are acceptable. The proposed system utilizes an electrolyser to generate hydrogen and a tank for storage; therefore, there is no need for batteries. Moreover, the generated oxygen could be used in other applications in the airplane.
Modelling and Simulation of Fuel Cell Dynamics for Electrical Energy Usage of Hercules Airplanes
Radmanesh, Hamid; Heidari Yazdi, Seyed Saeid; Gharehpetian, G. B.; Fathi, S. H.
2014-01-01
The dynamics of proton exchange membrane fuel cells (PEMFC) with a hydrogen storage system for generating part of a Hercules airplane's electrical energy are presented. The feasibility of using a fuel cell (FC) for this airplane is evaluated by means of simulations. Temperature change and double-layer capacitance effects are considered in all simulations. Using a three-level, 3-phase inverter, the FC's output voltage is connected to the essential bus of the airplane; alternatively, it can be connected to the airplane's DC bus. A PID controller is presented to control the flow of hydrogen and oxygen to the FC and to improve the transient and steady-state responses of the output voltage to load disturbances. The FC's output voltage is regulated via an ultracapacitor. Simulations are carried out in MATLAB/SIMULINK, and results show that the load tracking and output voltage regulation are acceptable. The proposed system utilizes an electrolyser to generate hydrogen and a tank for storage; therefore, there is no need for batteries. Moreover, the generated oxygen could be used in other applications in the airplane. PMID: 24782664
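The regulation loop described in both records — a PID controller shaping the transient and steady-state response of an output voltage to disturbances — can be sketched on a generic first-order plant. Gains, time constant and setpoint below are invented for illustration; they are not the paper's PEMFC or Hercules parameters:

```python
# Discrete PID loop regulating a first-order "stack voltage" plant
# tau * dv/dt = -v + u toward a voltage setpoint.
def simulate_pid(setpoint, kp, ki, kd, steps=400, dt=0.01, tau=0.5):
    v, integ, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - v
        integ += err * dt                  # integral term (removes offset)
        deriv = (err - prev_err) / dt      # derivative term (damping)
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        v += dt / tau * (-v + u)           # first-order plant update
    return v

# PI regulation toward a hypothetical 28 V bus setpoint.
v_final = simulate_pid(setpoint=28.0, kp=2.0, ki=5.0, kd=0.0)
```

The integral term is what drives the steady-state error to zero; tuning kp/kd trades response speed against overshoot, which is the transient-response improvement the abstract refers to.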
Emulation of simulations of atmospheric dispersion at Fukushima for Sobol' sensitivity analysis
NASA Astrophysics Data System (ADS)
Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien
2015-04-01
Polyphemus/Polair3D, from which IRSN's operational model ldX derives, was used to simulate the atmospheric dispersion of radionuclides at the Japan scale after the Fukushima disaster. A previous study with the screening method of Morris had shown that (i) the sensitivities depend strongly on the considered output; (ii) only a few of the inputs are non-influential on all considered outputs; and (iii) the most influential inputs have either non-linear effects or interactions. These preliminary results called for a more detailed sensitivity analysis, especially regarding the characterization of interactions. The method of Sobol' allows for a precise evaluation of interactions but requires large simulation samples. Gaussian process emulators for each considered output were built in order to relieve this computational burden. Globally aggregated outputs proved easy to emulate with high accuracy, and the associated Sobol' indices are in broad agreement with previous results obtained with the Morris method. More localized outputs, such as temporal averages of gamma dose rates at measurement stations, resulted in poorer emulator performance: test simulations could not be satisfactorily reproduced by some emulators. These outputs are of special interest because they can be compared to available observations, for instance for calibration purposes. A thorough inspection of prediction residuals hinted that the model response to wind perturbations often behaved in very distinct regimes relative to certain thresholds. Complementing the initial sample with wind perturbations set to the extreme values allowed sensible improvement of some of the emulators, while others remained too unreliable to be used in a sensitivity analysis. Adaptive sampling or regime-wise emulation could be tried to circumvent this issue. Sobol' indices for local outputs revealed interesting patterns, mostly dominated by the winds, with very high interactions. The emulators will be useful for subsequent studies.
Indeed, our goal is to characterize the model output uncertainty, but too little information is available about the input uncertainties. Hence, calibrating the input distributions against observations with a Bayesian approach seems necessary. This would likely involve methods such as MCMC, which would be intractable without emulators.
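The emulator-then-Sobol' workflow described above can be sketched as follows. This is a toy illustration, not the study's code: a cheap three-input function stands in for the dispersion model, and scikit-learn's GaussianProcessRegressor combined with a pick-freeze estimator yields first-order indices on the emulator rather than on the expensive simulator.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Toy stand-in for one expensive dispersion-model output (hypothetical).
def model(x):
    return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

# Train the emulator on a small simulation design.
X_train = rng.uniform(-1, 1, size=(200, 3))
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(X_train, model(X_train))

# Pick-freeze estimator of first-order Sobol' indices on the cheap emulator.
n = 5000
A = rng.uniform(-1, 1, size=(n, 3))
B = rng.uniform(-1, 1, size=(n, 3))
yA = gp.predict(A)
var = yA.var()
S1 = []
for i in range(3):
    ABi = B.copy()
    ABi[:, i] = A[:, i]  # share only input i with sample A
    S1.append(float(np.mean(yA * (gp.predict(ABi) - gp.predict(B))) / var))
print([round(s, 2) for s in S1])
```

The large emulator sample (here 5000 points per index) is what makes the Sobol' estimates affordable once the emulator is trained.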
Simulation and performance of brushless dc motor actuators
NASA Astrophysics Data System (ADS)
Gerba, A., Jr.
1985-12-01
The simulation model for a Brushless D.C. Motor and the associated commutation power conditioner transistor model are presented. The necessary conditions for maximum power output while operating at steady-state speed with sinusoidally distributed air-gap flux are developed. Comparisons of the simulated model with the measured performance of a typical motor are made both for time-response waveforms and for average performance characteristics. These preliminary results indicate good agreement. Plans for model improvement and testing of a motor-driven positioning device for model evaluation are outlined.
P.C. Stoy; M.C. Dietze; A.D. Richardson; R. Vargas; A.G. Barr; R.S. Anderson; M.A. Arain; I.T. Baker; T.A. Black; J.M. Chen; R.B. Cook; C.M. Gough; R.F. Grant; D.Y. Hollinger; R.C. Izaurralde; C.J. Kucharik; P. Lafleur; B.E. Law; S. Liu; E. Lokupitiya; Y. Luo; J. W. Munger; C. Peng; B. Poulter; D.T. Price; D. M. Ricciuto; W. J. Riley; A. K. Sahoo; K. Schaefer; C.R. Schwalm; H. Tian; H. Verbeeck; E. Weng
2013-01-01
Earth system processes exhibit complex patterns across time, as do the models that seek to replicate these processes. Model output may or may not be significantly related to observations at different times and on different frequencies. Conventional model diagnostics provide an aggregate view of model-data agreement, but usually do not identify the time and frequency...
NASA Astrophysics Data System (ADS)
Kashim, Rosmaini; Kasim, Maznah Mat; Rahman, Rosshairy Abd
2015-12-01
Measuring university performance is essential for efficient allocation and utilization of educational resources. Most previous studies of performance measurement in universities emphasized operational efficiency and resource utilization without investigating the university's ability to fulfill the needs of its stakeholders and society. Therefore, assessment of university performance should be separated into two stages, namely efficiency and effectiveness. In conventional DEA analysis, a decision making unit (DMU), or in this context a university, is generally treated as a black box, which ignores the operation and interdependence of the internal processes; when this happens, the results obtained can be misleading. Thus, this paper suggests an alternative framework for measuring the overall performance of a university by incorporating both efficiency and effectiveness and applying a network DEA model. Network DEA models are recommended because this approach takes into account the interrelationship between the efficiency and effectiveness processes in the system. The framework also focuses on the university structure, which is expanded from the hierarchical form into a series of horizontal relationships between subordinate units by assuming that both an intermediate unit and its subordinate units can generate output(s). Three conceptual models are proposed to evaluate the performance of a university. An efficiency model is developed at the first stage using a hierarchical network model. It is followed by an effectiveness model which takes the output(s) from the hierarchical structure at the first stage as input(s) at the second stage. As a result, a new overall performance model is proposed by combining the efficiency and effectiveness models. Once this overall model is realized and utilized, the university's top management can determine the overall performance of each unit more accurately and systematically.
Besides that, the result from the network DEA model can give a superior benchmarking power over the conventional models.
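For readers unfamiliar with DEA, the building block that network DEA extends can be sketched as a small linear program. The example below is the standard input-oriented CCR envelopment model on hypothetical data, not the paper's hierarchical network formulation; units, inputs, and outputs are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 4 decision making units, 2 inputs, 1 output.
X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0], [6.0, 7.0]])  # inputs
Y = np.array([[1.0], [1.0], [1.0], [1.0]])                       # outputs

def ccr_efficiency(k):
    """Input-oriented CCR efficiency of unit k (envelopment form):
    min theta s.t. sum_j lam_j x_j <= theta x_k, sum_j lam_j y_j >= y_k."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(1 + n)
    c[0] = 1.0                                   # minimize theta
    A_in = np.hstack([-X[[k]].T, X.T])           # inputs: lam.X - theta*x_k <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])  # outputs: -lam.Y <= -y_k
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[k]]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

effs = [round(ccr_efficiency(k), 3) for k in range(4)]
print(effs)
```

Units on the efficient frontier score 1.0; dominated units score below 1.0, and their slacks point to input redundancy, which is the projection idea used in the SBM abstract above as well.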
Multi-criterion model ensemble of CMIP5 surface air temperature over China
NASA Astrophysics Data System (ADS)
Yang, Tiantian; Tao, Yumeng; Li, Jingjing; Zhu, Qian; Su, Lu; He, Xiaojia; Zhang, Xiaoming
2018-05-01
Global circulation models (GCMs) are useful tools for simulating climate change, projecting future temperature changes, and therefore supporting the preparation of national climate adaptation plans. However, different GCMs are not always in agreement with each other over various regions, because GCM configurations, module characteristics, and dynamic forcings vary from one model to another. Model ensemble techniques are extensively used to post-process the outputs from GCMs and improve the variability of model outputs. Root-mean-square error (RMSE), correlation coefficient (CC, or R), and uncertainty are commonly used statistics for evaluating GCM performance. However, many model ensemble techniques cannot guarantee that all of these statistics are satisfactory simultaneously. In this paper, we propose a multi-model ensemble framework, using a state-of-the-art evolutionary multi-objective optimization algorithm (termed MOSPD), to evaluate different characteristics of ensemble candidates and to provide comprehensive trade-off information for different model ensemble solutions. A case study of optimizing the surface air temperature (SAT) ensemble solutions over different geographical regions of China is carried out. The data cover the period from 1900 to 2100, and the projections of SAT are analyzed with regard to three statistical indices (RMSE, CC, and uncertainty). Among the derived ensemble solutions, the trade-off information is further analyzed with a robust Pareto front with respect to the different statistics. The comparison results over the historical period (1900-2005) show that the optimized solutions are superior to those obtained from the simple model average, as well as to any single GCM output. The improvements in the statistics vary across the climatic regions of China.
Future projection (2006-2100) with the proposed ensemble method identifies that the largest (smallest) temperature changes will happen in the South Central China (the Inner Mongolia), the North Eastern China (the South Central China), and the North Western China (the South Central China), under RCP 2.6, RCP 4.5, and RCP 8.5 scenarios, respectively.
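The evaluation statistics and the idea of an ensemble candidate that beats both the simple average and any single model can be sketched as follows. The observations and GCM outputs are synthetic stand-ins, and the inverse-RMSE weighting is just one candidate among the many a multi-objective search such as MOSPD would explore.

```python
import numpy as np

rng = np.random.default_rng(1)
obs = np.sin(np.linspace(0, 6, 120))  # stand-in for an observed SAT series

# Three hypothetical GCMs: the truth plus model-specific noise and bias.
gcms = np.stack([obs + rng.normal(0, s, obs.size) + b
                 for s, b in [(0.3, 0.2), (0.5, -0.1), (0.8, 0.0)]])

def rmse(sim):
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def cc(sim):
    return float(np.corrcoef(sim, obs)[0, 1])

simple_mean = gcms.mean(axis=0)
# One simple ensemble candidate: weights inversely proportional to RMSE.
w = np.array([1 / rmse(g) for g in gcms])
w /= w.sum()
weighted = w @ gcms

for name, sim in [("best single", min(gcms, key=rmse)),
                  ("simple mean", simple_mean),
                  ("weighted", weighted)]:
    print(f"{name}: RMSE={rmse(sim):.3f} CC={cc(sim):.3f}")
```

A multi-objective optimizer generalizes this by searching the weight space for the whole Pareto front over RMSE, CC, and uncertainty rather than a single heuristic weighting.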
Java-based Graphical User Interface for MAVERIC-II
NASA Technical Reports Server (NTRS)
Seo, Suk Jai
2005-01-01
A computer program entitled "Marshall Aerospace Vehicle Representation in C II (MAVERIC-II)" is a vehicle flight simulation program written primarily in the C programming language by James W. McCarter at NASA/Marshall Space Flight Center. The goal of the MAVERIC-II development effort is to provide a simulation tool that facilitates the rapid development of high-fidelity flight simulations for launch, orbital, and reentry vehicles of any user-defined configuration for all phases of flight. MAVERIC-II has been found invaluable in performing flight simulations for various Space Transportation Systems. The flexibility provided by MAVERIC-II has allowed several different launch vehicles, including the Saturn V, a Space Launch Initiative Two-Stage-to-Orbit concept, and a Shuttle-derived launch vehicle, to be simulated during ascent and portions of on-orbit flight in an extremely efficient manner. It was found that MAVERIC-II provided high-fidelity vehicle and flight environment models as well as the program modularity to allow efficient integration, modification, and testing of advanced guidance and control algorithms. In addition to serving as an analysis tool for technology development, many researchers have found MAVERIC-II to be an efficient, powerful analysis tool for evaluating guidance, navigation, and control designs, vehicle robustness, and requirements. MAVERIC-II is currently designed to execute in a UNIX environment. The input to the program is composed of three segments: 1) the vehicle models, such as propulsion, aerodynamics, and guidance, navigation, and control; 2) the environment models, such as atmosphere and gravity; and 3) a simulation framework which is responsible for executing the vehicle and environment models, propagating the vehicle's states forward in time, and handling user input/output. MAVERIC users prepare data files for the above models and run the simulation program.
They can see the output on screen and/or store it in files and examine the output data later. Users can also view the output stored in output files by calling a plotting program such as gnuplot. A typical scenario of MAVERIC use consists of three steps: editing existing input data files, running MAVERIC, and plotting the output results.
Empirical measurement and model validation of infrared spectra of contaminated surfaces
NASA Astrophysics Data System (ADS)
Archer, Sean; Gartley, Michael; Kerekes, John; Cosofret, Bogdon; Giblin, Jay
2015-05-01
Liquid-contaminated surfaces generally require more sophisticated radiometric modeling to numerically describe surface properties. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) Model utilizes radiative transfer modeling to generate synthetic imagery. Within DIRSIG, a micro-scale surface property model (microDIRSIG) was used to calculate numerical bidirectional reflectance distribution functions (BRDF) of geometric surfaces with applied concentrations of liquid contamination. Simple cases, where the liquid contamination was well described by optical constants on optically flat surfaces, were first analytically evaluated by ray tracing and modeled within microDIRSIG. More complex combinations of surface geometry and contaminant application were then incorporated into the micro-scale model. The computed microDIRSIG BRDF outputs were used to describe surface material properties in the encompassing DIRSIG simulation. These DIRSIG-generated outputs were validated with empirical measurements obtained from a Design and Prototypes (D&P) Model 102 FTIR spectrometer. Infrared spectra from the synthetic imagery and the empirical measurements were iteratively compared to identify quantitative spectral similarity between the measured data and modeled outputs. Several spectral angles between the predicted and measured emissivities differed by less than 1 degree. Synthetic radiance spectra produced from the microDIRSIG/DIRSIG combination had an RMS error of 0.21-0.81 W/(m²·sr·μm) when compared to the D&P measurements. Results from this comparison will facilitate improved methods for identifying spectral features and detecting liquid contamination on a variety of natural surfaces.
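The spectral-angle comparison used in this kind of validation can be sketched as follows. The two emissivity spectra below are synthetic placeholders, not the D&P measurements or the microDIRSIG output.

```python
import numpy as np

def spectral_angle_deg(a, b):
    """Spectral angle between two spectra (treated as vectors), in degrees."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical LWIR emissivity spectra on an 8-12 micron grid.
wl = np.linspace(8.0, 12.0, 200)
measured = 0.95 - 0.02 * np.exp(-((wl - 9.5) ** 2) / 0.05)  # absorption feature
modeled = measured + 0.001 * np.sin(wl)                      # small model mismatch

angle = spectral_angle_deg(measured, modeled)
print(f"spectral angle: {angle:.3f} degrees")
```

Because the angle ignores overall scaling, it isolates spectral-shape disagreement, which is why sub-degree angles indicate close shape agreement even when absolute radiance differs.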
NASA Astrophysics Data System (ADS)
Guadagnini, A.; Riva, M.; Dell'Oca, A.
2017-12-01
We propose to ground sensitivity of uncertain parameters of environmental models on a set of indices based on the main (statistical) moments, i.e., mean, variance, skewness and kurtosis, of the probability density function (pdf) of a target model output. This enables us to perform Global Sensitivity Analysis (GSA) of a model in terms of multiple statistical moments and yields a quantification of the impact of model parameters on features driving the shape of the pdf of model output. Our GSA approach includes the possibility of being coupled with the construction of a reduced complexity model that allows approximating the full model response at a reduced computational cost. We demonstrate our approach through a variety of test cases. These include a commonly used analytical benchmark, a simplified model representing pumping in a coastal aquifer, a laboratory-scale tracer experiment, and the migration of fracturing fluid through a naturally fractured reservoir (source) to reach an overlying formation (target). Our strategy allows discriminating the relative importance of model parameters to the four statistical moments considered. We also provide an appraisal of the error associated with the evaluation of our sensitivity metrics by replacing the original system model through the selected surrogate model. Our results suggest that one might need to construct a surrogate model with increasing level of accuracy depending on the statistical moment considered in the GSA. The methodological framework we propose can assist the development of analysis techniques targeted to model calibration, design of experiment, uncertainty quantification and risk assessment.
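A rough numerical sketch of moment-based sensitivity in this spirit is given below. It uses a toy model and quantile-bin conditioning in place of the paper's surrogate-based machinery, and the indices are unnormalized and purely illustrative: for each parameter, it averages how far each moment of the output pdf shifts when that parameter is (approximately) fixed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Toy stand-in for the full system model (hypothetical, not the paper's cases).
def model(x):
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2]

n = 20000
X = rng.uniform(-1, 1, size=(n, 3))
y = model(X)

def moments(v):
    return np.array([v.mean(), v.var(), stats.skew(v), stats.kurtosis(v)])

m_full = moments(y)
sens = []
for i in range(3):
    # Condition on 20 quantile bins of parameter i to approximate "fixing" it.
    edges = np.quantile(X[:, i], np.linspace(0, 1, 21))
    idx = np.clip(np.digitize(X[:, i], edges) - 1, 0, 19)
    cond = np.array([moments(y[idx == b]) for b in range(20)])
    sens.append(np.abs(cond - m_full).mean(axis=0))
for i, s in enumerate(sens):
    print(f"x{i+1}: mean={s[0]:.3f} var={s[1]:.3f} skew={s[2]:.3f} kurt={s[3]:.3f}")
```

The point the abstract makes falls out directly: a parameter's ranking can differ from moment to moment, so variance-only GSA can miss drivers of skewness or kurtosis.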
NASA Astrophysics Data System (ADS)
Schroeder, R.; Jacobs, J. M.; Vuyovich, C.; Cho, E.; Tuttle, S. E.
2017-12-01
Each spring the Red River basin (RRB) of the North, located between the states of Minnesota and North Dakota and southern Manitoba, is vulnerable to dangerous spring snowmelt floods. Flat terrain, low permeability soils and a lack of satisfactory ground observations of snow pack conditions make accurate predictions of the onset and magnitude of major spring flood events in the RRB very challenging. This study investigated the potential benefit of using gridded snow water equivalent (SWE) products from passive microwave satellite missions and model output simulations to improve snowmelt flood predictions in the RRB using NOAA's operational Community Hydrologic Prediction System (CHPS). Level-3 satellite SWE products from AMSR-E, AMSR2 and SSM/I, as well as SWE computed from Level-2 brightness temperatures (Tb) measurements, including model output simulations of SWE from SNODAS and GlobSnow-2 were chosen to support the snowmelt modeling exercises. SWE observations were aggregated spatially (i.e. to the NOAA North Central River Forecast Center forecast basins) and temporally (i.e. by obtaining daily screened and weekly unscreened maximum SWE composites) to assess the value of daily satellite SWE observations relative to weekly maximums. Data screening methods removed the impacts of snow melt and cloud contamination on SWE and consisted of diurnal SWE differences and a temperature-insensitive polarization difference ratio, respectively. We examined the ability of the satellite and model output simulations to capture peak SWE and investigated temporal accuracies of screened and unscreened satellite and model output SWE. The resulting SWE observations were employed to update the SNOW-17 snow accumulation and ablation model of CHPS to assess the benefit of using temporally and spatially consistent SWE observations for snow melt predictions in two test basins in the RRB.
Climate change indices for Greenland applied directly to other Arctic regions: enhanced climate information from one high-resolution RCM downscaling for Greenland, evaluated through pattern scaling and CMIP5
NASA Astrophysics Data System (ADS)
Olesen, M.; Christensen, J. H.; Boberg, F.
2016-12-01
Climate change affects Greenlandic society both advantageously and disadvantageously. Changes in temperature and precipitation patterns may result in changes in a number of derived, society-relevant climate indices, such as the length of the growing season or the number of annual dry days, or a combination of the two: indices of substantial importance to society in a climate adaptation context. Detailed climate indices require high-resolution downscaling. We have carried out a very high resolution (5 km) simulation with the regional climate model HIRHAM5, forced by the global model EC-Earth. Evaluation of RCM output is usually done with an ensemble of output downscaled with multiple RCMs and GCMs. Here we introduce and test a new technique: translating the robustness of an ensemble of CMIP5 GCMs into a specific index from the HIRHAM5 downscaling through a correlation between absolute temperatures and the corresponding index values from the HIRHAM5 output. The procedure consists of two steps. First, the correlation between temperature and a given index in the HIRHAM5 simulation is identified by a best fit to a second-order polynomial. Second, the standard deviation from the CMIP5 simulations is introduced to obtain the corresponding standard deviation of the index from the HIRHAM5 run. The change of a specific climate index due to global warming can then be evaluated elsewhere from the change in absolute temperature. Results based on selected indices, with focus on the future climate in Greenland calculated for the RCP4.5 and RCP8.5 scenarios, will be presented.
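The two-step procedure can be sketched numerically as follows. The temperature/index pairs are hypothetical stand-ins for the HIRHAM5 output, and the CMIP5 warming mean and spread are assumed values, not results from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical pairs: regional mean temperature (deg C) vs a climate index
# (e.g. annual dry days); values are illustrative only.
T = rng.uniform(-6, 4, 40)
index = 80 + 3.0 * T + 0.4 * T ** 2 + rng.normal(0, 2, T.size)

# Step 1: best fit of index vs temperature to a second-order polynomial.
coef = np.polyfit(T, index, 2)
p = np.poly1d(coef)

# Step 2: translate an assumed CMIP5 inter-model temperature spread into an
# index value with a corresponding spread.
T_mean, T_std = 1.5, 1.0
index_mid = p(T_mean)
index_halfspread = abs(p(T_mean + T_std) - p(T_mean - T_std)) / 2
print(f"index at {T_mean} C warming: {index_mid:.1f} +/- {index_halfspread:.1f}")
```

This captures the abstract's idea that index changes elsewhere can be read off the fitted temperature-index relation, with the GCM ensemble spread propagated through the same curve.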
Iriza, Amalia; Dumitrache, Rodica C.; Lupascu, Aurelia; ...
2016-01-01
Our paper aims to evaluate the quality of high-resolution weather forecasts from the Weather Research and Forecasting (WRF) numerical weather prediction model. The lateral and boundary conditions were obtained from the numerical output of the Consortium for Small-scale Modeling (COSMO) model at 7 km horizontal resolution. Furthermore, the WRF model was run for January and July 2013 at two horizontal resolutions (3 and 1 km). The numerical forecasts of the WRF model were evaluated using different statistical scores for 2 m temperature and 10 m wind speed. Our results showed a tendency of the WRF model to overestimate the values of the analyzed parameters in comparison to observations.
Acute Radiation Risk and BRYNTRN Organ Dose Projection Graphical User Interface
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Hu, Shaowen; Nounu, Hateni N.; Kim, Myung-Hee
2011-01-01
The integration of human space applications risk projection models for organ dose and acute radiation risk has been a key problem. NASA has developed an organ dose projection model using the BRYNTRN and SUM DOSE computer codes, and a probabilistic model of Acute Radiation Risk (ARR). BRYNTRN is a baryon transport code and SUM DOSE is an output data processing code; the risk projection models for organ doses and ARR take the output from BRYNTRN as input to their calculations. Because BRYNTRN operation requires extensive input preparation, a graphical user interface (GUI) is needed to handle its input and output so that the response models can be connected to it easily and correctly. The GUI for the ARR and BRYNTRN Organ Dose (ARRBOD) projection code provides seamless integration of the input and output manipulations required for operation of the ARRBOD projection modules (BRYNTRN, SLMDOSE, and the ARR probabilistic response model) in assessing the acute risk and organ doses from significant Solar Particle Events (SPEs). The ARRBOD GUI is intended for mission planners, radiation shield designers, space operations staff in the mission operations directorate (MOD), and space biophysics researchers. The assessment of astronauts' radiation risk from SPEs supports mission design and operational planning to manage radiation risks in future space missions. The ARRBOD GUI can identify proper shielding solutions using gender-specific organ dose assessments in order to avoid ARR symptoms and to stay within the current NASA short-term dose limits.
The quantified evaluation of ARR severities based on any given shielding configuration and a specified EVA or other mission scenario can be made to guide alternative solutions for attaining determined objectives set by mission planners. The ARRBOD GUI estimates the whole-body effective dose, organ doses, and acute radiation sickness symptoms for astronauts, by which operational strategies and capabilities can be made for the protection of astronauts from SPEs in the planning of future lunar surface scenarios, exploration of near-Earth objects, and missions to Mars.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoshimura, Ann S.; Brandt, Larry D.
2009-11-01
The NUclear EVacuation Analysis Code (NUEVAC) has been developed by Sandia National Laboratories to support the analysis of shelter-evacuate (S-E) strategies following an urban nuclear detonation. This tool can model a range of behaviors, including complex evacuation timing and path selection, as well as various sheltering or mixed evacuation and sheltering strategies. The calculations are based on externally generated, high resolution fallout deposition and plume data. Scenario setup and calculation outputs make extensive use of graphics and interactive features. This software is designed primarily to produce quantitative evaluations of nuclear detonation response options. However, the outputs have also proven useful in the communication of technical insights concerning shelter-evacuate tradeoffs to urban planning or response personnel.
Driscoll, Jessica; Hay, Lauren E.; Bock, Andrew R.
2017-01-01
Assessment of water resources at a national scale is critical for understanding their vulnerability to future change in policy and climate. Representation of the spatiotemporal variability in snowmelt processes in continental-scale hydrologic models is critical for assessment of water resource response to continued climate change. Continental-extent hydrologic models such as the U.S. Geological Survey National Hydrologic Model (NHM) represent snowmelt processes through the application of snow depletion curves (SDCs). SDCs relate normalized snow water equivalent (SWE) to normalized snow covered area (SCA) over a snowmelt season for a given modeling unit. SDCs were derived using output from the operational Snow Data Assimilation System (SNODAS) snow model as daily 1-km gridded SWE over the conterminous United States. Daily SNODAS output was aggregated to a predefined watershed-scale geospatial fabric and also used to calculate SCA from October 1, 2004 to September 30, 2013. The spatiotemporal variability in SNODAS output at the watershed scale was evaluated through the spatial distribution of the median and standard deviation for the time period. Representative SDCs for each watershed-scale modeling unit over the conterminous United States (n = 54,104) were selected using a consistent methodology and used to create categories of snowmelt based on SDC shape. The relation of SDC categories to topographic and climatic variables allows for national-scale categorization of snowmelt processes.
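A snow depletion curve of the kind described can be derived from gridded SWE as follows. The per-cell peak SWE values and the uniform melt rate are synthetic stand-ins for aggregated SNODAS output over one watershed-scale modeling unit.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical per-cell peak SWE (mm) for one watershed's grid cells.
peak = rng.uniform(50, 300, size=500)
days = np.arange(120)
# Apply a uniform 3 mm/day melt; SWE cannot go below zero.
swe = np.clip(peak[None, :] - 3.0 * days[:, None], 0.0, None)  # (day, cell)

# SDC: normalized mean SWE paired with normalized snow-covered area.
mean_swe = swe.mean(axis=1)
sca = (swe > 0).mean(axis=1)
norm_swe = mean_swe / mean_swe[0]
norm_sca = sca / sca[0]

for d in (0, 40, 80, 119):
    print(f"day {d}: SWE*={norm_swe[d]:.2f} SCA*={norm_sca[d]:.2f}")
```

The shape of the (normalized SWE, normalized SCA) trace over the melt season, e.g. whether area drops off early or late relative to volume, is what the categorization in the study is based on.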
Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE). Volume 1: User's guide
NASA Technical Reports Server (NTRS)
Dupnick, E.; Wiggins, D.
1980-01-01
An interactive computer program for automatically generating traffic models for the Space Transportation System (STS) is presented. Information concerning run stream construction, input data, and output data is provided. The flow of the interactive data stream is described. Error messages are specified, along with suggestions for remedial action. In addition, formats and parameter definitions for the payload data set (payload model), feasible combination file, and traffic model are documented.
NASA Astrophysics Data System (ADS)
Darko, Deborah; Adjei, Kwaku A.; Appiah-Adjei, Emmanuel K.; Odai, Samuel N.; Obuobie, Emmanuel; Asmah, Ruby
2018-06-01
The extent to which statistically bias-adjusted outputs of two regional climate models alter the projected change signals for mean (and extreme) rainfall and temperature over the Volta Basin is evaluated. The outputs of two regional climate models from the Coordinated Regional Climate Downscaling Experiment for Africa (CORDEX-Africa) are bias-adjusted using the quantile mapping technique. Annual maxima of rainfall and temperature, with their 10- and 20-year return values for the present (1981-2010) and future (2051-2080) climates, are estimated using extreme value analyses. Moderate extremes are evaluated using extreme indices (viz. percentile-based, duration-based, and intensity-based). Bias adjustment of the original (bias-unadjusted) models improves the reproduction of mean rainfall and temperature for the present climate. However, the bias-adjusted models poorly reproduce the 10- and 20-year return values for rainfall and maximum temperature, whereas the extreme indices are reproduced satisfactorily for the present climate. Consequently, projected changes in rainfall and temperature extremes are weak. The bias adjustment reduces the change signals for mean rainfall, while the mean temperature signals are magnified. The projected changes for the original mean climate and extremes are not conserved after bias adjustment, with the exception of the duration-based extreme indices.
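Empirical quantile mapping of the kind used here can be sketched as follows, with synthetic gamma-distributed rainfall in place of the CORDEX-Africa output. The example also shows the abstract's point that adjustment need not conserve the projected mean change signal.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, values):
    """Empirical quantile mapping: map each value through the historical model
    CDF onto the observed CDF (flat extrapolation outside the fitted range)."""
    q = np.linspace(0, 1, 101)
    return np.interp(values, np.quantile(model_hist, q), np.quantile(obs_hist, q))

rng = np.random.default_rng(4)
obs = rng.gamma(2.0, 5.0, 3000)          # hypothetical observed daily rainfall (mm)
model_hist = rng.gamma(2.0, 7.0, 3000)   # wet-biased model, present climate
model_fut = rng.gamma(2.2, 7.0, 3000)    # same model, future climate

adj_hist = quantile_map(model_hist, obs, model_hist)
adj_fut = quantile_map(model_hist, obs, model_fut)

raw_change = model_fut.mean() - model_hist.mean()
adj_change = adj_fut.mean() - adj_hist.mean()
print(f"means: obs={obs.mean():.1f} model={model_hist.mean():.1f} "
      f"adjusted={adj_hist.mean():.1f}")
print(f"mean change signal: raw={raw_change:.2f} adjusted={adj_change:.2f}")
```

With these assumed distributions, the adjusted present-climate mean matches the observations, while the projected mean change shrinks relative to the raw signal, mirroring the reduction the study reports for mean rainfall.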
Díaz, José; Acosta, Jesús; González, Rafael; Cota, Juan; Sifuentes, Ernesto; Nebot, Àngela
2018-02-01
The control of the central nervous system (CNS) over the cardiovascular system (CS) has been modeled using different techniques, such as fuzzy inductive reasoning, genetic fuzzy systems, neural networks, and nonlinear autoregressive techniques; the results obtained so far have been significant, but not solid enough to describe the control response of the CNS over the CS. In this research, support vector machines (SVMs) are used to predict the response of a branch of the CNS, specifically the one that controls an important part of the cardiovascular system. To do this, five models are developed to emulate the output response of five controllers for the same input signal, the carotid sinus blood pressure (CSBP). These controllers regulate parameters such as heart rate, myocardial contractility, peripheral and coronary resistance, and venous tone. The models are trained using a known set of input-output responses for each controller; a further set of six input-output signals is used for testing each proposed model. The input signals are processed using an all-pass filter, and the accuracy of the control models is evaluated using the percentage value of the normalized mean square error (MSE). Experimental results reveal that the SVM models achieve a better estimation of the dynamical behavior of the CNS control than other modeling systems. The best case is the peripheral resistance controller, with an MSE of 1.20e-4%, while the worst case is the heart rate controller, with an MSE of 1.80e-3%. These novel models show great reliability in fitting the output response of the CNS and can be used as input to hemodynamic system models in order to predict the behavior of the heart and blood vessels in response to blood pressure variations. Copyright © 2017 Elsevier Ltd. All rights reserved.
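A minimal sketch of one such controller model follows, assuming a synthetic CSBP-to-output relationship (not the paper's data) and scikit-learn's SVR, evaluated with a normalized MSE expressed in percent.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(5)

# Hypothetical stand-in: carotid sinus blood pressure (mmHg) driving one
# controller output (e.g. peripheral resistance), modeled as a noisy sigmoid.
csbp = rng.uniform(80, 160, 400)
target = 1.0 / (1.0 + np.exp((csbp - 120) / 10)) + rng.normal(0, 0.01, 400)

X = csbp.reshape(-1, 1) / 160.0  # simple input scaling
svm = SVR(kernel="rbf", C=10.0, epsilon=0.005).fit(X[:300], target[:300])
pred = svm.predict(X[300:])

# Normalized MSE in percent, as the accuracy measure used in the study.
nmse = 100 * float(np.mean((pred - target[300:]) ** 2) / np.var(target[300:]))
print(f"NMSE = {nmse:.3f}%")
```

In the study each of the five controllers would get its own trained model of this kind, with the all-pass-filtered CSBP as the common input.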
RESTSIM: A Simulation Model That Highlights Decision Making under Conditions of Uncertainty.
ERIC Educational Resources Information Center
Zinkhan, George M.; Taylor, James R.
1983-01-01
Describes RESTSIM, an interactive computer simulation program for graduate and upper-level undergraduate management, marketing, and retailing courses, which introduces naive users to simulation as a decision support technique, and provides a vehicle for studying various statistical procedures for evaluating simulation output. (MBR)
Determining and Communicating the Value of the Special Library.
ERIC Educational Resources Information Center
Matthews, Joseph R.
2003-01-01
Discusses performance measures for libraries that will indicate the goodness of the library and its services. Highlights include a general evaluation model that includes input, process, output, and outcome measures; balanced scorecard approach that includes financial perspectives; focusing on strategy; strategies for change; user criteria for…
NASA Astrophysics Data System (ADS)
Byrd, K. B.; Kreitler, J.; Labiosa, W.
2010-12-01
A scenario represents an account of a plausible future given logical assumptions about how conditions change over discrete bounds of space and time. Development of multiple scenarios provides a means to identify alternative directions of urban growth that account for a range of uncertainty in human behavior. Interactions between human and natural processes may be studied by coupling urban growth scenario outputs with biophysical change models; if growth scenarios encompass a sufficient range of alternative futures, scenario assumptions serve to constrain the uncertainty of biophysical models. Spatially explicit urban growth models (map-based) produce output such as distributions and densities of residential or commercial development in a GIS format that can serve as input to other models. Successful fusion of growth model outputs with other model inputs requires that both models strategically address questions of interest, incorporate ecological feedbacks, and minimize error. The U.S. Geological Survey (USGS) Puget Sound Ecosystem Portfolio Model (PSEPM) is a decision-support tool that supports land use and restoration planning in Puget Sound, Washington, a 35,500 sq. km region. The PSEPM couples future scenarios of urban growth with statistical, process-based and rule-based models of nearshore biophysical changes and ecosystem services. By using a multi-criteria approach, the PSEPM identifies cross-system and cumulative threats to the nearshore environment plus opportunities for conservation and restoration. Sub-models that predict changes in nearshore biophysical condition were developed and existing models were integrated to evaluate three growth scenarios: 1) Status Quo, 2) Managed Growth, and 3) Unconstrained Growth. These decadal scenarios were developed and projected out to 2060 at Oregon State University using the GIS-based ENVISION model. 
Given land management decisions and policies under each growth scenario, the sub-models predicted changes in 1) fecal coliform in shellfish growing areas, 2) sediment supply to beaches, 3) State beach recreational visits, 4) eelgrass habitat suitability, 5) forage fish habitat suitability, and 6) nutrient loadings. In some cases thousands of shoreline units were evaluated with multiple predictive models, creating a need for streamlined and consistent database development and data processing. Model development over multiple disciplines demonstrated the challenge of merging data types from multiple sources that were inconsistent in spatial and temporal resolution, classification schemes, and topology. Misalignment of data in space and time created potential for error and misinterpretation of results. This effort revealed that the fusion of growth scenarios and biophysical models requires an up-front iterative adjustment of both scenarios and models so that growth model outputs provide the needed input data in the correct format. Successful design of data flow across models that includes feedbacks between human and ecological systems was found to enhance the use of the final data product for decision making.
Adamatzky, Andrew
2014-08-01
In experimental laboratory studies we evaluate the possibility of making electrical wires from living plants. In scoping experiments we use lettuce seedlings as a prototype model of a plant wire. We approximate an electrical potential transfer function by applying direct current voltage to the lettuce seedlings and recording the output voltage. We analyse oscillation frequencies of the output potential and assess the noise immunity of the plant wires. Our findings will be used in future designs of self-growing wetware circuits and devices, and in the integration of plant-based electronic components into future and emergent bio-hybrid systems. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
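The transfer-ratio and oscillation-frequency analysis can be sketched as follows, with a synthetic output recording standing in for the lettuce measurements; the DC gain, oscillation frequency, and noise level are assumed values.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical recording of a plant-wire output potential under DC input.
fs = 10.0                      # sampling rate, Hz
t = np.arange(0, 600, 1 / fs)  # 10-minute record
v_in = 5.0                     # applied DC voltage (V)
v_out = (0.6 * v_in                                  # assumed DC transfer
         + 0.05 * np.sin(2 * np.pi * 0.02 * t)       # slow biological oscillation
         + rng.normal(0, 0.01, t.size))              # measurement noise

# DC transfer ratio and dominant oscillation frequency of the output.
transfer = float(v_out.mean() / v_in)
spec = np.abs(np.fft.rfft(v_out - v_out.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
f_peak = float(freqs[spec.argmax()])
print(f"transfer={transfer:.2f} f_peak={f_peak:.3f} Hz")
```

Subtracting the mean before the FFT removes the DC component so the spectral peak reflects the oscillation rather than the applied bias.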
Zong, Fan; Wang, Lifang
2017-01-01
University scientific research ability is an important indicator of the strength of universities. In this paper, the evaluation of university scientific research ability is investigated based on the output of sci-tech papers. Four university alliances, from North America, the UK, Australia, and China, are selected as the case study for the evaluation. Data from Thomson Reuters InCites are collected to support the evaluation. The work contributes a new framework for evaluating university scientific research ability. First, we establish a hierarchical structure to show the factors that impact the evaluation of university scientific research ability. Then, a new MCDM method called the D-AHP model is used to implement the evaluation and ranking of the different university alliances, in which a data-driven approach is proposed to automatically generate the D numbers preference relations. Next, a sensitivity analysis is given to show the impact of the weights of factors and sub-factors on the evaluation result. Finally, the results obtained by using different methods are compared and discussed to verify the effectiveness and reasonability of this study, and some suggestions are given to promote China's scientific research ability.
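The abstract does not spell out the D-AHP computation itself. As background, the classical AHP step that D-AHP extends derives factor weights from a pairwise-comparison matrix via its principal eigenvector. A minimal sketch with a hypothetical comparison matrix (power iteration stands in for a full eigensolver; this is not the D numbers extension):

```python
import numpy as np

# hypothetical pairwise-comparison matrix over three evaluation factors
P = np.array([
    [1.0,   3.0,   5.0],
    [1/3.0, 1.0,   2.0],
    [1/5.0, 1/2.0, 1.0],
])

def ahp_weights(P, iters=200):
    """Factor weights as the normalised principal eigenvector of P,
    computed by power iteration."""
    w = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        w = P @ w
        w /= w.sum()
    return w

weights = ahp_weights(P)
```

D-AHP replaces the crisp entries of P with D numbers preference relations; the eigenvector step above is only the classical core.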
Comparisons between data assimilated HYCOM output and in situ Argo measurements in the Bay of Bengal
NASA Astrophysics Data System (ADS)
Wilson, E. A.; Riser, S.
2014-12-01
This study evaluates the performance of data assimilated Hybrid Coordinate Ocean Model (HYCOM) output for the Bay of Bengal from September 2008 through July 2013. We find that while HYCOM assimilates Argo data, the model still suffers from significant temperature and salinity biases in this region. These biases are most severe in the northern Bay of Bengal, where the model tends to be too saline near the surface and too fresh at depth. The maximum magnitude of these biases is approximately 0.6 PSS. We also find that the model's salinity biases have a distinct seasonal cycle. The most problematic periods are the months following the summer monsoon (Oct-Jan). HYCOM's near surface temperature estimates compare more favorably with Argo, but significant errors exist at deeper levels. We argue that optimal interpolation will tend to induce positive salinity biases in the northern regions of the Bay. Further, we speculate that these biases are introduced when the model relaxes to climatology and assimilates real-time data.
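A minimal sketch of the kind of diagnostic behind such a seasonal-cycle finding, assuming co-located model-minus-Argo salinity differences tagged by calendar month (the difference values below are hypothetical):

```python
import numpy as np

def monthly_bias(months, model_minus_obs):
    """Average model-minus-observation differences by calendar month
    (months coded 1-12) to expose a seasonal cycle in the bias."""
    return np.array([model_minus_obs[months == m].mean() for m in range(1, 13)])

# hypothetical co-located differences: larger bias after the summer monsoon
months = np.tile(np.arange(1, 13), 10)
diffs = np.where(np.isin(months, [10, 11, 12, 1]), 0.5, 0.1)
bias = monthly_bias(months, diffs)
```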
Measuring the efficiency of zakat collection process using data envelopment analysis
NASA Astrophysics Data System (ADS)
Hamzah, Ahmad Aizuddin; Krishnan, Anath Rau
2016-10-01
Each zakat institution in the nation needs to measure and understand, in a timely manner, its efficiency in collecting zakat for the sake of continuous betterment. Pusat Zakat Sabah, Malaysia, which kicked off its operation in early 2007, is no exception. However, measuring collection efficiency is not an easy task, as it usually requires the consideration of multiple inputs or/and outputs. This paper sequentially employed three data envelopment analysis models, namely the Charnes-Cooper-Rhodes (CCR) primal model, the CCR dual model, and the slack-based model, to quantitatively evaluate the efficiency of zakat collection in Sabah from 2007 to 2015, treating each year as a decision making unit. The three models were developed based on two inputs (i.e. number of zakat branches and number of staff) and one output (i.e. total collection). The causes of inefficiency, and suggestions on how the efficiency in each year could have been improved, are disclosed.
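As an illustration of the CCR model named above, here is a sketch for the special case used in this study, two inputs and one output. In the multiplier form, normalising the input weights so that v·x_k = 1 leaves one free weight, so the linear program reduces to a one-dimensional search. The yearly figures are hypothetical, not Pusat Zakat Sabah's:

```python
import numpy as np

def ccr_efficiency(X, Y, k, grid=20001):
    """CCR efficiency (multiplier form) of DMU k for two inputs, one output.
    eff_k = max u*y_k  s.t.  v.x_k = 1,  u*y_j <= v.x_j,  u, v >= 0.
    Fixing v.x_k = 1 leaves one free input weight, so the LP reduces to a
    one-dimensional search over the first weight."""
    x_k = X[k]
    best = 0.0
    for t in np.linspace(0.0, 1.0 / x_k[0], grid):
        v = np.array([t, (1.0 - t * x_k[0]) / x_k[1]])
        u = np.min(X @ v / Y[:, 0])   # largest output weight feasible for all DMUs
        best = max(best, u * Y[k, 0])
    return best

# hypothetical yearly data: inputs = (branches, staff), output = collection
X = np.array([[3.0, 20.0], [4.0, 25.0], [5.0, 28.0]])
Y = np.array([[10.0], [14.0], [15.0]])
effs = [ccr_efficiency(X, Y, k) for k in range(len(X))]
```

An efficient year scores 1; the gap below 1 for the other years is the radial input contraction the CCR dual would prescribe. A production implementation would solve the LP directly rather than grid-search.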
NASA Technical Reports Server (NTRS)
Justh, Hilary L.; Justus, Carl G.
2008-01-01
The Mars Global Reference Atmospheric Model (Mars-GRAM 2005) is an engineering-level atmospheric model widely used for diverse mission applications. An overview is presented of Mars-GRAM 2005 and its new features. The "auxiliary profile" option is one new feature of Mars-GRAM 2005. This option uses an input file of temperature and density versus altitude to replace the mean atmospheric values from Mars-GRAM's conventional (General Circulation Model) climatology. Any source of data or alternate model output can be used to generate an auxiliary profile. Auxiliary profiles for this study were produced from mesoscale model output (Southwest Research Institute's Mars Regional Atmospheric Modeling System (MRAMS) model and Oregon State University's Mars mesoscale model (MMM5) model) and a global Thermal Emission Spectrometer (TES) database. The global TES database has been specifically generated for purposes of making Mars-GRAM auxiliary profiles. This database contains averages and standard deviations of temperature, density, and thermal wind components, averaged over 5-by-5 degree latitude-longitude bins and 15 degree Ls bins, for each of three Mars years of TES nadir data. The Mars Science Laboratory (MSL) sites are used as a sample of how Mars-GRAM could be a valuable tool for planning of future Mars entry probe missions. Results are presented using auxiliary profiles produced from the mesoscale model output and TES observed data for candidate MSL landing sites. Input parameters rpscale (for density perturbations) and rwscale (for wind perturbations) can be used to "recalibrate" Mars-GRAM perturbation magnitudes to better replicate observed or mesoscale model variability.
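How an auxiliary profile replaces the climatological mean state can be sketched as simple interpolation in an altitude table. The profile values below are hypothetical, and log-space interpolation for density is an assumption for illustration, not a documented Mars-GRAM detail:

```python
import numpy as np

# hypothetical auxiliary profile: altitude (km), temperature (K), density (kg/m^3)
profile = np.array([
    [0.0,  214.0, 1.55e-2],
    [10.0, 205.0, 8.00e-3],
    [20.0, 198.0, 3.20e-3],
    [40.0, 170.0, 4.00e-4],
])

def auxiliary_mean_state(alt_km):
    """Mean temperature and density interpolated from the auxiliary profile,
    replacing the model's climatological values (density in log space,
    since density falls roughly exponentially with altitude)."""
    temp = np.interp(alt_km, profile[:, 0], profile[:, 1])
    dens = np.exp(np.interp(alt_km, profile[:, 0], np.log(profile[:, 2])))
    return temp, dens
```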
Afshin Pourmokhtarian; Charles T. Driscoll; John L. Campbell; Katharine Hayhoe; Anne M. K. Stoner; Mary Beth Adams; Douglas Burns; Ivan Fernandez; Myron J. Mitchell; James B. Shanley
2016-01-01
A cross-site analysis was conducted on seven diverse, forested watersheds in the northeastern United States to evaluate hydrological responses (evapotranspiration, soil moisture, seasonal and annual streamflow, and water stress) to projections of future climate. We used output from four atmosphere-ocean general circulation models (AOGCMs; CCSM4, HadGEM2-CC, MIROC5, and...
Chapter 5: Application of state-and-transition models to evaluate wildlife habitat
Anita T. Morzillo; Pamela Comeleo; Blair Csuti; Stephanie Lee
2014-01-01
Wildlife habitat analysis often is a central focus of natural resources management and policy. State-and-transition models (STMs) allow for simulation of landscape level ecological processes, and for managers to test "what if" scenarios of how those processes may affect wildlife habitat. This chapter describes the methods used to link STM output to wildlife habitat to...
Ex-ORISKANY Artificial Reef Project: Ecological Risk Assessment
2006-01-25
...dietary preferences used by PRAM (version 1.4C) and the Trophic Level determined by diet for each compartment modeled in the food chain... by grouping organisms according to their habitat and diet preferences, PRAM also provided output to evaluate exposure point concentrations for the pelagic...
Jeremy S. Fried; Theresa B. Jain; Sara Loreno; Robert F. Keefe; Conor K. Bell
2017-01-01
The BioSum modeling framework summarizes current and prospective future forest conditions under alternative management regimes along with their costs, revenues and product yields. BioSum translates Forest Inventory and Analysis (FIA) data for input to the Forest Vegetation Simulator (FVS), summarizes FVS outputs for input to the treatment operations cost model (OpCost...
Maureen C. Kennedy; Donald McKenzie
2010-01-01
Fire-scarred trees provide a deep temporal record of historical fire activity, but identifying the mechanisms therein that controlled landscape fire patterns is not straightforward. We use a spatially correlated metric for fire co-occurrence between pairs of trees (the Sørensen distance variogram), with output from a neutral model for fire history, to infer the...
Application of SIGGS to Project PRIME: A General Systems Approach to Evaluation of Mainstreaming.
ERIC Educational Resources Information Center
Frick, Ted
The use of the systems approach in educational inquiry is not new, and the models of input/output, input/process/product, and cybernetic systems have been widely used. The general systems model is an extension of all these, adding the dimension of environmental influence on the system as well as system influence on the environment. However, if the…
Evaluating digital libraries in the health sector. Part 2: measuring impacts and outcomes.
Cullen, Rowena
2004-03-01
This is the second part of a two-part paper which explores methods that can be used to evaluate digital libraries in the health sector. Part 1 focuses on approaches to evaluation that have been proposed for mainstream digital information services. This paper investigates evaluative models developed for some innovative digital library projects, and some major national and international electronic health information projects. The value of ethnographic methods to provide qualitative data to explore outcomes, adding to quantitative approaches based on inputs and outputs, is discussed. The paper concludes that new 'post-positivist' models of evaluation are needed to cover all the dimensions of the digital library in the health sector, and some ways of doing this are outlined.
Lamp pumped Nd:YAG laser. Space-qualifiable Nd:YAG laser for optical communications
NASA Technical Reports Server (NTRS)
Ward, K. B.
1973-01-01
Results are given of a program concerned with the design, fabrication, and evaluation of alkali pump lamps for eventual use in a space qualified Nd:YAG laser system. The study included evaluation of 2mm through 6mm bore devices. Primary emphasis was placed upon the optimization of the 4mm bore lamp and later on the 6mm bore lamp. As part of this effort, reference was made to the Sylvania work concerned with the theoretical modeling of the Nd:YAG laser. With the knowledge gained, a projection of laser performance was made based upon realistic lamp parameters which should easily be achieved during following developmental efforts. Measurements were made on the lamp performance both in and out of the cavity configuration. One significant observation was that for a constant vapor pressure device, the spectral and fluorescent output did not vary for vacuum or argon environment. Therefore, the laser can be operated in an inert environment (e.g., argon) with no degradation in output. Laser output of 3.26 watts at 430 watts input was obtained for an optimized 4mm bore lamp.
Time does not cause forgetting in short-term serial recall.
Lewandowsky, Stephan; Duncan, Matthew; Brown, Gordon D A
2004-10-01
Time-based theories expect memory performance to decline as the delay between study and recall of an item increases. The assumption of time-based forgetting, central to many models of serial recall, underpins their key behaviors. Here we compare the predictions of time-based and event-based models by simulation and test them in two experiments using a novel manipulation of the delay between study and retrieval. Participants were trained, via corrective feedback, to recall at different speeds, thus varying total recall time from 6 to 10 sec. In the first experiment, participants used the keyboard to enter their responses but had to repeat a word (called the suppressor) aloud during recall to prevent rehearsal. In the second experiment, articulation was again required, but recall was verbal and was paced by the number of repetitions of the suppressor in between retrieval of items. In both experiments, serial position curves for all retrieval speeds overlapped, and output time had little or no effect. Comparative evaluation of a time-based and an event-based model confirmed that these results present a particular challenge to time-based approaches. We conclude that output interference, rather than output time, is critical in serial recall.
Advanced chemical oxygen iodine lasers for novel beam generation
NASA Astrophysics Data System (ADS)
Wu, Kenan; Zhao, Tianliang; Huai, Ying; Jin, Yuqi
2018-03-01
Chemical oxygen iodine laser, or COIL, is an impressive type of chemical laser that emits a high power beam with good atmospheric transmissivity. Chemical oxygen iodine lasers with continuous-wave plane-wave output are well developed and have been widely adopted in directed energy systems over the past several decades. Approaches to generating novel output beams based on chemical oxygen iodine lasers are explored in the current study. Since sophisticated physical processes including supersonic flowing of gaseous active media, chemical reacting of various species, optical power amplification, as well as thermal deformation and vibration of mirrors take place in the operation of COIL, a multi-disciplinary model is developed for tracing the interacting mechanisms and evaluating the performance of the proposed laser architectures. Pulsed output mode with repetition rate as high as hundreds of kHz, pulsed output mode with low repetition rate and high pulse energy, as well as novel beams with vector or vortex features can be obtained. The results suggest potential approaches for expanding the applicability of chemical oxygen iodine lasers.
Economic Impacts of Wind Turbine Development in U.S. Counties
DOE Office of Scientific and Technical Information (OSTI.GOV)
J., Brown; B., Hoen; E., Lantz
2011-07-25
The objective is to address the research question using post-project construction, county-level data, and econometric evaluation methods. Wind energy is expanding rapidly in the United States: Over the last 4 years, wind power has contributed approximately 35 percent of all new electric power capacity. Wind power plants are often developed in rural areas where local economic development impacts from the installation are projected, including land lease and property tax payments and employment growth during plant construction and operation. Wind energy represented 2.3 percent of the U.S. electricity supply in 2010, but studies show that penetrations of at least 20 percent are feasible. Several studies have used input-output models to predict direct, indirect, and induced economic development impacts. These analyses have often been completed prior to project construction. Available studies have not yet investigated the economic development impacts of wind development at the county level using post-construction econometric evaluation methods. Analysis of county-level impacts is limited. However, previous county-level analyses have estimated operation-period employment at 0.2 to 0.6 jobs per megawatt (MW) of power installed and earnings at $9,000/MW to $50,000/MW. We find statistically significant evidence of positive impacts of wind development on county-level per capita income from the OLS and spatial lag models when they are applied to the full set of wind and non-wind counties. The total impact on annual per capita income of wind turbine development (measured in MW per capita) in the spatial lag model was $21,604 per MW. This estimate is within the range of values estimated in the literature using input-output models. OLS results for the wind-only counties and matched samples are similar in magnitude, but are not statistically significant at the 10-percent level.
We find a statistically significant impact of wind development on employment in the OLS analysis for wind counties only, but not in the other models. Our estimates of employment impacts are not precise enough to assess the validity of employment impacts from input-output models applied in advance of wind energy project construction. The analysis provides empirical evidence of positive income effects at the county level from cumulative wind turbine development, consistent with the range of impacts estimated using input-output models. Employment impacts are less clear.
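The county-level OLS estimate can be illustrated on synthetic data calibrated to the reported $21,604/MW-per-capita coefficient (the data-generating process below is invented for illustration and is not the study's dataset):

```python
import numpy as np

rng = np.random.default_rng(7)

# synthetic county data calibrated to the reported coefficient:
# per capita income responds to cumulative wind capacity per capita
n = 400
mw_per_capita = rng.uniform(0.0, 0.01, n)
income = 30000.0 + 21604.0 * mw_per_capita + rng.normal(0.0, 10.0, n)

# OLS: regress per capita income on a constant and MW per capita
X = np.column_stack([np.ones(n), mw_per_capita])
beta, *_ = np.linalg.lstsq(X, income, rcond=None)
```

A spatial lag model would add a spatially weighted average of neighboring counties' incomes as a regressor, which plain least squares cannot estimate consistently; that step is omitted here.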
Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?
NASA Technical Reports Server (NTRS)
Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander
2016-01-01
Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
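The decomposition of the averaged criterion into a squared bias term plus a model variance term can be checked numerically on a toy ensemble of model variants (all values hypothetical; the ANOVA-based estimation of separate contributions is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical hindcast: observed yields and an ensemble of model variants
# sampled from the distributions of structure, parameters and inputs
y_obs = np.array([5.2, 6.1, 4.8, 7.0, 5.5])
ensemble = y_obs + rng.normal(0.3, 0.5, size=(200, 5))   # biased, noisy variants

# MSEP for one model with fixed structure, parameters and inputs
msep_fixed = np.mean((y_obs - ensemble[0]) ** 2)

# MSEP averaged over the uncertainty distributions:
# squared bias term plus model variance term
sq_bias = np.mean((y_obs - ensemble.mean(axis=0)) ** 2)
model_var = np.mean(ensemble.var(axis=0))
msep_uncertain = sq_bias + model_var
```

The identity mean((y - prediction)^2) = squared bias + variance holds exactly when the variance uses the ensemble size as divisor, which is what the test below verifies.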
Counting Jobs and Economic Impacts from Distributed Wind in the United States (Poster)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tegen, S.
This conference poster describes the distributed wind Jobs and Economic Development Impacts (JEDI) model. The goal of this work is to provide a model that estimates jobs and other economic effects associated with the domestic distributed wind industry. The distributed wind JEDI model is a free input-output model that estimates employment and other impacts resulting from an investment in distributed wind installations. Default inputs are from installers and industry experts and are based on existing projects. User input can be minimal (use defaults) or very detailed for more precise results. JEDI can help evaluate potential scenarios, current or future; inform stakeholders and decision-makers; assist businesses in evaluating economic development impacts and estimating jobs; and assist government organizations with planning, evaluating, and developing communities.
The influence of habitat fragmentation on multiple plant-animal interactions and plant reproduction.
Brudvig, Lars A; Damschen, Ellen I; Haddad, Nick M; Levey, Douglas J; Tewksbury, Joshua J
2015-10-01
Despite broad recognition that habitat loss represents the greatest threat to the world's biodiversity, a mechanistic understanding of how habitat loss and associated fragmentation affect ecological systems has proven remarkably challenging. The challenge stems from the multiple interdependent ways that landscapes change following fragmentation and the ensuing complex impacts on populations and communities of interacting species. We confronted these challenges by evaluating how fragmentation affects individual plants through interactions with animals, across five herbaceous species native to longleaf pine savannas. We created a replicated landscape experiment that provides controlled tests of three major fragmentation effects (patch isolation, patch shape [i.e., edge-to-area ratio], and distance to edge), established experimental founder populations of the five species to control for spatial distributions and densities of individual plants, and employed structural equation modeling to evaluate the effects of fragmentation on plant reproductive output and the degree to which these impacts are mediated through altered herbivory, pollination, or pre-dispersal seed predation. Across species, the most consistent response to fragmentation was a reduction in herbivory. Herbivory, however, had little impact on plant reproductive output, and thus we found little evidence for any resulting benefit to plants in fragments. In contrast, fragmentation rarely impacted pollination or pre-dispersal seed predation, but both of these interactions had strong and consistent impacts on plant reproductive output. As a result, our models robustly predicted plant reproductive output (r2 = 0.52-0.70), yet due to the weak effects of fragmentation on pollination and pre-dispersal seed predation, coupled with the weak effect of herbivory on plant reproduction, the effects of fragmentation on reproductive output were generally small in magnitude and inconsistent.
This work provides mechanistic insight into landscape-scale variation in plant reproductive success, the relative importance of plant-animal interactions for structuring these dynamics, and the nuanced nature of how habitat fragmentation can affect populations and communities of interacting species.
Library Programs. Evaluating Federally Funded Public Library Programs.
ERIC Educational Resources Information Center
Office of Educational Research and Improvement (ED), Washington, DC.
Following an introduction by Betty J. Turock, nine reports examine key issues in library evaluation: (1) "Output Measures and the Evaluation Process" (Nancy A. Van House) describes measurement as a concept to be understood in the larger context of planning and evaluation; (2) "Adapting Output Measures to Program Evaluation"…
Librarian's Handbook for Costing Network Services.
ERIC Educational Resources Information Center
Western Interstate Library Coordinating Organization, Boulder, CO.
This handbook, using an input-process-output model, provides library managers with a method to evaluate network service offerings and decide which would be cost-beneficial to procure. The scope of services addressed encompasses those network services based on an automated bibliographic data base intended primarily for cataloging support. Sections…
Direct solar-pumped iodine laser amplifier
NASA Technical Reports Server (NTRS)
Han, Kwang S.; Kim, K. H.; Stock, L. V.
1987-01-01
The improvement of the collection system for the Tarmarack Solar Simulator beam was attempted. The basic study of evaluating the solid state laser materials for the solar pumping and also the work to construct a kinetic model algorithm for the flashlamp pumped iodine lasers were carried out. It was observed that the collector cone worked better than the lens assembly in order to collect the solar simulator beam and to focus it down to a strong power density. The study on the various laser materials and their lasing characteristics shows that the neodymium and chromium co-doped gadolinium scandium gallium garnet (Nd:Cr:GSGG) may be a strong candidate for the high power solar pumped solid state laser crystal. On the other hand the improved kinetic modeling for the flashlamp pumped iodine laser provides a good agreement between the theoretical model and the experimental data on the laser power output, and predicts the output parameters of a solar pumped iodine laser.
A computer program to trace seismic ray distribution in complex two-dimensional geological models
Yacoub, Nazieh K.; Scott, James H.
1970-01-01
A computer program has been developed to trace seismic rays and their amplitudes and energies through complex two-dimensional geological models, for which boundaries between elastic units are defined by a series of digitized X-, Y-coordinate values. Input data for the program includes problem identification, control parameters, model coordinates and elastic parameter for the elastic units. The program evaluates the partitioning of ray amplitude and energy at elastic boundaries, computes the total travel time, total travel distance and other parameters for rays arising at the earth's surface. Instructions are given for punching program control cards and data cards, and for arranging input card decks. An example of printer output for a simple problem is presented. The program is written in FORTRAN IV language. The listing of the program is shown in the Appendix, with an example output from a CDC-6600 computer.
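The core geometric step of such a ray tracer, refraction at an elastic boundary, follows Snell's law. A minimal sketch (the velocity values in the test are hypothetical; amplitude and energy partitioning at the boundary are not reproduced here):

```python
import math

def refracted_angle(theta1_deg, v1, v2):
    """Snell's law at an elastic boundary: sin(t1)/v1 = sin(t2)/v2.
    Returns the refracted angle in degrees, or None beyond the critical
    angle, where no transmitted ray exists."""
    s = math.sin(math.radians(theta1_deg)) * v2 / v1
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))
```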
CMUTs with High-K Atomic Layer Deposition Dielectric Material Insulation Layer
Xu, Toby; Tekes, Coskun; Degertekin, F. Levent
2014-01-01
Use of high-κ dielectric, atomic layer deposition (ALD) materials as an insulation layer material for capacitive micromachined ultrasonic transducers (CMUTs) is investigated. The effect of insulation layer material and thickness on CMUT performance is evaluated using a simple parallel plate model. The model shows that both high dielectric constant and the electrical breakdown strength are important for the dielectric material, and significant performance improvement can be achieved, especially as the vacuum gap thickness is reduced. In particular, ALD hafnium oxide (HfO2) is evaluated and used as an improvement over plasma-enhanced chemical vapor deposition (PECVD) silicon nitride (SixNy) for CMUTs fabricated by a low-temperature, complementary metal oxide semiconductor transistor-compatible, sacrificial release method. Relevant properties of ALD HfO2 such as dielectric constant and breakdown strength are characterized to further guide CMUT design. Experiments are performed on parallel fabricated test CMUTs with 50-nm gap and 16.5-MHz center frequency to measure and compare pressure output and receive sensitivity for 200-nm PECVD SixNy and 100-nm HfO2 insulation layers. Results for this particular design show a 6-dB improvement in receiver output with the collapse voltage reduced by one-half; while in transmit mode, half the input voltage is needed to achieve the same maximum output pressure. PMID:25474786
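The parallel-plate reasoning in the abstract can be sketched directly: the insulator contributes its thickness divided by its relative permittivity to the electrical gap, and the collapse voltage of the lumped model scales as the effective gap to the 3/2 power for fixed spring constant and plate area. The permittivity values below are assumed for illustration, not taken from the paper:

```python
def effective_gap(g_vac, t_ins, eps_r):
    """Electrical gap of a CMUT parallel-plate model: the insulator of
    thickness t_ins and relative permittivity eps_r adds t_ins/eps_r
    to the vacuum gap."""
    return g_vac + t_ins / eps_r

def collapse_voltage_ratio(g_eff_a, g_eff_b):
    """For fixed spring constant and plate area, the lumped-model collapse
    voltage scales as g_eff**1.5, so the ratio of gaps gives the ratio
    of collapse voltages."""
    return (g_eff_a / g_eff_b) ** 1.5

g_vac = 50e-9
# assumed relative permittivities: PECVD SixNy ~ 7, ALD HfO2 ~ 17
g_sin = effective_gap(g_vac, 200e-9, 7.0)
g_hfo = effective_gap(g_vac, 100e-9, 17.0)
ratio = collapse_voltage_ratio(g_hfo, g_sin)   # HfO2 relative to SixNy
```

With these assumed values the thinner, higher-permittivity HfO2 layer shrinks the effective gap and cuts the collapse voltage to roughly 60 percent, qualitatively consistent with the measured halving reported in the abstract.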
Community models for wildlife impact assessment: a review of concepts and approaches
Schroeder, Richard L.
1987-01-01
The first two sections of this paper are concerned with defining and bounding communities, and describing those attributes of the community that are quantifiable and suitable for wildlife impact assessment purposes. Prior to the development or use of a community model, it is important to have a clear understanding of the concept of a community and a knowledge of the types of community attributes that can serve as outputs for the development of models. Clearly defined, unambiguous model outputs are essential for three reasons: (1) to ensure that the measured community attributes relate to the wildlife resource objectives of the study; (2) to allow testing of the outputs in experimental studies, to determine accuracy, and to allow for improvements based on such testing; and (3) to enable others to clearly understand the community attribute that has been measured. The third section of this paper describes input variables that may be used to predict various community attributes. These input variables do not include direct measures of wildlife populations. Most impact assessments involve projects that result in drastic changes in habitat, such as changes in land use, vegetation, or available area. Therefore, the model input variables described in this section deal primarily with habitat related features. Several existing community models are described in the fourth section of this paper. A general description of each model is provided, including the nature of the input variables and the model output. The logic and assumptions of each model are discussed, along with data requirements needed to use the model. The fifth section provides guidance on the selection and development of community models. Identification of the community attribute that is of concern will determine the type of model most suitable for a particular application.
This section provides guidelines on selecting an existing model, as well as a discussion of the major steps to be followed in modifying an existing model or developing a new model. Considerations associated with the use of community models with the Habitat Evaluation Procedures are also discussed. The final section of the paper summarizes major findings of interest to field biologists and provides recommendations concerning the implementation of selected concepts in wildlife community analyses.
An Open-Source Toolbox for Surrogate Modeling of Joint Contact Mechanics
Eskinazi, Ilan
2016-01-01
Goal: Incorporation of elastic joint contact models into simulations of human movement could facilitate studying the interactions between muscles, ligaments, and bones. Unfortunately, elastic joint contact models are often too expensive computationally to be used within iterative simulation frameworks. This limitation can be overcome by using fast and accurate surrogate contact models that fit or interpolate input-output data sampled from existing elastic contact models. However, construction of surrogate contact models remains an arduous task. The aim of this paper is to introduce an open-source program called Surrogate Contact Modeling Toolbox (SCMT) that facilitates surrogate contact model creation, evaluation, and use. Methods: SCMT interacts with the third party software FEBio to perform elastic contact analyses of finite element models and uses Matlab to train neural networks that fit the input-output contact data. SCMT features sample point generation for multiple domains, automated sampling, sample point filtering, and surrogate model training and testing. Results: An overview of the software is presented along with two example applications. The first example demonstrates creation of surrogate contact models of artificial tibiofemoral and patellofemoral joints and evaluates their computational speed and accuracy, while the second demonstrates the use of surrogate contact models in a forward dynamic simulation of an open-chain leg extension-flexion motion. Conclusion: SCMT facilitates the creation of computationally fast and accurate surrogate contact models. Additionally, it serves as a bridge between FEBio and OpenSim musculoskeletal modeling software. Significance: Researchers may now create and deploy surrogate models of elastic joint contact with minimal effort. PMID:26186761
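SCMT trains neural networks on FEBio samples; as a stand-in, here is a sketch of the same sample-fit-evaluate workflow using a quadratic least-squares surrogate for a hypothetical contact model (this is not the SCMT API, and the contact function is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# stand-in for an expensive elastic contact model: force from two inputs
# (joint angle in rad, penetration depth in mm; entirely hypothetical)
def elastic_contact(angle, depth):
    return 50.0 * depth ** 1.5 * (1.0 + 0.3 * np.cos(angle))

# 1) sample input-output data from the expensive model
angle = rng.uniform(0.0, 1.5, 300)
depth = rng.uniform(0.0, 2.0, 300)
force = elastic_contact(angle, depth)

# 2) fit a cheap surrogate: quadratic polynomial in the inputs
def features(a, d):
    return np.column_stack([np.ones_like(a), a, d, a * d, a ** 2, d ** 2])

coef, *_ = np.linalg.lstsq(features(angle, depth), force, rcond=None)

# 3) the surrogate replaces the expensive model inside iterative simulations
def surrogate(a, d):
    return features(a, d) @ coef
```

The payoff is the same as in SCMT: each surrogate evaluation is a single small matrix product, cheap enough to sit inside an optimizer or forward dynamic simulation loop.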
A mixed-unit input-output model for environmental life-cycle assessment and material flow analysis.
Hawkins, Troy; Hendrickson, Chris; Higgins, Cortney; Matthews, H Scott; Suh, Sangwon
2007-02-01
Materials flow analysis models have traditionally been used to track the production, use, and consumption of materials. Economic input-output modeling has been used for environmental systems analysis, with a primary benefit being the capability to estimate direct and indirect economic and environmental impacts across the entire supply chain of production in an economy. We combine these two types of models to create a mixed-unit input-output model that is able to better track economic transactions and material flows throughout the economy associated with changes in production. A 13 by 13 economic input-output direct requirements matrix developed by the U.S. Bureau of Economic Analysis is augmented with material flow data derived from those published by the U.S. Geological Survey in the formulation of illustrative mixed-unit input-output models for lead and cadmium. The resulting model provides the capabilities of both material flow and input-output models, with detailed material tracking through entire supply chains in response to any monetary or material demand. Examples of these models are provided along with a discussion of uncertainty and extensions to these models.
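The core input-output computation, total (direct plus indirect) output from the Leontief inverse, can be sketched with a hypothetical 3-sector mixed-unit matrix (not the 13-by-13 BEA matrix used in the paper, and the coefficients are invented):

```python
import numpy as np

# hypothetical mixed-unit direct-requirements matrix A:
# sectors = economy ($), lead (kg), cadmium (kg); A[i, j] is the amount of
# sector i's output (in i's units) required per unit of sector j's output
A = np.array([
    [0.30, 0.002, 0.001],
    [5.00, 0.100, 0.000],
    [0.50, 0.000, 0.050],
])

# final demand: $100 of economy-wide output, no direct material demand
d = np.array([100.0, 0.0, 0.0])

# total output x solves x = A x + d, i.e. x = (I - A)^(-1) d
x = np.linalg.solve(np.eye(3) - A, d)
```

Because the units are mixed, `x[0]` comes out in dollars while `x[1]` and `x[2]` come out in kilograms of lead and cadmium mobilized throughout the supply chain by that monetary demand.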
Dajani, Hilmi R; Hosokawa, Kazuya; Ando, Shin-Ichi
2016-11-01
Lung-to-finger circulation time of oxygenated blood during nocturnal periodic breathing in heart failure patients measured using polysomnography correlates negatively with cardiac function but possesses limited accuracy for cardiac output (CO) estimation. CO was recalculated from lung-to-finger circulation time using a multivariable linear model with information on age and average overnight heart rate in 25 patients who underwent evaluation of heart failure. The multivariable model decreased the percentage error to 22.3% relative to invasive CO measured during cardiac catheterization. This improved automated noninvasive CO estimation using multiple variables meets a recently proposed performance criterion for clinical acceptability of noninvasive CO estimation, and compares very favorably with other available methods. Copyright © 2016 Elsevier Inc. All rights reserved.
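The clinical-acceptability metric referenced here is commonly computed as a Critchley-style percentage error; a sketch with hypothetical paired measurements (the 22.3% figure in the abstract comes from the study's own data, not from this example):

```python
import numpy as np

def percentage_error(co_ref, co_est):
    """Critchley-style percentage error: 1.96 * SD of the estimation error
    as a percentage of the mean reference cardiac output (L/min)."""
    diff = np.asarray(co_est) - np.asarray(co_ref)
    return 100.0 * 1.96 * np.std(diff, ddof=1) / np.mean(co_ref)

# hypothetical paired measurements: invasive reference vs. estimate
co_ref = np.array([4.0, 5.0, 6.0, 5.0, 4.5])
co_est = co_ref + np.array([0.2, -0.1, 0.3, 0.0, -0.2])
pe = percentage_error(co_ref, co_est)
```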
ARM Cloud Radar Simulator Package for Global Climate Models Value-Added Product
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yuying; Xie, Shaocheng
It has been challenging to directly compare U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility ground-based cloud radar measurements with climate model output because of limitations or features of the observing processes and the spatial gap between model grid cells and the single-point measurements. To facilitate the use of ARM radar data in numerical models, an ARM cloud radar simulator was developed to convert model data into pseudo-ARM cloud radar observations that mimic the instrument view of a narrow atmospheric column (as compared to a large global climate model [GCM] grid cell), thus allowing meaningful comparison between model output and ARM cloud observations. The ARM cloud radar simulator value-added product (VAP) was developed based on the CloudSat simulator contained in the community satellite simulator package, the Cloud Feedback Model Intercomparison Project (CFMIP) Observation Simulator Package (COSP) (Bodas-Salcedo et al., 2011), which has been widely used in climate model evaluation with satellite data (Klein et al., 2013; Zhang et al., 2010). The essential part of the CloudSat simulator is the QuickBeam radar simulator, which is used to produce CloudSat-like radar reflectivity but is capable of simulating reflectivity for other radars (Marchand et al., 2009; Haynes et al., 2007). Adapting QuickBeam to the ARM cloud radar simulator within COSP required two primary changes: one was to set the frequency to 35 GHz for the ARM Ka-band cloud radar, as opposed to the 94 GHz used for the CloudSat W-band radar, and the second was to invert the view from the ground to space so as to attenuate the beam correctly. In addition, the ARM cloud radar simulator uses a finer vertical resolution (100 m compared to 500 m for CloudSat) to resolve the more detailed structure of clouds captured by the ARM radars.
The ARM simulator has been developed following the COSP workflow (Figure 1), using the capabilities available in COSP wherever possible. The ARM simulator is written in Fortran 90, as is COSP, and is incorporated into COSP to facilitate use by the climate modeling community. In order to evaluate simulator output, the observational counterpart of the simulator output, radar reflectivity-height histograms (CFADs), is also generated from the ARM observations. This report includes an overview of the ARM cloud radar simulator VAP and the required simulator-oriented ARM radar data product (radarCFAD) for validating simulator output, as well as a user guide for operating the ARM radar simulator VAP.
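A minimal sketch of the evaluation diagnostic mentioned above: a CFAD, i.e. the frequency distribution of radar reflectivity binned by height, built from a time-height reflectivity field. The reflectivity values here are synthetic; only the 100-m vertical binning follows the VAP description.

```python
import numpy as np

# Synthetic time-height reflectivity field standing in for ARM radar data.
rng = np.random.default_rng(1)
ntime, nlev = 500, 40
heights = np.arange(nlev) * 0.1 + 0.05          # km, 100-m bins as in the VAP
refl = rng.normal(-20, 10, size=(ntime, nlev))  # dBZ, synthetic

dbz_edges = np.arange(-50, 25, 5)               # 5-dBZ bins, -50 to 20 dBZ
cfad = np.empty((nlev, len(dbz_edges) - 1))
for k in range(nlev):
    counts, _ = np.histogram(refl[:, k], bins=dbz_edges)
    cfad[k] = counts / ntime                    # frequency of each dBZ bin at height k

print(cfad.shape)
```

Computing the same histogram from simulator output and from observations puts both on a common statistical footing, which is the point of the CFAD comparison.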
Description and modelling of the solar-hydrogen-biogas-fuel cell system in GlashusEtt
NASA Astrophysics Data System (ADS)
Hedström, L.; Wallmark, C.; Alvfors, P.; Rissanen, M.; Stridh, B.; Ekman, J.
The need to reduce pollutant emissions and utilise the world's available energy resources more efficiently has led to increased attention towards fuel cells, among other alternative energy solutions. In order to further understand and evaluate the prerequisites for sustainable and energy-saving systems, ABB and Fortum have equipped an environmental information centre, located in Hammarby Sjöstad, Stockholm, Sweden, with an alternative energy system. The system is being used to demonstrate and evaluate how a system based on fuel cells and solar cells can function as a complement to existing electricity and heat production. The stationary energy system is situated on the top level of a three-floor glass building and is open to the public. The alternative energy system consists of a fuel cell system, a photovoltaic (PV) cell array, an electrolyser, hydrogen storage tanks, a biogas burner, dc/ac inverters, heat exchangers and an accumulator tank. The fuel cell system includes a reformer and a polymer electrolyte fuel cell (PEFC) with a maximum rated electrical output of 4 kW_el and a maximum thermal output of 6.5 kW_th. The fuel cell stack can be operated with reformed biogas, or directly using hydrogen produced by the electrolyser. The cell stack in the electrolyser consists of proton exchange membrane (PEM) cells. To evaluate different automatic control strategies for the system, a simplified dynamic model has been developed in MATLAB Simulink. The model is based on measurement data taken from the actual system. The evaluation is based on demand curves, investment costs, electricity prices and irradiation. Evaluation criteria included in the model are electrical and total efficiencies as well as economic parameters.
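The electrical and total efficiencies used as evaluation criteria reduce to simple power ratios. The rated outputs below are the 4 kW_el and 6.5 kW_th quoted in the abstract; the fuel input power is an assumed placeholder, since the actual fuel flow is not given.

```python
# Efficiency criteria for a combined heat and power fuel cell system.
P_el = 4.0      # kW electrical output (rated value from the text)
P_th = 6.5      # kW thermal output (rated value from the text)
P_fuel = 13.0   # kW fuel input power, assumed for illustration only

eta_el = P_el / P_fuel               # electrical efficiency
eta_tot = (P_el + P_th) / P_fuel     # total (electricity + heat) efficiency
print(eta_el, eta_tot)
```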
NASA Technical Reports Server (NTRS)
Harman, R.; Blejer, D.
1990-01-01
The requirements and mathematical specifications for the Gamma Ray Observatory (GRO) Dynamics Simulator are presented. The complete simulator system, which consists of the profile subsystem, simulation control and input/output subsystem, truth model subsystem, onboard computer model subsystem, and postprocessor, is described. The simulator will be used to evaluate and test the attitude determination and control models to be used on board GRO under conditions that simulate the expected in-flight environment.
A framework to analyze emissions implications of ...
Future year emissions depend strongly on the evolution of the economy, technology, and current and future regulatory drivers. A scenario framework was adopted to analyze various technology development pathways and societal changes while considering existing regulations and future regulatory uncertainty, and to evaluate the resulting emissions growth patterns. The framework integrates EPA's energy systems model with an economic Input-Output (I/O) Life Cycle Assessment model. The EPAUS9r MARKAL database is assembled from a set of technologies to represent the U.S. energy system within the MARKAL bottom-up, technology-rich energy modeling framework. The general state of the economy and the consequent demands for goods and services from these sectors are taken exogenously in MARKAL. It is important to characterize exogenous inputs about the economy to appropriately represent the industrial sector outlook for each of the scenarios and case studies evaluated. An economic input-output (I/O) model of the US economy is constructed to link with MARKAL. The I/O model enables users to change input requirements (e.g. energy intensity) for different sectors or the share of consumer income expended on a given good. This gives end-users a mechanism for modeling change in the two dimensions of technological progress and consumer preferences that define the future scenarios. The framework will then be extended to include an environmental I/O framework to track life cycle emissions associated
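A sketch of the scenario mechanism described: in an I/O model, a technology scenario is expressed by editing an input requirement (here, lowering one sector's energy intensity) and recomputing sector outputs. The 3-sector matrix and demand vector are invented for illustration, not the model's actual data.

```python
import numpy as np

# Rows/columns: energy, manufacturing, services (illustrative coefficients).
A = np.array([[0.10, 0.30, 0.05],
              [0.20, 0.20, 0.10],
              [0.10, 0.10, 0.15]])
y = np.array([100.0, 500.0, 800.0])   # final demand by sector

def sector_output(A, y):
    """Total output x solving x = A x + y."""
    return np.linalg.solve(np.eye(len(y)) - A, y)

baseline = sector_output(A, y)

# Scenario: 20% lower energy intensity of manufacturing (technological progress).
A_eff = A.copy()
A_eff[0, 1] *= 0.8
scenario = sector_output(A_eff, y)
print(baseline[0], scenario[0])
```

Because the Leontief inverse is monotone in the coefficients, lowering an input requirement weakly reduces every sector's total output, with the largest drop in the energy sector itself.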
Canella, Daniela Silva; Silva, Ana Carolina Feldenheimer da; Jaime, Patrícia Constante
2013-02-01
Nutrition actions in Primary Health Care (PHC) play an important role in health promotion and in the prevention and treatment of health problems. The aim of this paper is to chart and evaluate the scientific output on nutrition in Brazilian PHC. A search and review of the literature was conducted on the PubMed and Lilacs databases, using key words related to PHC and nutrition. The studies were restricted to those conducted in Brazil with professionals or populations assisted by PHC in the Brazilian Unified Health System and published prior to March 2011. The references in the selected articles were also consulted in order to identify additional studies. Of the papers located, 68 were eligible and a further 49 were identified in the reference lists, such that a total of 117 papers were analyzed. The studies reviewed were mostly original articles, using quantitative methodology, carried out at the University of São Paulo and published from 2002 to 2011. The main theme was diagnosis, particularly the evaluation of the nutritional status of children. The output in this field is growing, although there is a need to redirect the scope of future studies towards intervention models and program evaluation.
Zajac, Zuzanna; Stith, Bradley M.; Bowling, Andrea C.; Langtimm, Catherine A.; Swain, Eric D.
2015-01-01
Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. 
Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust decisions.
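A compact sketch of the UA/GSA workflow: Monte Carlo sampling of uncertain inputs through a toy HSI model gives the output distribution (UA), and first-order sensitivities are approximated from binned conditional means (a crude stand-in for the full variance-based indices used in the study). The suitability curves and input distributions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
salinity = rng.normal(15, 4, n)    # uncertain input 1 (e.g. from hydrodynamic model)
depth = rng.normal(1.0, 0.3, n)    # uncertain input 2

def hsi(sal, dep):
    """Toy HSI: product of two Gaussian suitability curves (hypothetical)."""
    s1 = np.exp(-((sal - 12.0) / 6.0) ** 2)
    s2 = np.exp(-((dep - 0.8) / 0.5) ** 2)
    return s1 * s2

y = hsi(salinity, depth)           # uncertainty analysis: distribution of HSI

def first_order_index(x, y, nbins=50):
    """Var of conditional mean E[y|x] over Var(y): crude first-order index."""
    bins = np.quantile(x, np.linspace(0, 1, nbins + 1))
    idx = np.clip(np.digitize(x, bins[1:-1]), 0, nbins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(nbins)])
    return cond_means.var() / y.var()

S_sal = first_order_index(salinity, y)
S_dep = first_order_index(depth, y)
print(y.mean(), y.std(), S_sal, S_dep)
```

Mapping such indices over a spatial grid is what produces the spatially varying sensitivity rankings the abstract reports.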
NASA Astrophysics Data System (ADS)
Jang, W.; Engda, T. A.; Neff, J. C.; Herrick, J.
2017-12-01
Many crop models are increasingly used to evaluate crop yields at regional and global scales. However, implementation of these models across large areas using fine-scale grids is limited by computational time requirements. In order to facilitate global gridded crop modeling under various scenarios (i.e., different crops, management schedules, fertilizer, and irrigation) using the Environmental Policy Integrated Climate (EPIC) model, we developed a distributed parallel computing framework in Python. A local desktop with 14 cores (28 threads) was used to test the distributed parallel computing framework in Iringa, Tanzania, which has 406,839 grid cells. High-resolution soil data, SoilGrids (250 x 250 m), and climate data, AgMERRA (0.25 x 0.25 deg), were used as input data for the gridded EPIC model. The framework includes a master file for parallel computing, an input database, input data formatters, EPIC model execution, and output analyzers. Through the master file, the EPIC simulation is divided into jobs across the user-defined number of CPU threads. Using the EPIC input data formatters, the raw database is formatted into EPIC input data, which then moves into the EPIC simulation jobs. The 28 EPIC jobs run simultaneously, and only the result files of interest are parsed and passed to the output analyzers. We applied various scenarios with seven different slopes and twenty-four fertilizer ranges. Parallelized input generators create the different scenarios as a list for distributed parallel computing. After all simulations are completed, parallelized output analyzers are used to analyze all outputs according to the different scenarios. This saves significant computing time and resources, making it possible to conduct gridded modeling at regional to global scales with high-resolution data.
For example, serial processing for the Iringa test case would require 113 hours, while using the framework developed in this study requires only approximately 6 hours, a nearly 95% reduction in computing time.
Marken, Richard S; Horth, Brittany
2011-06-01
Experimental research in psychology is based on an open-loop causal model which assumes that sensory input causes behavioral output. This model was tested in a tracking experiment where participants were asked to control a cursor, keeping it aligned with a target by moving a mouse to compensate for disturbances of differing difficulty. Since cursor movements (inputs) are the only observable cause of mouse movements (outputs), the open-loop model predicts that there will be a correlation between input and output that increases as tracking performance improves. In fact, the correlation between sensory input and motor output is very low regardless of the quality of tracking performance; causality, in terms of the effect of input on output, does not seem to imply correlation in this situation. This surprising result can be explained by a closed-loop model which assumes that input is causing output while output is causing input.
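The closed-loop account can be reproduced in a few lines: an integral controller moves the mouse to oppose a slowly varying disturbance so the cursor stays near the target. Even though cursor position (input) drives mouse position (output) on every step, the input-output correlation comes out near zero, while output strongly mirrors the disturbance. Gains and disturbance smoothing below are illustrative choices, not the experiment's parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 5000
d = np.cumsum(rng.normal(0, 1, T))
d = np.convolve(d, np.ones(50) / 50, mode="same")   # slow disturbance

o = np.zeros(T)   # mouse position (output)
c = np.zeros(T)   # cursor position (input); target is 0
k = 0.2           # integral control gain
for t in range(1, T):
    c[t] = o[t - 1] + d[t]        # cursor = mouse + disturbance
    o[t] = o[t - 1] - k * c[t]    # mouse moves to cancel cursor error

r_io = np.corrcoef(c[1:], o[1:])[0, 1]   # input-output correlation
r_od = np.corrcoef(o[1:], d[1:])[0, 1]   # output-disturbance correlation
print(r_io, r_od)
```

The output tracks the (unseen) disturbance almost perfectly, which is why causality between input and output does not show up as correlation in this loop.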
Regional Air Quality forecAST (RAQAST) Over the U.S
NASA Astrophysics Data System (ADS)
Yoshida, Y.; Choi, Y.; Zeng, T.; Wang, Y.
2005-12-01
A regional chemistry and transport modeling system is used to provide 48-hour forecasts of the concentrations of ozone and its precursors over the United States. The meteorological forecast is conducted using the NCAR/Penn State MM5 model. The regional chemistry and transport model simulates the sources, transport, chemistry, and deposition of 24 chemical tracers. The lateral and upper boundary conditions of trace gas concentrations are specified using the monthly mean output from the global GEOS-CHEM model. The initial and boundary conditions for meteorological fields are taken from the NOAA AVN forecast. The forecast has been operational since August 2003. Model simulations are evaluated using surface, aircraft, and satellite measurements in a 'hindcast' mode. The next step is an automated forecast evaluation system.
Sequentially Executed Model Evaluation Framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-10-20
Provides a message passing framework between generic input, model and output drivers, and specifies an API for developing such drivers. Also provides batch and real-time controllers which step the model and I/O through the time domain (or other discrete domain), and sample I/O drivers. This is a library framework, and does not, itself, solve any problems or execute any modeling. The SeMe framework aids in development of models which operate on sequential information, such as time-series, where evaluation is based on prior results combined with new data for this iteration. Has applications in quality monitoring, and was developed as part of the CANARY-EDS software, where real-time water quality data is being analyzed for anomalies.
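The driver/controller pattern described above can be sketched as follows. Class and method names here are illustrative inventions, not the SeMe framework's actual API; the toy model flags a point as anomalous when it deviates from the running mean, combining prior results with new data at each step.

```python
class InputDriver:
    """Generic input driver: yields one observation per step."""
    def __init__(self, series):
        self._series = iter(series)
    def read(self):
        return next(self._series, None)

class AnomalyModel:
    """Sequential model: prior results (running mean) + new data each step."""
    def __init__(self, threshold=3.0):
        self.n, self.mean, self.threshold = 0, 0.0, threshold
    def step(self, x):
        anomalous = self.n > 0 and abs(x - self.mean) > self.threshold
        self.n += 1
        self.mean += (x - self.mean) / self.n
        return anomalous

class OutputDriver:
    """Generic output driver: records (step, flag) messages."""
    def __init__(self):
        self.flags = []
    def write(self, t, flag):
        self.flags.append((t, flag))

def batch_controller(inp, model, out):
    """Steps model and I/O through the discrete (time) domain."""
    t = 0
    while (x := inp.read()) is not None:
        out.write(t, model.step(x))
        t += 1

inp = InputDriver([1.0, 1.1, 0.9, 1.0, 9.0, 1.0])
out = OutputDriver()
batch_controller(inp, AnomalyModel(), out)
print(out.flags)
```

Keeping the model ignorant of where data comes from or goes is what lets the same model run under either a batch or a real-time controller.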
Shi, X. [Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A.; Thornton, P. E. [Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A.; Ricciuto, D. M. [Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A.; Hanson, P. J. [Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A.; Mao, J. [Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A.; Sebestyen, S. [Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A.; Griffiths, N. A. [Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A.; Bisht, G. [Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A.
2016-09-01
Here we provide model code, inputs, outputs and evaluation datasets for a new configuration of the Community Land Model (CLM) for SPRUCE, which includes a fully prognostic water table calculation. Our structural and process changes to CLM focus on modifications needed to represent the hydrologic cycle of bog environments with perched water tables, as well as the distinct hydrologic dynamics and vegetation communities of the raised hummock and sunken hollow microtopography characteristic of SPRUCE and other peatland bogs. The modified model was parameterized and independently evaluated against observations from an ombrotrophic raised-dome bog in northern Minnesota (S1-Bog), the site of the Spruce and Peatland Responses Under Climatic and Environmental Change (SPRUCE) experiment.
Results from differencing KINEROS model output through AGWA for Sierra Vista subwatershed. Percent change between 1973 and 1997 is presented for all KINEROS output values (and some derived from the KINEROS output by AGWA) for the stream channels.
Nine Criteria for a Measure of Scientific Output
Kreiman, Gabriel; Maunsell, John H. R.
2011-01-01
Scientific research produces new knowledge, technologies, and clinical treatments that can lead to enormous returns. Often, the path from basic research to new paradigms and direct impact on society takes time. Precise quantification of scientific output in the short term is not an easy task but is critical for evaluating scientists, laboratories, departments, and institutions. While there have been attempts to quantify scientific output, we argue that current methods are not ideal and suffer from solvable difficulties. Here we propose criteria that a metric should have to be considered a good index of scientific output. Specifically, we argue that such an index should be quantitative, based on robust data, rapidly updated and retrospective, presented with confidence intervals, normalized by number of contributors, career stage and discipline, impractical to manipulate, and focused on quality over quantity. Such an index should be validated through empirical testing. The purpose of quantitatively evaluating scientific output is not to replace careful, rigorous review by experts but rather to complement those efforts. Because it has the potential to greatly influence the efficiency of scientific research, we have a duty to reflect upon and implement novel and rigorous ways of evaluating scientific output. The criteria proposed here provide initial steps toward the systematic development and validation of a metric to evaluate scientific output. PMID:22102840
Section 3. The SPARROW Surface Water-Quality Model: Theory, Application and User Documentation
Schwarz, G.E.; Hoos, A.B.; Alexander, R.B.; Smith, R.A.
2006-01-01
SPARROW (SPAtially Referenced Regressions On Watershed attributes) is a watershed modeling technique for relating water-quality measurements made at a network of monitoring stations to attributes of the watersheds containing the stations. The core of the model consists of a nonlinear regression equation describing the non-conservative transport of contaminants from point and diffuse sources on land to rivers and through the stream and river network. The model predicts contaminant flux, concentration, and yield in streams and has been used to evaluate alternative hypotheses about the important contaminant sources and watershed properties that control transport over large spatial scales. This report provides documentation for the SPARROW modeling technique and computer software to guide users in constructing and applying basic SPARROW models. The documentation gives details of the SPARROW software, including the input data and installation requirements, and guidance in the specification, calibration, and application of basic SPARROW models, as well as descriptions of the model output and its interpretation. The documentation is intended for both researchers and water-resource managers with interest in using the results of existing models and developing and applying new SPARROW models. The documentation of the model is presented in two parts. Part 1 provides a theoretical and practical introduction to SPARROW modeling techniques, which includes a discussion of the objectives, conceptual attributes, and model infrastructure of SPARROW. Part 1 also includes background on the commonly used model specifications and the methods for estimating and evaluating parameters, evaluating model fit, and generating water-quality predictions and measures of uncertainty. Part 2 provides a user's guide to SPARROW, which includes a discussion of the software architecture and details of the model input requirements and output files, graphs, and maps. 
The text documentation and computer software are available on the Web at http://usgs.er.gov/sparrow/sparrow-mod/.
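A toy sketch of the SPARROW-style flux form described above: source inputs scaled by source coefficients, attenuated by a land-to-water delivery factor that depends on watershed attributes, and by first-order in-stream decay over travel time. All coefficients below are invented for illustration, not calibrated SPARROW values.

```python
import numpy as np

def sparrow_flux(sources, betas, land_attrs, alphas, travel_time, k):
    """Predicted contaminant flux at a station for one reach (toy form)."""
    delivery = np.exp(-land_attrs @ alphas)   # land-to-water delivery factor
    instream = np.exp(-k * travel_time)       # first-order in-stream loss
    return (sources @ betas) * delivery * instream

sources = np.array([120.0, 40.0])     # e.g. point and agricultural inputs, kg
betas = np.array([1.0, 0.6])          # source coefficients
land_attrs = np.array([0.5, 2.0])     # e.g. slope, soil permeability
alphas = np.array([0.2, 0.1])         # delivery coefficients
flux = sparrow_flux(sources, betas, land_attrs, alphas, travel_time=1.5, k=0.1)
print(flux)
```

In the actual model the coefficients are estimated by nonlinear regression against monitored loads; this sketch only shows the multiplicative structure being fitted.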
Fienen, Michael N.; Nolan, Bernard T.; Feinstein, Daniel T.
2016-01-01
For decision support, the insights and predictive power of numerical process models can be hampered by the insufficient expertise and computational resources required to evaluate system response to new stresses. An alternative is to emulate the process model with a statistical "metamodel." Built on a dataset of collocated numerical model inputs and outputs, a groundwater flow model was emulated using a Bayesian network, an artificial neural network, and a gradient boosted regression tree. The response of interest was surface water depletion, expressed as the source of water to wells. The results have application for managing the allocation of groundwater. Each technique was tuned using cross validation and further evaluated using a held-out dataset. A numerical MODFLOW-USG model of the Lake Michigan Basin, USA, was used for the evaluation. The performance and interpretability of each technique were compared, pointing to the advantages of each. The metamodel can extend to unmodeled areas.
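The metamodel workflow can be sketched end to end: build a dataset of collocated process-model inputs and outputs, fit a cheap statistical emulator, and score it on a held-out set. A polynomial least-squares surrogate stands in here for the Bayesian network, neural network, and boosted-tree emulators used in the study, and the "process model" is a cheap analytic stand-in for a MODFLOW run.

```python
import numpy as np

rng = np.random.default_rng(4)

def process_model(pumping, distance):
    """Stand-in for an expensive run: depletion response of a pumped well."""
    return np.exp(-distance / 2.0) * pumping / (1.0 + pumping)

# Collocated input-output dataset (the "training runs").
n = 400
pumping = rng.uniform(0.1, 5.0, n)
distance = rng.uniform(0.0, 10.0, n)
y = process_model(pumping, distance)

# Emulator: least-squares fit on polynomial features; last 100 rows held out.
X = np.column_stack([np.ones(n), pumping, distance, pumping * distance,
                     pumping ** 2, distance ** 2])
train, test = slice(0, 300), slice(300, None)
coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)

pred = X[test] @ coef
rmse = np.sqrt(np.mean((pred - y[test]) ** 2))
print(rmse)
```

Once fitted, the surrogate answers "what if" queries in microseconds, which is the decision-support payoff the abstract describes.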
Scenarios and performance measures for advanced ISDN satellite design and experiments
NASA Technical Reports Server (NTRS)
Pepin, Gerard R.
1991-01-01
Described here are the contemplated input and expected output for the Interim Service Integrated Services Digital Network (ISDN) Satellite (ISIS) and Full Service ISDN Satellite (FSIS) Models. The discrete event simulations of these models are presented with specific scenarios that stress ISDN satellite parameters. Performance measure criteria are presented for evaluating the advanced ISDN communication satellite designs of the NASA Satellite Communications Research (SCAR) Program.
Vadiati, M; Asghari-Moghaddam, A; Nakhaei, M; Adamowski, J; Akbarzadeh, A H
2016-12-15
Due to inherent uncertainties in measurement and analysis, groundwater quality assessment is a difficult task. Artificial intelligence techniques, specifically fuzzy inference systems, have proven useful in evaluating groundwater quality in uncertain and complex hydrogeological systems. In the present study, a Mamdani fuzzy-logic-based decision-making approach was developed to assess groundwater quality based on relevant indices. In an effort to develop a set of new hybrid fuzzy indices for groundwater quality assessment, a Mamdani fuzzy inference model was developed around widely accepted groundwater quality indices: the Groundwater Quality Index (GQI), the Water Quality Index (WQI), and the Ground Water Quality Index (GWQI). To keep the hybrid fuzzy indices generalizable, the well-known acceptability ranges of these indices were employed as the fuzzy model output ranges, rather than relying on expert knowledge for the fuzzification of output parameters. The proposed approach was evaluated for its ability to assess the drinking water quality of 49 samples collected seasonally from groundwater resources in Iran's Sarab Plain during 2013-2014. Input membership functions were defined as "desirable", "acceptable" and "unacceptable" based on expert knowledge and the standard and permissible limits prescribed by the World Health Organization. Output data were categorized into multiple categories based on the GQI (5 categories), WQI (5 categories), and GWQI (3 categories). Given the potential of fuzzy models to minimize uncertainties, the hybrid fuzzy-based indices produce significantly more accurate assessments of groundwater quality than traditional indices. The developed models' accuracy was assessed, and a comparison of the performance indices demonstrated the Fuzzy Groundwater Quality Index model to be more accurate than both the Fuzzy Water Quality Index and Fuzzy Ground Water Quality Index models.
This suggests that the new hybrid fuzzy indices developed in this research are reliable and flexible when used in groundwater quality assessment for drinking purposes. Copyright © 2016 Elsevier Ltd. All rights reserved.
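A bare-bones Mamdani sketch of the stages described: fuzzification of a crisp input into "desirable" / "acceptable" / "unacceptable" sets, min-clip inference, max aggregation, and centroid defuzzification. Membership breakpoints and the single normalized input are illustrative inventions, not the paper's calibrated functions.

```python
import numpy as np

def trap(x, a, b, c, d):
    """Trapezoidal membership; a==b or c==d gives a shoulder."""
    x = np.asarray(x, dtype=float)
    up = np.clip((x - a) / (b - a), 0, 1) if b > a else (x >= a).astype(float)
    down = np.clip((d - x) / (d - c), 0, 1) if d > c else (x <= d).astype(float)
    return np.minimum(up, down)

def assess(index_value):
    """Mamdani inference for one normalized contaminant index in [0, 1]."""
    mu = {  # fuzzification of the crisp input
        "desirable": trap(index_value, 0.0, 0.0, 0.1, 0.4),
        "acceptable": trap(index_value, 0.2, 0.45, 0.55, 0.8),
        "unacceptable": trap(index_value, 0.6, 0.9, 1.0, 1.0),
    }
    z = np.linspace(0, 100, 1001)      # output universe: quality score
    out_sets = {
        "desirable": trap(z, 60, 90, 100, 100),
        "acceptable": trap(z, 30, 45, 55, 70),
        "unacceptable": trap(z, 0, 0, 10, 40),
    }
    # Clip each consequent at its rule strength, aggregate by max
    agg = np.zeros_like(z)
    for label, strength in mu.items():
        agg = np.maximum(agg, np.minimum(strength, out_sets[label]))
    return (z * agg).sum() / agg.sum()  # centroid defuzzification

print(assess(0.1), assess(0.5), assess(0.9))
```

Using published index acceptability ranges to position the output sets, as the paper does, replaces the expert-elicitation step for the consequents.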
Cardiac surgery productivity and throughput improvements.
Lehtonen, Juha-Matti; Kujala, Jaakko; Kouri, Juhani; Hippeläinen, Mikko
2007-01-01
The high variability in cardiac surgery length is one of the main challenges for staff managing productivity. This study aims to evaluate the impact of six interventions on open-heart surgery operating theatre productivity. A discrete operating theatre event simulation model with empirical operation time input data from 2603 patients is used to evaluate the effect that these process interventions have on surgery output and overtime work. A linear regression model was used to produce operation time forecasts for surgery scheduling, while it can also be used to explain operation time. A forecasting model based on linear regression of the variables available before surgery explains 46 per cent of operating time variance. The main factors influencing operation length were the type of operation, redoing the operation and the head surgeon. Reduction of changeover time between surgeries, by inducing anaesthesia outside the operating theatre and by reducing slack time at the end of the day after a second surgery, has the strongest effect on surgery output and productivity. A more accurate operation time forecast did not have any effect on output, although an improved forecast did decrease overtime work. A reduction in the operation time itself is not studied in this article. However, the forecasting model can also be applied to discover which factors are most significant in explaining variation in the length of open-heart surgery. The challenge of scheduling two open-heart surgeries in one day can be partly resolved by increasing the length of the day, decreasing the time between two surgeries or by improving patient scheduling procedures so that two short surgeries can be paired. A linear regression model is created in the paper to increase the accuracy of operation time forecasting and to identify the factors that have the most influence on operation time.
A simulation model is used to analyse the impact of improved surgical length forecasting and five selected process interventions on productivity in cardiac surgery.
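A toy version of the simulation experiment: simulate many two-surgery days with common random operation times, then shrink the changeover time (e.g. by inducing anaesthesia outside the theatre) and compare surgery output and overtime. All duration distributions and time limits are invented, not the study's empirical data.

```python
import numpy as np

rng = np.random.default_rng(5)
n_days = 10000
d1 = rng.lognormal(mean=5.0, sigma=0.35, size=n_days)   # first surgery, min
d2 = rng.lognormal(mean=5.0, sigma=0.35, size=n_days)   # second surgery, min

def simulate(changeover, day_len=480.0, max_overrun=120.0):
    """Mean surgeries per day and mean overtime for a given changeover time."""
    finish = d1 + changeover + d2
    fits = finish <= day_len + max_overrun   # second case started only if feasible
    done = 1 + fits                          # 1 or 2 surgeries per day
    overtime = np.where(fits, np.maximum(0.0, finish - day_len), 0.0)
    return done.mean(), overtime.mean()

base = simulate(changeover=60.0)
improved = simulate(changeover=25.0)
print(base, improved)
```

Using the same duration draws for both arms (common random numbers) is the standard way to make such paired simulation comparisons low-variance.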
Hu, Eric Y; Bouteiller, Jean-Marie C; Song, Dong; Baudry, Michel; Berger, Theodore W
2015-01-01
Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model, capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model successfully tracks the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model efficiently replicates the complex nonlinear dynamics of the original mechanistic model and provides a method for representing complex and diverse synaptic transmission within neuron network simulations.
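The Volterra functional expansion idea can be sketched at second order: the response to a spike train is a first-order kernel applied to each spike plus a second-order kernel capturing interactions between spike pairs (e.g. paired-pulse facilitation). The kernels below are illustrative exponentials, not the kernels identified from the mechanistic synapse model.

```python
import numpy as np

M = 20  # memory length, in time bins

k1 = 1.0 * np.exp(-np.arange(M) / 5.0)                               # 1st-order kernel
k2 = 0.3 * np.exp(-np.add.outer(np.arange(M), np.arange(M)) / 5.0)   # 2nd-order kernel

def volterra_response(x):
    """Second-order Volterra model output for a binary spike train x."""
    y = np.zeros(len(x))
    for t in range(len(x)):
        past = x[max(0, t - M + 1): t + 1][::-1]   # past[tau] = x[t - tau]
        m = len(past)
        y[t] = k1[:m] @ past + past @ k2[:m, :m] @ past
    return y

single = np.zeros(30); single[5] = 1
paired = np.zeros(30); paired[5] = 1; paired[8] = 1

y1 = volterra_response(single)
y2 = volterra_response(paired)
print(y1[8], y2[8])
```

The cross-terms of the second-order kernel make the paired response exceed the sum of the two single-spike responses, which is exactly the kind of nonlinearity a purely linear (first-order) model cannot express.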
NASA Astrophysics Data System (ADS)
Khan, Sahubar Ali Mohd. Nadhar; Ramli, Razamin; Baten, M. D. Azizul
2015-12-01
Agricultural production processes typically produce two types of outputs: economically desirable outputs as well as environmentally undesirable outputs (such as greenhouse gas emissions, nitrate leaching, effects on humans and other organisms, and water pollution). In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in order to obtain an accurate estimate of a firm's efficiency. Additionally, climatic factors as well as data uncertainty can significantly affect the efficiency analysis. A number of approaches have been proposed in the DEA literature to account for undesirable outputs. Many researchers have pointed out that the directional distance function (DDF) approach is the best, as it allows for a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, the interval data approach has been found to be the most suitable way to account for data uncertainty, as it is much simpler to model and needs less information regarding distributions and membership functions. In this paper, an enhanced DEA model based on the DDF approach that considers undesirable outputs as well as climatic factors and interval data is proposed. This model will be used to determine the efficiency of rice farmers who produce undesirable outputs and operate under uncertainty. It is hoped that the proposed model will provide a better estimate of rice farmers' efficiency.
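A hedged sketch of the core DDF linear program (without the interval-data and climatic extensions of the paper): for one farm, find the largest beta that simultaneously expands the desirable output and contracts the undesirable output along the direction g = (y_o, b_o), while staying inside the technology spanned by all farms. The three-farm, one-input dataset is invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

x = np.array([2.0, 4.0, 3.0])   # input (e.g. land)
y = np.array([1.0, 4.0, 2.0])   # desirable output (e.g. rice)
b = np.array([1.0, 2.0, 2.0])   # undesirable output (e.g. emissions)

def ddf_score(o):
    """Max beta; decision variables are [beta, lambda_1..lambda_n]."""
    n = len(x)
    c = np.r_[-1.0, np.zeros(n)]              # linprog minimizes, so minimize -beta
    A_ub = np.vstack([np.r_[0.0, x],          # sum(lam*x) <= x_o
                      np.r_[y[o], -y]])       # sum(lam*y) >= y_o + beta*y_o
    b_ub = np.array([x[o], -y[o]])
    A_eq = np.r_[b[o], b][None, :]            # sum(lam*b) = b_o - beta*b_o
    b_eq = np.array([b[o]])
    res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds=[(0, None)] * (n + 1))
    return res.x[0]

print([round(ddf_score(o), 3) for o in range(3)])
```

A score of zero marks a farm on the frontier; a positive score is the feasible proportional improvement in both the good and the bad output at once.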
A Novel Degradation Identification Method for Wind Turbine Pitch System
NASA Astrophysics Data System (ADS)
Guo, Hui-Dong
2018-04-01
It is difficult for traditional threshold-value methods to identify the degradation of operating equipment accurately. A novel degradation evaluation method suitable for implementing a wind turbine condition-based maintenance strategy is proposed in this paper. Based on an analysis of the typical variable-speed pitch-to-feather control principle and of the monitoring parameters for the pitch system, a multi-input multi-output (MIMO) regression model was applied to the pitch system, with wind speed and generated power as input parameters, and wheel rotation speed, pitch angle and motor driving current for the three blades as output parameters. The difference between the on-line measurements and the values calculated by the MIMO regression model, fitted with the least squares support vector machine (LSSVM) method, was defined as the Observed Vector of the system. A Gaussian mixture model (GMM) was applied to fit the distribution of the multi-dimensional Observed Vectors. Using the established model, the Degradation Index was calculated from the SCADA data of a wind turbine whose pitch bearing retainer and rolling elements had been damaged, which illustrates the feasibility of the proposed method.
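The residual-based health index can be sketched as follows: model residuals ("Observed Vectors") from healthy operation define a reference density, and the degradation index of new data measures how far it falls from that density. A single multivariate Gaussian (a Mahalanobis-type index) stands in here for the paper's Gaussian mixture, and all residuals are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)

# Residuals of the three model outputs during healthy operation.
healthy = rng.normal(0.0, 1.0, size=(2000, 3))
mu = healthy.mean(axis=0)
cov = np.cov(healthy, rowvar=False)
cov_inv = np.linalg.inv(cov)

def degradation_index(v):
    """Mahalanobis-type index: larger = farther from healthy behaviour."""
    d = v - mu
    return float(d @ cov_inv @ d)

# New data: unbiased residuals vs. residuals biased by (simulated) damage.
normal_batch = rng.normal(0.0, 1.0, size=(50, 3))
faulty_batch = rng.normal(2.0, 1.0, size=(50, 3))
ni = np.mean([degradation_index(v) for v in normal_batch])
fi = np.mean([degradation_index(v) for v in faulty_batch])
print(ni, fi)
```

Replacing the single Gaussian with a fitted mixture, as the paper does, handles multi-modal healthy operating regimes (e.g. below- and above-rated wind speed) that one Gaussian cannot.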
Automated turbulence forecasts for aviation hazards
NASA Astrophysics Data System (ADS)
Sharman, R.; Frehlich, R.; Vandenberghe, F.
2010-09-01
An operational turbulence forecast system for commercial aviation use is described that is based on an ensemble of turbulence diagnostics derived from standard NWP model outputs. In the U.S. this forecast product is named GTG (Graphical Turbulence Guidance) and has been described in detail in Sharman et al., WAF 2006. Since turbulence has many sources in the atmosphere, the ensemble approach of combining diagnostics has been shown to provide greater statistical accuracy than the use of a single diagnostic or of a subgrid TKE parameterization. GTG is sponsored by the FAA and has undergone rigorous accuracy, safety, and usability evaluations. The GTG product is now hosted on NOAA's Aviation Digital Data Service (ADDS) web site (http://aviationweather.gov/) for access by pilots, air traffic controllers, and dispatchers. During this talk the various turbulence diagnostics, their statistical properties, and their relative performance (based on comparisons to observations) will be presented. Importantly, the model output is ε^1/3 (where ε is the eddy dissipation rate), so it is aircraft-independent. The diagnostics are individually and collectively calibrated so that their PDFs satisfy the expected lognormal distribution of ε^1/3. Some of the diagnostics attempt to take into account the role of gravity waves and inertia-gravity waves in the turbulence generation process. Although the current GTG product is based on the RUC forecast model running over the CONUS, it is transitioning to a WRF-based product; WRF-based versions are currently running operationally over Taiwan, and one has also been implemented for use by the French Navy in climatological studies. Yet another version has been developed that uses GFS model output to provide global turbulence forecasts. Thus the forecast product is available as a postprocessing program for WRF or other model output and provides 3D maps of turbulence likelihood for any region where NWP model data are available.
Although the current GTG has been used mainly for large commercial aircraft, since the output is aircraft independent it could readily be scaled to smaller aircraft such as UAVs. Further, the ensemble technique allows the diagnostics to be used to form probabilistic forecasts, in a manner similar to ensemble NWP forecasts.
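Calibrating a diagnostic so that its PDF matches the expected lognormal distribution of ε^1/3 can be done by empirical quantile mapping, sketched below with an arbitrary synthetic raw diagnostic and assumed lognormal parameters; the operational GTG calibration is derived from observed climatology, not from these values.

```python
import numpy as np
from scipy.stats import norm

def calibrate_to_lognormal(diagnostic, mu_log, sigma_log):
    """Remap a raw turbulence diagnostic onto a lognormal ε^(1/3) scale by
    empirical quantile mapping: rank -> empirical CDF -> lognormal quantile.
    mu_log and sigma_log are assumed climatological lognormal parameters."""
    n = diagnostic.size
    ranks = diagnostic.argsort().argsort() + 1   # 1..n
    cdf = (ranks - 0.5) / n                      # plotting positions, avoids 0 and 1
    return np.exp(mu_log + sigma_log * norm.ppf(cdf))

rng = np.random.default_rng(1)
raw = rng.gamma(2.0, 1.0, size=50_000)           # arbitrary raw diagnostic PDF
edr = calibrate_to_lognormal(raw, mu_log=-2.0, sigma_log=0.6)
```

The mapping is rank-preserving, so large raw diagnostic values still map to large calibrated ε^1/3 values; only the marginal distribution is forced to the lognormal target.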
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weimar, Mark R.; Daly, Don S.; Wood, Thomas W.
Both nuclear power and nuclear weapons programs should have (related) economic signatures which are detectable at some scale. We evaluated this premise in a series of studies using national economic input/output (IO) data. Statistical discrimination models using economic IO tables predict with high probability whether a country with an unknown predilection for nuclear weapons proliferation is in fact engaged in nuclear power development or nuclear weapons proliferation. We analyzed 93 IO tables, spanning the years 1993 to 2005, for 37 countries that are either members or associates of the Organization for Economic Cooperation and Development (OECD). The 2009 OECD input/output tables featured 48 industrial sectors based on International Standard Industrial Classification (ISIC) Revision 3, and described the respective economies in current country-of-origin valued currency. We converted and transformed these reported values to US 2005 dollars using appropriate exchange rates and implicit price deflators, and addressed discrepancies in reported industrial sectors across tables. We then classified countries with Random Forest using either the adjusted or industry-normalized values. Random Forest, a classification tree technique, separates and categorizes countries using a very small, select subset of the 2304 individual cells in the IO table. A nation's efforts in nuclear power, be it for electricity or nuclear weapons, are an enterprise with a large economic footprint -- an effort so large that it should discernibly perturb coarse country-level economic data such as that found in yearly input-output economic tables. The neoclassical economic input-output model describes a country's or region's economy in terms of the requirements of industries to produce the current level of economic output.
An IO table row shows the distribution of an industry's output to the industrial sectors, while a table column shows the input required of each industrial sector by a given industry.
NASA Astrophysics Data System (ADS)
Narita, Fumio; Fox, Marina; Mori, Kotaro; Takeuchi, Hiroki; Kobayashi, Takuya; Omote, Kenji
2017-11-01
This paper studies the energy harvesting characteristics of piezoelectric laminates consisting of barium titanate (BaTiO3) and copper (Cu) from room temperature to cryogenic/high temperatures both experimentally and numerically. First, the output voltages of the piezoelectric BaTiO3/Cu laminates were measured from room temperature to a cryogenic temperature (77 K). The output power was evaluated for various values of load resistance. The results showed that the maximum output power density is approximately 2240 nW cm-3. The output voltages of the BaTiO3/Cu laminates were also measured from room temperature to a higher temperature (333 K). To discuss the output voltages of the BaTiO3/Cu laminates due to temperature changes, phase field and finite element simulations were combined. A phase field model for grain growth was used to generate grain structures. The phase field model was then employed for BaTiO3 polycrystals, coupled with the time-dependent Ginzburg-Landau theory and the oxygen vacancies diffusion, to calculate the temperature-dependent piezoelectric coefficient and permittivity. Using these properties, the output voltages of the BaTiO3/Cu laminates from room temperature to both 77 K and 333 K were analyzed by three dimensional finite element methods, and the results are presented for several grain sizes and oxygen vacancy densities. It was found that electricity in the BaTiO3 ceramic layer is generated not only through the piezoelectric effect caused by a thermally induced bending stress but also by the temperature dependence of the BaTiO3 piezoelectric coefficient and permittivity.
NASA Astrophysics Data System (ADS)
Zema, Demetrio Antonio; Nicotra, Angelo; Tamburino, Vincenzo; Marcello Zimbone, Santo
2017-04-01
The availability of geodetic heads and considerable water flows in collective irrigation networks suggests the possibility of recovering potential energy using small hydro power plants (SHPPs) at sustainable cost. This is the case for many Water Users Associations (WUAs) in Calabria (Southern Italy), where it would theoretically be possible to recover electrical energy outside the irrigation season. However, very few Calabrian WUAs have built SHPPs in their irrigation networks, and thus the potential energy in this region is practically all lost. A previous study (Zema et al., 2016) proposed an original and simple model to site turbines, size their power output, and evaluate the profitability of SHPPs in collective irrigation networks. Applying this model at the regional scale, this paper estimates the theoretical energy production and the economic performance of SHPPs installed in the collective irrigation networks of Calabrian WUAs. In more detail, based on digital terrain models processed in a GIS and a few parameters of the water networks, for each SHPP the model provides: (i) the electrical power output; (ii) the optimal water discharge; and (iii) costs, revenues, and profits. Moreover, a map of the theoretical energy production by SHPPs in the collective irrigation networks of Calabria was drawn. The total length of the 103 water networks surveyed is 414 km and the total geodetic head is 3157 m, of which 63% is lost to hydraulic losses. Thus, a total power output of 19.4 MW could theoretically be installed, which would provide an annual energy production of 103 GWh, considering SHPPs in operation only outside the irrigation season. The individual irrigation networks have power outputs in the range 0.7 kW to 6.4 MW. However, the smallest SHPPs (that is, turbines with power output under 5 kW) were neglected, because their annual profit is very low (on average less than 6%; Zema et al., 2016).
On average, each irrigation network provides an annual revenue from electrical energy sales of about €103,000. Even though this revenue may appear quite low, it represents an important share of the annual WUA income. For the entire region, the total revenue from energy recovery in collective irrigation networks through SHPPs has been estimated at about 12 million Euros; investment and operating costs were evaluated by parametric equations, and the total profit theoretically available to each WUA was quantified. This study has shown the economic opportunity of integrating SHPPs into the existing collective irrigation networks of WUAs, with a view to providing additional economic resources for users and enhancing the exploitation of a renewable energy source. REFERENCE: Zema D.A., Nicotra A., Tamburino V., Zimbone S.M. 2016. A simple method to evaluate the technical and economic feasibility of micro hydro power plants in existing irrigation systems. Renewable Energy 85, 498-506. DOI: 10.1016/j.renene.2015.06.066.
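The power and energy figures above follow from the standard hydropower relation P = η ρ g Q H. A minimal sketch, with an assumed overall efficiency and assumed off-season operating hours (neither value is taken from the study):

```python
RHO, G = 1000.0, 9.81          # water density (kg/m^3), gravity (m/s^2)

def shpp_power_kw(q_m3s, net_head_m, efficiency=0.85):
    """Electrical power (kW) of a small hydro plant: P = eta * rho * g * Q * H.
    The 0.85 overall efficiency is an illustrative assumption."""
    return efficiency * RHO * G * q_m3s * net_head_m / 1000.0

def annual_energy_mwh(power_kw, hours=5000):
    """Annual energy (MWh) assuming operation only outside the irrigation
    season; the operating-hours figure is an assumption."""
    return power_kw * hours / 1000.0

p_kw = shpp_power_kw(1.0, 100.0)       # 1 m^3/s over a 100 m net head
e_mwh = annual_energy_mwh(p_kw)
```

With these assumptions, a 1 m³/s discharge over a 100 m net head yields roughly 0.8 MW, which is consistent with the order of magnitude of the per-network outputs reported above.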
Plueschke, Kelly; McGettigan, Patricia; Pacurariu, Alexandra; Kurz, Xavier; Cave, Alison
2018-06-14
A review of European Union (EU)-funded initiatives linked to 'Real World Evidence' (RWE) was performed to determine whether their outputs could be used for the generation of real-world data able to support the European Medicines Agency (EMA)'s regulatory decision-making on medicines. The initiatives were identified from publicly available websites. Their topics were categorised into five areas: 'Data source', 'Methodology', 'Governance model', 'Analytical model' and 'Infrastructure'. To assess their immediate relevance for medicines evaluation, their therapeutic areas were compared with the products recommended for EU approval in 2016 and those included in the EMA pharmaceutical business pipeline. Of 171 originally identified EU-funded initiatives, 65 were selected based on their primary and secondary objectives (35 'Data source' initiatives, 15 'Methodology', 10 'Governance model', 17 'Analytical model' and 25 'Infrastructure'). These 65 initiatives received over 734 million Euros of public funding. At the time of evaluation, the published outputs of the 40 completed initiatives did not always match their original objectives. Overall, public information was limited, data access was not explicit and their sustainability was unclear. The topics matched 8 of 14 therapeutic areas of the products recommended for approval in 2016 and 8 of 15 therapeutic areas in the 2017-2019 pharmaceutical business pipeline. Haematology, gastroenterology or cardiovascular systems were poorly represented. This landscape of EU-funded initiatives linked to RWE which started before 31 December 2016 highlighted that the immediate utilisation of their outputs to support regulatory decision-making is limited, often due to insufficient available information and to discrepancies between outputs and objectives. Furthermore, the restricted sustainability of the initiatives impacts on their downstream utility. 
Multiple projects focussing on the same therapeutic areas increase the likelihood of duplication of both efforts and resources. These issues contribute to gaps in generating RWE for medicines and diminish returns on the public funds invested.
NASA Astrophysics Data System (ADS)
Grosvenor, D. P.; Wood, R.
2012-12-01
As part of one of the Climate Process Teams (CPTs) we have been testing the implementation of a new cloud parameterization in the CAM5 and AM3 GCMs. The CLUBB parameterization replaces all but the deep convection cloud scheme and uses an innovative PDF-based approach to diagnose cloud water content and turbulence. We have evaluated the base models and the CLUBB parameterization in the SE Pacific stratocumulus region using a suite of satellite observation metrics including: Liquid Water Path (LWP) measurements from AMSRE; cloud fractions from CloudSat/CALIPSO; droplet concentrations (Nd) and Cloud Top Temperatures from MODIS; CloudSat precipitation; and relationships between Estimated Inversion Strength (calculated from AMSRE SSTs, Cloud Top Temperatures from MODIS, and ECMWF re-analysis fields) and cloud fraction. This region has the advantage of an abundance of in-situ aircraft observations taken during the VOCALS campaign, which facilitates diagnosis of the model problems highlighted by the evaluation. These data have also recently been used to demonstrate the reliability of MODIS Nd estimates. The satellite data need to be filtered to ensure accurate retrievals, and we have been careful to apply the same screenings to the model fields. For example, scenes with high cloud fractions and with output times near the satellite overpass times can be extracted from the model for a fair comparison with MODIS Nd estimates. To facilitate this we have been supplied with instantaneous model output, since screening would not be possible based on time-averaged data. We also have COSP satellite simulator output, which allows a fairer comparison between satellite and model. For example, COSP cloud fraction is based upon the detection threshold of the satellite instrument in question. These COSP fields are also used for the model output filtering just described.
The results have revealed problems with both the base models and the versions with the CLUBB parameterization. The CAM5 model produces realistic near-coast cloud cover, but too little further west in the stratocumulus to cumulus regions. The implementation of CLUBB has vastly improved this situation with cloud cover that is very similar to that observed. CLUBB also improves the Nd field in CAM5 by producing realistic near-coast increases and by removing high Nd values associated with the detrainment of droplets by cumulus clouds. AM3 has a lack of stratocumulus cloud near the South American coast and has much lower droplet concentrations than observed. VOCALS measurements showed that sulfate mass loadings were generally too high in both base models, whereas CCN concentrations were too low. This suggests a problem with the mass distribution partitioning of sulfate that is being investigated. Diurnal and seasonal comparisons have been very illuminating. CLUBB produces very little diurnal variation in LWP, but large variations in precipitation rates. This is likely to point to problems that are now being addressed by the modeling part of the CPT team, creating an iterative workflow process between the model developers and the model testers, which should facilitate efficient parameterization improvement. We will report on the latest developments of this process.
Algorithms for output feedback, multiple-model, and decentralized control problems
NASA Technical Reports Server (NTRS)
Halyo, N.; Broussard, J. R.
1984-01-01
The optimal stochastic output feedback, multiple-model, and decentralized control problems with dynamic compensation are formulated and discussed. Algorithms for each problem are presented, and their relationship to a basic output feedback algorithm is discussed. An aircraft control design problem is posed as a combined decentralized, multiple-model, output feedback problem. A control design is obtained using the combined algorithm. An analysis of the design is presented.
NASA Technical Reports Server (NTRS)
Elshorbany, Yasin F.; Duncan, Bryan N.; Strode, Sarah A.; Wang, James S.; Kouatchou, Jules
2015-01-01
We present the Efficient CH4-CO-OH Module (ECCOH) that allows for the simulation of the methane, carbon monoxide, and hydroxyl radical (CH4-CO-OH) cycle within a chemistry climate model, carbon cycle model, or Earth system model. The computational efficiency of the module allows many multi-decadal sensitivity simulations of the CH4-CO-OH cycle, which primarily determines the global tropospheric oxidizing capacity. This capability is important for capturing the nonlinear feedbacks of the CH4-CO-OH system and understanding perturbations to relatively long-lived methane and the concomitant impacts on climate. We implemented the ECCOH module in the NASA GEOS-5 Atmospheric Global Circulation Model (AGCM), performed multiple sensitivity simulations of the CH4-CO-OH system over two decades, and evaluated the model output with surface and satellite datasets of methane and CO. The favorable comparison of output from the ECCOH module (as configured in the GEOS-5 AGCM) with observations demonstrates the fidelity of the module for use in scientific research.
NASA Technical Reports Server (NTRS)
Elshorbany, Yasin F.; Duncan, Bryan N.; Strode, Sarah A.; Wang, James S.; Kouatchou, Jules
2016-01-01
We present the Efficient CH4-CO-OH (ECCOH) chemistry module that allows for the simulation of the methane, carbon monoxide, and hydroxyl radical (CH4-CO- OH) system, within a chemistry climate model, carbon cycle model, or Earth system model. The computational efficiency of the module allows many multi-decadal sensitivity simulations of the CH4-CO-OH system, which primarily determines the global atmospheric oxidizing capacity. This capability is important for capturing the nonlinear feedbacks of the CH4-CO-OH system and understanding the perturbations to methane, CO, and OH, and the concomitant impacts on climate. We implemented the ECCOH chemistry module in the NASA GEOS-5 atmospheric global circulation model (AGCM), performed multiple sensitivity simulations of the CH4-CO-OH system over 2 decades, and evaluated the model output with surface and satellite data sets of methane and CO. The favorable comparison of output from the ECCOH chemistry module (as configured in the GEOS- 5 AGCM) with observations demonstrates the fidelity of the module for use in scientific research.
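The nonlinear CH4-CO-OH feedback that ECCOH is designed to capture can be illustrated with a toy box model: raising the methane source depletes OH, which in turn lengthens the methane lifetime, so methane rises more than proportionally. All species, units, rate constants, and sources below are illustrative assumptions, not the module's actual chemistry.

```python
def steady_state(s_ch4, s_co, k_ch4=1.0, k_co=2.0, k_other=5.0, p_oh=10.0,
                 dt=0.001, n_steps=100_000):
    """Toy CH4-CO-OH box model (arbitrary units).  OH is diagnosed from its
    production/loss balance each step; CH4 and CO are integrated with Euler
    steps.  CH4 oxidation (k_ch4 * OH * CH4) is a CO source; k_other lumps
    all other OH sinks."""
    ch4, co = 1.0, 1.0
    for _ in range(n_steps):
        oh = p_oh / (k_ch4 * ch4 + k_co * co + k_other)
        ch4 += dt * (s_ch4 - k_ch4 * oh * ch4)
        co += dt * (s_co + k_ch4 * oh * ch4 - k_co * oh * co)
    return ch4, co, p_oh / (k_ch4 * ch4 + k_co * co + k_other)

ch4_base, co_base, oh_base = steady_state(s_ch4=1.0, s_co=1.0)
ch4_pert, co_pert, oh_pert = steady_state(s_ch4=2.0, s_co=1.0)   # double CH4 source
```

At steady state the OH budget gives OH = (P_OH - 2*S_CH4 - S_CO) / k_other for this toy system, so doubling the methane source drops OH from 1.4 to 1.0 and methane more than doubles.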
Gaussian functional regression for output prediction: Model assimilation and experimental design
NASA Astrophysics Data System (ADS)
Nguyen, N. C.; Peraire, J.
2016-03-01
In this paper, we introduce a Gaussian functional regression (GFR) technique that integrates multi-fidelity models with model reduction to efficiently predict the input-output relationship of a high-fidelity model. The GFR method combines the high-fidelity model with a low-fidelity model to provide an estimate of the output of the high-fidelity model in the form of a posterior distribution that can characterize uncertainty in the prediction. A reduced basis approximation is constructed upon the low-fidelity model and incorporated into the GFR method to yield an inexpensive posterior distribution of the output estimate. As this posterior distribution depends crucially on a set of training inputs at which the high-fidelity models are simulated, we develop a greedy sampling algorithm to select the training inputs. Our approach results in an output prediction model that inherits the fidelity of the high-fidelity model and has the computational complexity of the reduced basis approximation. Numerical results are presented to demonstrate the proposed approach.
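The core idea, a cheap model corrected by a statistical model of its discrepancy from the expensive one, can be sketched with ordinary GP regression. The two "fidelity levels" below are hypothetical analytic functions, and the kernel and hyperparameters are assumptions; the paper's GFR formulation additionally uses a reduced-basis low-fidelity model and greedy selection of the training inputs, which this sketch omits.

```python
import numpy as np

def rbf(a, b, length=0.3):
    """Squared-exponential kernel on 1-D inputs (assumed hyperparameter)."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

def gp_discrepancy(x_train, d_train, x_test, noise=1e-6, length=0.3):
    """Posterior mean and variance of a zero-mean GP fitted to the
    discrepancy d = y_hi - y_lo at the training inputs."""
    K = rbf(x_train, x_train, length) + noise * np.eye(x_train.size)
    Ks = rbf(x_test, x_train, length)
    mean = Ks @ np.linalg.solve(K, d_train)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var

# hypothetical models: y_lo is a cheap, biased approximation of y_hi
y_hi = lambda x: np.sin(2 * np.pi * x)
y_lo = lambda x: 0.9 * np.sin(2 * np.pi * x) + 0.1

x_tr = np.linspace(0.0, 1.0, 8)              # a few expensive high-fidelity runs
d_tr = y_hi(x_tr) - y_lo(x_tr)
x_te = np.linspace(0.0, 1.0, 101)
mean, var = gp_discrepancy(x_tr, d_tr, x_te)
pred = y_lo(x_te) + mean                      # corrected multi-fidelity prediction
```

The posterior variance provides the uncertainty characterization mentioned in the abstract: it collapses near the high-fidelity training inputs and grows between them.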
Lee, Yeonok; Wu, Hulin
2012-01-01
Differential equation models are widely used for the study of natural phenomena in many fields. The study usually involves unknown factors such as initial conditions and/or parameters. It is important to investigate the impact of these unknown factors on model outputs in order to better understand the system the model represents. Apportioning the uncertainty (variation) of the output variables of a model among the input factors is referred to as sensitivity analysis. In this paper, we focus on the global sensitivity analysis of ordinary differential equation (ODE) models over a time period using the multivariate adaptive regression spline (MARS) as a metamodel, based on the concept of the variance of conditional expectation (VCE). We suggest evaluating the VCE analytically using the MARS model structure of univariate tensor-product functions, which is more computationally efficient. Our simulation studies show that the MARS model approach performs very well and helps to significantly reduce the computational cost. We present an application example of sensitivity analysis of ODE models for influenza infection to further illustrate the usefulness of the proposed method.
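The VCE quantity itself, Var(E[Y|X_i]) / Var(Y), is easy to estimate by plain Monte Carlo binning, shown below for a toy model with known analytic indices. The paper's contribution is evaluating the VCE analytically from the fitted MARS structure, which this sketch does not attempt.

```python
import numpy as np

def vce_index(x, y, n_bins=50):
    """First-order sensitivity index Var(E[Y|X_i]) / Var(Y), estimated by
    binning samples of X_i and averaging Y within each bin."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
    bin_means = np.array([y[idx == b].mean() for b in range(n_bins)])
    bin_counts = np.array([(idx == b).sum() for b in range(n_bins)])
    vce = np.average((bin_means - y.mean()) ** 2, weights=bin_counts)
    return vce / y.var()

rng = np.random.default_rng(2)
n = 100_000
x1, x2 = rng.uniform(size=n), rng.uniform(size=n)
y = 2.0 * x1 + x2              # toy "model output" standing in for an ODE solve
s1, s2 = vce_index(x1, y), vce_index(x2, y)
```

For this additive toy model the indices are analytic (S1 = 4/5, S2 = 1/5), which makes the estimator easy to check.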
NASA AVOSS Fast-Time Models for Aircraft Wake Prediction: User's Guide (APA3.8 and TDP2.1)
NASA Technical Reports Server (NTRS)
Ahmad, Nash'at N.; VanValkenburg, Randal L.; Pruis, Matthew J.; Limon Duparcmeur, Fanny M.
2016-01-01
NASA's current distribution of fast-time wake vortex decay and transport models includes APA (Version 3.8) and TDP (Version 2.1). This User's Guide provides detailed information on the model inputs, file formats, and model outputs. A brief description of the Memphis 1995, Dallas/Fort Worth 1997, and the Denver 2003 wake vortex datasets is given along with the evaluation of models. A detailed bibliography is provided which includes publications on model development, wake field experiment descriptions, and applications of the fast-time wake vortex models.
Electron trajectory evaluation in laser-plasma interaction for effective output beam
NASA Astrophysics Data System (ADS)
Zobdeh, P.; Sadighi-Bonabi, R.; Afarideh, H.
2010-06-01
Using the ellipsoidal cavity model, the quasi-monoenergetic electron output beam in laser-plasma interaction is described. In the cavity regime, the quality of the electron beam is improved compared with beams generated by other methods, such as the periodic plasma wave field, the spheroidal cavity regime, and plasma-channel-guided acceleration. The trajectory of the electron motion is described as a hyperbolic, parabolic, or elliptic path. We find that the self-generated electron bunch has a smaller energy width and a more effective gain in the energy spectrum. The initial conditions for the ellipsoidal cavity are determined by the laser-plasma parameters. The electron trajectory is influenced by its position, its energy, and the cavity electrostatic potential.
Evaluating the effectiveness of intercultural teachers.
Cox, Kathleen
2011-01-01
With globalization and major immigration flows, intercultural teaching encounters are likely to increase, along with the need to assure intercultural teaching effectiveness. Thus, the purpose of this article is to present a conceptual framework for nurse educators to consider when anticipating an intercultural teaching experience. Kirkpatrick's and Bushnell's models provide a basis for the conceptual framework. Major concepts of the model include input, process, output, and outcome. The model may be used to guide future research to determine which variables are most influential in explaining intercultural teaching effectiveness.
2014-04-11
particle location files for each source (hours); dti: time step in seconds; horzmix: CONSTANT = use the value of horcon...however, if leg lengths are short, extreme values of D/Lo can occur. We will handle these by assigning a maximum to the output. This is discussed by
Advanced Actuation Systems Development. Volume 2
1989-08-01
and unloaded performance characteristics of a test specimen produced by General Dynamics Corporation as a feasibility model. The actuation system for changing the camber of the test specimen is unique and was evaluated with a series of input/output measurements. The testing verified the general
Evaluation of In-Structure Shock Prediction Techniques for Buried Structures
1991-10-01
process of modeling this problem necessitated the inclusion of structure-media interaction (SMI) for the development of loads for the structural...shears, moments, and strains are also output. 5.2.1 Free-Field Load Generation: The equations used in ISSV3 to characterize the free-field environment are
NASA Technical Reports Server (NTRS)
Migneault, G. E.
1979-01-01
Emulation techniques applied to the analysis of the reliability of highly reliable computer systems for future commercial aircraft are described. The lack of credible precision in reliability estimates obtained by analytical modeling techniques is first established. The difficulty is shown to be an unavoidable consequence of: (1) a high reliability requirement so demanding as to make system evaluation by use testing infeasible; (2) a complex system design technique, fault tolerance; (3) system reliability dominated by errors due to flaws in the system definition; and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. Next, the technique of emulation is described, indicating how its input is a simple description of the logical structure of a system and its output is the consequent behavior. Use of emulation techniques is discussed for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques. Finally an illustrative example is presented to demonstrate from actual use the promise of the proposed application of emulation.
NASA Astrophysics Data System (ADS)
Sun, Mei; Zhang, Xiaolin; Huo, Zailin; Feng, Shaoyuan; Huang, Guanhua; Mao, Xiaomin
2016-03-01
Quantitatively ascertaining and analyzing the effects of model uncertainty on model reliability is a focal point for agricultural-hydrological models, owing to the many uncertainties in their inputs and processes. In this study, the generalized likelihood uncertainty estimation (GLUE) method with Latin hypercube sampling (LHS) was used to evaluate the uncertainty of the RZWQM-DSSAT (RZWQM2) model output responses and the sensitivity of 25 parameters related to soil properties, nutrient transport, and crop genetics. To avoid the one-sided risk of model prediction caused by using a single calibration criterion, a combined likelihood (CL) function integrating information on water, nitrogen, and crop production was introduced into the GLUE analysis for the prediction of the following four model output responses: the total amount of water content (T-SWC) and nitrate nitrogen (T-NIT) within the 1-m soil profile, and the seed yields of waxy maize (Y-Maize) and winter wheat (Y-Wheat). In evaluating RZWQM2, measurements and meteorological data were obtained from a field experiment involving a winter wheat and waxy maize crop rotation system conducted from 2003 to 2004 in southern Beijing. The calibration and validation results indicated that the RZWQM2 model can be used to simulate crop growth and water-nitrogen migration and transformation in a wheat-maize rotation planting system. The uncertainty analysis using the GLUE method showed that T-NIT was sensitive to parameters related to the nitrification coefficient, maize growth characteristics during the seedling period, the wheat vernalization period, and the wheat photoperiod. Parameters for soil saturated hydraulic conductivity, nitrogen nitrification and denitrification, and urea hydrolysis played an important role in the crop yield components. The prediction errors for RZWQM2 outputs with the CL function were relatively low and uniform compared with likelihood functions composed of individual calibration criteria.
This new and successful application of the GLUE method for determining the uncertainty and sensitivity of RZWQM2 could provide a reference for optimizing model parameters with different emphases according to research interests.
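The GLUE-with-LHS workflow can be sketched end to end on a toy model: LHS-sample the parameter space, score each parameter set with a likelihood built from model-observation error, keep the behavioural sets, and summarize them with likelihood weights. The linear "model", bounds, likelihood shape, and behavioural threshold below are all illustrative assumptions; RZWQM2 and the combined likelihood function are far richer.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Stratified LHS sample on [0, 1]^d: one point per stratum in each dim."""
    strata = np.tile(np.arange(n_samples), (n_dims, 1))
    u = rng.permuted(strata, axis=1).T + rng.uniform(size=(n_samples, n_dims))
    return u / n_samples

def glue(obs, model, bounds, n_samples=5000, keep_frac=0.2, seed=3):
    """Minimal GLUE sketch: LHS parameter sampling, an SSE-based likelihood,
    a behavioural cutoff at the keep_frac quantile, and a likelihood-weighted
    parameter estimate."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    params = lo + latin_hypercube(n_samples, lo.size, rng) * (hi - lo)
    sse = np.array([np.sum((model(p) - obs) ** 2) for p in params])
    behavioural = sse <= np.quantile(sse, keep_frac)
    w = np.exp(-sse[behavioural] / sse[behavioural].min())
    w /= w.sum()
    return w @ params[behavioural], params[behavioural]

# toy "model": y = a*x + b, with synthetic noisy observations
x_obs = np.linspace(0, 1, 20)
rng0 = np.random.default_rng(4)
y_obs = 2.0 * x_obs + 0.5 + rng0.normal(0, 0.05, x_obs.size)
est, behavioural_sets = glue(y_obs, lambda p: p[0] * x_obs + p[1],
                             bounds=[(0, 4), (-1, 2)])
```

The spread of the behavioural sets (not just the point estimate) is what GLUE uses to express predictive uncertainty.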
Evaluation of world's largest social welfare scheme: An assessment using non-parametric approach.
Singh, Sanjeet
2016-08-01
Mahatma Gandhi National Rural Employment Guarantee Act (MGNREGA) is the world's largest social welfare scheme, implemented in India for poverty alleviation through rural employment generation. This paper aims to evaluate and rank the performance of the Indian states under the MGNREGA scheme. A non-parametric approach, Data Envelopment Analysis (DEA), is used to calculate the overall technical, pure technical, and scale efficiencies of the states. The sample data are drawn from the annual official reports published by the Ministry of Rural Development, Government of India. Based on three selected input parameters (expenditure indicators) and five output parameters (employment generation indicators), I apply both input- and output-oriented DEA models to estimate how well the states utilized their resources and generated outputs during the financial year 2013-14. The relative performance evaluation is made under the assumption of constant returns to scale and also under variable returns to scale to assess the impact of scale on performance. The results indicate that the main sources of inefficiency are both the technical and the managerial practices adopted. Eleven states are overall technically efficient and operate at the optimum scale, whereas 18 states are pure technically (managerially) efficient. It was found that for some states it is necessary to alter the scheme size to perform at par with the best-performing states. For inefficient states, optimal input and output targets along with the resource savings and output gains are calculated. The analysis shows that if all inefficient states operated at optimal input and output levels, on average 17.89% of total expenditure, a total amount of $780 million, could have been saved in a single year. Most of the inefficient states perform poorly when it comes to the participation of women and disadvantaged sections (SC & ST) in the scheme.
In order to catch up with the performance of the best-performing states, inefficient states on average need to enhance women's participation by 133%. In addition, the states are also ranked using the cross-efficiency approach and the results are analyzed. The state of Tamil Nadu occupies the top position, followed by Puducherry, Punjab, and Rajasthan in the ranking list. To the best of my knowledge, this is the first pan-India study to evaluate and rank the performance of the MGNREGA scheme quantitatively and so comprehensively.
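The overall technical efficiency scores under constant returns to scale come from the input-oriented CCR linear program, sketched below with a hypothetical three-state, one-input, one-output dataset (the study uses three inputs, five outputs, and both orientations).

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of DMU k under constant returns to
    scale: minimise theta s.t. X'lam <= theta * x_k, Y'lam >= y_k, lam >= 0.
    theta = 1 means the DMU lies on the efficient frontier."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # variables [theta, lambda]
    A_ub, b_ub = [], []
    for i in range(X.shape[1]):                 # sum_j lam_j x_ij <= theta x_ik
        A_ub.append(np.r_[-X[k, i], X[:, i]]); b_ub.append(0.0)
    for r in range(Y.shape[1]):                 # sum_j lam_j y_rj >= y_rk
        A_ub.append(np.r_[0.0, -Y[:, r]]); b_ub.append(-Y[k, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

# hypothetical mini-example: expenditure input vs. employment output
X = np.array([[2.0], [1.0], [3.0]])
Y = np.array([[1.0], [1.0], [2.0]])
scores = [ccr_efficiency(X, Y, k) for k in range(3)]
```

For an inefficient state, theta times its observed inputs gives the input target mentioned in the abstract, and 1 - theta its proportional potential resource saving.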
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGraw, David; Hershey, Ronald L.
Methods were developed to quantify uncertainty and sensitivity for NETPATH inverse water-rock reaction models and to calculate dissolved inorganic carbon, carbon-14 groundwater travel times. The NETPATH models calculate upgradient groundwater mixing fractions that produce the downgradient target water chemistry along with amounts of mineral phases that are either precipitated or dissolved. Carbon-14 groundwater travel times are calculated based on the upgradient source-water fractions, carbonate mineral phase changes, and isotopic fractionation. Custom scripts and statistical code were developed for this study to facilitate modifying input parameters, running the NETPATH simulations, extracting relevant output, postprocessing the results, and producing graphs and summaries.more » The scripts read userspecified values for each constituent’s coefficient of variation, distribution, sensitivity parameter, maximum dissolution or precipitation amounts, and number of Monte Carlo simulations. Monte Carlo methods for analysis of parametric uncertainty assign a distribution to each uncertain variable, sample from those distributions, and evaluate the ensemble output. The uncertainty in input affected the variability of outputs, namely source-water mixing, phase dissolution and precipitation amounts, and carbon-14 travel time. Although NETPATH may provide models that satisfy the constraints, it is up to the geochemist to determine whether the results are geochemically reasonable. Two example water-rock reaction models from previous geochemical reports were considered in this study. Sensitivity analysis was also conducted to evaluate the change in output caused by a small change in input, one constituent at a time. Results were standardized to allow for sensitivity comparisons across all inputs, which results in a representative value for each scenario. The approach yielded insight into the uncertainty in water-rock reactions and travel times. 
For example, there was little variation in source-water fraction between the deterministic and Monte Carlo approaches, and therefore little variation in travel times between approaches. Sensitivity analysis proved very useful for identifying the most important input constraints (dissolved-ion concentrations), which can reveal the variables that have the most influence on source-water fractions and carbon-14 travel times. Once these variables are determined, more focused effort can be applied to determining the proper distribution for each constraint. In addition, Monte Carlo results for water-rock reaction modeling showed discrete and nonunique results. The NETPATH models provide the solutions that satisfy the constraints of upgradient and downgradient water chemistry. There can exist multiple, discrete solutions for any scenario, and these discrete solutions cause grouping of results. As a result, the variability in output may not easily be represented by a single distribution or by a mean and variance, so care should be taken in the interpretation and reporting of results.
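The Monte Carlo recipe described above (assign a distribution to each uncertain constituent, sample, evaluate the ensemble) can be sketched as follows. The `mixing_model` function is a hypothetical stand-in for a NETPATH run, and the constituent means and coefficients of variation are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def mixing_model(ca, alk):
    # Hypothetical stand-in for a NETPATH inverse model: returns a
    # source-water mixing fraction from two constituents (illustration only).
    return ca / (ca + alk)

# Each uncertain constituent gets a mean and a coefficient of variation.
constituents = {"ca": (40.0, 0.10), "alk": (120.0, 0.05)}

n = 10_000
samples = {k: rng.normal(mu, cv * mu, n) for k, (mu, cv) in constituents.items()}
fractions = mixing_model(samples["ca"], samples["alk"])

print(f"mixing fraction: {fractions.mean():.3f} +/- {fractions.std():.3f}")
```

The spread of `fractions` is the parametric-uncertainty estimate the abstract refers to; a real application would substitute the scripted NETPATH invocation for `mixing_model`.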
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferguson, S; Ahmad, S; Chen, Y
2016-06-15
Purpose: To commission and investigate the accuracy of an output (cGy/MU) prediction model for a compact passively scattered proton therapy system. Methods: A previously published output prediction model (Sahoo et al., Med. Phys. 35, 5088–5097, 2008) was commissioned for our Mevion S250 proton therapy system. This correction-based model expresses output as a product of correction factors (D/MU = ROF × SOBPF × RSF × SOBPOCF × OCR × FSF × ISF). These factors account for changes in output due to options (12 large, 5 deep, and 7 small), modulation width M, range R, off-center, off-axis, field-size, and off-isocenter effects. In this study, the model was modified to ROF × SOBPF × RSF × OCR × FSF × ISF-OCF × GACF by merging SOBPOCF and ISF for simplicity and introducing a gantry angle correction factor (GACF). To commission the model, over 1,000 output data points were taken at the time of system commissioning. The output was predicted by interpolation (1D for SOBPF, FSF, and GACF; 2D for RSF and OCR) with an inverse-square calculation (ISF-OCF). The outputs of 273 combinations of R and M covering all 24 options were measured to test the model. To minimize fluence perturbation, scattered dose from the range compensator and patient was not considered. The percent differences between the predicted (P) and measured (M) outputs were calculated to test the prediction accuracy ([P − M]/M × 100%). Results: A GACF was required because of up to 3.5% output variation with gantry angle. A 2D interpolation was required for OCR because the dose distribution was not radially symmetric, especially for the deep options. The average percent difference was −0.03% ± 0.98% (mean ± SD), and all differences fell within ±3%. Conclusion: The model can be used clinically for the compact passively scattered proton therapy system.
However, great care should be taken when the field size is less than 5 × 5 cm², where a direct output measurement is required because irregular block shapes can change the output substantially.
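The multiplicative structure of such a correction-based model can be sketched as below. The factor tables, modulation widths, field sizes, and SSD values are invented placeholders, not the commissioning data; only two of the model's factors (SOBPF, FSF) plus the inverse-square term are shown.

```python
import numpy as np

# Hypothetical lookup tables (illustrative values, not commissioning data).
mod_widths  = np.array([2.0, 5.0, 10.0, 15.0])    # SOBP modulation width (cm)
sobpf       = np.array([1.05, 1.00, 0.95, 0.91])  # SOBP factor
field_sizes = np.array([5.0, 10.0, 15.0, 20.0])   # side of square field (cm)
fsf         = np.array([0.97, 1.00, 1.01, 1.02])  # field-size factor

def predicted_output(ref_output, m, fs, ssd_ref=230.0, ssd=232.0):
    """Output (cGy/MU) as a product of interpolated correction factors."""
    f_sobp = np.interp(m, mod_widths, sobpf)   # 1D interpolation, as in the model
    f_fs = np.interp(fs, field_sizes, fsf)
    isf = (ssd_ref / ssd) ** 2                 # inverse-square factor
    return ref_output * f_sobp * f_fs * isf

out = predicted_output(1.0, m=7.5, fs=12.0)
print(f"{out:.4f} cGy/MU")
```

A clinical implementation would add the remaining factors (ROF, RSF, OCR, GACF) as further table lookups, with 2D interpolation where the abstract indicates it is needed.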
Use of medium-range numerical weather prediction model output to produce forecasts of streamflow
Clark, M.P.; Hay, L.E.
2004-01-01
This paper examines an archive containing over 40 years of 8-day atmospheric forecasts over the contiguous United States from the NCEP reanalysis project to assess the possibilities for using medium-range numerical weather prediction model output for predictions of streamflow. This analysis shows the biases in the NCEP forecasts to be quite extreme. In many regions, systematic precipitation biases exceed 100% of the mean, with temperature biases exceeding 3°C. In some locations, biases are even higher. The accuracy of NCEP precipitation and 2-m maximum temperature forecasts is computed by interpolating the NCEP model output for each forecast day to the location of each station in the NWS cooperative network and computing the correlation with station observations. Results show that the accuracy of the NCEP forecasts is rather low in many areas of the country. Most apparent is the generally low skill in precipitation forecasts (particularly in July) and low skill in temperature forecasts in the western United States, the eastern seaboard, and the southern tier of states. These results outline a clear need for additional processing of the NCEP Medium-Range Forecast Model (MRF) output before it is used for hydrologic predictions. Techniques of model output statistics (MOS) are used in this paper to downscale the NCEP forecasts to station locations. Forecasted atmospheric variables (e.g., total column precipitable water, 2-m air temperature) are used as predictors in a forward screening multiple linear regression model to improve forecasts of precipitation and temperature for stations in the National Weather Service cooperative network. This procedure effectively removes all systematic biases in the raw NCEP precipitation and temperature forecasts. MOS guidance also results in substantial improvements in the accuracy of maximum and minimum temperature forecasts throughout the country. For precipitation, forecast improvements were less impressive.
MOS guidance increases the accuracy of precipitation forecasts over the northeastern United States, but overall, the accuracy of MOS-based precipitation forecasts is slightly lower than the raw NCEP forecasts. Four basins in the United States were chosen as case studies to evaluate the value of MRF output for predictions of streamflow. Streamflow forecasts using MRF output were generated for one rainfall-dominated basin (Alapaha River at Statenville, Georgia) and three snowmelt-dominated basins (Animas River at Durango, Colorado; East Fork of the Carson River near Gardnerville, Nevada; and Cle Elum River near Roslyn, Washington). Hydrologic model output forced with measured-station data was used as "truth" to focus attention on the hydrologic effects of errors in the MRF forecasts. Eight-day streamflow forecasts produced using the MOS-corrected MRF output as input (MOS) were compared with those produced using the climatic Ensemble Streamflow Prediction (ESP) technique. MOS-based streamflow forecasts showed increased skill in the snowmelt-dominated river basins, where daily variations in streamflow are strongly forced by temperature. In contrast, the skill of MOS forecasts in the rainfall-dominated basin (the Alapaha River) was equivalent to the skill of the ESP forecasts. Further improvements in streamflow forecasts require more accurate local-scale forecasts of precipitation and temperature, more accurate specification of basin initial conditions, and more accurate model simulations of streamflow. © 2004 American Meteorological Society.
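The forward-screening regression step of MOS can be sketched as a greedy search that repeatedly adds the predictor giving the largest residual-sum-of-squares reduction. The synthetic predictors and observation series below are invented; a real application would use forecast fields and station records.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "forecast" predictors (e.g., precipitable water, 2-m temperature)
# and a station observation series depending on two of them plus noise.
n = 500
X = rng.normal(size=(n, 5))
y = 2.0 * X[:, 1] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=n)

def forward_screen(X, y, max_terms=3):
    """Greedy forward selection: add the predictor that most reduces RSS."""
    chosen = []
    for _ in range(max_terms):
        best, best_rss = None, np.inf
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            cols = np.column_stack([np.ones(len(y))] + [X[:, k] for k in chosen + [j]])
            beta, *_ = np.linalg.lstsq(cols, y, rcond=None)
            rss = np.sum((y - cols @ beta) ** 2)
            if rss < best_rss:
                best, best_rss = j, rss
        chosen.append(best)
    return chosen

chosen = forward_screen(X, y)
print(chosen)   # indices of screened-in predictors, strongest first
```

A production MOS scheme would also apply a stopping criterion (e.g., an F-test) rather than a fixed number of terms; that detail is omitted here.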
Evaluating vaccination strategies to control foot-and-mouth disease: a model comparison study.
Roche, S E; Garner, M G; Sanson, R L; Cook, C; Birch, C; Backer, J A; Dube, C; Patyk, K A; Stevenson, M A; Yu, Z D; Rawdon, T G; Gauntlett, F
2015-04-01
Simulation models can offer valuable insights into the effectiveness of different control strategies and act as important decision support tools when comparing and evaluating outbreak scenarios and control strategies. An international modelling study was performed to compare a range of vaccination strategies in the control of foot-and-mouth disease (FMD). Modelling groups from five countries (Australia, New Zealand, USA, UK, The Netherlands) participated in the study. Vaccination is increasingly being recognized as a potentially important tool in the control of FMD, although there is considerable uncertainty as to how and when it should be used. We sought to compare model outputs and assess the effectiveness of different vaccination strategies in the control of FMD. Using a standardized outbreak scenario based on data from an FMD exercise in the UK in 2010, the study showed general agreement between respective models in terms of the effectiveness of vaccination. Under the scenario assumptions, all models demonstrated that vaccination with 'stamping-out' of infected premises led to a significant reduction in predicted epidemic size and duration compared to the 'stamping-out' strategy alone. For all models there were advantages in vaccinating cattle-only rather than all species, using 3-km vaccination rings immediately around infected premises, and starting vaccination earlier in the control programme. This study has shown that certain vaccination strategies are robust even to substantial differences in model configurations. This result should increase end-user confidence in conclusions drawn from model outputs. These results can be used to support and develop effective policies for FMD control.
Wang, Lifang
2017-01-01
University scientific research ability is an important indicator of a university's strength. In this paper, the evaluation of university scientific research ability is investigated based on the output of sci-tech papers. Four university alliances, from North America, the UK, Australia, and China, are selected as the case study for the evaluation. Data from Thomson Reuters InCites are collected to support the evaluation. This work contributes a new framework for evaluating university scientific research ability. First, we establish a hierarchical structure of the factors that impact the evaluation. Then a new MCDM method, the D-AHP model, is used to implement the evaluation and ranking of the different university alliances, in which a data-driven approach is proposed to automatically generate the D numbers preference relations. Next, a sensitivity analysis shows the impact of the weights of factors and sub-factors on the evaluation result. Finally, the results obtained using different methods are compared and discussed to verify the effectiveness and reasonableness of this study, and some suggestions are given to promote China's scientific research ability. PMID:28212446
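The D-AHP specifics are not given in the abstract, but the classical AHP step it extends, deriving priority weights from a pairwise-comparison matrix via its principal eigenvector, can be sketched as below. The three factors and comparison values are assumptions for illustration.

```python
import numpy as np

# Illustrative pairwise-comparison matrix over three factors
# (e.g., publications, citations, collaboration); values are assumptions.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# The principal right eigenvector of a consistent-enough comparison
# matrix gives the priority weights (normalized to sum to 1).
vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()
print(np.round(w, 3))
```

D-AHP replaces the crisp comparison entries with D numbers to express uncertain preference information; this sketch shows only the underlying weight-derivation machinery.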
H∞ output tracking control of discrete-time nonlinear systems via standard neural network models.
Liu, Meiqin; Zhang, Senlin; Chen, Haiyang; Sheng, Weihua
2014-10-01
This brief proposes an output tracking control for a class of discrete-time nonlinear systems with disturbances. A standard neural network model is used to represent discrete-time nonlinear systems whose nonlinearity satisfies the sector conditions. H∞ control performance for the closed-loop system, comprising the standard neural network model, the reference model, and a state feedback controller, is analyzed using the Lyapunov-Krasovskii stability theorem and the linear matrix inequality (LMI) approach. The H∞ controller, whose parameters are obtained by solving LMIs, guarantees that the output of the closed-loop system closely tracks the output of a given reference model and reduces the influence of disturbances on the tracking error. Three numerical examples are provided to show the effectiveness of the proposed H∞ output tracking design approach.
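Full H∞ synthesis requires an SDP solver for the LMIs, but the discrete-time Lyapunov condition underlying such LMI analyses can be checked numerically as a minimal sketch: for a Schur-stable closed-loop matrix, the Lyapunov equation has a positive-definite solution. The matrix below is an arbitrary stable example, not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# A Schur-stable closed-loop matrix (spectral radius < 1).
A = np.array([[0.5, 0.1],
              [0.0, 0.8]])
Q = np.eye(2)

# Solve A P A^T - P + Q = 0; a positive-definite P certifies stability,
# the discrete-time counterpart of the Lyapunov/LMI condition.
P = solve_discrete_lyapunov(A, Q)
eigs = np.linalg.eigvalsh(P)
print("P positive definite:", bool(np.all(eigs > 0)))
```

Controller synthesis (finding the gains, not just certifying them) would instead pose the same inequality as an LMI in decision variables and hand it to a semidefinite solver.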
Model reference adaptive control of flexible robots in the presence of sudden load changes
NASA Technical Reports Server (NTRS)
Steinvorth, Rodrigo; Kaufman, Howard; Neat, Gregory
1991-01-01
Direct command generator tracker based model reference adaptive control (MRAC) algorithms are applied to the dynamics of a flexible-joint arm in the presence of sudden load changes. Because of the need to satisfy a positive real condition, such MRAC procedures are designed so that a feedforward-augmented output follows the reference model output, resulting in an ultimately bounded rather than zero output error. Modifications are therefore suggested and tested that: (1) incorporate feedforward into the reference model's output as well as the plant's output, and (2) incorporate a derivative term into only the process feedforward loop. The results of these simulations give a response with zero steady-state model-following error, and thus encourage further use of MRAC for more complex flexible robotic systems.
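A minimal illustration of the model-reference idea, far simpler than the command generator tracker of the paper, is MIT-rule adaptation of a single feedforward gain for a first-order plant. The plant, reference model, and gains below are textbook-style assumptions, not the flexible-joint arm dynamics.

```python
import numpy as np

# MIT-rule adaptation of a feedforward gain: plant 1/(s+1) with unknown
# gain, reference model 2/(s+1); all values illustrative.
dt, T = 0.01, 200.0
gamma = 0.2                       # adaptation gain
t = np.arange(0.0, T, dt)
r = np.sign(np.sin(0.3 * t))      # square-wave reference for excitation

y = ym = theta = 0.0
e_hist, th_hist = [], []
for rk in r:
    u = theta * rk                    # adaptive feedforward control
    y += dt * (-y + u)                # plant: 1/(s+1)
    ym += dt * (-ym + 2.0 * rk)       # reference model: 2/(s+1)
    e = y - ym                        # model-following error
    theta += dt * (-gamma * ym * e)   # MIT rule
    e_hist.append(e)
    th_hist.append(theta)

print(f"final gain {th_hist[-1]:.2f} (ideal 2.0), final |e| {abs(e_hist[-1]):.3f}")
```

The adapted gain converges toward the ideal value 2.0 and the model-following error shrinks; the paper's modifications address the harder case where only bounded (not zero) error is guaranteed.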
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khan, Sahubar Ali Mohd. Nadhar, E-mail: sahubar@uum.edu.my; Ramli, Razamin, E-mail: razamin@uum.edu.my; Baten, M. D. Azizul, E-mail: baten-math@yahoo.com
Agricultural production typically yields two types of outputs: economically desirable outputs and environmentally undesirable ones (such as greenhouse gas emissions, nitrate leaching, harm to humans and other organisms, and water pollution). In efficiency analysis, these undesirable outputs cannot be ignored and must be included in order to obtain an accurate estimate of firms' efficiency. Additionally, climatic factors and data uncertainty can significantly affect the efficiency analysis. A number of approaches have been proposed in the DEA literature to account for undesirable outputs. Many researchers have pointed out that the directional distance function (DDF) approach is the best, as it allows a simultaneous increase in desirable outputs and reduction of undesirable outputs. It has also been found that the interval data approach is the most suitable way to account for data uncertainty, as it is simpler to model and needs less information about distributions and membership functions. In this paper, an enhanced DEA model based on the DDF approach that considers undesirable outputs as well as climatic factors and interval data is proposed. This model will be used to determine the efficiency of rice farmers who produce undesirable outputs and operate under uncertainty. It is hoped that the proposed model will provide a better estimate of rice farmers' efficiency.
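A basic DDF model with an undesirable output can be posed as a linear program, as sketched below: expand the desirable output and contract the undesirable one along the DMU's own direction, with weak disposability of the undesirable output imposed as an equality. The two-farm data set is a toy example, and interval data and climatic factors (the paper's enhancements) are omitted.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 2 farms, 1 input x, 1 desirable output y, 1 undesirable output b.
X = np.array([[1.0], [1.0]])   # inputs
Y = np.array([[2.0], [1.0]])   # desirable outputs
B = np.array([[1.0], [1.0]])   # undesirable outputs

def ddf_efficiency(j0):
    """Directional distance: max beta expanding y and contracting b in the
    direction of DMU j0's own (y, b); beta = 0 means efficient."""
    n = len(X)
    c = np.zeros(n + 1); c[-1] = -1.0                 # maximize beta
    A_ub = [list(X[:, 0]) + [0.0]]                    # sum lam_j x_j <= x_j0
    b_ub = [X[j0, 0]]
    A_ub.append(list(-Y[:, 0]) + [Y[j0, 0]])          # sum lam_j y_j >= y_j0 + beta*y_j0
    b_ub.append(-Y[j0, 0])
    A_eq = [list(B[:, 0]) + [B[j0, 0]]]               # sum lam_j b_j = b_j0 - beta*b_j0
    b_eq = [B[j0, 0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 1))
    return res.x[-1]

print([round(ddf_efficiency(j), 3) for j in range(2)])   # [0.0, 0.333]
```

Farm 0 dominates farm 1 (same input and pollution, more rice), so farm 0 gets β = 0 and farm 1 a positive inefficiency score.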
Performance and Feasibility Analysis of a Wind Turbine Power System for Use on Mars
NASA Technical Reports Server (NTRS)
Lichter, Matthew D.; Viterna, Larry
1999-01-01
A wind turbine power system for future missions to the Martian surface was studied for performance and feasibility. A C++ program was developed from existing FORTRAN code to analyze the power capabilities of wind turbines under different environments and design philosophies. Power output, efficiency, torque, thrust, and other performance criteria could be computed given design geometries, atmospheric conditions, and airfoil behavior. After reviewing performance of such a wind turbine, a conceptual system design was modeled to evaluate feasibility. More analysis code was developed to study and optimize the overall structural design. Findings of this preliminary study show that turbine power output on Mars could be as high as several hundred kilowatts. The optimized conceptual design examined here would have a power output of 104 kW, total mass of 1910 kg, and specific power of 54.6 W/kg.
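The leading-order reason Mars turbine output can still reach useful levels despite the thin atmosphere is visible in the rotor power relation P = ½ρAv³Cp: the roughly 60-fold density deficit is partly offset by the cubic dependence on wind speed. The densities, rotor radius, and wind speeds below are illustrative assumptions, not the study's design values.

```python
import math

def wind_power_kw(rho, radius_m, v_ms, cp=0.4):
    """Rotor power: P = 0.5 * rho * A * v^3 * Cp, returned in kW."""
    area = math.pi * radius_m ** 2
    return 0.5 * rho * area * v_ms ** 3 * cp / 1e3

# Mars surface density ~0.02 kg/m^3 vs 1.225 kg/m^3 on Earth;
# radius and wind speeds are assumptions for illustration.
print(f"Earth, 10 m/s: {wind_power_kw(1.225, 20.0, 10.0):7.1f} kW")
print(f"Mars,  30 m/s: {wind_power_kw(0.020, 20.0, 30.0):7.1f} kW")
```

With a strong Martian wind, the illustrative 20-m rotor lands in the hundred-kilowatt range quoted by the study, while the same rotor in a modest Earth wind produces a few hundred kilowatts.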
Pandemic recovery analysis using the dynamic inoperability input-output model.
Santos, Joost R; Orsi, Mark J; Bond, Erik J
2009-12-01
Economists have long conceptualized and modeled the inherent interdependent relationships among different sectors of the economy. This concept paved the way for input-output modeling, a methodology that accounts for sector interdependencies governing the magnitude and extent of ripple effects due to changes in the economic structure of a region or nation. Recent extensions to input-output modeling have enhanced the model's capabilities to account for the impact of an economic perturbation; two such examples are the inoperability input-output model [1, 2] and the dynamic inoperability input-output model (DIIM) [3]. These models introduced sector inoperability, or the inability to satisfy as-planned production levels, into input-output modeling. While these models provide insights for understanding the impacts of inoperability, there are several aspects of the current formulation that do not account for complexities associated with certain disasters, such as a pandemic. This article proposes further enhancements to the DIIM to account for economic productivity losses resulting primarily from workforce disruptions. A pandemic is a unique disaster because the majority of its direct impacts are workforce related. The article develops a modeling framework to account for workforce inoperability and recovery factors. The proposed workforce-explicit enhancements to the DIIM are demonstrated in a case study to simulate a pandemic scenario in the Commonwealth of Virginia.
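The DIIM is commonly written as a recurrence in the sector inoperability vector q, q(k+1) = q(k) + K[A*q(k) + c*(k) − q(k)], where A* is the interdependency matrix, K a diagonal resilience matrix, and c* the demand (here, workforce) perturbation. The two-sector values below are invented for illustration.

```python
import numpy as np

# Interdependency matrix A* and resilience matrix K (illustrative values).
A_star = np.array([[0.0, 0.3],
                   [0.2, 0.0]])
K = np.diag([0.5, 0.4])

def diim(q0, c_star, steps):
    """DIIM recurrence: q(k+1) = q(k) + K [A* q(k) + c*(k) - q(k)]."""
    q = np.array(q0, dtype=float)
    path = [q.copy()]
    for k in range(steps):
        q = q + K @ (A_star @ q + c_star(k) - q)
        path.append(q.copy())
    return np.array(path)

# Workforce perturbation in sector 0 for the first 5 periods, then recovery.
demand = lambda k: np.array([0.2, 0.0]) if k < 5 else np.zeros(2)
path = diim([0.0, 0.0], demand, 40)
print(np.round(path[-1], 4))   # inoperability decays back toward zero
```

The perturbation in sector 0 ripples into sector 1 through A*, and once the perturbation ends both sectors recover at rates set by K, the qualitative behavior the pandemic case study exercises at the scale of a full regional economy.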
Modelled vs. reconstructed past fire dynamics - how can we compare?
NASA Astrophysics Data System (ADS)
Brücher, Tim; Brovkin, Victor; Kloster, Silvia; Marlon, Jennifer R.; Power, Mitch J.
2015-04-01
Fire is an important process that affects climate through changes in CO2 emissions, albedo, and aerosols (Ward et al. 2012). Fire-history reconstructions from charcoal accumulations in sediment indicate that biomass burning has increased since the Last Glacial Maximum (Power et al. 2008; Marlon et al. 2013). Recent comparisons with transient climate model output suggest that this increase in global fire activity is linked primarily to variations in temperature and secondarily to variations in precipitation (Daniau et al. 2012). In this study, we discuss the best way to compare global fire model output with charcoal records. Fire models generate quantitative output for burned area and fire-related emissions of CO2, whereas charcoal data indicate relative changes in biomass burning for specific regions and time periods only. However, models can be used to relate trends in charcoal data to trends in quantitative changes in burned area or fire carbon emissions. Charcoal records are often reported as Z-scores (Power et al. 2008). Since Z-scores are non-linear power transformations of charcoal influxes, we must evaluate if, for example, a two-fold increase in the standardized charcoal reconstruction corresponds to a 2- or 200-fold increase in the area burned. In our study we apply the Z-score metric to the model output. This allows us to test how well the model can quantitatively reproduce the charcoal-based reconstructions and how Z-score metrics affect the statistics of model output. The Global Charcoal Database (GCD version 2.5; www.gpwg.org/gpwgdb.html) is used to determine regional and global paleofire trends from 218 sedimentary charcoal records covering part or all of the last 8 ka BP. To retrieve regional and global composites of changes in fire activity over the Holocene, the time series of Z-scores are linearly averaged. A coupled climate-carbon cycle model, CLIMBA (Brücher et al. 2014), is used for this study.
It consists of the CLIMBER-2 Earth system model of intermediate complexity and the JSBACH land component of the Max Planck Institute Earth System Model. The fire algorithm in JSBACH assumes a constant annual lightning cycle as the sole fire ignition mechanism (Arora and Boer 2005). To eliminate data-processing differences as a source of potential discrepancies, the processing of both reconstructed and modeled data, including, e.g., normalisation with respect to a given base period and aggregation of time series, was done in exactly the same way. Here, we compare the aggregated time series on a hemispheric and regional scale.
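One common recipe for the Z-score metric applied to charcoal influx (and, in this study, to model output) is a variance-stabilizing power transform followed by standardization against a base period. The synthetic influx series and the choice of base period below are assumptions for illustration; exact GCD protocols differ in detail.

```python
import numpy as np
from scipy.stats import boxcox

rng = np.random.default_rng(1)

# Synthetic charcoal-influx series (positive, right-skewed) on a time axis.
age_ka = np.linspace(0, 8, 200)          # thousands of years BP
influx = rng.lognormal(mean=0.0, sigma=0.8, size=200) * (1 + 0.1 * (8 - age_ka))

# Box-Cox transform to tame skew, then standardize against a base
# period (here 0.2-2 ka BP, an assumption for illustration).
transformed, _ = boxcox(influx)
base = (age_ka >= 0.2) & (age_ka <= 2.0)
z = (transformed - transformed[base].mean()) / transformed[base].std()
print(np.round(z[:5], 2))
```

Because the transform is non-linear, a doubling of z does not correspond to a doubling of influx, which is exactly the point the abstract makes about interpreting Z-score composites quantitatively.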
Experiments in fault tolerant software reliability
NASA Technical Reports Server (NTRS)
Mcallister, David F.; Tai, K. C.; Vouk, Mladen A.
1987-01-01
The reliability of voting was evaluated in a fault-tolerant software system for small output spaces. The effectiveness of the back-to-back testing process was investigated. Version 3.0 of the RSDIMU-ATS, a semi-automated test bed for certification testing of RSDIMU software, was prepared and distributed. Software reliability estimation methods based on non-random sampling are being studied. The investigation of existing fault-tolerance models was continued and formulation of new models was initiated.
Building Multiclass Classifiers for Remote Homology Detection and Fold Recognition
2006-04-05
classes. In this study we evaluate the effectiveness of one of these formulations that was developed by Crammer and Singer [9], which leads to...significantly more complex model can be learned by directly applying the Crammer-Singer multiclass formulation on the outputs of the binary classifiers...will refer to this as the Crammer-Singer (CS) model. Comparing the scaling approach to the Crammer-Singer approach we can see that the Crammer-Singer
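The Crammer-Singer multiclass formulation the snippet refers to is available off the shelf in scikit-learn's linear SVM; the sketch below applies it to a synthetic three-class problem standing in for fold-recognition classes (the data set is invented, not the paper's remote-homology benchmark).

```python
from sklearn.datasets import make_blobs
from sklearn.svm import LinearSVC

# Three-class toy problem standing in for fold/superfamily classes.
X, y = make_blobs(n_samples=300, centers=3, cluster_std=1.0, random_state=0)

# scikit-learn exposes the Crammer-Singer joint multiclass objective
# directly, as an alternative to one-vs-rest binary decomposition.
clf = LinearSVC(multi_class="crammer_singer", C=1.0, max_iter=10000)
clf.fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```

Unlike one-vs-rest, this optimizes a single joint objective over all class weight vectors, which is the distinction the study's comparison turns on.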
Modeling Subsurface Hydrology in Floodplains
NASA Astrophysics Data System (ADS)
Evans, Cristina M.; Dritschel, David G.; Singer, Michael B.
2018-03-01
Soil-moisture patterns in floodplains are highly dynamic, owing to the complex relationships between soil properties, climatic conditions at the surface, and the position of the water table. Given this complexity, along with climate change scenarios in many regions, there is a need for a model to investigate the implications of different conditions on water availability to riparian vegetation. We present a model, HaughFlow, which is able to predict coupled water movement in the vadose and phreatic zones of hydraulically connected floodplains. Model output was calibrated and evaluated at six sites in Australia to identify key patterns in subsurface hydrology. This study identifies the importance of the capillary fringe in vadose zone hydrology due to its water storage capacity and creation of conductive pathways. Following peaks in water table elevation, water can be stored in the capillary fringe for up to months (depending on the soil properties). This water can provide a critical resource for vegetation that is unable to access the water table. When water table peaks coincide with heavy rainfall events, the capillary fringe can support saturation of the entire soil profile. HaughFlow is used to investigate the water availability to riparian vegetation, producing daily output of water content in the soil over decadal time periods within different depth ranges. These outputs can be summarized to support scientific investigations of plant-water relations, as well as in management applications.
NASA Astrophysics Data System (ADS)
Beriro, D. J.; Abrahart, R. J.; Nathanail, C. P.
2012-04-01
Data-driven modelling is most commonly used to develop predictive models that will simulate natural processes. This paper, in contrast, uses Gene Expression Programming (GEP) to construct two alternative models of pan evaporation estimation by means of symbolic regression: a simulator, a model of a real-world process developed on observed records, and an emulator, an imitator of some other model developed on predicted outputs calculated by that source model. The solutions are compared and contrasted to determine whether any substantial differences exist between the two options. This analysis addresses recent arguments over the impact of using downloaded hydrological modelling datasets originating from different initial sources, i.e., observed or calculated. Such differences can easily be overlooked by modellers, resulting in a model of a model developed on estimations derived from deterministic empirical equations and producing exceptionally high goodness-of-fit. This paper uses different lines of evidence to evaluate model output and in so doing paves the way for a new protocol in machine learning applications. Transparent modelling tools such as symbolic regression offer huge potential for explaining stochastic processes; however, the basic tenets of data quality and recourse to first principles with regard to problem understanding should not be trivialised. GEP is found to be an effective tool for the prediction of observed and calculated pan evaporation, with results supported by an understanding of the records and of the natural processes concerned, evaluated using one-at-a-time response function sensitivity analysis. The results show that both architectures and response functions are very similar, implying that previously observed differences in goodness-of-fit can be explained by whether models are applied to observed or calculated data.
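One-at-a-time (OAT) response-function sensitivity analysis, as used to evaluate the evolved models, can be sketched as follows. The evaporation estimator here is a made-up linear placeholder, not a GEP-evolved expression or any published formula.

```python
import numpy as np

def pan_evaporation(temp_c, wind_ms, rh_frac):
    # Hypothetical estimator (illustration only, not a published formula).
    return 0.3 * temp_c + 1.2 * wind_ms - 4.0 * rh_frac

base = {"temp_c": 25.0, "wind_ms": 2.0, "rh_frac": 0.6}

def oat_sensitivity(model, base, delta=0.01):
    """Perturb one input at a time by +/-1% and report the standardized
    response: (change in output) / (fractional change in input)."""
    out = {}
    for name, value in base.items():
        hi = dict(base, **{name: value * (1 + delta)})
        lo = dict(base, **{name: value * (1 - delta)})
        out[name] = (model(**hi) - model(**lo)) / (2 * delta)
    return out

sens = oat_sensitivity(pan_evaporation, base)
print(sens)
```

Ranking the entries of `sens` by magnitude identifies which inputs the response function is most sensitive to, which is how OAT results are typically read.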
Robust Online Monitoring for Calibration Assessment of Transmitters and Instrumentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramuhalli, Pradeep; Coble, Jamie B.; Shumaker, Brent
Robust online monitoring (OLM) technologies are expected to enable the extension or elimination of periodic sensor calibration intervals in operating and new reactors. These advances in OLM technologies will improve the safety and reliability of current and planned nuclear power systems through improved accuracy and increased reliability of sensors used to monitor key parameters. In this article, we discuss an overview of research being performed within the Nuclear Energy Enabling Technologies (NEET)/Advanced Sensors and Instrumentation (ASI) program on the development of OLM algorithms that use sensor outputs, in combination with other available information, to 1) determine whether one or more sensors are out of calibration or failing and 2) replace a failing sensor with reliable, accurate sensor outputs. Algorithm development is focused on the following OLM functions:
• Signal validation
• Virtual sensing
• Sensor response-time assessment
These algorithms incorporate, at their base, a Gaussian Process-based uncertainty quantification (UQ) method. Various plant models (using kernel regression, GP, or hierarchical models) may be used to predict sensor responses under various plant conditions. These predicted responses can then be applied in fault detection (sensor output and response time) and in computing the correct value (virtual sensing) of a failing physical sensor. The methods evaluated in this work can compute confidence levels along with the predicted sensor responses and, as a result, may have the potential to compensate for sensor drift in real time (online recalibration). Evaluation was conducted using data from multiple sources (laboratory flow loops and plant data).
Ongoing research in this project is focused on further evaluation of the algorithms, optimization for accuracy and computational efficiency, and integration into a suite of tools for robust OLM that are applicable to monitoring sensor calibration state in nuclear power plants.
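The virtual-sensing idea with GP-based uncertainty quantification can be sketched as below: a Gaussian Process regressor learns the relationship between a healthy sensor and the one being replaced, then supplies an estimate with a confidence band at query points. The sinusoidal "plant" relationship and noise level are invented for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(7)

# Correlated "plant" sensors: learn to predict sensor B from sensor A.
a = np.sort(rng.uniform(0, 10, 80))[:, None]
b = np.sin(a).ravel() + rng.normal(scale=0.05, size=80)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(a, b)

# Virtual-sensor estimate, with uncertainty, at a query reading of sensor A.
mean, std = gp.predict(np.array([[5.0]]), return_std=True)
print(f"estimate {mean[0]:.2f} +/- {2 * std[0]:.2f}")
```

The predictive standard deviation is what makes the approach suitable for the confidence-level reporting described in the article: a drifting physical sensor can be flagged when it leaves the virtual sensor's band.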
Inference from clustering with application to gene-expression microarrays.
Dougherty, Edward R; Barrera, Junior; Brun, Marcel; Kim, Seungchan; Cesar, Roberto M; Chen, Yidong; Bittner, Michael; Trent, Jeffrey M
2002-01-01
There are many algorithms to cluster sample data points based on nearness or a similarity measure. Often the implication is that points in different clusters come from different underlying classes, whereas those in the same cluster come from the same class. Stochastically, the underlying classes represent different random processes. The inference is that clusters represent a partition of the sample points according to which process they belong. This paper discusses a model-based clustering toolbox that evaluates cluster accuracy. Each random process is modeled as its mean plus independent noise, sample points are generated, the points are clustered, and the clustering error is the number of points clustered incorrectly according to the generating random processes. Various clustering algorithms are evaluated based on process variance and the key issue of the rate at which algorithmic performance improves with increasing numbers of experimental replications. The model means can be selected by hand to test the separability of expected types of biological expression patterns. Alternatively, the model can be seeded by real data to test the expected precision of that output or the extent of improvement in precision that replication could provide. In the latter case, a clustering algorithm is used to form clusters, and the model is seeded with the means and variances of these clusters. Other algorithms are then tested relative to the seeding algorithm. Results are averaged over various seeds. Output includes error tables and graphs, confusion matrices, principal-component plots, and validation measures. Five algorithms are studied in detail: K-means, fuzzy C-means, self-organizing maps, hierarchical Euclidean-distance-based and correlation-based clustering. The toolbox is applied to gene-expression clustering based on cDNA microarrays using real data. Expression profile graphics are generated and error analysis is displayed within the context of these profile graphics. 
A large amount of generated output is available over the web.
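The toolbox's evaluation loop, generate points from class means plus independent noise, cluster them, and count points clustered incorrectly relative to the generating processes, can be sketched as below. The means, noise level, and choice of K-means are illustrative; the toolbox covers four other algorithms as well.

```python
import numpy as np
from itertools import permutations
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Two "processes": class mean plus independent noise, as in the toolbox model.
means = np.array([[0.0, 0.0], [4.0, 4.0]])
labels = np.repeat([0, 1], 50)
points = means[labels] + rng.normal(scale=0.7, size=(100, 2))

pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)

# Clustering error: misassigned points under the best label permutation
# (cluster indices are arbitrary, so labels must be matched first).
err = min(np.sum(np.array([p[c] for c in pred]) != labels)
          for p in permutations(range(2)))
print(f"misclustered: {err} / 100")
```

Repeating this over many noise draws, and over increasing process variance, yields the error-versus-replication curves the paper uses to compare algorithms.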
Automated array assembly task, phase 1
NASA Technical Reports Server (NTRS)
Carbajal, B. G.
1977-01-01
State-of-the-art technologies applicable to silicon solar cell and solar cell module fabrication were assessed. The assessment consisted of a technical feasibility evaluation and a cost projection for high-volume production of solar cell modules. Design equations based on minimum power loss were used as a tool in the evaluation of metallization technologies. A solar cell process sensitivity study using models, computer calculations, and experimental data was used to identify correlations between process-step variation and cell-output variation.
Evaluation of a Kinematically-Driven Finite Element Footstrike Model.
Hannah, Iain; Harland, Andy; Price, Dan; Schlarb, Heiko; Lucas, Tim
2016-06-01
A dynamic finite element model of a shod running footstrike was developed and driven with 6 degree of freedom foot segment kinematics determined from a motion capture running trial. Quadratic tetrahedral elements were used to mesh the footwear components, with material models determined from appropriate mechanical tests. Model outputs were compared with experimental high-speed video (HSV) footage, vertical ground reaction force (GRF), and center of pressure (COP) excursion to determine whether such an approach is appropriate for the development of athletic footwear. Although unquantified, good visual agreement with the HSV footage was observed, but significant discrepancies were found between the model and experimental GRF and COP readings (9% and 61% of model readings outside the mean experimental reading ± 2 standard deviations, respectively). Model output was also found to be highly sensitive to input kinematics, with a 120% increase in maximum GRF observed when translating the force platform 2 mm vertically. While representing an alternative approach to existing dynamic finite element footstrike models, loading highly representative of an experimental trial was not found to be achievable when employing exclusively kinematic boundary conditions. This significantly limits the usefulness of such an approach in the footwear development process.
Underwater Stirling engine design with a modified one-dimensional model
NASA Astrophysics Data System (ADS)
Li, Daijin; Qin, Kan; Luo, Kai
2015-09-01
Stirling engines are regarded as an efficient and promising power system for underwater devices. Many current studies use one-dimensional models to evaluate the thermodynamic performance of Stirling engines, but some aspects, such as mechanical loss and auxiliary power, still lack proper mathematical models. In this paper, a four-cylinder double-acting Stirling engine for Unmanned Underwater Vehicles (UUVs) is discussed, and a one-dimensional model incorporating empirical equations for mechanical loss and auxiliary power obtained from experiments is derived with reference to the Stirling engine computer model of the National Aeronautics and Space Administration (NASA). The P-40 Stirling engine, for which sufficient NASA test results exist, is used to validate the accuracy of this one-dimensional model. The maximum error in predicted output power is less than 18% relative to test results, and the maximum error in input power is no more than 9%. Finally, a Stirling engine for UUVs is designed with the Schmidt analysis method and the modified one-dimensional model, and the results indicate the designed engine is capable of delivering the desired output power.
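The Schmidt analysis mentioned above rests on an isothermal, uniform-pressure gas model with sinusoidal expansion and compression volumes; the indicated work per cycle is then the closed integral of p dV. The sketch below evaluates that integral numerically. All parameter values are illustrative assumptions, not the UUV engine's design values.

```python
import numpy as np

# Isothermal Schmidt-type model: sinusoidal expansion/compression volumes,
# uniform pressure; parameter values are illustrative, not the UUV design.
theta = np.linspace(0, 2 * np.pi, 2001)
alpha = np.pi / 2                      # expansion leads compression by 90 deg
Vswe = Vswc = 1.0e-4                   # swept volumes (m^3)
Vdead = 0.5e-4                         # lumped dead volume (m^3)
Th, Tr, Tk = 900.0, 600.0, 300.0       # hot, regenerator, cold temps (K)
mR = 0.025                             # gas mass x gas constant (J/K)

Ve = Vswe / 2 * (1 + np.cos(theta + alpha))   # expansion (hot) space
Vc = Vswc / 2 * (1 + np.cos(theta))           # compression (cold) space
p = mR / (Ve / Th + Vdead / Tr + Vc / Tk)     # uniform-pressure gas equation

# Indicated work per cycle: closed trapezoidal integral of p dV
# over both variable-volume spaces.
dW = 0.5 * (p[1:] + p[:-1]) * (np.diff(Ve) + np.diff(Vc))
W = dW.sum()
print(f"indicated work per cycle: {W:.2f} J")
```

Because the gas is mostly in the hot space while the total volume expands and mostly in the cold space while it is compressed, the net work per cycle is positive; multiplying by engine speed gives the ideal indicated power, which the paper's modified model then corrects for mechanical loss and auxiliary power.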
Comparing estimates of EMEP MSC-W and UFORE models in air pollutant reduction by urban trees.
Guidolotti, Gabriele; Salviato, Michele; Calfapietra, Carlo
2016-10-01
There is growing interest in identifying and quantifying the benefits provided by trees in the urban environment in order to improve environmental quality in cities. However, evaluating and estimating plant efficiency in removing atmospheric pollutants is rather complicated, because of the large number of factors involved and the difficulty of estimating the effects of interactions between the different components. In this study, the EMEP MSC-W model was adapted to scale down to the tree level, allowing its application to an industrial-urban green area in Northern Italy. Moreover, the annual outputs were compared with those of UFORE (nowadays i-Tree), a leading model for urban forest applications. Although the EMEP MSC-W model and UFORE are semi-empirical models designed for different applications, the comparison, based on O3, NO2 and PM10 removal, showed good agreement between the estimates and highlights how the down-scaling methodology presented in this study may open significant opportunities for further development.
Analgesic Efficacy and Safety of Hydromorphone in Chinchillas (Chinchilla lanigera).
Evenson, Emily A; Mans, Christoph
2018-05-01
Limited information is available regarding the efficacy of opioid analgesics in chinchillas. Here we sought to evaluate the analgesic efficacy and safety of hydromorphone in chinchillas. In a randomized, controlled, blind, complete crossover design, hydromorphone was administered at 0.5, 1, and 2 mg/kg SC to 16 chinchillas. Analgesic efficacy was determined by measuring hindlimb withdrawal latencies after a thermal noxious stimulus (Hargreaves method) at 0, 1, 2, 4, and 8 h after drug administration. Changes in daily food intake and fecal output after hydromorphone administration were recorded. At 2 mg/kg SC, but not at lower dosages, hydromorphone increased withdrawal latencies for less than 4 h. Food intake was reduced after all 3 dosages, and fecal output decreased in the 1- and 2-mg/kg groups. The decreases in these parameters were dose-dependent, with the greatest reduction measured over the first 24 h. Our current results indicate that hydromorphone at 2 mg/kg SC is an effective, short-acting analgesic drug in chinchillas that transiently reduces food intake and fecal output. Further studies are needed to evaluate the safety of hydromorphone in animals undergoing surgical procedures and general anesthesia and to determine whether lower doses provide analgesia in different nociceptive models.
Climate Model Diagnostic Analyzer Web Service System
NASA Astrophysics Data System (ADS)
Lee, S.; Pan, L.; Zhai, C.; Tang, B.; Jiang, J. H.
2014-12-01
We have developed a cloud-enabled web-service system that empowers physics-based, multi-variable model performance evaluations and diagnoses through the comprehensive and synergistic use of multiple observational data, reanalysis data, and model outputs. We have developed a methodology to transform an existing science application code into a web service using a Python wrapper interface and Python web service frameworks. The web-service system, called Climate Model Diagnostic Analyzer (CMDA), currently supports (1) all the observational datasets from Obs4MIPs and a few ocean datasets from NOAA and Argo, which can serve as observation-based reference data for model evaluation, (2) many of CMIP5 model outputs covering a broad range of atmosphere, ocean, and land variables from the CMIP5 specific historical runs and AMIP runs, and (3) ECMWF reanalysis outputs for several environmental variables in order to supplement observational datasets. Analysis capabilities currently supported by CMDA are (1) the calculation of annual and seasonal means of physical variables, (2) the calculation of time evolution of the means in any specified geographical region, (3) the calculation of correlation between two variables, (4) the calculation of difference between two variables, and (5) the conditional sampling of one physical variable with respect to another variable. A web user interface is chosen for CMDA because it not only lowers the learning curve and removes the adoption barrier of the tool but also enables instantaneous use, avoiding the hassle of local software installation and environment incompatibility. CMDA will be used as an educational tool for the summer school organized by JPL's Center for Climate Science in 2014. In order to support 30+ simultaneous users during the school, we have deployed CMDA to the Amazon cloud environment. 
The cloud-enabled CMDA will provide each student with a virtual machine while the user interaction with the system will remain the same through web-browser interfaces. The summer school will serve as a valuable testbed for the tool development, preparing CMDA to serve its target community: Earth-science modeling and model-analysis community.
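The first and third analysis capabilities listed above (seasonal means and correlation between two variables) reduce to simple operations on time series; a minimal stand-alone sketch in plain Python, not CMDA's actual service code, using invented monthly climatologies:

```python
from math import sqrt
from statistics import mean

def seasonal_mean(monthly, months):
    """Mean of a 12-entry monthly climatology over 1-based months (JJA = (6, 7, 8))."""
    return mean(monthly[m - 1] for m in months)

def pearson(x, y):
    """Pearson correlation between two equal-length series."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

# Hypothetical monthly mean temperature (deg C) and evaporation (mm/day):
temp = [2, 4, 9, 14, 19, 23, 26, 25, 21, 15, 8, 3]
evap = [1, 2, 4, 6, 8, 10, 12, 11, 9, 6, 3, 1]
print(seasonal_mean(temp, (6, 7, 8)))   # JJA seasonal mean
print(pearson(temp, evap))              # in-phase series: strongly positive
```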
A merged model of quality improvement and evaluation: maximizing return on investment.
Woodhouse, Lynn D; Toal, Russ; Nguyen, Trang; Keene, DeAnna; Gunn, Laura; Kellum, Andrea; Nelson, Gary; Charles, Simone; Tedders, Stuart; Williams, Natalie; Livingood, William C
2013-11-01
Quality improvement (QI) and evaluation are frequently considered to be alternative approaches for monitoring and assessing program implementation and impact. The emphasis on third-party evaluation, particularly associated with summative evaluation, and the grounding of evaluation in the social and behavioral sciences contrast with an emphasis on the integration of the QI process within programs or organizations and its origins in management science and industrial engineering. Working with a major philanthropic organization in Georgia, we illustrate how a QI model is integrated with evaluation for five asthma prevention and control sites serving poor and underserved communities in rural and urban Georgia. A primary foundation of this merged model of QI and evaluation is a refocusing of the evaluation from an intimidating report-card summative evaluation by external evaluators to an internally engaged program focus on developmental evaluation. The benefits of the merged model to both QI and evaluation are discussed. The use of evaluation-based logic models can help anchor a QI program in evidence-based practice and provide linkage between processes and outputs and the longer-term distal outcomes. Merging the QI approach with evaluation has major advantages, particularly related to enhancing the funder's return on investment. We illustrate how a Plan-Do-Study-Act model of QI can (a) be integrated with evaluation-based logic models, (b) help refocus emphasis from summative to developmental evaluation, (c) enhance program ownership and engagement in evaluation activities, and (d) increase the role of evaluators in providing technical assistance and support.
Wang, Fugui; Mladenoff, David J; Forrester, Jodi A; Blanco, Juan A; Scheller, Robert M; Peckham, Scott D; Keough, Cindy; Lucash, Melissa S; Gower, Stith T
The effects of forest management on soil carbon (C) and nitrogen (N) dynamics vary by harvest type and species. We simulated the long-term effects of bole-only harvesting of aspen (Populus tremuloides) on stand productivity and the interaction of C and N cycles with a multiple-model approach. Five models, Biome-BGC, CENTURY, FORECAST, LANDIS-II with Century-based soil dynamics, and PnET-CN, were run for 350 yr with seven harvesting events on nutrient-poor, sandy soils representing northwestern Wisconsin, United States. Twenty C and N state and flux variables were summarized from the models' outputs and statistically analyzed using ordination and variance analysis methods. The multiple-model averages suggest that bole-only harvest would not significantly affect the long-term site productivity of aspen, though declines in soil organic matter and soil N were significant. Along with direct N removal by harvesting, extensive leaching after harvesting and before canopy closure was another major cause of N depletion. The five models differed notably in the output values of the 20 variables examined, although there were some similarities for certain variables. PnET-CN produced unique results for every variable, whereas CENTURY showed fewer outliers and temporal patterns similar to the mean of all models. In general, we demonstrated that when there are no site-specific data for fine-scale calibration and evaluation of a single model, the multiple-model approach may be more robust for long-term simulations. In addition, multimodeling may also improve the calibration and evaluation of an individual model.
NASA Astrophysics Data System (ADS)
Scherstjanoi, M.; Kaplan, J. O.; Thürig, E.; Lischke, H.
2013-09-01
Models of vegetation dynamics that are designed for application at spatial scales larger than individual forest gaps suffer from several limitations. Typically, either a population average approximation is used that results in unrealistic tree allometry and forest stand structure, or models have a high computational demand because they need to simulate both a series of age-based cohorts and a number of replicate patches to account for stochastic gap-scale disturbances. The detail required by the latter method increases the number of calculations by two to three orders of magnitude compared to the less realistic population average approach. In an effort to increase the efficiency of dynamic vegetation models without sacrificing realism, we developed a new method for simulating stand-replacing disturbances that is both accurate and faster than approaches that use replicate patches. The GAPPARD (approximating GAP model results with a Probabilistic Approach to account for stand Replacing Disturbances) method works by postprocessing the output of deterministic, undisturbed simulations of a cohort-based vegetation model, deriving the distribution of patch ages at any point in time on the basis of a disturbance probability. With this distribution, the expected value of any output variable can be calculated from the output values of the deterministic undisturbed run at the time corresponding to the patch age. To account for temporal changes in model forcing (e.g., as a result of climate change), GAPPARD performs a series of deterministic simulations and interpolates between the results in the postprocessing step. We integrated the GAPPARD method into the vegetation model LPJ-GUESS, and evaluated it in a series of simulations along an altitudinal transect of an inner-Alpine valley. We obtained results very similar to the output of the original LPJ-GUESS model that uses 100 replicate patches, but simulation time was reduced by approximately a factor of 10.
Our new method is therefore highly suited for rapidly approximating LPJ-GUESS results, and it opens the way for future studies over large spatial domains, easier parameterization of tree species, faster identification of areas with interesting simulation results, and comparisons with large-scale datasets and the results of other forest models.
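The core of the GAPPARD postprocessing step can be sketched directly: with a constant annual stand-replacing disturbance probability p, the stationary patch-age distribution is geometric, and the expected value of any output variable is the age-weighted sum of the undisturbed run's outputs (the full method additionally interpolates between deterministic runs as forcing changes). The biomass trajectory below is invented for illustration:

```python
def gappard_expectation(undisturbed, p):
    """Expected output under stochastic stand-replacing disturbance.

    undisturbed[a] is the deterministic model output at patch age a
    (a = 0 .. A-1); p is the annual disturbance probability.  Patch age
    follows a geometric distribution P(a) = p * (1 - p)**a, truncated at
    the length of the undisturbed run and renormalised.
    """
    weights = [p * (1 - p) ** a for a in range(len(undisturbed))]
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, undisturbed)) / total

# Hypothetical biomass trajectory of an undisturbed patch (t/ha over 300 yr):
biomass = [200 * (1 - 0.99 ** a) for a in range(300)]
print(gappard_expectation(biomass, 0.01))  # well below the old-growth asymptote
```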
Fan, Jinlong; Pan, Zhihua; Zhao, Ju; Zheng, Dawei; Tuo, Debao; Zhao, Peiyi
2004-04-01
The degradation of the ecological environment in the agriculture-pasture ecotone of northern China has received increasing attention. Based on many years of research and guided by energy and material flow theory, this paper puts forward an ecological management model that takes a hill as the basic cell, according to the natural, social and economic characteristics of the Houshan dryland farming area within the northern agriculture-pasture ecotone. The inputs and outputs of three models, i.e., the traditional along-slope-tillage model, the artificial grassland model and the ecological management model, were observed and recorded in detail in 1999. Energy and material flow analysis based on field tests showed that, compared with the traditional model, the ecological management model increased solar energy use efficiency by 8.3%, energy output by 8.7%, energy conversion efficiency by 19.4%, N output by 26.5%, N conversion efficiency by 57.1%, P output by 12.1%, P conversion efficiency by 45.0%, and water use efficiency by 17.7%. Among the three models, the artificial grassland model had the lowest solar energy use efficiency, energy output and energy conversion efficiency, while the ecological management model had the greatest outputs and benefits; it was the best model, with high economic effect, increasing economic benefits by 16.1% compared with the traditional model.
NASA Astrophysics Data System (ADS)
Kumar, Anuruddh; Chauhan, Aditya; Vaish, Rahul; Kumar, Rajeev; Jain, Satish Chandra
2018-01-01
Flexoelectricity is a phenomenon which allows all crystalline dielectric materials to exhibit strain-induced polarization. With recent articles reporting giant flexoelectric coupling coefficients for various ferroelectric materials, this field merits investigation for its application potential. In this study, a wide-band linear energy harvesting device is proposed using Ba0.6Sr0.4TiO3 ceramic. Both structural and material parameters were scrutinized for an optimized design. Dynamic analysis was performed using finite element modeling to evaluate several important parameters, including beam deflection, open circuit voltage and net power output. It was revealed that open circuit voltage and net power output lack correlation. Further, power output depends only weakly on the width ratio, with the highest power output of 0.07 μW observed for a width ratio of 0.33, closely followed by ratios of 0.2 and 0.5 (~0.07 μW each). The resulting power was generated at discrete (resonant) frequencies, lacking a broadband structure. A compound design with integrated beams was proposed to overcome this drawback. The finalized design is capable of a maximum power output of >0.04 μW over an operational frequency range of 90-110 Hz, thus allowing for a higher power output in a broader frequency range.
Effects of climate change on the economic output of the Longjing-43 tea tree, 1972-2013.
Lou, Weiping; Sun, Shanlei; Wu, Lihong; Sun, Ke
2015-05-01
Based on established phenological and economic output models and meteorological data from 1972 to 2013, changes in the phenology, frost risk, and economic output of the Longjing-43 tea tree in the Yuezhou Longjing tea production area of China were evaluated. As the local climate has changed, the beginning dates of tea bud and leaf plucking of this cultivar in all five counties studied have advanced significantly, at rates of -1.28 to -0.88 days/decade, with no significant change in the risk of frost. The main tea-producing stages in the tea production cycle include the plucking periods for superfine, grade 1, and grade 2 buds and leaves. Among the five bud and leaf grades, the economic output of the plucking periods for superfine and grade 1 decreased significantly, that for grade 2 showed no significant change, and those for grades 3 and 4 increased significantly. The economic output of large-area tea plantations employing an average of 45 workers per hectare and producing superfine to grade 2 buds and leaves was significantly reduced, by 6,745-8,829 yuan/decade/ha depending on the county. Tea farmers who planted tea trees on their own small land holdings and produced superfine to grade 4 tea buds and leaves themselves experienced no significant decline in economic output.
[Air pollution in an urban area nearby the Rome-Ciampino city airport].
Di Menno di Bucchianico, Alessandro; Cattani, Giorgio; Gaeta, Alessandra; Caricchia, Anna Maria; Troiano, Francesco; Sozzi, Roberto; Bolignano, Andrea; Sacco, Fabrizio; Damizia, Sesto; Barberini, Silvia; Caleprico, Roberta; Fabozzi, Tina; Ancona, Carla; Ancona, Laura; Cesaroni, Giulia; Forastiere, Francesco; Gobbi, Gian Paolo; Costabile, Francesca; Angelini, Federico; Barnaba, Francesca; Inglessis, Marco; Tancredi, Francesco; Palumbo, Lorenzo; Fontana, Luca; Bergamaschi, Antonio; Iavicoli, Ivo
2014-01-01
The aim was to assess the spatial and temporal variability of air pollution in the urban area near Ciampino International Airport (Rome) and to investigate the contribution of airport-related emissions. The study domain was a 64 km2 area around the airport, in which two fifteen-day monitoring campaigns (late spring, winter) were carried out. Results were evaluated using the outputs of several runs of a Lagrangian particle model for airport-related sources and of a photochemical model (the Flexible Air quality Regional Model, FARM). Measurements comprised standard and high time resolution air pollutant concentrations (CO, NO, NO2, C6H6, and the mass and number concentrations of several PM fractions), fifteen-day average concentrations of NO2 and volatile organic compounds at 46 fixed points spread over the study area, and deterministic model outputs. Standard time resolution measurements, as well as model outputs, showed that the airport's contribution to air pollution levels was small compared with the main source in the area (i.e., vehicular traffic). However, high time resolution measurements revealed peaks of particles associated with aircraft takeoff (total number concentration and soot mass concentration) and landing (coarse mass concentration) when the measurement site was downwind of the runway. The frequently observed transient spikes associated with aircraft movements could make a non-negligible contribution to the exposure of people living around the airport to ultrafine, soot and coarse particles. This contribution and its spatial and temporal variability should be investigated when assessing the air quality impact of airports.
Performance evaluation of nonhomogeneous hospitals: the case of Hong Kong hospitals.
Li, Yongjun; Lei, Xiyang; Morton, Alec
2018-02-14
Throughout the world, hospitals are under increasing pressure to become more efficient. Efficiency analysis tools can play a role in giving policymakers insight into which units are less efficient and why. Many researchers have studied the efficiency of hospitals using data envelopment analysis (DEA). However, in the existing literature on DEA-based performance evaluation, a standard assumption of the constant returns to scale (CRS) and variable returns to scale (VRS) DEA models is that decision-making units (DMUs) use a similar mix of inputs to produce a similar set of outputs. In fact, hospitals with different primary goals supply different services and provide different outputs. That is, hospitals are nonhomogeneous, and the standard assumption of the DEA model is not applicable to the performance evaluation of nonhomogeneous hospitals. This paper considers the nonhomogeneity among hospitals in performance evaluation, taking hospitals in Hong Kong as a case study. An extension of Cook et al. (2013) [1] based on the VRS assumption is developed to evaluate nonhomogeneous hospitals' efficiencies, since the inputs of hospitals vary greatly. Following the philosophy of Cook et al. (2013) [1], hospitals are divided into homogeneous groups and the production process of each hospital is divided into subunits. The performance of hospitals is measured on the basis of these subunits. The proposed approach can be applied to measure the performance of other nonhomogeneous entities that exhibit variable returns to scale.
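For intuition, in the special case of a single input and a single output the CCR (constant returns to scale) efficiency score collapses to each DMU's output/input ratio normalised by the best ratio in the sample; a toy sketch of that special case (the paper's actual approach handles multiple inputs and outputs, VRS, and subunits, and requires solving a linear program per DMU):

```python
def ccr_single(inputs, outputs):
    """CCR (constant-returns-to-scale) DEA efficiency, 1 input / 1 output.

    With a single input and a single output, the DEA linear program
    collapses to a productivity ratio:
        eff_j = (y_j / x_j) / max_k (y_k / x_k)
    """
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical hospitals: beds (input) vs. patients treated (output).
beds     = [200, 100, 400]
patients = [100, 100, 300]
print(ccr_single(beds, patients))  # the second hospital is the efficient peer
```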
NASA Astrophysics Data System (ADS)
Pasculli, Antonio; Audisio, Chiara; Sciarra, Nicola
2017-12-01
In the alpine context, estimates of rainfall (inflow) and discharge (outflow) data are very important in order to, at a minimum, analyse historical time series at the catchment scale, determine hydrological maxima and minima, and estimate flood and drought frequency. Hydrological research thus becomes a precious source of information for various human activities, in particular for land use management and planning. Many rainfall-runoff models have been proposed to reflect steady, gradually varied flow conditions inside a catchment. In recent years, the application of Reduced Complexity Models (RCMs) has represented an excellent alternative for evaluating the hydrological response of catchments over periods of up to decades. Hence, this paper discusses the application of the research code CAESAR, based on a cellular automaton (CA) approach, to evaluate the water and sediment outputs from an alpine catchment (Soana, Italy), selected as a test case. The comparison between the predicted numerical results, developed through parametric analysis, and the available measured data is discussed. Finally, an analysis of a numerical estimate of the sediment budget over ten years is presented. The need for a fast but reliable numerical tool when measured data are not easily accessible, as in Alpine catchments, is highlighted.
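The cellular-automaton idea behind reduced-complexity codes like CAESAR can be illustrated with a toy one-dimensional water-routing sweep, in which each cell passes part of its water to lower-lying neighbours in proportion to the drop in water-surface elevation; this is only a schematic of the CA concept, not CAESAR's actual routing or sediment rules:

```python
def route_water(elev, water, frac=0.5):
    """One relaxation sweep of a toy cellular-automaton water router.

    Each cell passes a fraction of its water to lower-lying neighbours,
    split in proportion to the drop in water-surface elevation.  Purely
    illustrative; CAESAR's routing and sediment rules are far richer.
    """
    surface = [e + w for e, w in zip(elev, water)]
    new = list(water)
    for i, s in enumerate(surface):
        drops = {}
        for j in (i - 1, i + 1):
            if 0 <= j < len(surface) and surface[j] < s:
                drops[j] = s - surface[j]
        if not drops:
            continue
        total_drop = sum(drops.values())
        moved = frac * water[i]
        new[i] -= moved
        for j, d in drops.items():
            new[j] += moved * d / total_drop
    return new

elev  = [3.0, 2.0, 1.0, 0.0]           # hypothetical hillslope profile
water = [1.0, 0.0, 0.0, 0.0]
print(route_water(elev, water))         # water begins migrating downslope
```

Note that each sweep conserves total water mass; repeated sweeps move the water toward the lowest cell.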
Lognormal Approximations of Fault Tree Uncertainty Distributions.
El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P
2018-01-26
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
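The two sampling baselines compared in the article can be sketched for a toy two-event OR gate with lognormally distributed basic-event probabilities. A one-sided 95/95 Wilks bound needs only 59 samples (the maximum of the 59 serves as the bound), versus the thousands typically used for a Monte Carlo percentile; the lognormal parameters below are illustrative assumptions, not values from the article:

```python
import random

def top_event(p1, p2):
    """OR gate: the top event occurs if either basic event occurs."""
    return 1.0 - (1.0 - p1) * (1.0 - p2)

def sample_top(mu1, sigma1, mu2, sigma2, rng):
    """One draw of the top-event probability with lognormal basic events."""
    p1 = min(rng.lognormvariate(mu1, sigma1), 1.0)
    p2 = min(rng.lognormvariate(mu2, sigma2), 1.0)
    return top_event(p1, p2)

rng = random.Random(42)
params = (-7.0, 1.0, -6.0, 0.8)   # illustrative medians e**-7 and e**-6

# Full Monte Carlo 95th percentile from 10,000 samples:
mc = sorted(sample_top(*params, rng) for _ in range(10_000))
p95_mc = mc[int(0.95 * len(mc))]

# Wilks 95/95 one-sided upper bound: maximum of 59 samples.
wilks = max(sample_top(*params, rng) for _ in range(59))

print(f"MC 95th percentile ~ {p95_mc:.2e}, Wilks 95/95 bound ~ {wilks:.2e}")
```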
NASA Astrophysics Data System (ADS)
Liu, Z.; Rajib, M. A.; Jafarzadegan, K.; Merwade, V.
2015-12-01
Application of land surface/hydrologic models within an operational flood forecasting system can provide probable time of occurrence and magnitude of streamflow at specific locations along a stream. Creating time-varying spatial extent of flood inundation and depth requires the use of a hydraulic or hydrodynamic model. Models differ in representing river geometry and surface roughness which can lead to different output depending on the particular model being used. The result from a single hydraulic model provides just one possible realization of the flood extent without capturing the uncertainty associated with the input or the model parameters. The objective of this study is to compare multiple hydraulic models toward generating ensemble flood inundation extents. Specifically, relative performances of four hydraulic models, including AutoRoute, HEC-RAS, HEC-RAS 2D, and LISFLOOD are evaluated under different geophysical conditions in several locations across the United States. By using streamflow output from the same hydrologic model (SWAT in this case), hydraulic simulations are conducted for three configurations: (i) hindcasting mode by using past observed weather data at daily time scale in which models are being calibrated against USGS streamflow observations, (ii) validation mode using near real-time weather data at sub-daily time scale, and (iii) design mode with extreme streamflow data having specific return periods. Model generated inundation maps for observed flood events both from hindcasting and validation modes are compared with remotely sensed images, whereas the design mode outcomes are compared with corresponding FEMA generated flood hazard maps. The comparisons presented here will give insights on probable model-specific nature of biases and their relative advantages/disadvantages as components of an operational flood forecasting system.
ERIC Educational Resources Information Center
Welty, Gordon A.
The logic of the evaluation of educational and other action programs is discussed from a methodological viewpoint. However, no attempt is made to develop methods of evaluating programs. In Part I, the structure of an educational program is viewed as a system with three components--inputs, transformation of inputs into outputs, and outputs. Part II…
NASA Astrophysics Data System (ADS)
Wang, Bowen; Li, Yuanyuan; Xie, Xinliang; Huang, Wenmei; Weng, Ling; Zhang, Changgeng
2018-05-01
Based on the Wiedemann effect and inverse magnetostritive effect, the output voltage model of a magnetostrictive displacement sensor has been established. The output voltage of the magnetostrictive displacement sensor is calculated in different magnetic fields. It is found that the calculating result is in an agreement with the experimental one. The theoretical and experimental results show that the output voltage of the displacement sensor is linearly related to the magnetostrictive differences, (λl-λt), of waveguide wires. The measured output voltages for Fe-Ga and Fe-Ni wire sensors are 51.5mV and 36.5mV, respectively, and the output voltage of Fe-Ga wire sensor is obviously higher than that of Fe-Ni wire sensor under the same magnetic field. The model can be used to predict the output voltage of the sensor and to provide guidance for the optimization design of the sensor.
NASA Astrophysics Data System (ADS)
Deo, Ravinesh C.; Şahin, Mehmet
2015-07-01
The forecasting of drought based on cumulative influence of rainfall, temperature and evaporation is greatly beneficial for mitigating adverse consequences on water-sensitive sectors such as agriculture, ecosystems, wildlife, tourism, recreation, crop health and hydrologic engineering. Predictive models of drought indices help in assessing water scarcity situations, drought identification and severity characterization. In this paper, we tested the feasibility of the Artificial Neural Network (ANN) as a data-driven model for predicting the monthly Standardized Precipitation and Evapotranspiration Index (SPEI) for eight candidate stations in eastern Australia using predictive variable data from 1915 to 2005 (training) and simulated data for the period 2006-2012. The predictive variables were: monthly rainfall totals, mean temperature, minimum temperature, maximum temperature and evapotranspiration, which were supplemented by large-scale climate indices (Southern Oscillation Index, Pacific Decadal Oscillation, Southern Annular Mode and Indian Ocean Dipole) and the Sea Surface Temperatures (Nino 3.0, 3.4 and 4.0). A total of 30 ANN models were developed with 3-layer ANN networks. To determine the best combination of learning algorithms, hidden transfer and output functions of the optimum model, the Levenberg-Marquardt and Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton backpropagation algorithms were utilized to train the network, tangent and logarithmic sigmoid equations used as the activation functions and the linear, logarithmic and tangent sigmoid equations used as the output function. The best ANN architecture had 18 input neurons, 43 hidden neurons and 1 output neuron, trained using the Levenberg-Marquardt learning algorithm using tangent sigmoid equation as the activation and output functions. 
An evaluation of the model performance based on statistical rules yielded a time-averaged Coefficient of Determination, Root Mean Squared Error and Mean Absolute Error ranging from 0.9945 to 0.9990, 0.0466 to 0.1117, and 0.0013 to 0.0130, respectively, for individual stations. Also, the Willmott's Index of Agreement and the Nash-Sutcliffe Coefficient of Efficiency were between 0.932 and 0.959 and between 0.977 and 0.998, respectively. When checked for the severity (S), duration (D) and peak intensity (I) of drought events determined from the simulated and observed SPEI, differences in drought parameters ranged from -1.41 to 0.64%, -2.17 to 1.92% and -3.21 to 1.21%, respectively. Based on these performance evaluation measures, we aver that the Artificial Neural Network model is a useful data-driven tool for forecasting monthly SPEI and its drought-related properties in the region of study.
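Architecturally, the reported 18-43-1 network's forward pass is just two affine maps with tangent sigmoid squashing; a schematic in plain Python with random placeholder weights standing in for the trained Levenberg-Marquardt weights:

```python
import math
import random

def mlp_forward(x, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a one-hidden-layer MLP with tanh activations,
    mirroring the 18-43-1 architecture reported above.  The weights
    here are random placeholders, not the trained values."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    return math.tanh(sum(wo * h for wo, h in zip(w_out, hidden)) + b_out)

rng = random.Random(0)
n_in, n_hid = 18, 43
w_hidden = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hid)]
b_hidden = [rng.uniform(-0.5, 0.5) for _ in range(n_hid)]
w_out = [rng.uniform(-0.5, 0.5) for _ in range(n_hid)]
b_out = rng.uniform(-0.5, 0.5)

x = [rng.uniform(-1, 1) for _ in range(n_in)]   # standardised predictors
print(mlp_forward(x, w_hidden, b_hidden, w_out, b_out))  # a value in (-1, 1)
```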
Multi-level emulation of complex climate model responses to boundary forcing data
NASA Astrophysics Data System (ADS)
Tran, Giang T.; Oliver, Kevin I. C.; Holden, Philip B.; Edwards, Neil R.; Sóbester, András; Challenor, Peter
2018-04-01
Climate model components involve both high-dimensional input and output fields. It is desirable to efficiently generate spatio-temporal outputs of these models for applications in integrated assessment modelling or to assess the statistical relationship between such sets of inputs and outputs, for example, uncertainty analysis. However, the need for efficiency often compromises the fidelity of output through the use of low complexity models. Here, we develop a technique which combines statistical emulation with a dimensionality reduction technique to emulate a wide range of outputs from an atmospheric general circulation model, PLASIM, as functions of the boundary forcing prescribed by the ocean component of a lower complexity climate model, GENIE-1. Although accurate and detailed spatial information on atmospheric variables such as precipitation and wind speed is well beyond the capability of GENIE-1's energy-moisture balance model of the atmosphere, this study demonstrates that the output of this model is useful in predicting PLASIM's spatio-temporal fields through multi-level emulation. Meaningful information from the fast model, GENIE-1 was extracted by utilising the correlation between variables of the same type in the two models and between variables of different types in PLASIM. We present here the construction and validation of several PLASIM variable emulators and discuss their potential use in developing a hybrid model with statistical components.
Effects of Climate Change on Flood Frequency in the Pacific Northwest
NASA Astrophysics Data System (ADS)
Gergel, D. R.; Stumbaugh, M. R.; Lee, S. Y.; Nijssen, B.; Lettenmaier, D. P.
2014-12-01
A key concern about climate change as related to water resources is the potential for changes in hydrologic extremes, including flooding. We explore changes in flood frequency in the Pacific Northwest using downscaled output from ten Global Climate Models (GCMs) from the Coupled Model Intercomparison Project Phase 5 (CMIP5) for historical forcings (1950-2005) and the future Representative Concentration Pathways (RCPs) 4.5 and 8.5 (2006-2100). We use archived output from the Integrated Scenarios Project (ISP) (http://maca.northwestknowledge.net/), which uses the Multivariate Adaptive Constructed Analogs (MACA) method for statistical downscaling. The MACA-downscaled GCM output was then used to force the Variable Infiltration Capacity (VIC) hydrology model at a 1/16th degree spatial resolution and a daily time step. For each of the 238 HUC-08 areas within the Pacific Northwest (USGS Hydrologic Region 17), we computed, from the ISP archive, the series of maximum daily runoff values (a surrogate for the annual maximum flood), and then the mean annual flood. Finally, we computed the ratios of the RCP4.5 and RCP8.5 mean annual floods to their corresponding values for the historical period. We evaluate spatial patterns in the results. For snow-dominated watersheds, the changes are dominated by reductions in flood frequency in basins that currently have spring-dominant floods, and increases in snow-affected basins with fall-dominant floods. In low-elevation basins west of the Cascades, changes in flooding are more directly related to changes in precipitation extremes. We further explore the nature of these effects by evaluating the mean Julian day of the annual maximum flood for each HUC-08 and how this changes between the historical and the RCP4.5 and RCP8.5 scenarios.
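The flood statistic described above is a straightforward reduction of the daily runoff series: take the annual maximum for each year, average the maxima to obtain the mean annual flood, and form the future-to-historical ratio. A minimal sketch with synthetic runoff values, not the ISP archive data:

```python
def mean_annual_flood(daily_runoff_by_year):
    """Mean of the annual maxima of daily runoff (a surrogate for the
    annual maximum flood, as in the analysis described above)."""
    annual_maxima = [max(year) for year in daily_runoff_by_year]
    return sum(annual_maxima) / len(annual_maxima)

# Synthetic daily runoff (mm/day), 3 historical and 3 future years:
historical = [[1, 5, 12, 3], [2, 9, 4, 1], [1, 3, 15, 2]]
future     = [[2, 7, 10, 4], [3, 18, 5, 2], [1, 4, 20, 3]]

ratio = mean_annual_flood(future) / mean_annual_flood(historical)
print(f"mean annual flood ratio (future / historical): {ratio:.2f}")
```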
NASA Astrophysics Data System (ADS)
Arsad, Roslah; Nasir Abdullah, Mohammad; Alias, Suriana; Isa, Zaidi
2017-09-01
Stock evaluation has always been an interesting problem for investors. In this paper, the efficiency of the stocks of companies listed on Bursa Malaysia is compared through the application of the Data Envelopment Analysis (DEA) estimation method. One of the interesting research subjects in DEA is the selection of appropriate input and output parameters. In this study, DEA was used to measure the efficiency of the stocks of listed companies in Bursa Malaysia in terms of financial ratios, to evaluate the performance of the stocks. Based on previous studies and the Fuzzy Delphi Method (FDM), the most important financial ratios were selected. The results indicated that return on equity, return on assets, net profit margin, operating profit margin, earnings per share, price to earnings and debt to equity were the most important ratios. Using expert information, all the parameters were classified as inputs or outputs. The main objectives were to identify the most critical financial ratios, classify them based on expert information, and compute the relative efficiency scores of the stocks, as well as to rank them completely within the construction and materials sector. The analysis employed the model of Alirezaee and Afsharian, in which the original Charnes, Cooper and Rhodes (CCR) model with the assumption of Constant Returns to Scale (CRS) still holds; this method of ranking the relative efficiency of decision-making units (DMUs) is augmented by the Balance Index. The data are for the year 2015, and the population of the research comprises the companies listed on the stock market in the construction and materials sector (63 companies). According to the ranking, the proposed model can completely rank the 63 companies using the selected financial ratios.
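A minimal sketch of the underlying input-oriented CCR model (constant returns to scale) on made-up data, using `scipy.optimize.linprog`; the Balance Index refinement of Alirezaee and Afsharian is not reproduced here, and the inputs/outputs below are illustrative stand-ins for the financial ratios.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data: 5 DMUs (e.g. stocks), 2 inputs, 2 outputs.
# Columns are DMUs; the numbers are made up for the sketch.
X = np.array([[4.0, 2.0, 3.0, 5.0, 6.0],     # input 1 (e.g. debt to equity)
              [3.0, 1.0, 2.0, 4.0, 2.0]])    # input 2 (e.g. price to earnings)
Y = np.array([[2.0, 1.0, 3.0, 1.0, 2.0],     # output 1 (e.g. return on equity)
              [1.0, 2.0, 2.0, 1.0, 1.0]])    # output 2 (e.g. earnings per share)
m, n = X.shape
s = Y.shape[0]

def ccr_efficiency(o):
    """Input-oriented CCR (CRS) efficiency score for DMU o.

    Decision variables: [theta, lambda_1 .. lambda_n].
    min theta  s.t.  X @ lam <= theta * X[:, o],  Y @ lam >= Y[:, o],  lam >= 0.
    """
    c = np.r_[1.0, np.zeros(n)]
    A_ub = np.block([[-X[:, [o]], X],            # X lam - theta x_o <= 0
                     [np.zeros((s, 1)), -Y]])    # -Y lam <= -y_o
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return float(res.x[0])

scores = [ccr_efficiency(o) for o in range(n)]
print([round(v, 3) for v in scores])
```

Efficient DMUs score 1; inefficient ones score below 1 and are ranked by their radial contraction factor.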
NASA Astrophysics Data System (ADS)
Bisht, K.; Dodamani, S. S.
2016-12-01
Modelling of land surface temperature (LST) is essential for the short- and long-term management of environmental studies and of the Earth's resources. The objective of this research is to estimate and model LST. For this purpose, Landsat 7 ETM+ images for the period 2007 to 2012 were used for retrieving LST and were processed in MATLAB using a Mamdani fuzzy inference system (MFIS), with pre-monsoon and post-monsoon LST included in the fuzzy model. Mangalore City in Karnataka state, India, was chosen as the study area. The fuzzy model takes the retrieved pre-monsoon and post-monsoon temperatures as inputs and LST as the output. To develop the fuzzy model for LST, seven fuzzy subsets, nineteen rules and one output were used for the estimation of weekly mean air temperature; the subsets are very low (VL), low (L), medium low (ML), medium (M), medium high (MH), high (H) and very high (VH). The TVX (surface temperature-vegetation index) and empirical methods provided estimated LST. The study showed that the fuzzy model M4/7-19-1 (model 4; 7 fuzzy sets, 19 rules, 1 output) developed over Mangalore City provided more accurate outcomes than the other models (M1, M2, M3, M5). The results were evaluated with standard statistical measures: the correlation coefficient (R) and root mean squared error (RMSE) between estimated and measured values were 0.966 and 1.607 K for pre-monsoon LST, and 0.963 and 1.623 K for post-monsoon LST, respectively.
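A Mamdani inference step of the kind described (triangular memberships, min implication, max aggregation, centroid defuzzification) can be sketched as follows; the three fuzzy sets and three rules below are toy stand-ins for the paper's seven sets and nineteen rules, and the temperature ranges are assumptions.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with vertices a <= b <= c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Toy fuzzy sets over temperature in kelvin (the paper's seven sets VL..VH
# are reduced to three here for brevity).
sets = {"L": (290, 295, 300), "M": (295, 300, 305), "H": (300, 305, 310)}
# Each rule: (pre-monsoon set, post-monsoon set) -> output set.
rules = [("L", "L", "L"), ("M", "M", "M"), ("H", "H", "H")]

def mamdani_lst(pre, post):
    """Mamdani inference with centroid defuzzification over a fine grid."""
    z = np.linspace(285, 315, 601)                 # output universe (K)
    agg = np.zeros_like(z)
    for sp, sq, so in rules:
        w = min(tri(pre, *sets[sp]), tri(post, *sets[sq]))       # firing strength
        agg = np.maximum(agg, np.minimum(w, tri(z, *sets[so])))  # clip + max
    if agg.sum() == 0:
        return float("nan")
    return float((z * agg).sum() / agg.sum())      # centroid

lst = mamdani_lst(296.0, 298.0)
print(round(lst, 1))
```

Scaling this pattern to seven sets and nineteen rules only changes the `sets` and `rules` tables, not the inference machinery.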
Evaluation of coral reef carbonate production models at a global scale
NASA Astrophysics Data System (ADS)
Jones, N. S.; Ridgwell, A.; Hendy, E. J.
2015-03-01
Calcification by coral reef communities is estimated to account for half of all carbonate produced in shallow water environments and more than 25% of the total carbonate buried in marine sediments globally. Production of calcium carbonate by coral reefs is therefore an important component of the global carbon cycle; it is also threatened by future global warming and other global change pressures. Numerical models of reefal carbonate production are needed for understanding how carbonate deposition responds to environmental conditions including atmospheric CO2 concentrations in the past and into the future. However, before any projections can be made, the basic test is to establish model skill in recreating present-day calcification rates. Here we evaluate four published model descriptions of reef carbonate production in terms of their predictive power, at both local and global scales. We also compile available global data on reef calcification to produce an independent observation-based data set for the model evaluation of carbonate budget outputs. The four calcification models are based on functions sensitive to combinations of light availability, aragonite saturation (Ωa) and temperature and were implemented within a specifically developed global framework, the Global Reef Accretion Model (GRAM). No model was able to reproduce independent rate estimates of whole-reef calcification, and the output from the temperature-only based approach was the only model to significantly correlate with coral-calcification rate observations. The absence of any predictive power for whole reef systems, even when consistent at the scale of individual corals, points to the overriding importance of coral cover estimates in the calculations. Our work highlights the need for an ecosystem modelling approach, accounting for population dynamics in terms of mortality and recruitment and hence calcifier abundance, in estimating global reef carbonate budgets. 
In addition, validation of reef carbonate budgets is severely hampered by limited and inconsistent methodology in reef-scale observations.
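A rate-law style calcification function of the general kind evaluated in GRAM (sensitive to aragonite saturation, light and calcifier abundance) can be sketched as below. The functional form and all coefficients are illustrative assumptions, not the published GRAM parameterisations; the coral-cover factor reflects the abstract's point that cover dominates whole-reef estimates.

```python
import numpy as np

def calcification(omega_a, light_frac, coral_cover, k=10.0, n=1.7):
    """Illustrative carbonate production, kg CaCO3 m^-2 yr^-1 (assumed units).

    omega_a     aragonite saturation state; rate law k * (omega_a - 1)^n
    light_frac  fraction of surface irradiance reaching the community
    coral_cover fractional live coral cover (the dominant control noted above)
    """
    omega_term = np.maximum(omega_a - 1.0, 0.0) ** n
    return k * omega_term * light_frac * coral_cover

# Reef-flat example: saturated waters, shallow depth, 30% live cover.
g = float(calcification(3.8, 0.9, 0.3))
print(round(g, 2))
```

Evaluating such a function per grid cell and comparing against the compiled observation-based budgets is the model-skill test the abstract describes.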
NASA Astrophysics Data System (ADS)
Senay, G. B.; Budde, M. E.; Allen, R. G.; Verdin, J. P.
2008-12-01
Evapotranspiration (ET) is an important component of the hydrologic budget because it expresses the exchange of mass and energy between the soil-water-vegetation system and the atmosphere. Since direct measurement of ET is difficult, various modeling methods are used to estimate actual ET (ETa). Generally, the choice of method for ET estimation depends on the objective of the study and is further limited by the availability of data and the desired accuracy of the ET estimate. Operational monitoring of crop performance requires processing large data sets and a quick response time. A Simplified Surface Energy Balance (SSEB) model was developed by the U.S. Geological Survey's Famine Early Warning Systems Network to estimate irrigation water use in remote places of the world. In this study, we evaluated the performance of the SSEB model against the METRIC (Mapping Evapotranspiration at high Resolution and with Internalized Calibration) model, which has been evaluated by several researchers using lysimeter data. The METRIC model has been proven to provide reliable ET estimates in different regions of the world. Reference ET fractions of both models (ETrF of METRIC vs. ETf of SSEB) were generated and compared using individual Landsat thermal images collected from 2000 through 2005 in Idaho, New Mexico, and California. In addition, the models were compared using monthly and seasonal total ETa estimates. The SSEB model reproduced both the spatial and temporal variability exhibited by METRIC on land surfaces, explaining up to 80 percent of the spatial variability. However, the ETa estimates over water bodies were systematically higher in the SSEB output, which could be improved by using a correction coefficient to take into account the absorption of solar energy by deeper water layers, which contributes little to the ET process.
This study demonstrated the usefulness of the SSEB method for large-scale agro-hydrologic applications, for operational monitoring and assessment of crop performance and regional water-balance dynamics.
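The SSEB ET-fraction idea can be sketched as follows: the ET fraction is interpolated between analyst-selected "hot" (near-zero ET) and "cold" (reference-rate ET) pixels and scaled by a reference ET. The anchor temperatures, reference ET and surface temperatures below are illustrative, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical Landsat-scale surface temperatures (kelvin) for one small scene.
Ts = rng.uniform(295.0, 320.0, size=(4, 4))   # per-pixel surface temperature

# SSEB anchor temperatures: averages of selected "hot" (bare, dry) and
# "cold" (well-watered, fully vegetated) pixels chosen by the analyst.
T_hot, T_cold = 320.0, 295.0

# ET fraction: hot pixels evaporate ~0, cold pixels at the reference rate.
ETf = np.clip((T_hot - Ts) / (T_hot - T_cold), 0.0, 1.0)

ETo = 7.2                                     # reference ET, mm/day (assumed)
ETa = ETf * ETo                               # actual ET estimate, mm/day
print(ETa.round(1))
```

This two-anchor scaling is what keeps the method simple enough for operational, large-area processing.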
NASA Astrophysics Data System (ADS)
Porto, P.; Cogliandro, V.; Callegari, G.
2018-01-01
In this paper, long-term sediment yield data, collected in a small (1.38 ha) Calabrian catchment (W2) reafforested with eucalyptus trees (Eucalyptus occidentalis Engl.), are used to validate the performance of the SEdiment Delivery Distributed Model (SEDD) in areas with high erosion rates. As a first step, the SEDD model was calibrated using field data collected in previous field campaigns undertaken during the period 1978-1994. This first phase allowed the model calibration parameter β to be calculated using direct measurements of rainfall, runoff, and sediment output. The model was then validated in its calibrated form for an independent period (2006-2016) for which new measurements of rainfall, runoff and sediment output are also available. The analysis, carried out at the event and annual scales, showed good agreement between measured and predicted values of sediment yield and suggested that the SEDD model can be seen as an appropriate means of evaluating the erosion risk associated with man-made plantations in marginal areas. Further work is however required to test the performance of the SEDD model as a prediction tool in different geomorphic contexts.
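The SEDD structure can be sketched as follows: each morphological unit's gross erosion is scaled by a sediment delivery ratio SDR_i = exp(-β t_i), where t_i is a travel time to the outlet and β is the calibration parameter fitted to measured sediment output. The travel-time proxy, unit data and β value below are assumptions for illustration, not the published formulation or the W2 calibration.

```python
import math

# Each unit: (gross soil loss, t/yr) and its flow path to the outlet,
# given as (segment length m, segment slope fraction) pairs. Values made up.
units = [
    (12.0, [(40.0, 0.20), (60.0, 0.10)]),
    (8.0,  [(25.0, 0.30)]),
    (15.0, [(55.0, 0.15), (35.0, 0.25), (20.0, 0.10)]),
]
beta = 0.002  # calibration parameter (assumed; fitted to sediment data in SEDD)

def travel_time(path):
    """Travel-time proxy: segment length over the square root of its slope."""
    return sum(l / math.sqrt(s) for l, s in path)

# Catchment sediment yield: soil loss of each unit scaled by its SDR.
yield_t = sum(a * math.exp(-beta * travel_time(p)) for a, p in units)
print(round(yield_t, 2))
```

Calibration then amounts to choosing β so that the summed delivered sediment matches the measured yield at the outlet.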
A Cost-Effectiveness/Benefit Analysis Model for Postsecondary Vocational Programs. Technical Report.
ERIC Educational Resources Information Center
Kim, Jin Eun
A cost-effectiveness/benefit analysis is defined as a technique for measuring the outputs of existing and new programs in relation to their specified program objectives, against the costs of those programs. In terms of its specific use, the technique is conceptualized as a systems analysis method, an evaluation method, and a planning tool for…
1992-03-15
Keywords: pipes, computer modelling, nondestructive testing, tomography, planar converter, cesium reservoir.
Evaluation of infiltration models in contaminated landscape.
Sadegh Zadeh, Kouroush; Shirmohammadi, Adel; Montas, Hubert J; Felton, Gary
2007-06-01
The infiltration models of Kostiakov, Green-Ampt, and Philip (two and three terms equations) were used, calibrated, and evaluated to simulate in-situ infiltration in nine different soil types. The Osborne-Moré modified version of the Levenberg-Marquardt optimization algorithm was coupled with the experimental data obtained by the double ring infiltrometers and the infiltration equations, to estimate the model parameters. Comparison of the model outputs with the experimental data indicates that the models can successfully describe cumulative infiltration in different soil types. However, since Kostiakov's equation fails to accurately simulate the infiltration rate as time approaches infinity, Philip's two-term equation, in some cases, produces negative values for the saturated hydraulic conductivity of soils, and the Green-Ampt model uses piston flow assumptions, we suggest using Philip's three-term equation to simulate infiltration and to estimate the saturated hydraulic conductivity of soils.
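Because Philip's three-term series I(t) = S·t^1/2 + A·t + B·t^3/2 is linear in its coefficients, it can be fitted by ordinary least squares; the synthetic infiltrometer readings below are illustrative, and the exact relation of A to the saturated hydraulic conductivity varies by formulation (the study itself used the Osborne-Moré modified Levenberg-Marquardt algorithm).

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic double-ring infiltrometer readings: cumulative infiltration (cm)
# at times t (h), generated from known coefficients plus measurement noise.
t = np.linspace(0.1, 4.0, 25)
S_true, A_true, B_true = 2.0, 0.8, -0.05
I_obs = (S_true * np.sqrt(t) + A_true * t + B_true * t**1.5
         + 0.02 * rng.normal(size=t.size))

# The series is linear in (S, A, B), so least squares suffices here.
G = np.column_stack([np.sqrt(t), t, t**1.5])
(S, A, B), *_ = np.linalg.lstsq(G, I_obs, rcond=None)

print(f"sorptivity S = {S:.2f} cm/h^0.5, A = {A:.2f} cm/h")
```

Nonlinear optimisers become necessary for models such as Green-Ampt, whose parameters enter nonlinearly.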
An improved swarm optimization for parameter estimation and biological model selection.
Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail
2013-01-01
One of the key aspects of computational systems biology is the investigation of the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of their nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs to the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary search strategy employed by Chemical Reaction Optimization into the neighbourhood search strategy of the Firefly Algorithm. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than those of the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed for model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data.
It is hoped that this study provides new insight into developing more accurate and reliable biological models from limited and low-quality experimental data.
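The model-selection step can be illustrated with the least-squares form of the Akaike Information Criterion on synthetic noisy data; the two candidate models and the data below are illustrative, unrelated to the transcriptional oscillator or protease systems in the study.

```python
import numpy as np

rng = np.random.default_rng(4)

# Noisy "experimental" data from a saturating response curve.
t = np.sort(rng.uniform(0, 10, 20))
y = 3.0 * t / (1.5 + t) + 0.15 * rng.normal(size=t.size)

def sse(pred):
    """Sum of squared residuals against the data."""
    return float(((y - pred) ** 2).sum())

n = t.size
# Candidate 1: straight line (2 parameters), fitted by least squares.
coef_lin = np.polyfit(t, y, 1)
sse_lin = sse(np.polyval(coef_lin, t))
# Candidate 2: saturating form with the half-saturation constant held fixed
# for brevity (1 free parameter, fitted by 1-D least squares).
basis = t / (1.5 + t)
vmax = float((basis @ y) / (basis @ basis))
sse_sat = sse(vmax * basis)

def aic(sse_val, k):
    """Least-squares AIC: n*ln(SSE/n) + 2k. Lower is better."""
    return n * np.log(sse_val / n) + 2 * k

print(f"AIC line: {aic(sse_lin, 2):.1f}  AIC saturating: {aic(sse_sat, 1):.1f}")
```

AIC trades goodness of fit against parameter count, which is what lets it flag a plausible model even from noisy data.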
NASA Astrophysics Data System (ADS)
Lamer, K.; Fridlind, A. M.; Ackerman, A. S.; Kollias, P.; Clothiaux, E. E.
2017-12-01
An important aspect of evaluating Arctic cloud representation in a general circulation model (GCM) consists of using observational benchmarks which are as equivalent as possible to model output, in order to avoid methodological bias and focus on correctly diagnosing model dynamical and microphysical misrepresentations. However, current cloud observing systems are known to suffer from biases such as limited sensitivity and a stronger response to large or small hydrometeors. Fortunately, while these observational biases cannot be corrected, they are often well understood and can be reproduced in forward simulations. Here a ground-based millimeter-wavelength Doppler radar and micropulse lidar forward simulator able to interface with output from the Goddard Institute for Space Studies (GISS) ModelE GCM is presented. ModelE stratiform hydrometeor fraction, mixing ratio, mass-weighted fall speed and effective radius are forward simulated to vertically resolved profiles of radar reflectivity, Doppler velocity and spectrum width, as well as lidar backscatter and depolarization ratio. These forward-simulated fields are then compared to Atmospheric Radiation Measurement (ARM) North Slope of Alaska (NSA) ground-based observations to assess cloud vertical structure (CVS). Model evaluation of Arctic mixed-phase clouds would also benefit from hydrometeor phase evaluation. While phase retrieval from synergistic observations often generates large uncertainties, the same retrieval algorithm can be applied to observed and forward-simulated radar-lidar fields, thereby producing retrieved hydrometeor properties with potentially the same uncertainties. Comparing hydrometeor properties retrieved in exactly the same way aims to produce the best apples-to-apples comparisons between GCM outputs and observations.
The use of a comprehensive ground-based forward simulator coupled with a hydrometeor classification retrieval algorithm provides a new perspective for GCM evaluation of Arctic mixed-phase clouds from the ground, where low-level supercooled liquid layers are more easily observed and where additional environmental properties such as cloud condensation nuclei are quantified. This should assist in choosing among several possible diagnostic ice nucleation schemes for ModelE stratiform clouds.
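The forward-simulation idea, including the reproduction of an instrument's limited sensitivity, can be sketched for a single model column; the power-law coefficients and sensitivity curve below are placeholder assumptions, not the GISS/ARM simulator's actual parameterizations.

```python
import numpy as np

# Toy single-column forward operator: model hydrometeor mixing ratios are
# mapped to radar reflectivity with an assumed Z = a * (rho*q)^b power law,
# and the radar's limited sensitivity is mimicked by a range-dependent
# minimum detectable reflectivity (attenuation is ignored).
z_km = np.arange(0.25, 10.25, 0.25)             # height of each model level
q = 1e-4 * np.exp(-((z_km - 2.0) / 1.5) ** 2)   # cloud layer, kg/kg
rho = 1.2 * np.exp(-z_km / 8.5)                 # air density, kg/m^3

a, b = 3.0e4, 1.6                               # assumed power-law constants
Z_lin = a * (rho * q) ** b                      # mm^6 m^-3
dBZ = 10.0 * np.log10(np.maximum(Z_lin, 1e-10))

# Sensitivity limit: e.g. -50 dBZ at 1 km, degrading as 20*log10(range).
min_detect = -50.0 + 20.0 * np.log10(z_km)
observed = np.where(dBZ >= min_detect, dBZ, np.nan)   # below limit: not seen

print(f"{np.isfinite(observed).sum()} of {observed.size} gates detected")
```

Applying the same masking to model and observations is what removes the sensitivity bias from the comparison.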
Acoustic Modeling for Aqua Ventus I off Monhegan Island, ME
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whiting, Jonathan M.; Hanna, Luke A.; DeChello, Nicole L.
2013-10-31
The DeepCwind consortium, led by the University of Maine, was awarded funding under the US Department of Energy's Offshore Wind Advanced Technology Demonstration Program to develop two floating offshore wind turbines in the Gulf of Maine equipped with Goldwind 6 MW direct-drive turbines, as the Aqua Ventus I project. The Goldwind turbines have a hub height of 100 m. The turbines will be deployed in Maine State waters, approximately 2.9 miles off Monhegan Island; Monhegan Island is located roughly 10 miles off the coast of Maine. In order to site and permit the offshore turbines, the acoustic output must be evaluated to ensure that the sound will neither disturb residents on Monhegan Island nor introduce sound levels into the nearby ocean sufficient to disturb marine mammals. This initial assessment of the acoustic output focuses on the sound of the turbines in air by modeling the assumed sound source level, applying a sound propagation model, and taking into account the distance from shore.
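A first-cut screening version of such a propagation model treats each turbine as a point source at hub height with hemispherical spreading, Lp(r) = Lw − 20·log10(r) − 8 dB, ignoring air absorption and refraction; the sound power level below is an assumed placeholder, not a Goldwind specification or the report's modeled value.

```python
import math

Lw = 108.0        # assumed A-weighted sound power level of one turbine, dB(A)
hub = 100.0       # hub height, m (from the project description)

def spl(distance_m):
    """Sound pressure level at a ground-level receiver, dB(A),
    under hemispherical spreading from a point source at hub height."""
    r = math.hypot(distance_m, hub)          # slant range from hub to receiver
    return Lw - 20.0 * math.log10(r) - 8.0

miles = 1609.34
level = spl(2.9 * miles)
print(f"screening SPL at Monhegan Island (2.9 mi): {level:.1f} dB(A)")
```

Even this crude estimate shows why distance from shore dominates the assessment: each doubling of range drops the level by 6 dB.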
Castellet, Lledó; Molinos-Senante, María
2016-02-01
The assessment of the efficiency of wastewater treatment plants (WWTPs) is essential to compare their performance and consequently to identify the best operational practices that can contribute to the reduction of operational costs. Previous studies have evaluated the efficiency of WWTPs using conventional data envelopment analysis (DEA) models. Most of these studies have considered the operational costs of the WWTPs as inputs, while the pollutants removed from wastewater are treated as outputs. However, they have ignored the fact that each pollutant removed by a WWTP involves a different environmental impact. To overcome this limitation, this paper evaluates for the first time the efficiency of a sample of WWTPs by applying the weighted slacks-based measure model. It is a non-radial DEA model which allows assigning weights to the inputs and outputs according to their importance. Thus, the assessment carried out integrates environmental issues with the traditional "techno-economic" efficiency assessment of WWTPs. Moreover, the potential economic savings for each cost item have been quantified at a plant level. It is illustrated that the WWTPs analyzed have significant room to save staff and energy costs. Several managerial implications to help WWTPs' operators make informed decisions were drawn from the methodology and empirical application carried out. Copyright © 2015 Elsevier Ltd. All rights reserved.
A reporting protocol for thermochronologic modeling illustrated with data from the Grand Canyon
NASA Astrophysics Data System (ADS)
Flowers, Rebecca M.; Farley, Kenneth A.; Ketcham, Richard A.
2015-12-01
Apatite (U-Th)/He and fission-track dates, as well as 4He/3He and fission-track length data, provide rich thermal history information. However, numerous choices and assumptions are required on the long road from raw data and observations to potentially complex geologic interpretations. This paper outlines a conceptual framework for this path, with the aim of promoting a broader understanding of how thermochronologic conclusions are derived. The tiered structure consists of thermal history model inputs at Level 1, thermal history model outputs at Level 2, and geologic interpretations at Level 3. Because inverse thermal history modeling is at the heart of converting thermochronologic data to interpretation, for others to evaluate and reproduce conclusions derived from thermochronologic results it is necessary to publish all data required for modeling, report all model inputs, and clearly and completely depict model outputs. Here we suggest a generalized template for a model input table with which to arrange, report and explain the choice of inputs to thermal history models. Model inputs include the thermochronologic data, additional geologic information, and system- and model-specific parameters. As an example we show how the origin of discrepant thermochronologic interpretations in the Grand Canyon can be better understood by using this disciplined approach.
Del Prete, Valeria; Treves, Alessandro
2002-04-01
In a previous paper we have evaluated analytically the mutual information between the firing rates of N independent units and a set of multidimensional continuous and discrete stimuli, for a finite population size and in the limit of large noise. Here, we extend the analysis to the case of two interconnected populations, where input units activate output ones via Gaussian weights and a threshold linear transfer function. We evaluate the information carried by a population of M output units, again about continuous and discrete correlates. The mutual information is evaluated solving saddle-point equations under the assumption of replica symmetry, a method that, by taking into account only the term linear in N of the input information, is equivalent to assuming the noise to be large. Within this limitation, we analyze the dependence of the information on the ratio M/N, on the selectivity of the input units and on the level of the output noise. We show analytically, and confirm numerically, that in the limit of a linear transfer function and of a small ratio between output and input noise, the output information approaches asymptotically the information carried in input. Finally, we show that the information loss in output does not depend much on the structure of the stimulus, whether purely continuous, purely discrete or mixed, but only on the position of the threshold nonlinearity, and on the ratio between input and output noise.
Dynamic network data envelopment analysis for university hospitals evaluation
Lobo, Maria Stella de Castro; Rodrigues, Henrique de Castro; André, Edgard Caires Gazzola; de Azeredo, Jônatas Almeida; Lins, Marcos Pereira Estellita
2016-01-01
OBJECTIVE: To develop an assessment tool to evaluate the efficiency of federal university general hospitals. METHODS: Data envelopment analysis, a linear programming technique, creates a best practice frontier by comparing observed production given the amount of resources used. The model is output-oriented and considers variable returns to scale. Network data envelopment analysis considers link variables belonging to more than one dimension (in the model, medical residents, adjusted admissions, and research projects). Dynamic network data envelopment analysis uses carry-over variables (in the model, financing budget) to analyze frontier shift in subsequent years. Data were gathered from the information system of the Brazilian Ministry of Education (MEC), 2010-2013. RESULTS: The mean scores for health care, teaching and research over the period were 58.0%, 86.0%, and 61.0%, respectively. In 2012, the best performance year, for all units to reach the frontier it would be necessary to have a mean increase of 65.0% in outpatient visits; 34.0% in admissions; 12.0% in undergraduate students; 13.0% in multi-professional residents; 48.0% in graduate students; 7.0% in research projects; besides a decrease of 9.0% in medical residents. In the same year, an increase of 0.9% in financing budget would be necessary to improve the care output frontier. In the dynamic evaluation, there was progress in teaching efficiency, oscillation in medical care and no variation in research. CONCLUSIONS: The proposed model generates public health planning and programming parameters by estimating efficiency scores and making projections to reach the best practice frontier. PMID:27191158
[Simulation and data analysis of stereological modeling based on virtual slices].
Wang, Hao; Shen, Hong; Bai, Xiao-yan
2008-05-01
To establish a computer-assisted stereological model simulating the process of slice sectioning, and to evaluate the relationship between the section surface and the estimated three-dimensional structure. The model was designed mathematically as Win32 software based on MFC, with Microsoft Visual Studio as the IDE, to simulate an effectively unlimited number of sections and to analyze the data derived from the model. The linearity of the model's fit was evaluated by comparison with the traditional formula. The Win32 software based on this algorithm allowed random sectioning of particles distributed randomly in an ideal virtual cube. The stereological parameters showed very high rates (>94.5% and 92%) in the homogeneity and independence tests. The density, shape and size data of the sections were found to conform to normal distributions. The output of the model and that from the image analysis system showed statistical correlation and consistency. The algorithm described here can be used for evaluating the stereological parameters of the structure of tissue slices.
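A Monte Carlo version of the virtual-slice experiment can be sketched for the simplest case of monodisperse spheres: a sphere of radius R cut by a random plane at distance d from its centre (d uniform on [0, R] for spheres that are hit) shows a circular profile of radius sqrt(R² − d²), and classical stereology predicts a mean profile diameter of (π/4) times the sphere diameter.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate random plane sections of monodisperse spheres of radius R.
R = 1.0
d = rng.uniform(0.0, R, size=200_000)        # section distance from centre
profile_diam = 2.0 * np.sqrt(R**2 - d**2)    # diameter of circular profile

# Theory: mean profile diameter = (pi/4) * sphere diameter.
print(f"simulated mean: {profile_diam.mean():.4f}  "
      f"theory: {np.pi / 4 * 2 * R:.4f}")
```

Extending the simulation to random sphere positions and sizes in a cube recovers the profile-size distributions that the Win32 model tests against image-analysis output.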
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anastasia M. Gribik; Ronald E. Mizia; Harry Gatley
This project addresses both the technical and economic feasibility of replacing industrial gas in lime kilns with synthesis gas from the gasification of hog fuel. The technical assessment includes a materials evaluation, processing equipment needs, and suitability of the heat content of the synthesis gas as a replacement for industrial gas. The economic assessment includes estimations for capital, construction, operating, maintenance, and management costs for the reference plant. To perform these assessments, detailed models of the gasification and lime kiln processes were developed using Aspen Plus. The material and energy balance outputs from the Aspen Plus model were used as inputs to both the material and economic evaluations.
CPAP Devices for Emergency Prehospital Use: A Bench Study.
Brusasco, Claudia; Corradi, Francesco; De Ferrari, Alessandra; Ball, Lorenzo; Kacmarek, Robert M; Pelosi, Paolo
2015-12-01
CPAP is frequently used in prehospital and emergency settings. An air-flow output minimum of 60 L/min and a constant positive pressure are 2 important features for a successful CPAP device. Unlike hospital CPAP devices, which require electricity, CPAP devices for ambulance use need only an oxygen source to function. The aim of the study was to evaluate and compare on a bench model the performance of 3 orofacial mask devices (Ventumask, EasyVent, and Boussignac CPAP system) and 2 helmets (Ventukit and EVE Coulisse) used to apply CPAP in the prehospital setting. A static test evaluated air-flow output, positive pressure applied, and FIO2 delivered by each device. A dynamic test assessed airway pressure stability during simulated ventilation. Efficiency of devices was compared based on oxygen flow needed to generate a minimum air flow of 60 L/min at each CPAP setting. The EasyVent and EVE Coulisse devices delivered significantly higher mean air-flow outputs compared with the Ventumask and Ventukit under all CPAP conditions tested. The Boussignac CPAP system never reached an air-flow output of 60 L/min. The EasyVent had significantly lower pressure excursion than the Ventumask at all CPAP levels, and the EVE Coulisse had lower pressure excursion than the Ventukit at 5, 15, and 20 cm H2O, whereas at 10 cm H2O, no significant difference was observed between the 2 devices. Estimated oxygen consumption was lower for the EasyVent and EVE Coulisse compared with the Ventumask and Ventukit. Air-flow output, pressure applied, FIO2 delivered, device oxygen consumption, and ability to maintain air flow at 60 L/min differed significantly among the CPAP devices tested. Only the EasyVent and EVE Coulisse achieved the required minimum level of air-flow output needed to ensure an effective therapy under all CPAP conditions. Copyright © 2015 by Daedalus Enterprises.
Advanced torque converters for robotics and space applications
NASA Technical Reports Server (NTRS)
1985-01-01
This report describes the results of the evaluation of a novel torque converter concept. Features of the concept include: (1) automatic and rapid adjustment of the effective gear ratio in response to changes in external torque; (2) maintenance of output torque at zero output velocity without loading the input power source; and (3) isolation of the input power source from the load. Two working models of the concept were fabricated and tested, and a theoretical analysis was performed to determine the limits of performance. It was found that the devices are apparently suited to certain types of tool-driver applications, such as screwdrivers, nut drivers and valve actuators. However, quantitative information was insufficient to draw final conclusions as to robotic applications.
Space qualified Nd:YAG laser (phase 1 - design)
NASA Technical Reports Server (NTRS)
Foster, J. D.; Kirk, R. F.
1971-01-01
Results of a design study and preliminary design of a space-qualified Nd:YAG laser are presented. A theoretical model of the laser was developed to allow evaluation of the effects of various parameters on its performance. Various pump lamps were evaluated and sum pumping was considered. Cooling requirements were examined, and cooling methods such as radiative, cryogenic and conductive cooling were analysed. Power outputs and efficiencies of various configurations, as well as pump and laser lifetimes, are discussed. Modulation and modulating methods were also considered.
Continuously on-going regional climate hindcast simulations for impact applications
NASA Astrophysics Data System (ADS)
Anders, Ivonne; Piringer, Martin; Kaufmann, Hildegard; Knauder, Werner; Resch, Gernot; Andre, Konrad
2017-04-01
Observational data for, e.g., temperature, precipitation, radiation, or wind are often used as meteorological forcing for different impact models, such as crop models, urban models, economic models and energy system models. To assess a climate signal, the time period covered by observations is often too short; the records contain gaps and are inhomogeneous over time due to changes in the measurements themselves or in the near surroundings. Output from global and regional climate models can close this gap and provide homogeneous and physically consistent time series of meteorological parameters. The CORDEX evaluation runs performed for the IPCC AR5 provide a good basis at the regional scale. However, with respect to climate services, continuously on-going hindcast simulations are required for regularly updated applications. The climate research group at the national Austrian weather service, ZAMG, focuses on high mountain regions and especially on the Alps. The hindcast simulation performed with the regional climate model COSMO-CLM is forced by ERA-Interim and optimized for the Alpine region. The simulation, available for the period 1979-2015 at a spatial resolution of about 10 km, is continuously extended and fulfils users' needs with respect to output variables, levels, intervals and statistical measures. One of the main tasks is to capture strong precipitation events, which often occur during summer when low-pressure systems develop over the Gulf of Genoa and move to the northeast. This leads to floods and landslide events in Austria, the Czech Republic and Germany. Such events are not sufficiently represented in the CORDEX evaluation runs. ZAMG uses high-quality gridded precipitation and temperature data for the Alpine region (1-6 km) to evaluate the model performance. Data are provided, e.g.,
to hydrological modellers (high water, low water), but are also used to assess the icing risk to infrastructure or to calculate separation distances between livestock farms and residential areas.
NASA Astrophysics Data System (ADS)
Nerguizian, Vahe; Rafaf, Mustapha
2004-08-01
This article describes and provides valuable information for companies and universities with strategies to start fabricating MEMS for RF/microwave and millimeter-wave applications. The present work shows the infrastructure developed for RF/microwave and millimeter-wave MEMS platforms, which helps the identification, evaluation and selection of design tools and fabrication foundries, taking packaging and testing into account. The selected and implemented simple infrastructure models, based on surface and bulk micromachining, yield inexpensive and innovative approaches for distributed choices of MEMS operating tools. To meet different educational or industrial institution needs, these models may be modified for specific resource changes using a carefully analyzed iteration process. The inputs of the project are evaluation and selection criteria and information sources such as financial, technical, availability, accessibility, simplicity, versatility and practical considerations. The outputs of the project are the selection of different MEMS design tools or software (solid modeling, electrostatic/electromagnetic and others, compatible with existing standard RF/microwave design tools) and different MEMS manufacturing foundries. Typical RF/microwave and millimeter-wave MEMS solutions are introduced on the platform during the evaluation and development phases of the project for the validation of realistic results and operational decision-making choices. The challenges encountered during the investigation and development steps are identified, and the dynamic behavior of the infrastructure is emphasized. The inputs (resources) and the outputs (demonstrated solutions) are presented in tables and flow-chart diagrams.
Performance improvement of planar dielectric elastomer actuators by magnetic modulating mechanism
NASA Astrophysics Data System (ADS)
Zhao, Yun-Hua; Li, Wen-Bo; Zhang, Wen-Ming; Yan, Han; Peng, Zhi-Ke; Meng, Guang
2018-06-01
In this paper, a novel planar dielectric elastomer actuator (DEA) with magnetic modulating mechanism is proposed. This design can provide the availability of wider actuation range and larger output force, which are significant indicators to evaluate the performance of DEAs. The DEA tends to be a compact and simple design, and an analytical model is developed to characterize the mechanical behavior. The result shows that the output force induced by the DEA can be improved by 76.90% under a certain applied voltage and initial magnet distance. Moreover, experiments are carried out to reveal the performance of the proposed DEA and validate the theoretical model. It demonstrates that the DEA using magnetic modulating mechanism can enlarge the actuation range and has more remarkable effect with decreasing initial magnet distance within the stable range. It can be useful to promote the applications of DEAs to soft robots and haptic feedback.
O'Neill, Liam; Dexter, Franklin
2005-11-01
We compare two techniques for increasing the transparency and face validity of Data Envelopment Analysis (DEA) results for managers at a single decision-making unit: multifactor efficiency (MFE) and non-radial super-efficiency (NRSE). Both methods incorporate the slack values from the super-efficient DEA model to provide a more robust performance measure than radial super-efficiency scores. MFE and NRSE are equivalent for unique optimal solutions and a single output. MFE incorporates the slack values from multiple output variables, whereas NRSE does not. MFE can be more transparent to managers since it involves no additional optimization steps beyond the DEA, whereas NRSE requires several. We compare results for operating room managers at an Iowa hospital evaluating its growth potential for multiple surgical specialties. In addition, we address the problem of upward bias of the slack values of the super-efficient DEA model.
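As context for the slack-based measures compared above, the radial building block they extend is the standard input-oriented CCR DEA model, solvable as a small linear program. The data below are hypothetical, and the sketch computes only the radial efficiency score θ, not the MFE or NRSE procedures themselves.

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR DEA efficiency for one DMU (hypothetical data).
X = np.array([[2.0, 4.0, 6.0, 3.0],    # input 1 for DMUs 0..3
              [3.0, 1.0, 5.0, 4.0]])   # input 2
Y = np.array([[1.0, 1.0, 1.0, 1.0]])   # single output

def ccr_efficiency(k):
    """min theta  s.t.  X @ lam <= theta * X[:, k],  Y @ lam >= Y[:, k],  lam >= 0."""
    m, n = X.shape
    c = np.r_[1.0, np.zeros(n)]                          # decision variables: [theta, lam]
    A_in = np.hstack([-X[:, [k]], X])                    # X lam - theta * x_k <= 0
    A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])   # -Y lam <= -y_k
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, k]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun

print(round(ccr_efficiency(2), 3))   # DMU 2 is dominated, so theta < 1
```

Slack values of the kind MFE and NRSE exploit are the residual input excesses and output shortfalls left after this radial contraction.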
Constraints to solve parallelogram grid problems in 2D non separable linear canonical transform
NASA Astrophysics Data System (ADS)
Zhao, Liang; Healy, John J.; Muniraj, Inbarasan; Cui, Xiao-Guang; Malallah, Ra'ed; Ryle, James P.; Sheridan, John T.
2017-05-01
The 2D non-separable linear canonical transform (2D-NS-LCT) can model a range of paraxial optical systems. Digital algorithms to evaluate the 2D-NS-LCTs are important in modeling light field propagation and are also of interest in many digital signal processing applications. In [Zhao 14] we reported that a given 2D input image with a rectangular shape/boundary in general results in a parallelogram output sampling grid (generally in affine coordinates rather than Cartesian coordinates), thus limiting further calculations, e.g. the inverse transform. One possible solution is to use interpolation techniques; however, this reduces the speed and accuracy of the numerical approximations. To alleviate this problem, in this paper some constraints are derived under which the output samples are located in Cartesian coordinates. Therefore, no interpolation operation is required and the calculation error can be significantly reduced.
A Risk Stratification Model for Lung Cancer Based on Gene Coexpression Network and Deep Learning
2018-01-01
Risk stratification models for lung cancer using gene expression profiles are of great interest. Instead of previous models based on individual prognostic genes, we aimed to develop a novel system-level risk stratification model for lung adenocarcinoma based on gene coexpression networks. Using multiple microarray datasets, gene coexpression network analysis was performed to identify survival-related networks. A deep learning based risk stratification model was constructed with representative genes of these networks. The model was validated in two test sets. Survival analysis was performed using the output of the model to evaluate whether it could predict patients' survival independent of clinicopathological variables. Five networks were significantly associated with patients' survival. Considering prognostic significance and representativeness, genes of the two survival-related networks were selected as input to the model. The output of the model was significantly associated with patients' survival in the training set and both test sets (p < 0.00001, p < 0.0001 and p = 0.02 for the training set and test sets 1 and 2, respectively). In multivariate analyses, the model was associated with patients' prognosis independent of other clinicopathological features. Our study presents a new perspective on incorporating gene coexpression networks into the gene expression signature and on the clinical application of deep learning in genomic data science for prognosis prediction. PMID:29581968
Dynamic visual attention: motion direction versus motion magnitude
NASA Astrophysics Data System (ADS)
Bur, A.; Wurtz, P.; Müri, R. M.; Hügli, H.
2008-02-01
Defined as an attentive process in the context of visual sequences, dynamic visual attention refers to the selection of the most informative parts of video sequence. This paper investigates the contribution of motion in dynamic visual attention, and specifically compares computer models designed with the motion component expressed either as the speed magnitude or as the speed vector. Several computer models, including static features (color, intensity and orientation) and motion features (magnitude and vector) are considered. Qualitative and quantitative evaluations are performed by comparing the computer model output with human saliency maps obtained experimentally from eye movement recordings. The model suitability is evaluated in various situations (synthetic and real sequences, acquired with fixed and moving camera perspective), showing advantages and inconveniences of each method as well as preferred domain of application.
Tekscan pressure sensor output changes in the presence of liquid exposure.
Jansson, Kyle S; Michalski, Max P; Smith, Sean D; LaPrade, Robert F; Wijdicks, Coen A
2013-02-01
The purpose of the study was to evaluate the load output of a pressure sensor in the presence of liquid saturation in a controlled environment. We hypothesized that a calibrated pressure sensor would provide diminishing load outputs over time in controlled environments of both humidified air and while submerged in saline and the sensors would reach a steady state output once saturated. A consistent compressive load was repeatedly applied to pressure sensors over time (Model 4000, Tekscan, Inc., South Boston, MA) with a tensile testing machine (Instron ElectroPuls E10000, Norwood, MA). All sensors were initially calibrated in a dry environment and were tested in three groups: humid air, submerged in 0.9% saline solution, and dry. Linear regression of load output over time for the pressure sensors exposed to humidity and submerged showed a 4.6% and 4.7% decline in load output each hour for the initial 6h, respectively (β=-0.046, 95% CI: [-0.053 to -0.039]; p<0.001) (β=-0.047, 95% CI: [-0.053 to -0.042; p<0.001). Tests after 72 h of exposure had linear regression decline in load output over time of 0.40% and 0.47% per hour for humidified and submerged sensors, respectively (β=-0.004, 95% CI: [-0.006 to -0.003]; p<0.001) (β=-0.047, 95% CI: [-0.053 to -0.042]; p<0.001). Because outcomes in biomedical research can affect clinical practices and treatments, the diminishing load output of the sensor in the presence of liquids should be accounted for. We recommend soaking sensors for more than 48 h prior to testing in a moist environment. Copyright © 2012 Elsevier Ltd. All rights reserved.
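The hourly decline figures above come from an ordinary least squares fit of load output against time; a minimal sketch of that calculation on synthetic data (the slope β is the fractional decline per hour; values here are illustrative, not the study's measurements):

```python
import numpy as np

# Synthetic sensor readings: ~4.6% decline per hour plus small noise,
# mimicking the humid-air condition reported for the initial 6 h.
hours = np.arange(0.0, 6.0, 0.5)          # time since liquid exposure (h)
true_beta = -0.046                        # assumed true fractional decline per hour
rng = np.random.default_rng(0)
output = 1.0 + true_beta * hours + rng.normal(0.0, 0.002, hours.size)

# OLS fit: slope is the estimated fractional change in load output per hour.
beta, intercept = np.polyfit(hours, output, 1)
print(f"estimated decline: {abs(beta) * 100:.1f}% per hour")
```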
Groundwater/surface-water interactions in the Bad River Watershed, Wisconsin
Leaf, Andrew T.; Fienen, Michael N.; Hunt, Randall J.; Buchwald, Cheryl A.
2015-11-23
Finally, a new data-worth analysis of potential new monitoring-well locations was performed by using the model. The relative worth of new measurements was evaluated based on their ability to increase confidence in model predictions of groundwater levels and base flows at 35 locations, under the condition of a proposed open-pit iron mine. Results of the new data-worth analysis, and other inputs and outputs from the Bad River model, are available through an online dynamic web mapping service at (http://wim.usgs.gov/badriver/).
Modeling of the spectral evolution in a narrow-linewidth fiber amplifier
NASA Astrophysics Data System (ADS)
Liu, Wei; Kuang, Wenjun; Jiang, Man; Xu, Jiangming; Zhou, Pu; Liu, Zejin
2016-03-01
Efficient numerical modeling of the spectral evolution in a narrow-linewidth fiber amplifier is presented. By describing the seeds using a statistical model and simulating the amplification process through power balanced equations combined with the nonlinear Schrödinger equations, the spectral evolution of different seeds in the fiber amplifier can be evaluated accurately. The simulation results show that the output spectra are affected by the temporal stability of the seeds and the seeds with constant amplitude in time are beneficial to maintain the linewidth of the seed in the fiber amplifier.
NASA Astrophysics Data System (ADS)
Zhao, Qiang; Gao, Qian; Zhu, Mingyue; Li, Xiumei
2018-06-01
Water resources carrying capacity is the maximum amount of water resources available to support social and economic development. Based on a survey and statistical analysis of the current situation of water resources in Shandong Province, this paper selects 13 evaluation factors: per capita water resources, water resources utilization, water supply modulus, rainfall, per capita GDP, population density, per capita water consumption, water consumption per million yuan of output value, water consumption per unit of industrial output value, agricultural output value of farmland, irrigation rate of cultivated land, ecological-environment water consumption rate, and forest coverage rate. A fuzzy comprehensive evaluation model was then used to assess the status of water resources carrying capacity. The results show that the comprehensive evaluation score for water resources in Shandong Province was below 0.6 in 2001-2009 and above 0.6 in 2010-2015, indicating that the water resources carrying capacity of Shandong Province has improved. However, the score was below 0.6 in most years and below 0.4 in individual years, with relatively large interannual changes, showing that the water resources carrying capacity of Shandong Province is generally weak and highly variable from year to year.
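A fuzzy comprehensive evaluation of the kind applied above combines a factor weight vector with a factor-by-grade membership matrix; the weights, membership degrees, and grade scores below are hypothetical placeholders (four factors instead of the paper's 13), shown only to illustrate the mechanics:

```python
import numpy as np

# Hypothetical factor weights (sum to 1) and membership degrees of each factor
# in three carrying-capacity grades: weak / moderate / strong.
weights = np.array([0.3, 0.25, 0.25, 0.2])
R = np.array([                 # rows: factors, cols: grades
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.4, 0.4, 0.2],
    [0.1, 0.3, 0.6],
])

B = weights @ R                            # overall membership in each grade
grade_scores = np.array([0.3, 0.6, 0.9])   # score attached to each grade
score = B @ grade_scores                   # composite evaluation score
print(B, score)
```

A composite score below 0.6 would fall in the "weak" range used in the interpretation above.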
Clasen, Stephan; Schmidt, Diethard; Boss, Andreas; Dietz, Klaus; Kröber, Stefan M; Claussen, Claus D; Pereira, Philippe L
2006-03-01
To evaluate the size and geometry of thermally induced coagulation by using multipolar radiofrequency (RF) ablation and to determine a mathematical model to predict coagulation volume. Multipolar RF ablations (n = 80) were performed in ex vivo bovine livers by using three internally cooled bipolar applicators with two electrodes on the same shaft. Applicators were placed in a triangular array (spacing, 2-5 cm) and were activated in multipolar mode (power output, 75-225 W). The size and geometry of the coagulation zone, together with ablation time, were assessed. Mathematical functions were fitted, and the goodness of fit was assessed by using r². Coagulation volume, short-axis diameter, and ablation time were dependent on power output and applicator distance. The maximum zone of coagulation (volume, 324 cm³; short-axis diameter, 8.4 cm; ablation time, 193 min) was induced with a power output of 75 W at an applicator distance of 5 cm. Coagulation volume and ablation time decreased as power output increased. Power outputs of 100-125 W at applicator distances of 2-4 cm led to a reasonable compromise between coagulation volume and ablation time. At 2 cm (100 W), coagulation volume, short-axis diameter, and ablation time were 66 cm³, 4.5 cm, and 19 min, respectively; at 3 cm (100 W), 90 cm³, 5.2 cm, and 22 min; at 4 cm (100 W), 132 cm³, 6.1 cm, and 27 min; at 2 cm (125 W), 56 cm³, 4.2 cm, and 9 min; at 3 cm (125 W), 73 cm³, 4.9 cm, and 12 min; and at 4 cm (125 W), 103 cm³, 5.5 cm, and 16 min. At applicator distances of 4 cm (>125 W) and 5 cm (>100 W), the zones of coagulation were not confluent. Coagulation volume (r² = 0.80) and RF ablation time (r² = 0.93) were determined by using the mathematical model. Multipolar RF ablation with three bipolar applicators may produce large volumes of confluent coagulation ex vivo. A compromise is necessary between prolonged RF ablations at lower power outputs, which produce larger volumes of coagulation, and faster RF ablations at higher power outputs, which produce smaller volumes of coagulation. Copyright RSNA, 2006.
Dynamic Modeling of Yield and Particle Size Distribution in Continuous Bayer Precipitation
NASA Astrophysics Data System (ADS)
Stephenson, Jerry L.; Kapraun, Chris
Process engineers at Alcoa's Point Comfort refinery are using a dynamic model of the Bayer precipitation area to evaluate options in operating strategies. The dynamic model, a joint development effort between Point Comfort and the Alcoa Technical Center, predicts process yields, particle size distributions and occluded soda levels for various flowsheet configurations of the precipitation and classification circuit. In addition to rigorous heat, material and particle population balances, the model includes mechanistic kinetic expressions for particle growth and agglomeration and semi-empirical kinetics for nucleation and attrition. The kinetic parameters have been tuned to Point Comfort's operating data, with excellent matches between the model results and plant data. The model is written for the ACSL dynamic simulation program with specifically developed input/output graphical user interfaces to provide a user-friendly tool. Features such as a seed charge controller enhance the model's usefulness for evaluating operating conditions and process control approaches.
HEAVY-DUTY GREENHOUSE GAS EMISSIONS MODEL ...
Class 2b-8 vocational truck manufacturers and Class 7/8 tractor manufacturers would be subject to vehicle-based fuel economy and emission standards that would use a truck simulation model to evaluate the impact of truck tires and/or tractor cab design on vehicle compliance with any new standards. The EPA has created a model called the "GHG Emissions Model (GEM)", which is specifically tailored to predict truck GHG emissions. As the model is designed for the express purpose of vehicle compliance demonstration, it is less configurable than similar commercial products, and its only outputs are GHG emissions and fuel consumption. This approach gives a simple and compact tool for vehicle compliance without the overhead and costs of a more sophisticated model, enabling evaluation of both fuel consumption and CO2 emissions from heavy-duty highway vehicles through a whole-vehicle operation simulation model.
The future of the Devon Ice cap: results from climate and ice dynamics modelling
NASA Astrophysics Data System (ADS)
Mottram, Ruth; Rodehacke, Christian; Boberg, Fredrik
2017-04-01
The Devon Ice Cap is an example of a relatively well monitored small ice cap in the Canadian Arctic. Close to Greenland, it shows a surface mass balance signal similar to that of glaciers in western Greenland. Here we use high resolution (5 km) simulations from HIRHAM5 to drive the PISM glacier model in order to model the present-day state and future prospects of this small Arctic ice cap. Observational data from the Devon Ice Cap in Arctic Canada are used to evaluate the surface mass balance (SMB) output from the HIRHAM5 model for simulations forced with the ERA-Interim climate reanalysis data and the historical emissions scenario run by the EC-Earth global climate model. The RCP8.5 scenario simulated by EC-Earth is also downscaled by HIRHAM5, and this output is used to force the PISM model to simulate the likely future evolution of the Devon Ice Cap under a warming climate. We find that the Devon Ice Cap is likely to continue its present-day retreat, though in the future increased precipitation partly offsets the enhanced melt rates caused by climate change.
Prediction of AL and Dst Indices from ACE Measurements Using Hybrid Physics/Black-Box Techniques
NASA Astrophysics Data System (ADS)
Spencer, E.; Rao, A.; Horton, W.; Mays, L.
2008-12-01
ACE measurements of the solar wind velocity, IMF and proton density are used to drive a hybrid physics/black-box model of the nightside magnetosphere. The core physics is contained in a low-order nonlinear dynamical model of the nightside magnetosphere called WINDMI. The model is augmented by wavelet-based nonlinear mappings between the solar wind quantities and the input into the physics model, followed by further wavelet-based mappings of the model's output field-aligned currents onto the ground-based magnetometer measurements of the AL index and Dst index. The black-box mappings are introduced at the input stage to account for uncertainties in the way the solar wind quantities are transported from the ACE spacecraft at L1 to the magnetopause. Similar mappings are introduced at the output stage to account for a spatially and temporally varying westward auroral electrojet geometry. The parameters of the model are tuned using a genetic algorithm and trained on the large geomagnetic storm dataset of October 3-7, 2000. Its predictive performance is then evaluated on subsequent storm datasets, in particular the April 15-24, 2002 storm. This work is supported by grant NSF 7020201.
Development of Probabilistic Flood Inundation Mapping For Flooding Induced by Dam Failure
NASA Astrophysics Data System (ADS)
Tsai, C.; Yeh, J. J. J.
2017-12-01
A primary function of flood inundation mapping is to forecast flood hazards and assess potential losses. However, uncertainties limit the reliability of inundation hazard assessments, so major sources of uncertainty should be taken into consideration by an optimal flood management strategy. This study focuses on the 20 km reach downstream of the Shihmen Reservoir in Taiwan. A dam-failure-induced flood provides the upstream boundary conditions for flood routing. The two major sources of uncertainty considered in the hydraulic model and the flood inundation mapping are uncertainty in the dam break model and uncertainty in the roughness coefficient. The perturbance moment method is applied to a dam break model and the hydrosystem model to develop probabilistic flood inundation maps. Various numbers of uncertain variables can be considered in these models, and the variability of the outputs can be quantified. Probabilistic flood inundation maps for dam-break-induced floods can be developed with consideration of output variability using the commonly used HEC-RAS model. Different probabilistic flood inundation maps are discussed and compared. These maps are expected to provide new physical insights in support of evaluating areas at risk of flooding from the reservoir.
Evaluating wind extremes in CMIP5 climate models
NASA Astrophysics Data System (ADS)
Kumar, Devashish; Mishra, Vimal; Ganguly, Auroop R.
2015-07-01
Wind extremes have consequences for renewable energy sectors, critical infrastructures, coastal ecosystems, and insurance industry. Considerable debates remain regarding the impacts of climate change on wind extremes. While climate models have occasionally shown increases in regional wind extremes, a decline in the magnitude of mean and extreme near-surface wind speeds has been recently reported over most regions of the Northern Hemisphere using observed data. Previous studies of wind extremes under climate change have focused on selected regions and employed outputs from the regional climate models (RCMs). However, RCMs ultimately rely on the outputs of global circulation models (GCMs), and the value-addition from the former over the latter has been questioned. Regional model runs rarely employ the full suite of GCM ensembles, and hence may not be able to encapsulate the most likely projections or their variability. Here we evaluate the performance of the latest generation of GCMs, the Coupled Model Intercomparison Project phase 5 (CMIP5), in simulating extreme winds. We find that the multimodel ensemble (MME) mean captures the spatial variability of annual maximum wind speeds over most regions except over the mountainous terrains. However, the historical temporal trends in annual maximum wind speeds for the reanalysis data, ERA-Interim, are not well represented in the GCMs. The historical trends in extreme winds from GCMs are statistically not significant over most regions. The MME model simulates the spatial patterns of extreme winds for 25-100 year return periods. The projected extreme winds from GCMs exhibit statistically less significant trends compared to the historical reference period.
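Return-period extremes such as the 25-100 year winds discussed above are typically estimated by fitting a generalized extreme value (GEV) distribution to annual maxima; a sketch on synthetic data (not CMIP5 output, and with illustrative parameter values):

```python
import numpy as np
from scipy import stats

# Synthetic record: 60 years of annual-maximum wind speed (m/s) drawn from an
# assumed GEV distribution, standing in for model or reanalysis output.
annual_max = stats.genextreme.rvs(c=0.1, loc=20.0, scale=3.0, size=60,
                                  random_state=42)

# Fit a GEV and read off N-year return levels: the (1 - 1/N) quantile.
shape, loc, scale = stats.genextreme.fit(annual_max)
levels = [stats.genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)
          for T in (25, 100)]
print([round(v, 1) for v in levels])
```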
Stability and Performance Metrics for Adaptive Flight Control
NASA Technical Reports Server (NTRS)
Stepanyan, Vahram; Krishnakumar, Kalmanje; Nguyen, Nhan; VanEykeren, Luarens
2009-01-01
This paper addresses the problem of verifying adaptive control techniques for enabling safe flight in the presence of adverse conditions. Since adaptive systems are nonlinear by design, existing control verification metrics are not applicable to adaptive controllers. Moreover, these systems are in general highly uncertain, so their characteristics cannot be evaluated by relying on the available dynamical models. This necessitates the development of control verification metrics based on the system's input-output information. From this point of view, a set of metrics is introduced that compares the uncertain aircraft's input-output behavior under the action of an adaptive controller to that of a closed-loop linear reference model to be followed by the aircraft. This reference model is constructed for each specific maneuver using the exact aerodynamic and mass properties of the aircraft to meet the stability and performance requirements commonly accepted in flight control. The proposed metrics are unified in the sense that they are model independent and not restricted to any specific adaptive control method. As an example, we present simulation results for a wing-damaged generic transport aircraft with several existing adaptive controllers.
NASA Technical Reports Server (NTRS)
Mukkamala, R.; Cohen, R. J.; Mark, R. G.
2002-01-01
Guyton developed a popular approach for understanding the factors responsible for cardiac output (CO) regulation in which 1) the heart-lung unit and systemic circulation are independently characterized via CO and venous return (VR) curves, and 2) average CO and right atrial pressure (RAP) of the intact circulation are predicted by graphically intersecting the curves. However, this approach is virtually impossible to verify experimentally. We theoretically evaluated the approach with respect to a nonlinear, computational model of the pulsatile heart and circulation. We developed two sets of open circulation models to generate CO and VR curves, differing by the manner in which average RAP was varied. One set applied constant RAPs, while the other set applied pulsatile RAPs. Accurate prediction of intact, average CO and RAP was achieved only by intersecting the CO and VR curves generated with pulsatile RAPs because of the pulsatility and nonlinearity (e.g., systemic venous collapse) of the intact model. The CO and VR curves generated with pulsatile RAPs were also practically independent. This theoretical study therefore supports the validity of Guyton's graphical analysis.
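Guyton's graphical analysis evaluated above can be sketched numerically: express cardiac output and venous return as curves over right atrial pressure and intersect them. The curve shapes and parameter values below are illustrative assumptions, not the study's pulsatile computational model.

```python
import numpy as np

# RAP axis (mmHg) over which both curves are evaluated.
rap = np.linspace(-4.0, 12.0, 1000)

# Illustrative CO curve: rises with RAP and saturates (clipped at zero).
co = np.clip(13.0 * (1.0 - np.exp(-(rap + 2.0) / 4.0)), 0.0, None)

# Illustrative VR curve: falls linearly from a mean systemic filling
# pressure of 7 mmHg, zero once RAP exceeds it.
vr = np.clip((7.0 - rap) / 1.4, 0.0, None)

# Predicted operating point of the intact circulation: curve intersection.
i = np.argmin(np.abs(co - vr))
print(f"RAP {rap[i]:.1f} mmHg, CO {co[i]:.1f} L/min")
```

In the study's terms, the question is whether this intersection, computed from curves generated with constant versus pulsatile RAP, predicts the intact model's average CO and RAP.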
Latent component-based gear tooth fault detection filter using advanced parametric modeling
NASA Astrophysics Data System (ADS)
Ettefagh, M. M.; Sadeghi, M. H.; Rezaee, M.; Chitsaz, S.
2009-10-01
In this paper, a new parametric model-based filter is proposed for gear tooth fault detection. The design of the filter consists of identifying the most appropriate latent component (LC) of the undamaged gearbox signal by analyzing the instant modules (IMs) and instant frequencies (IFs), and then using the component with the lowest IM as the proposed filter output for detecting gearbox faults. The filter parameters are estimated using LC theory, in which an advanced parametric modeling method has been implemented. The proposed method is applied to signals extracted from a simulated gearbox for detection of simulated gear faults. In addition, the method is used for quality inspection of the produced Nissan-Junior vehicle gearbox via gear profile error detection on an industrial test bed. For evaluation purposes, the proposed method is compared with previous parametric TAR/AR-based filters, in which the parametric model residual is considered the filter output and Yule-Walker and Kalman filters are implemented for estimating the parameters. The results confirm the high performance of the new proposed fault detection method.
Analysis of the Impact of Realistic Wind Size Parameter on the Delft3D Model
NASA Astrophysics Data System (ADS)
Washington, M. H.; Kumar, S.
2017-12-01
The wind size parameter, which is the distance from the center of the storm to the location of the maximum winds, is currently a constant in the Delft3D model. As a result, the Delft3D model's predictions of water levels during a storm surge are inaccurate compared to the observed data. To address this, an algorithm to calculate a realistic wind size parameter for a given hurricane was designed and implemented using the observed water-level data for Hurricane Matthew. A performance evaluation experiment was conducted to compare the accuracy of the model's water-level predictions using the realistic wind size parameter against the default constant wind size parameter for Hurricane Matthew, with water-level data observed from October 4 to October 9, 2016 from the National Oceanic and Atmospheric Administration (NOAA) as a baseline. The experimental results demonstrate that the Delft3D water-level output for the realistic wind size parameter matches the NOAA reference water-level data more accurately than that for the default constant parameter.
Lim, Einly; Salamonsen, Robert Francis; Mansouri, Mahdi; Gaddum, Nicholas; Mason, David Glen; Timms, Daniel L; Stevens, Michael Charles; Fraser, John; Akmeliawati, Rini; Lovell, Nigel Hamilton
2015-02-01
The present study investigates the response of implantable rotary blood pump (IRBP)-assisted patients to exercise and head-up tilt (HUT), as well as the effect of alterations in the model parameter values on this response, using validated numerical models. Furthermore, we comparatively evaluate the performance of a number of previously proposed physiologically responsive controllers, including constant speed, constant flow pulsatility index (PI), constant average pressure difference between the aorta and the left atrium, constant average differential pump pressure, constant ratio between mean pump flow and pump flow pulsatility (ratioPI, or linear Starling-like control), as well as constant mean left atrial pressure (P̄la) control, with regard to their ability to increase cardiac output during exercise while maintaining circulatory stability upon HUT. Although native cardiac output increases automatically during exercise, increasing pump speed was able to further improve total cardiac output and reduce elevated filling pressures. At the same time, reduced venous return associated with upright posture was not shown to induce left ventricular (LV) suction. Although P̄la control outperformed other control modes in its ability to increase cardiac output during exercise, it caused a fall in the mean arterial pressure upon HUT, which may cause postural hypotension or patient discomfort. To the contrary, maintaining a constant average pressure difference between the aorta and the left atrium demonstrated superior performance in both exercise and HUT scenarios. Due to their strong dependence on the pump operating point, PI and ratioPI control performed poorly during exercise and HUT.
Our simulation results also highlighted the importance of the baroreflex mechanism in determining the response of the IRBP-assisted patients to exercise and postural changes, where desensitized reflex response attenuated the percentage increase in cardiac output during exercise and substantially reduced the arterial pressure upon HUT. Copyright © 2014 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
Jabor, A; Vlk, T; Boril, P
1996-04-15
We designed a simulation model for the assessment of the financial risks involved when a new diagnostic test is introduced in the laboratory. The model is based on a neural network consisting of ten neurons and assumes that input entities can be assigned an appropriate uncertainty. Simulations are done on a 1-day interval basis. Risk analysis completes the model, and the financial effects are evaluated for a selected time period. The basic output of the simulation consists of total expenses and income during the simulation time, the net present value of the project at the end of the simulation, the total number of control samples during the simulation, the total number of patients evaluated, and the total number of kits used.
Operation quality assessment model for video conference system
NASA Astrophysics Data System (ADS)
Du, Bangshi; Qi, Feng; Shao, Sujie; Wang, Ying; Li, Weijian
2018-01-01
Video conference systems have become an important support platform for smart grid operation and management, and their operation quality is of growing concern to grid enterprises. First, an evaluation indicator system covering network, business, and operation-and-maintenance aspects was established on the basis of the video conference system's operation statistics. Then, an operation quality assessment model combining a genetic algorithm with a regularized BP neural network was proposed, which outputs the operation quality level of the system within a time period and provides company managers with optimization advice. The simulation results show that the proposed evaluation model offers fast convergence and high prediction accuracy in contrast with a plain regularized BP neural network, and its generalization ability is superior to LM-BP and Bayesian BP neural networks.
Evaluating the utility of dynamical downscaling in agricultural impacts projections
Glotter, Michael; Elliott, Joshua; McInerney, David; Best, Neil; Foster, Ian; Moyer, Elisabeth J.
2014-01-01
Interest in estimating the potential socioeconomic costs of climate change has led to the increasing use of dynamical downscaling—nested modeling in which regional climate models (RCMs) are driven with general circulation model (GCM) output—to produce fine-spatial-scale climate projections for impacts assessments. We evaluate here whether this computationally intensive approach significantly alters projections of agricultural yield, one of the greatest concerns under climate change. Our results suggest that it does not. We simulate US maize yields under current and future CO2 concentrations with the widely used Decision Support System for Agrotechnology Transfer crop model, driven by a variety of climate inputs including two GCMs, each in turn downscaled by two RCMs. We find that no climate model output can reproduce yields driven by observed climate unless a bias correction is first applied. Once a bias correction is applied, GCM- and RCM-driven US maize yields are essentially indistinguishable in all scenarios (<10% discrepancy, equivalent to error from observations). Although RCMs correct some GCM biases related to fine-scale geographic features, errors in yield are dominated by broad-scale (100s of kilometers) GCM systematic errors that RCMs cannot compensate for. These results support previous suggestions that the benefits for impacts assessments of dynamically downscaling raw GCM output may not be sufficient to justify its computational demands. Progress on fidelity of yield projections may benefit more from continuing efforts to understand and minimize systematic error in underlying climate projections. PMID:24872455
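Bias correction of the kind applied above before driving the crop model can take several forms; a minimal sketch of a simple mean-and-variance (shift-and-scale) correction on synthetic daily temperatures, which is one common variant and not necessarily the exact method used in the study:

```python
import numpy as np

# Synthetic stand-ins: observed historical climate, a biased GCM historical
# run (cold and over-dispersed here), and a future GCM projection.
rng = np.random.default_rng(1)
obs_hist = rng.normal(25.0, 2.0, 30 * 365)   # observed daily temperature (deg C)
gcm_hist = rng.normal(22.5, 3.0, 30 * 365)   # GCM over the same historical period
gcm_fut = rng.normal(25.5, 3.0, 30 * 365)    # GCM future projection

def bias_correct(series, model_ref, obs_ref):
    """Rescale `series` so model_ref's mean/std map onto obs_ref's."""
    z = (series - model_ref.mean()) / model_ref.std()
    return obs_ref.mean() + z * obs_ref.std()

corrected = bias_correct(gcm_fut, gcm_hist, obs_hist)
print(round(corrected.mean(), 2), round(corrected.std(), 2))
```

The climate-change signal (the shift between historical and future model runs) is preserved in standardized units while the historical bias is removed.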
Empirically evaluating decision-analytic models.
Goldhaber-Fiebert, Jeremy D; Stout, Natasha K; Goldie, Sue J
2010-08-01
Model-based cost-effectiveness analyses support decision-making. To augment model credibility, evaluation via comparison to independent, empirical studies is recommended. We developed a structured reporting format for model evaluation and conducted a structured literature review to characterize current model evaluation recommendations and practices. As an illustration, we applied the reporting format to evaluate a microsimulation of human papillomavirus and cervical cancer. The model's outputs and uncertainty ranges were compared with multiple outcomes from a study of long-term progression from high-grade precancer (cervical intraepithelial neoplasia [CIN]) to cancer. Outcomes included 5 to 30-year cumulative cancer risk among women with and without appropriate CIN treatment. Consistency was measured by model ranges overlapping study confidence intervals. The structured reporting format included: matching baseline characteristics and follow-up, reporting model and study uncertainty, and stating metrics of consistency for model and study results. Structured searches yielded 2963 articles with 67 meeting inclusion criteria and found variation in how current model evaluations are reported. Evaluation of the cervical cancer microsimulation, reported using the proposed format, showed a modeled cumulative risk of invasive cancer for inadequately treated women of 39.6% (30.9-49.7) at 30 years, compared with the study: 37.5% (28.4-48.3). For appropriately treated women, modeled risks were 1.0% (0.7-1.3) at 30 years, study: 1.5% (0.4-3.3). To support external and projective validity, cost-effectiveness models should be iteratively evaluated as new studies become available, with reporting standardized to facilitate assessment. Such evaluations are particularly relevant for models used to conduct comparative effectiveness analyses.
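The consistency metric described above (model ranges overlapping study confidence intervals) is simple to make concrete, here using the intervals reported in the abstract:

```python
def intervals_overlap(a, b):
    """True if closed intervals a = (lo, hi) and b = (lo, hi) intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

# 30-year cumulative invasive cancer risk (%), inadequately treated women:
# model range vs. study confidence interval, from the abstract.
print(intervals_overlap((30.9, 49.7), (28.4, 48.3)))

# Appropriately treated women at 30 years.
print(intervals_overlap((0.7, 1.3), (0.4, 3.3)))
```

Both comparisons overlap, i.e. the model and study results are judged consistent under this criterion.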
Appropriate Measures of Productivity and Output for the Evaluation of Transit Demonstration Projects
DOT National Transportation Integrated Search
1982-03-01
Output and productivity, two economic concepts that have important applications in the evaluation of transportation demonstrations, are discussed in this paper. The focus of these discussions is on how the terms' typical definitions in transportation...
Description and evaluation of the QUIC bio-slurry scheme: droplet evaporation and surface deposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zajic, Dragan; Brown, Michael J; Nelson, Matthew A
2010-01-01
The Quick Urban and Industrial Complex (QUIC) dispersion modeling system was developed with the goal of improving transport and dispersion modeling capabilities within urban areas. The modeling system can rapidly obtain a detailed 3D flow field around building clusters and uses an urbanized Lagrangian random-walk approach to account for transport and dispersion (e.g., see Singh et al., 2008; Williams et al., 2009; and Gowardhan et al., 2009). In addition to wind-tunnel testing, the dispersion modeling system has been evaluated against full-scale urban tracer experiments performed in Salt Lake City, Oklahoma City, and New York City (Gowardhan et al., 2006; Gowardhan et al., 2009; Allwine et al., 2008), and the wind model output has been compared to measurements taken in downtown Oklahoma City.
Interdicting an Adversary’s Economy Viewed As a Trade Sanction Inoperability Input Output Model
2017-03-01
set of sectors. The design of an economic sanction, in the context of this thesis, is the selection of the sector or set of sectors to sanction...We propose two optimization models. The first, the Trade Sanction Inoperability Input-output Model (TS-IIM), selects the sector or set of sectors that...Interdependency analysis: Extensions to demand reduction inoperability input-output modeling and portfolio selection . Unpublished doctoral dissertation
Model format for a vaccine stability report and software solutions.
Shin, Jinho; Southern, James; Schofield, Timothy
2009-11-01
A session of the International Association for Biologicals Workshop on Stability Evaluation of Vaccine, a Life Cycle Approach was devoted to a model format for a vaccine stability report and software solutions. Presentations highlighted the utility of a model format that will conform to regulatory requirements and the ICH common technical document. However, there needs to be flexibility to accommodate individual company practices. Adoption of a model format is premised upon agreement regarding content between industry and regulators, and ease of use. Software requirements will include ease of use and protections against inadvertent misspecification of stability design or misinterpretation of program output.
Sojda, R.S.
2007-01-01
Decision support systems are often not empirically evaluated, especially the underlying modelling components. This can be attributed to such systems necessarily being designed to handle complex and poorly structured problems and decision making. Nonetheless, evaluation is critical and should be focused on empirical testing whenever possible. Verification and validation, in combination, comprise such evaluation. Verification is ensuring that the system is internally complete, coherent, and logical from a modelling and programming perspective. Validation is examining whether the system is realistic and useful to the user or decision maker, and should answer the question: “Was the system successful at addressing its intended purpose?” A rich literature exists on verification and validation of expert systems and other artificial intelligence methods; however, no single evaluation methodology has emerged as preeminent. At least five approaches to validation are feasible. First, under some conditions, decision support system performance can be tested against a preselected gold standard. Second, real-time and historic data sets can be used for comparison with simulated output. Third, panels of experts can be judiciously used, but often are not an option in some ecological domains. Fourth, sensitivity analysis of system outputs in relation to inputs can be informative. Fifth, when validation of a complete system is impossible, examining major components can be substituted, recognizing the potential pitfalls. I provide an example of evaluation of a decision support system for trumpeter swan (Cygnus buccinator) management that I developed using interacting intelligent agents, expert systems, and a queuing system. Predicted swan distributions over a 13-year period were assessed against observed numbers. Population survey numbers and banding (ringing) studies may provide long term data useful in empirical evaluation of decision support.
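The second validation approach above, comparing simulated output with real or historic data, can be sketched as a correlation check between predicted and observed counts. The 13 yearly values below are hypothetical stand-ins, not the study's swan data.

```python
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical yearly counts: observed vs. model-predicted over 13 years.
observed  = [110, 125, 118, 140, 152, 149, 160, 171, 165, 180, 176, 190, 201]
predicted = [105, 130, 121, 135, 150, 155, 158, 168, 170, 175, 182, 188, 205]
print(round(pearson(observed, predicted), 3))
```

A high correlation alone is not validation; in practice it would be combined with checks on bias and distributional agreement, as the article's other four approaches suggest.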
NASA Astrophysics Data System (ADS)
Wetterhall, F.; Cloke, H. L.; He, Y.; Freer, J.; Pappenberger, F.
2012-04-01
Evidence provided by modelled assessments of climate change impact on flooding is fundamental to water resource and flood risk decision making. Impact models usually rely on climate projections from Global and Regional Climate Models, and there is no doubt that these provide a useful assessment of future climate change. However, cascading ensembles of climate projections into impact models is not straightforward because of the coarse resolution of Global and Regional Climate Models (GCM/RCM) and their deficiencies in modelling high-intensity precipitation events. Thus decisions must be made on how to appropriately pre-process the meteorological variables from GCM/RCMs, such as the selection of downscaling methods and the application of Model Output Statistics (MOS). In this paper a grand ensemble of projections from several GCM/RCMs is used to drive a hydrological model and analyse the resulting future flood projections for the Upper Severn, UK. The impact and implications of applying MOS techniques to precipitation, as well as hydrological model parameter uncertainty, are taken into account. The resultant grand ensemble of future river discharge projections from the RCM/GCM-hydrological model chain is evaluated against a response surface technique combined with a perturbed physics experiment creating a probabilistic ensemble of climate model outputs. The ensemble distribution of results shows that the future risk of flooding in the Upper Severn increases compared to present conditions; however, the study highlights that the uncertainties are large and that strong assumptions were made in using Model Output Statistics to produce the estimates of future discharge. The importance of analysing on a seasonal basis rather than just annually is highlighted. The inability of the RCMs (and GCMs) to produce realistic precipitation patterns, even under present conditions, is a major caveat of local climate impact studies on flooding, and this should be a focus for future development.
Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robertson, J.; Polly, B.; Collis, J.
2013-09-01
This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define 'explicit' input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.
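Of the four methods, the simple output ratio calibration is easy to sketch: scale simulated energy use by the ratio of billed to simulated totals so the calibrated model reproduces the utility data in aggregate. The monthly numbers below are hypothetical.

```python
# Hypothetical monthly electricity use (kWh): simulated vs. utility bills.
simulated = [820, 760, 700, 650, 900, 1180, 1350, 1320, 1100, 780, 720, 810]
billed    = [900, 830, 760, 700, 980, 1300, 1490, 1450, 1200, 860, 790, 890]

ratio = sum(billed) / sum(simulated)          # single scaling factor
calibrated = [m * ratio for m in simulated]

print(round(ratio, 3))
print(round(sum(calibrated), 1), sum(billed))
```

A single ratio matches annual totals exactly but cannot correct month-to-month shape errors, which is why the study compares it against the optimization-based methods.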
Emulation: A fast stochastic Bayesian method to eliminate model space
NASA Astrophysics Data System (ADS)
Roberts, Alan; Hobbs, Richard; Goldstein, Michael
2010-05-01
Joint inversion of large 3D datasets has been the goal of geophysicists ever since the datasets first started to be produced. There are two broad approaches to this kind of problem, traditional deterministic inversion schemes and more recently developed Bayesian search methods, such as MCMC (Markov Chain Monte Carlo). However, using both these kinds of schemes has proved prohibitively expensive, both in computing power and time cost, due to the normally very large model space which needs to be searched using forward model simulators which take considerable time to run. At the heart of strategies aimed at accomplishing this kind of inversion is the question of how to reliably and practicably reduce the size of the model space in which the inversion is to be carried out. Here we present a practical Bayesian method, known as emulation, which can address this issue. Emulation is a Bayesian technique used with considerable success in a number of technical fields, such as in astronomy, where the evolution of the universe has been modelled using this technique, and in the petroleum industry where history matching is carried out of hydrocarbon reservoirs. The method of emulation involves building a fast-to-compute uncertainty-calibrated approximation to a forward model simulator. We do this by modelling the output data from a number of forward simulator runs by a computationally cheap function, and then fitting the coefficients defining this function to the model parameters. By calibrating the error of the emulator output with respect to the full simulator output, we can use this to screen out large areas of model space which contain only implausible models. For example, starting with what may be considered a geologically reasonable prior model space of 10000 models, using the emulator we can quickly show that only models which lie within 10% of that model space actually produce output data which is plausibly similar in character to an observed dataset. 
We can thus much more tightly constrain the input model space for a deterministic inversion or MCMC method. By using this technique jointly on several datasets (specifically seismic, gravity, and magnetotelluric (MT) data describing the same region), we can include in our modelling uncertainties in the data measurements, the relationships between the various physical parameters involved, and the model representation uncertainty, and at the same time further reduce the range of plausible models to several percent of the original model space. Being stochastic in nature, the output posterior parameter distributions also allow our understanding of, and beliefs about, a geological region to be objectively updated, with full assessment of uncertainties, and so the emulator is also an inversion-type tool in its own right, with the advantage (as with any Bayesian method) that our uncertainties from all sources (both data and model) can be fully evaluated.
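The screening idea can be sketched in one dimension: fit a cheap polynomial emulator to a handful of simulator runs, then evaluate it across a large candidate model space and discard implausible models. The simulator, plausibility threshold, and parameter range below are all stand-ins, and a real emulator would also carry a calibrated error term.

```python
import random
random.seed(1)

def simulator(m):
    # Stand-in for an expensive forward model: data misfit vs. model parameter m.
    return (m - 3.0) ** 2 + 0.5

train_m = [0.0, 1.5, 3.0, 4.5, 6.0]           # the few affordable simulator runs
train_y = [simulator(m) for m in train_m]

def fit_quadratic(xs, ys):
    # Least-squares fit y ~ c0 + c1*x + c2*x^2 via the 3x3 normal equations.
    A = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    for col in range(3):                       # Gaussian elimination w/ pivoting
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for k in range(col, 3):
                A[r][k] -= f * A[col][k]
            b[r] -= f * b[col]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                        # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]
    return coef

c = fit_quadratic(train_m, train_y)
emulate = lambda m: c[0] + c[1] * m + c[2] * m * m   # fast surrogate

# Screen a large candidate model space cheaply, keeping only plausible models.
candidates = [random.uniform(-10, 10) for _ in range(10000)]
plausible = [m for m in candidates if emulate(m) < 2.0]
print(round(len(plausible) / len(candidates), 3))
```

Only a small fraction of the prior space survives the screen, mirroring the abstract's point that the emulator quickly rules out most of a 10000-model space.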
Growth and yield model application in tropical rain forest management
James Atta-Boateng; John W., Jr. Moser
2000-01-01
Analytical tools are needed to evaluate the impact of management policies on the sustainable use of rain forest. Optimal decisions concerning the level of management inputs require accurate predictions of output at all relevant input levels. Using growth data from 40 1-hectare permanent plots obtained from the semi-deciduous forest of Ghana, a system of 77 differential...
USDA-ARS?s Scientific Manuscript database
Ozone (O3) is a natural antimicrobial agent with potential applications in food industry. In this study, inactivation of Bacillus cereus and Salmonella enterica Typhimurium by aqueous ozone was evaluated. Ozone gas was generated using a domestic ozone generator with an output of 200 mg/hr (approx. 0...
Evaluating alternative prescribed burning policies to reduce net economic damages from wildfire
D. Evan Mercer; Jeffrey P. Prestemon; David T. Butry; John M. Pye
2007-01-01
We estimate a wildfire risk model with a new measure of wildfire output, intensity-weighted risk and use it in Monte Carlo simulations to estimate welfare changes from alternative prescribed burning policies. Using Volusia County, Florida as a case study, an annual prescribed burning rate of 13% of all forest lands maximizes net welfare; ignoring the effects on...
Multicoordination Control Strategy Performance in Hybrid Power Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pezzini, Paolo; Bryden, Kenneth M.; Tucker, David
This paper evaluates a state-space methodology of a multi-input multi-output (MIMO) control strategy using a 2 × 2 tightly coupled scenario applied to a physical gas turbine fuel cell hybrid power system. A centralized MIMO controller was preferred to a decentralized control approach because previous simulation studies showed that it better minimized the coupling effect identified during the simultaneous control of the turbine speed and cathode airflow. The MIMO controller was developed using a state-space dynamic model of the system that was derived using first-order transfer functions empirically obtained through experimental tests. The controller performance was evaluated in terms of disturbance rejection through perturbations in the gas turbine operation, and setpoint tracking maneuvers through turbine speed and cathode airflow steps. The experimental results illustrate that a multicoordination control strategy was able to mitigate the coupling of each actuator to each output during the simultaneous control of the system, and improved the overall system performance during transient conditions. On the other hand, the controller showed different performance during validation in the simulation environment compared to validation in the physical facility, which will require better dynamic modeling of the system for the implementation of future multivariable control strategies.
Measurement of Trailing Edge Noise Using Directional Array and Coherent Output Power Methods
NASA Technical Reports Server (NTRS)
Hutcheson, Florence V.; Brooks, Thomas F.
2002-01-01
The use of a directional (or phased) array of microphones for the measurement of trailing edge (TE) noise is described and tested. The capabilities of this method are evaluated via measurements of TE noise from a NACA 63-215 airfoil model and from a cylindrical rod. This TE noise measurement approach is compared to one that is based on the cross spectral analysis of output signals from a pair of microphones placed on opposite sides of an airframe model (COP method). Advantages and limitations of both methods are examined. It is shown that the microphone array can accurately measure TE noise and capture its two-dimensional characteristic over a large frequency range for any TE configuration, as long as noise contamination from extraneous sources is within bounds. The COP method is shown to also accurately measure TE noise, but over a more limited frequency range that narrows for increased TE thickness. Finally, the applicability and generality of an airfoil self-noise prediction method were evaluated via comparison to the experimental data obtained using the COP and array measurement methods. The predicted and experimental results are shown to agree over large frequency ranges.
NASA Astrophysics Data System (ADS)
Harvey, Natalie J.; Huntley, Nathan; Dacre, Helen F.; Goldstein, Michael; Thomson, David; Webster, Helen
2018-01-01
Following the disruption to European airspace caused by the eruption of Eyjafjallajökull in 2010 there has been a move towards producing quantitative predictions of volcanic ash concentration using volcanic ash transport and dispersion simulators. However, there is no formal framework for determining the uncertainties of these predictions and performing many simulations using these complex models is computationally expensive. In this paper a Bayesian linear emulation approach is applied to the Numerical Atmospheric-dispersion Modelling Environment (NAME) to better understand the influence of source and internal model parameters on the simulator output. Emulation is a statistical method for predicting the output of a computer simulator at new parameter choices without actually running the simulator. A multi-level emulation approach is applied using two configurations of NAME with different numbers of model particles. Information from many evaluations of the computationally faster configuration is combined with results from relatively few evaluations of the slower, more accurate, configuration. This approach is effective when it is not possible to run the accurate simulator many times and when there is also little prior knowledge about the influence of parameters. The approach is applied to the mean ash column loading in 75 geographical regions on 14 May 2010. Through this analysis it has been found that the parameters that contribute the most to the output uncertainty are initial plume rise height, mass eruption rate, free tropospheric turbulence levels and precipitation threshold for wet deposition. This information can be used to inform future model development and observational campaigns and routine monitoring. The analysis presented here suggests the need for further observational and theoretical research into parameterisation of atmospheric turbulence. 
Furthermore, it can also be used to inform the most important parameter perturbations for a small operational ensemble of simulations. The use of an emulator also identifies the input and internal parameters that do not contribute significantly to simulator uncertainty. Finally, the analysis highlights that the faster, less accurate configuration of NAME can, on its own, provide useful information for the problem of predicting average column load over large areas.
NASA Technical Reports Server (NTRS)
1979-01-01
The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.
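For linear constant-coefficient models with Gaussian output noise, the maximum-likelihood estimate of the coefficients reduces to least squares on the measured input/output histories. The first-order discrete-time sketch below illustrates this reduction; it is not SCIDNT's actual implementation, and the data are synthetic.

```python
# Identify a and b in x[k+1] = a*x[k] + b*u[k] from input/output histories.
a_true, b_true = 0.9, 0.5
u = [1.0, -1.0, 2.0, 0.5, -0.5, 1.5, 0.0, 1.0, -2.0, 0.7]
x = [0.0]
for k in range(len(u)):
    x.append(a_true * x[k] + b_true * u[k])   # noise-free simulated output

# Normal equations for (a, b) minimizing sum_k (x[k+1] - a*x[k] - b*u[k])^2
Sxx = sum(x[k] * x[k] for k in range(len(u)))
Sxu = sum(x[k] * u[k] for k in range(len(u)))
Suu = sum(u[k] * u[k] for k in range(len(u)))
Sxy = sum(x[k] * x[k + 1] for k in range(len(u)))
Suy = sum(u[k] * x[k + 1] for k in range(len(u)))

det = Sxx * Suu - Sxu * Sxu                   # Cramer's rule on the 2x2 system
a_hat = (Sxy * Suu - Sxu * Suy) / det
b_hat = (Sxx * Suy - Sxu * Sxy) / det
print(round(a_hat, 6), round(b_hat, 6))
```

With noise-free data the estimates recover the true coefficients exactly; with measurement noise they remain the maximum-likelihood estimates under the Gaussian assumption.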
Acquisition Management for Systems-of-Systems: Exploratory Model Development and Experimentation
2009-04-22
outputs of the Requirements Development and Logical Analysis processes into alternative design solutions and selects a final design solution. Decision Analysis provides the basis for evaluating and selecting alternatives when decisions need to be made. Implementation yields the lowest-level system... [Figure: a) example of a SoS; b) model structure for the example SoS, expressed as a dependency matrix.]
1986-08-01
publication by Ms. Jessica S. Ruff, Information Products Division, WES. This manual is published in loose-leaf format for convenience in periodic...transfer computations. m. Variety of output options. Background 8. This manual is a product of a program of evaluation and refinement of mathematical water...zooplankton and higher order herbivores. However, these groups are presently not included in the model. Macrophyte production may also have an impact upon
Computing the structural influence matrix for biological systems.
Giordano, Giulia; Cuba Samaniego, Christian; Franco, Elisa; Blanchini, Franco
2016-06-01
We consider the problem of identifying structural influences of external inputs on steady-state outputs in a biological network model. We speak of a structural influence if, upon a perturbation due to a constant input, the ensuing variation of the steady-state output value has the same sign as the input (positive influence), the opposite sign (negative influence), or is zero (perfect adaptation), for any feasible choice of the model parameters. All these signs and zeros can constitute a structural influence matrix, whose (i, j) entry indicates the sign of steady-state influence of the jth system variable on the ith variable (the output caused by an external persistent input applied to the jth variable). Each entry is structurally determinate if the sign does not depend on the choice of the parameters, but is indeterminate otherwise. In principle, determining the influence matrix requires exhaustive testing of the system steady-state behaviour in the widest range of parameter values. Here we show that, in a broad class of biological networks, the influence matrix can be evaluated with an algorithm that tests the system steady-state behaviour only at a finite number of points. This algorithm also allows us to assess the structural effect of any perturbation, such as variations of relevant parameters. Our method is applied to nontrivial models of biochemical reaction networks and population dynamics drawn from the literature, providing a parameter-free insight into the system dynamics.
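The notion of a structurally determinate sign can be illustrated (though not by the paper's finite-test algorithm, which avoids exhaustive sampling) with a toy two-variable network whose steady-state response signs are checked across many random parameter choices:

```python
import random
random.seed(2)

# Toy network: dx/dt = -a*x + u, dy/dt = b*x - c*y, with a, b, c > 0.
# Steady state for a constant input u: x* = u/a, y* = b*u/(a*c).
# The response signs do not depend on (a, b, c): a structural positive influence.

def steady_state_signs(u=1.0, trials=1000):
    sgn = lambda v: 1 if v > 0 else -1 if v < 0 else 0
    signs = set()
    for _ in range(trials):                   # many feasible parameter choices
        a, b, c = (random.uniform(0.1, 10.0) for _ in range(3))
        x_ss = u / a
        y_ss = b * x_ss / c
        signs.add((sgn(x_ss), sgn(y_ss)))
    return signs

print(steady_state_signs())   # one sign pair => structurally determinate
```

A single surviving sign pair corresponds to a determinate entry of the influence matrix; multiple pairs would mark the entry as indeterminate.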
NASA Astrophysics Data System (ADS)
Mandal, Sumantra; Sivaprasad, P. V.; Venugopal, S.; Murthy, K. P. N.
2006-09-01
An artificial neural network (ANN) model is developed to predict the constitutive flow behaviour of austenitic stainless steels during hot deformation. The input parameters are alloy composition and process variables whereas flow stress is the output. The model is based on a three-layer feed-forward ANN with a back-propagation learning algorithm. The neural network is trained with an in-house database obtained from hot compression tests on various grades of austenitic stainless steels. The performance of the model is evaluated using a wide variety of statistical indices. Good agreement between experimental and predicted data is obtained. The correlation between individual alloying elements and high temperature flow behaviour is investigated by employing the ANN model. The results are found to be consistent with the physical phenomena. The model can be used as a guideline for new alloy development.
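A minimal sketch of a feed-forward network trained by back-propagation follows; a hypothetical smooth curve stands in for the flow-stress data, and the paper's actual network, inputs, and database are far richer.

```python
import math, random
random.seed(3)

# Minimal one-hidden-layer feed-forward net trained by back-propagation
# (stochastic gradient descent on squared error). Purely illustrative.
H = 8
w1 = [random.uniform(-1, 1) for _ in range(H)]   # input -> hidden weights
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]   # hidden -> output weights
b2 = 0.0
sig = lambda z: 1.0 / (1.0 + math.exp(-z))

def forward(x):
    h = [sig(w1[i] * x + b1[i]) for i in range(H)]
    return h, sum(w2[i] * h[i] for i in range(H)) + b2

# Hypothetical smooth target standing in for a flow-stress curve.
data = [(x / 10.0, math.tanh(x / 10.0)) for x in range(-20, 21)]

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

lr = 0.05
loss_before = mse()
for _ in range(2000):
    x, y = random.choice(data)
    h, out = forward(x)
    err = out - y                              # d(loss)/d(out), up to a factor 2
    for i in range(H):
        grad_h = err * w2[i] * h[i] * (1 - h[i])   # backprop through sigmoid
        w2[i] -= lr * err * h[i]
        w1[i] -= lr * grad_h * x
        b1[i] -= lr * grad_h
    b2 -= lr * err
loss_after = mse()
print(round(loss_before, 4), round(loss_after, 4))
```

The training loss drops substantially, which is the mechanism behind the "good agreement between experimental and predicted data" the abstract reports at larger scale.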
Hayes, Holly; Parchman, Michael L.; Howard, Ray
2012-01-01
Evaluating effective growth and development of a Practice-Based Research Network (PBRN) can be challenging. The purpose of this article is to describe the development of a logic model and how the framework has been used for planning and evaluation in a primary care PBRN. An evaluation team was formed consisting of the PBRN directors, staff and its board members. After the mission and the target audience were determined, facilitated meetings and discussions were held with stakeholders to identify the assumptions, inputs, activities, outputs, outcomes and outcome indicators. The long-term outcomes outlined in the final logic model are two-fold: 1.) Improved health outcomes of patients served by PBRN community clinicians; and 2.) Community clinicians are recognized leaders of quality research projects. The Logic Model proved useful in identifying stakeholder interests and dissemination activities as an area that required more attention in the PBRN. The logic model approach is a useful planning tool and project management resource that increases the probability that the PBRN mission will be successfully implemented. PMID:21900441
Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy
NASA Astrophysics Data System (ADS)
Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng
2018-06-01
To analyse the component of fuzzy output entropy, a decomposition method of fuzzy output entropy is first presented. After the decomposition of fuzzy output entropy, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropy contributed by fuzzy inputs. Based on the decomposition of fuzzy output entropy, a new global sensitivity analysis model is established for measuring the effects of uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only tell the importance of fuzzy inputs but also simultaneously reflect the structural composition of the response function to a certain degree. Several examples illustrate the validity of the proposed global sensitivity analysis, which is a significant reference in engineering design and optimization of structural systems.
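The fuzzy-entropy decomposition itself is not reproduced here; as a crisp analogue of splitting a total output-uncertainty measure into per-input contributions, the sketch below decomposes output variance for an additive test function with independent inputs:

```python
import random
random.seed(4)

# First-order variance-based sensitivity for Y = 4*X1 + X2 with independent
# uniform inputs; X1 should dominate. This is an analogy to, not an
# implementation of, the paper's fuzzy-entropy decomposition.
N = 20000
x1 = [random.uniform(0, 1) for _ in range(N)]
x2 = [random.uniform(0, 1) for _ in range(N)]
y = [4 * a + b for a, b in zip(x1, x2)]

def var(xs):
    m = sum(xs) / len(xs)
    return sum((v - m) ** 2 for v in xs) / len(xs)

# For an additive model the variance contributed by Xi is coef_i^2 * Var(Xi),
# so the first-order indices can be computed directly.
VY = var(y)
S1 = 16 * var(x1) / VY
S2 = 1 * var(x2) / VY
print(round(S1, 3), round(S2, 3))
```

The indices sum to roughly one for this additive function, mirroring how the total fuzzy output entropy decomposes into the component entropies contributed by each fuzzy input.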
Hydrologic extremes - an intercomparison of multiple gridded statistical downscaling methods
NASA Astrophysics Data System (ADS)
Werner, A. T.; Cannon, A. J.
2015-06-01
Gridded statistical downscaling methods are the main means of preparing climate model data to drive distributed hydrological models. Past work on the validation of climate downscaling methods has focused on temperature and precipitation, with less attention paid to the ultimate outputs from hydrological models. Also, as attention shifts towards projections of extreme events, downscaling comparisons now commonly assess methods in terms of climate extremes, but hydrologic extremes are less well explored. Here, we test the ability of gridded downscaling models to replicate historical properties of climate and hydrologic extremes, as measured in terms of temporal sequencing (i.e., correlation tests) and distributional properties (i.e., tests for equality of probability distributions). Outputs from seven downscaling methods - bias correction constructed analogues (BCCA), double BCCA (DBCCA), BCCA with quantile mapping reordering (BCCAQ), bias correction spatial disaggregation (BCSD), BCSD using minimum/maximum temperature (BCSDX), climate imprint delta method (CI), and bias corrected CI (BCCI) - are used to drive the Variable Infiltration Capacity (VIC) model over the snow-dominated Peace River basin, British Columbia. Outputs are tested using split-sample validation on 26 climate extremes indices (ClimDEX) and two hydrologic extremes indices (3 day peak flow and 7 day peak flow). To characterize observational uncertainty, four atmospheric reanalyses are used as climate model surrogates and two gridded observational datasets are used as downscaling target data. The skill of the downscaling methods generally depended on reanalysis and gridded observational dataset. However, CI failed to reproduce the distribution and BCSD and BCSDX the timing of winter 7 day low flow events, regardless of reanalysis or observational dataset. 
Overall, DBCCA passed the greatest number of tests for the ClimDEX indices, while BCCAQ, which is designed to more accurately resolve event-scale spatial gradients, passed the greatest number of tests for hydrologic extremes. Non-stationarity in the observational/reanalysis datasets complicated the evaluation of downscaling performance. Comparing temporal homogeneity and trends in climate indices and hydrological model outputs calculated from downscaled reanalyses and gridded observations was useful for diagnosing the reliability of the various historical datasets. We recommend that such analyses be conducted before such data are used to construct future hydro-climatic change scenarios.
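A test for equality of probability distributions of the kind used in this comparison can be sketched with the two-sample Kolmogorov-Smirnov statistic; the peak-flow values below are hypothetical.

```python
# Two-sample Kolmogorov-Smirnov statistic: the maximum distance between
# the two empirical CDFs (0 = identical distributions, 1 = disjoint).
def ks_statistic(a, b):
    a, b = sorted(a), sorted(b)
    pts = sorted(set(a) | set(b))
    cdf = lambda s, x: sum(1 for v in s if v <= x) / len(s)
    return max(abs(cdf(a, x) - cdf(b, x)) for x in pts)

# Hypothetical 3-day peak flows: observed vs. downscaling-driven model output.
obs_peaks  = [410, 455, 390, 520, 480, 430, 500, 470, 445, 415]
down_peaks = [400, 465, 385, 505, 490, 425, 515, 460, 450, 410]

print(round(ks_statistic(obs_peaks, down_peaks), 3))
print(ks_statistic(obs_peaks, obs_peaks))  # identical samples -> 0.0
```

In practice the statistic would be compared against a critical value to decide whether a downscaling method "failed to reproduce the distribution" of an index.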
Hydrologic extremes - an intercomparison of multiple gridded statistical downscaling methods
NASA Astrophysics Data System (ADS)
Werner, Arelia T.; Cannon, Alex J.
2016-04-01
Gridded statistical downscaling methods are the main means of preparing climate model data to drive distributed hydrological models. Past work on the validation of climate downscaling methods has focused on temperature and precipitation, with less attention paid to the ultimate outputs from hydrological models. Also, as attention shifts towards projections of extreme events, downscaling comparisons now commonly assess methods in terms of climate extremes, but hydrologic extremes are less well explored. Here, we test the ability of gridded downscaling models to replicate historical properties of climate and hydrologic extremes, as measured in terms of temporal sequencing (i.e. correlation tests) and distributional properties (i.e. tests for equality of probability distributions). Outputs from seven downscaling methods - bias correction constructed analogues (BCCA), double BCCA (DBCCA), BCCA with quantile mapping reordering (BCCAQ), bias correction spatial disaggregation (BCSD), BCSD using minimum/maximum temperature (BCSDX), the climate imprint delta method (CI), and bias corrected CI (BCCI) - are used to drive the Variable Infiltration Capacity (VIC) model over the snow-dominated Peace River basin, British Columbia. Outputs are tested using split-sample validation on 26 climate extremes indices (ClimDEX) and two hydrologic extremes indices (3-day peak flow and 7-day peak flow). To characterize observational uncertainty, four atmospheric reanalyses are used as climate model surrogates and two gridded observational data sets are used as downscaling target data. The skill of the downscaling methods generally depended on reanalysis and gridded observational data set. However, CI failed to reproduce the distribution and BCSD and BCSDX the timing of winter 7-day low-flow events, regardless of reanalysis or observational data set. 
Overall, DBCCA passed the greatest number of tests for the ClimDEX indices, while BCCAQ, which is designed to more accurately resolve event-scale spatial gradients, passed the greatest number of tests for hydrologic extremes. Non-stationarity in the observational/reanalysis data sets complicated the evaluation of downscaling performance. Comparing temporal homogeneity and trends in climate indices and hydrological model outputs calculated from downscaled reanalyses and gridded observations was useful for diagnosing the reliability of the various historical data sets. We recommend that such analyses be conducted before such data are used to construct future hydro-climatic change scenarios.
NASA Astrophysics Data System (ADS)
Schiepers, Christiaan; Hoh, Carl K.; Dahlbom, Magnus; Wu, Hsiao-Ming; Phelps, Michael E.
1999-05-01
PET imaging can quantify metabolic processes in vivo; this requires the measurement of an input function, which is invasive and labor intensive. A non-invasive, semi-automated, image-based method of input function generation would be efficient, patient friendly, and allow quantitative PET to be applied routinely. A fully automated procedure would be ideal for studies across institutions. Factor analysis (FA) was applied as a processing tool for the definition of temporally changing structures in the field of view. FA has been proposed earlier, but the perceived mathematical difficulty has prevented widespread use. FA was utilized to delineate structures and extract blood and tissue time-activity curves (TACs). These TACs were used as input and output functions for tracer kinetic modeling, the results of which were compared with those from an input function obtained with serial blood sampling. Dynamic image data of myocardial perfusion studies with N-13 ammonia, O-15 water, or Rb-82, cancer studies with F-18 FDG, and skeletal studies with F-18 fluoride were evaluated. Correlation coefficients of kinetic parameters obtained with factor and plasma input functions were high. Linear regression usually furnished a slope near unity. Processing time was 7 min/patient on an UltraSPARC. Conclusion: FA can non-invasively generate input functions from image data, eliminating the need for blood sampling. Output (tissue) functions can be simultaneously generated. The method is simple, requires no sophisticated operator interaction, and has little inter-operator variability. FA is well suited for studies across institutions and standardized evaluations.
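One step of the factor-image idea can be sketched in a few lines: once the blood and tissue factor curves are known, each pixel's TAC decomposes into a weighted sum of the two factors. This is only an illustration of the projection step with synthetic curves; real factor analysis also estimates the factor curves themselves from the dynamic data.

```python
# Sketch: given two factor time-activity curves (blood and tissue), compute
# a pixel's factor coefficients by exact 2x2 least squares.
# All curves and coefficients below are synthetic, for illustration only.

def solve_coefficients(tac, f1, f2):
    """Least-squares (a, b) minimising ||tac - a*f1 - b*f2||^2 (2x2 normal equations)."""
    s11 = sum(x * x for x in f1)
    s22 = sum(x * x for x in f2)
    s12 = sum(x * y for x, y in zip(f1, f2))
    b1 = sum(x * y for x, y in zip(f1, tac))
    b2 = sum(x * y for x, y in zip(f2, tac))
    det = s11 * s22 - s12 * s12
    return ((b1 * s22 - b2 * s12) / det, (b2 * s11 - b1 * s12) / det)

blood = [100, 60, 30, 15, 8, 4]      # early peak, fast washout
tissue = [5, 20, 40, 55, 60, 62]     # slow uptake
pixel = [a + b for a, b in zip((0.7 * x for x in blood), (0.3 * x for x in tissue))]
a, b = solve_coefficients(pixel, blood, tissue)   # recovers the 0.7 / 0.3 mixture
```

The recovered coefficients, mapped over all pixels, would form the "factor images" that delineate blood pool and tissue structures.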
A Nested Nearshore Nutrient Model (N³M) for ...
Nearshore conditions drive phenomena like harmful algal blooms (HABs), and the nearshore and coastal margin are the parts of the Great Lakes most used by humans. To assess conditions, optimize monitoring, and evaluate management options, a model of nearshore nutrient transport and algal dynamics is being developed. The model targets a "regional" spatial scale, similar to the Great Lakes Aquatic Habitat Framework's sub-basins, which divide the nearshore into 30 regions. Model runs span 365 days, a whole-season temporal scale, reporting at 3-hour intervals. N³M uses output from existing hydrodynamic models and simple transport kinetics. The nutrient transport component of this model is largely complete and is being tested with various hydrodynamic data sets. The first test case covers a 200 km² area between two major tributaries to Lake Michigan, the Grand and the Muskegon. N³M currently simulates phosphorus and chloride, selected for their distinct in-lake transport dynamics; nitrogen will be added. Initial results for 2003, 2010, and 2015 show encouraging correlations with field measurements. Initially implemented in MATLAB, the model is now implemented in Python and leverages multi-processor computation. The 4D in-browser visualizer Cesium is used to view model output, time-varying satellite imagery, and field observations.
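As a rough illustration of the kind of simple transport kinetics such a model might use (not the actual N³M code; every coefficient below is invented), a 1-D advection-decay update for a nutrient concentration field can be sketched as:

```python
# Illustrative 1-D advection-decay step for a nutrient tracer.
# Upwind finite-difference scheme; all parameter values are hypothetical.

def advect_decay(conc, velocity, dx, dt, decay_rate, inflow_conc):
    """Advance concentrations one time step (upwind advection + 1st-order decay)."""
    new = conc[:]
    for i in range(1, len(conc)):
        advection = -velocity * (conc[i] - conc[i - 1]) / dx
        new[i] = conc[i] + dt * (advection - decay_rate * conc[i])
    new[0] = inflow_conc  # boundary: fixed tributary concentration
    return new

# Example: 10-cell reach fed by a constant 0.05 mg/L phosphorus inflow
cells = [0.0] * 10
for _ in range(200):
    cells = advect_decay(cells, velocity=0.1, dx=100.0, dt=50.0,
                         decay_rate=1e-5, inflow_conc=0.05)
```

With these values the Courant number (v·dt/dx = 0.05) keeps the explicit scheme stable, and the concentration profile decreases monotonically downstream as decay removes the tracer.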
Design of vaccination and fumigation on Host-Vector Model by input-output linearization method
NASA Astrophysics Data System (ADS)
Nugraha, Edwin Setiawan; Naiborhu, Janson; Nuraini, Nuning
2017-03-01
Here, we analyze the Host-Vector Model and propose a design of vaccination and fumigation to control the infectious population using feedback control, specifically the input-output linearization method. The host population is divided into three compartments: susceptible, infectious, and recovered. The vector population is divided into two compartments: susceptible and infectious. In this system, vaccination and fumigation are treated as the inputs and the infectious population as the output. The objective of the design is to stabilize the system so that the output tends asymptotically to zero. We also present examples to illustrate the design.
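The idea of input-output linearization can be illustrated on a toy single-state infection model (a hypothetical scalar stand-in, not the paper's host-vector system): for x' = f(x) + g(x)·u with output y = x, choosing u = (−f(x) − k·x)/g(x) gives the linear closed loop y' = −k·y, so the output decays asymptotically to zero.

```python
# Toy input-output linearization: drive the infectious fraction x to zero.
# Dynamics (hypothetical): x' = beta*x*(1 - x) - gamma*x + g*u, control u.

def simulate(x0, beta, gamma, g, k, dt, steps):
    """Euler simulation of the feedback-linearized closed loop."""
    x = x0
    for _ in range(steps):
        f = beta * x * (1.0 - x) - gamma * x      # uncontrolled dynamics
        u = (-f - k * x) / g                      # linearizing feedback
        x += dt * (f + g * u)                     # closed loop is exactly x' = -k*x
    return x

# Starting from 30% infectious, the output decays like exp(-k*t)
final = simulate(x0=0.3, beta=0.5, gamma=0.1, g=1.0, k=2.0, dt=0.01, steps=500)
```

The cancellation of f(x) by the control is what "linearizes" the input-output map; the gain k then sets the decay rate of the output.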
Optimal cycling time trial position models: aerodynamics versus power output and metabolic energy.
Fintelman, D M; Sterling, M; Hemida, H; Li, F-X
2014-06-03
The aerodynamic drag of a cyclist in time trial (TT) position is strongly influenced by the torso angle. While decreasing the torso angle reduces drag, it limits the physiological functioning of the cyclist. Therefore, the aims of this study were to predict the optimal TT cycling position as a function of cycling speed and to determine at which speed the aerodynamic power losses start to dominate. Two models were developed to determine the optimal torso angle: a 'Metabolic Energy Model' and a 'Power Output Model'. The Metabolic Energy Model minimised the required cycling energy expenditure, while the Power Output Model maximised the cyclists' power output. The input parameters were experimentally collected from 19 TT cyclists at different torso angle positions (0-24°). The results showed that for both models the optimal torso angle depends strongly on the cycling speed, with decreasing torso angles at increasing speeds. The aerodynamic losses outweigh the power losses at cycling speeds above 46 km/h. However, a fully horizontal torso is not optimal. For speeds below 30 km/h, it is beneficial to ride in a more upright TT position. The two model outputs were not completely similar, due to the different model approaches. The Metabolic Energy Model could be applied for endurance events, while the Power Output Model is more suitable for sprinting or variable conditions (wind, undulating course, etc.). It is suggested that, despite some limitations, the models give valuable information about improving cycling performance by optimising the TT cycling position.
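The speed-dependent trade-off can be sketched with a deliberately simplified power-balance model (every coefficient below is invented for illustration, not one of the study's fitted values): drag area grows with torso angle while maximal sustainable power shrinks as the rider crouches, and the optimal angle maximises the surplus of available over required aerodynamic power.

```python
# Toy torso-angle optimisation: maximise available minus aerodynamic power.
# cda(theta) and p_max(theta) are hypothetical fits, not the study's data.

RHO = 1.2  # air density, kg/m^3

def cda(theta_deg):
    """Drag area (m^2); increases as the torso is raised."""
    return 0.25 + 0.003 * theta_deg

def p_max(theta_deg):
    """Sustainable power (W); penalised when the rider is crouched."""
    return 400.0 - 0.4 * (24.0 - theta_deg) ** 2

def optimal_angle(speed_kmh):
    """Grid-search the torso angle (0-24 deg) maximising the power surplus."""
    v = speed_kmh / 3.6
    angles = [a * 0.5 for a in range(49)]  # 0, 0.5, ..., 24 degrees
    return max(angles, key=lambda t: p_max(t) - 0.5 * RHO * cda(t) * v ** 3)
```

Because the aerodynamic term grows with the cube of speed, the optimum drifts toward flatter torso angles as speed increases, which is the study's qualitative finding.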
Implementation of Web Processing Services (WPS) over IPSL Earth System Grid Federation (ESGF) node
NASA Astrophysics Data System (ADS)
Kadygrov, Nikolay; Denvil, Sebastien; Carenton, Nicolas; Levavasseur, Guillaume; Hempelmann, Nils; Ehbrecht, Carsten
2016-04-01
The Earth System Grid Federation (ESGF) aims to provide access to climate data for the international climate community. ESGF is a system of distributed and federated nodes that dynamically interact with each other. ESGF users may search and download climate data, geographically distributed over the world, from one common web interface and through a standardized API. With the continuous development of climate models and the beginning of the sixth phase of the Coupled Model Intercomparison Project (CMIP6), the amount of data available from ESGF will continuously increase over the next 5 years. IPSL holds a replica of the output of different global and regional climate models, observations, and reanalysis data (CMIP5, CORDEX, obs4MIPs, etc.) that are available on the IPSL ESGF node. In order to let scientists perform analyses of the models without downloading vast amounts of data, Web Processing Services (WPS) were installed at the IPSL compute node. The work is part of the CONVERGENCE project funded by the French National Research Agency (ANR). The PyWPS implementation of the Web Processing Service standard from the Open Geospatial Consortium (OGC), in the framework of the birdhouse software, is used. The processes can be run by users remotely through a web-based WPS client or a command-line tool. All calculations are performed on the server side, close to the data. If the models/observations are not available at IPSL, they will be downloaded and cached by the WPS process from the ESGF network using the synda tool. The outputs of the WPS processes are available for download as plots, tar archives, or NetCDF files.
We present the architecture of WPS at IPSL along with processes for evaluation of model performance, on-site diagnostics, and post-analysis processing of model output, e.g.:
- regridding/interpolation/aggregation
- ocgis (OpenClimateGIS) based polygon subsetting of the data
- average seasonal cycle, multimodel mean, multimodel mean bias
- calculation of climate indices with the icclim library (CERFACS)
- atmospheric modes of variability
In order to evaluate the performance of any new model, once it becomes available in ESGF, we implement a WPS with several model diagnostics and performance metrics calculated using ESMValTool (Eyring et al., GMDD 2015). As a further step, we are developing new WPS processes and core functions to be implemented at the IPSL ESGF compute node, following the needs of the scientific community.
Multisite Evaluation of a Data Quality Tool for Patient-Level Clinical Data Sets
Huser, Vojtech; DeFalco, Frank J.; Schuemie, Martijn; Ryan, Patrick B.; Shang, Ning; Velez, Mark; Park, Rae Woong; Boyce, Richard D.; Duke, Jon; Khare, Ritu; Utidjian, Levon; Bailey, Charles
2016-01-01
Introduction: Data quality and fitness for analysis are crucial if outputs of analyses of electronic health record data or administrative claims data are to be trusted by the public and the research community. Methods: We describe a data quality analysis tool (called Achilles Heel) developed by the Observational Health Data Sciences and Informatics (OHDSI) collaborative and compare outputs from this tool as it was applied to 24 large healthcare datasets across seven different organizations. Results: We highlight 12 data quality rules that identified issues in at least 10 of the 24 datasets and provide a full set of 71 rules identified in at least one dataset. Achilles Heel is freely available software that provides a useful starter set of data quality rules, with the ability to add additional rules. We also present the results of a structured email-based interview of all participating sites that collected qualitative comments about the value of Achilles Heel for data quality evaluation. Discussion: Our analysis represents the first comparison of outputs from a data quality tool that implements a fixed (but extensible) set of data quality rules. Thanks to a common data model, we were able to quickly compare multiple datasets originating from several countries in the Americas, Europe, and Asia. PMID:28154833
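A fixed-but-extensible rule set of this kind can be modelled very simply. The sketch below uses invented example rules, not Achilles Heel's actual rule catalogue: named predicates are applied to patient-level records and the engine reports how often each rule fires.

```python
# Minimal sketch of an extensible data-quality rule engine for
# patient-level records; the rules here are invented examples.

RULES = [
    ("implausible_age", lambda rec: not (0 <= rec.get("age", 0) <= 120)),
    ("missing_gender", lambda rec: rec.get("gender") not in ("M", "F")),
    ("visit_before_birth", lambda rec: rec.get("visit_year", 9999) < rec.get("birth_year", 0)),
]

def check_dataset(records, rules=RULES):
    """Return {rule_name: number of violating records}."""
    counts = {name: 0 for name, _ in rules}
    for rec in records:
        for name, violates in rules:
            if violates(rec):
                counts[name] += 1
    return counts

patients = [
    {"age": 34, "gender": "F", "birth_year": 1984, "visit_year": 2018},
    {"age": 150, "gender": "M", "birth_year": 1900, "visit_year": 1890},
]
report = check_dataset(patients)
```

Extensibility comes for free: a site-specific rule is just another (name, predicate) pair appended to the rule list, which mirrors the "starter set plus additional rules" design described above.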
NASA Astrophysics Data System (ADS)
Shibuo, Yoshihiro; Ikoma, Eiji; Lawford, Peter; Oyanagi, Misa; Kanauchi, Shizu; Koudelova, Petra; Kitsuregawa, Masaru; Koike, Toshio
2014-05-01
While the availability of hydrological and hydrometeorological data is growing and advanced modeling techniques are emerging, such newly available data and advanced models are not always applied in the field of decision making. In this study we present an integrated system of ensemble streamflow prediction (ESP) and a virtual dam simulator, designed to support river and dam managers' decision making. The system consists of three main functions: a real-time hydrological model, an ESP model, and a dam simulator model. In the real-time model, the system simulates the current conditions of river basins, such as soil moisture and river discharge, using an LSM-coupled distributed hydrological model. The ESP model takes its initial condition from the real-time model's output and generates an ESP based on numerical weather prediction. The dam simulator model provides virtual dam operation, and users can explore the impact of dam control on remaining reservoir volume and downstream flooding under the anticipated flood forecast. River and dam managers are thus able to evaluate, in real time, the benefit of anticipatory dam release and the reduction of flood risk at the same time. Furthermore, the system has been developed under the concept of data and model integration, and it is coupled with the Data Integration and Analysis System (DIAS), a Japanese national project for integrating and analyzing massive amounts of observational and model data. It therefore has the advantage of direct use of diverse data, from point/radar-derived observations and numerical weather prediction output to satellite imagery stored in the data archive. Output of the system is accessible over a web interface, making information available with relative ease, e.g. from an ordinary PC or mobile devices. We have been applying the system to the Upper Tone region, located northwest of the Tokyo metropolitan area, and we show an application example of the system in recent flood events caused by typhoons.
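The core of such a decision-support loop, propagating an ensemble of forecast inflows through a reservoir mass balance for a candidate release policy, can be sketched as follows (the storage limits, inflows, and releases are invented numbers, not the Upper Tone system):

```python
# Toy ensemble reservoir simulation: evaluate a prior-release decision
# against an ensemble of forecast inflow sequences (all values hypothetical).

def simulate_storage(storage, inflows, release, capacity):
    """Mass-balance update over the forecast horizon; returns final storage
    and whether the reservoir spilled (exceeded capacity)."""
    spilled = False
    for q_in in inflows:
        storage = max(0.0, storage + q_in - release)
        if storage > capacity:
            spilled = True
            storage = capacity  # excess is spilled downstream
    return storage, spilled

def spill_risk(storage, ensemble, release, capacity):
    """Fraction of ensemble members in which the reservoir spills."""
    hits = sum(simulate_storage(storage, member, release, capacity)[1]
               for member in ensemble)
    return hits / len(ensemble)

ensemble = [[30, 40, 50], [20, 25, 30], [50, 90, 120], [10, 15, 20]]  # Mm^3/day
risk_hold = spill_risk(storage=150.0, ensemble=ensemble, release=10.0, capacity=200.0)
risk_prerelease = spill_risk(storage=100.0, ensemble=ensemble, release=10.0, capacity=200.0)
```

Comparing the two risk figures is exactly the trade-off the system puts in front of a dam manager: pre-releasing water lowers the spill risk across the forecast ensemble at the cost of reservoir volume.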
Assessment of the Effect of Climate Change on Grain Yields in China
NASA Astrophysics Data System (ADS)
Chou, J.
2006-12-01
The paper elaborates the social and research background and makes clear which key scientific issues need to be resolved and where the difficulties lie. In the research area of appraising grain yield change caused by climate change, extensive work has been done both domestically and abroad. It is our upcoming work to evaluate how the countrywide climate change information provided by this model influences our economic and social development, and how to make related policies and countermeasures. The main idea in this paper is that grain yield change is by no means the linear composition of a socio-economic effect and a climate-change effect. This paper identifies the economic evaluation object and proposes a new concept: the climate change output. Grain yield change is affected by social factors and climatic change working together; climate change influences grain yields through a non-linear function of both climate change and social factor changes, not through climate change alone. Therefore, in this paper, the appraisal object is defined as follows: with the social factors changing according to the actual social situation, the difference in grain yield outputs between two climate scenarios, an invariable climate and the actually varying climate, is called the "climate change output". In order to solve this problem, we propose a method to analyze and simulate the historical data. Under the condition that the climate is held invariable, changes in socio-economic factors cause a grain yield change; however, this grain yield change is a tentative quantity, not an actually observed number. We therefore use the existing historical data to examine the climate change output, based on the characteristic that social factors change more from year to year, while climate factors change more from decade to decade.
The paper proposes and establishes an economy-climate model (the C-D-C model) to appraise the grain yield change caused by climatic change, and a preliminary test of this model has been performed. In selecting the appraisal method, we take the Cobb-Douglas (C-D) production function model, which is well established in economic research, as our fundamental model. We then introduce a climate index (an aridity index) into the C-D model to develop a new model, which uses the climatic change factor within the economic model to appraise how climatic change influences grain yield change. This new appraisal approach should have good application prospects. The economy-climate model (the C-D-C model) has been applied to the eight Chinese regions into which we divide the country; it has proved satisfactory in its feasibility, rationality, and application prospects. We can thus provide theoretical fundamentals for policy-making under more complex and uncertain climate change, opening a possible new channel for global climate change research to move toward actual social and economic life.
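The construction can be illustrated with a Cobb-Douglas production function augmented by a climate (aridity) term; the functional form and every coefficient below are invented for illustration, not the paper's fitted model. The "climate change output" is then the difference between yield computed with the actual (varying) climate and yield computed with the climate held fixed, with the social inputs kept at their actual values so the non-linear interaction is retained:

```python
import math

# Hypothetical C-D-C style model: yield = A * K^alpha * L^beta * exp(gamma * aridity)
def grain_yield(capital, labor, aridity, A=2.0, alpha=0.3, beta=0.6, gamma=-0.5):
    """Cobb-Douglas output with a multiplicative climate (aridity) factor."""
    return A * capital**alpha * labor**beta * math.exp(gamma * aridity)

def climate_change_output(capital, labor, aridity_actual, aridity_baseline):
    """Yield difference attributable to climate, with socio-economic inputs
    held at their actual values."""
    return (grain_yield(capital, labor, aridity_actual)
            - grain_yield(capital, labor, aridity_baseline))

# Example: same socio-economic inputs, drier actual climate than the baseline
delta = climate_change_output(capital=100.0, labor=200.0,
                              aridity_actual=0.8, aridity_baseline=0.5)
```

Because the climate factor multiplies the whole production function, the climate effect scales with the socio-economic inputs, which is precisely why it cannot be separated out as a linear additive term.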
NASA Astrophysics Data System (ADS)
Andrews, A. E.; Kort, E.; Hirsch, A.; Eluszkiewicz, J.; Nehrkorn, T.; Michalak, A. M.; Petron, G.; Frost, G. J.; Gurney, K. R.; Stohl, A.; Wofsy, S. C.; Angevine, W. M.; White, A. B.; Oltmans, S. J.; Montzka, S. A.; Tans, P. P.
2008-12-01
The NOAA Earth System Research Laboratory has been measuring CO2, CO, and basic meteorology from a television transmitter tower outside of Waco, TX since 2001. Sample intakes are located at 30, 122, and 457 meters above ground level. From July through November 2006, O3 measurements were added at 9 and 457 m above ground level to support the Texas Air Quality Study (TexAQS 2006). There are several large point sources and metropolitan areas in the vicinity of the tower with distinct chemical signatures. Here, we evaluate the extent to which the Stochastic Time Inverted Lagrangian Transport (STILT) model reproduces pollution events that were observed at the tower during summer and fall 2006. For this study, STILT is driven by customized output from the WRF model v2.2, which was run with a 2 km nested grid surrounding the tower embedded in a 10 km nest that covers most of the southern and eastern US and a 40 km nest that includes all of North America. Inaccurate representation of atmospheric transport is a major source of error in inverse estimates of fluxes of CO2 and other gases, and we selected this period for in-depth analysis in part because a dense network of radar profilers was deployed for TexAQS 2006. The radar profilers report wind and boundary layer height, which can be used to evaluate the fidelity of the simulated transport. STILT is a particle dispersion model that can be run either forward or backward in time, which allows us to compare the agreement between forward runs from individual pollution sources and backward runs from the tower. We will also quantitatively compare the STILT-WRF results with similar output from the FLEXPART particle dispersion model driven by high-resolution ECMWF meteorological fields. We will use several different emissions inventories to evaluate model-to-model differences and differences between modeled and observed pollution influences.
Large-Signal Klystron Simulations Using KLSC
NASA Astrophysics Data System (ADS)
Carlsten, B. E.; Ferguson, P.
1997-05-01
We describe a new, 2-1/2 dimensional, klystron-simulation code, KLSC. This code has a sophisticated input cavity model for calculating the klystron gain with arbitrary input cavity matching and tuning, and is capable of modeling coupled output cavities. We will discuss the input and output cavity models, and present simulation results from a high-power, S-band design. We will use these results to explore tuning issues with coupled output cavities.
Reconfigurable data path processor
NASA Technical Reports Server (NTRS)
Donohoe, Gregory (Inventor)
2005-01-01
A reconfigurable data path processor comprises a plurality of independent processing elements, each advantageously comprising an identical architecture. Each processing element comprises a plurality of data processing means for generating a potential output. Each processor is also capable of passing an input through as a potential output with little or no processing. Each processing element comprises a conditional multiplexer having a first conditional multiplexer input, a second conditional multiplexer input, and a conditional multiplexer output. A first potential output value is transmitted to the first conditional multiplexer input, and a second potential output value is transmitted to the second conditional multiplexer input. The conditional multiplexer couples either the first conditional multiplexer input or the second conditional multiplexer input to the conditional multiplexer output, according to an output control command. The output control command is generated by processing a set of arithmetic status bits through a logical mask. The conditional multiplexer output is coupled to a first processing element output. A first set of arithmetic status bits is generated according to the processing of a first processable value, and a second set may be generated from a second processing operation. An arithmetic-status-bit multiplexer selects the desired set of arithmetic status bits from among the first and second sets. The conditional multiplexer evaluates the selected arithmetic status bits according to a logical mask defining an algorithm for evaluating those bits.
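The selection logic described above can be modelled in a few lines. This is a behavioural sketch, not the patented hardware, and the status-bit layout is invented: a logical mask picks which arithmetic status bits matter, and the masked result chooses between the two potential outputs.

```python
# Behavioural model of the conditional multiplexer: a logical mask is
# ANDed with the arithmetic status bits; any surviving bit selects input B.

def conditional_mux(in_a, in_b, status_bits, mask):
    """Return in_b if (status_bits & mask) is nonzero, else in_a."""
    return in_b if (status_bits & mask) else in_a

# Hypothetical status-bit layout: bit0 = zero, bit1 = negative, bit2 = carry
ZERO, NEG, CARRY = 0b001, 0b010, 0b100

# Route the second potential output when the previous result was negative:
out = conditional_mux(in_a=7, in_b=42, status_bits=NEG | CARRY, mask=NEG)
```

Changing the mask reprograms which condition drives the selection, which is how a fixed datapath element can implement different conditional behaviours.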
Frequency domain model for analysis of paralleled, series-output-connected Mapham inverters
NASA Technical Reports Server (NTRS)
Brush, Andrew S.; Sundberg, Richard C.; Button, Robert M.
1989-01-01
The Mapham resonant inverter is characterized as a two-port network driven by a selected periodic voltage. The two-port model is then used to model a pair of Mapham inverters connected in series and employing phasor voltage regulation. It is shown that the model is useful for predicting power output in paralleled inverter units, and for predicting harmonic current output of inverter pairs, using standard power flow techniques. Some sample results are compared to data obtained from testing hardware inverters.
Uncertainty and sensitivity analysis for photovoltaic system modeling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Clifford W.; Pohl, Andrew Phillip; Jordan, Dirk
2013-12-01
We report an uncertainty and sensitivity analysis for modeling DC energy from photovoltaic systems. We consider two systems, each comprising a single module using either crystalline silicon or CdTe cells, and located at either Albuquerque, NM, or Golden, CO. Output from a PV system is predicted by a sequence of models. Uncertainty in the output of each model is quantified by empirical distributions of each model's residuals. We sample these distributions to propagate uncertainty through the sequence of models to obtain an empirical distribution for each PV system's output. We considered models that: (1) translate measured global horizontal, direct, and global diffuse irradiance to plane-of-array (POA) irradiance; (2) estimate effective irradiance from plane-of-array irradiance; (3) predict cell temperature; and (4) estimate DC voltage, current, and power. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. Four alternative models were considered for the POA irradiance modeling step; we did not find the choice among these models to be of great significance. However, we observed that the POA irradiance model introduced a bias of up to 5% of daily energy, which translates directly into a systematic difference in predicted energy. Sensitivity analyses relate uncertainty in the PV system output to the uncertainty arising from each model. We found the residuals arising from the POA irradiance and effective irradiance models to be the dominant contributors to the residuals for daily energy, for either technology or location considered. This analysis indicates that efforts to reduce the uncertainty in PV system output should focus on improvements to the POA and effective irradiance models.
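The residual-sampling propagation scheme can be sketched generically. The two-stage model chain and the residual pools below are invented stand-ins, not the actual PV models: each stage's empirical residuals are resampled and applied to that stage's prediction, and the chain is run many times to build an output distribution.

```python
import random

# Toy Monte Carlo propagation of empirical model residuals through a
# two-stage model chain (all models and residual pools are invented).

random.seed(42)

poa_residuals = [-0.03, -0.01, 0.0, 0.01, 0.02, 0.03]       # stage-1 relative residuals
power_residuals = [-0.02, -0.01, 0.0, 0.005, 0.01, 0.02]    # stage-2 relative residuals

def predict_poa(ghi):
    return 0.9 * ghi            # toy transposition model

def predict_power(poa):
    return 0.2 * poa            # toy irradiance-to-power model

def sample_output(ghi):
    """One Monte Carlo realisation: perturb each stage by a sampled residual."""
    poa = predict_poa(ghi) * (1.0 + random.choice(poa_residuals))
    return predict_power(poa) * (1.0 + random.choice(power_residuals))

outputs = [sample_output(ghi=1000.0) for _ in range(2000)]
spread = max(outputs) - min(outputs)   # empirical width of the output distribution
```

The resulting empirical distribution of `outputs` is the analogue of the paper's per-system output distribution; its width quantifies the propagated uncertainty, while any offset of its mean from the nominal prediction reflects bias in a stage's residuals.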
Climate Model Diagnostic Analyzer
NASA Technical Reports Server (NTRS)
Lee, Seungwon; Pan, Lei; Zhai, Chengxing; Tang, Benyang; Kubar, Terry; Zhang, Zia; Wang, Wei
2015-01-01
The comprehensive and innovative evaluation of climate models with newly available global observations is critically needed for the improvement of climate model current-state representation and future-state predictability. A climate model diagnostic evaluation process requires physics-based multi-variable analyses that typically involve large-volume and heterogeneous datasets, making them both computation- and data-intensive. With an exploratory nature of climate data analyses and an explosive growth of datasets and service tools, scientists are struggling to keep track of their datasets, tools, and execution/study history, let alone sharing them with others. In response, we have developed a cloud-enabled, provenance-supported, web-service system called Climate Model Diagnostic Analyzer (CMDA). CMDA enables the physics-based, multivariable model performance evaluations and diagnoses through the comprehensive and synergistic use of multiple observational data, reanalysis data, and model outputs. At the same time, CMDA provides a crowd-sourcing space where scientists can organize their work efficiently and share their work with others. CMDA is empowered by many current state-of-the-art software packages in web service, provenance, and semantic search.
Evaluation of the MOCAGE Chemistry Transport Model during the ICARTT/ITOP Experiment
NASA Technical Reports Server (NTRS)
Bousserez, N.; Attie, J. L.; Peuch, V. H.; Michou, M.; Pfister, G.; Edwards, D.; Emmons, L.; Arnold, S.; Heckel, A.; Richter, A.;
2007-01-01
We evaluate the Météo-France global 3-D chemistry transport model MOCAGE (MOdèle de Chimie Atmosphérique à Grande Échelle) using the extensive set of aircraft measurements collected during the ICARTT/ITOP experiment, which took place between the US and Europe during summer 2004 (15 July-15 August). Four aircraft were involved, providing a wealth of chemical data over a large area including the northeastern US and western Europe. The model outputs are compared to the following species, whose concentrations were measured by the aircraft: OH, H2O2, CO, NO, NO2, PAN, HNO3, isoprene, ethane, HCHO, and O3. Moreover, to complete this evaluation at larger scale, we also used satellite data such as SCIAMACHY NO2 and MOPITT CO. Interestingly, the comprehensive dataset allowed us to evaluate separately the model representation of emissions, transport, and chemical processes. Using a daily biomass burning emission source, we obtain very good agreement for CO, while the evaluation of NO2 points out uncertainties resulting from inaccurate NOx/CO emission factor ratios. Moreover, the chemical behavior of O3 is satisfactory, as discussed in the paper.
Lilley-Walker, Sarah-Jane; Hester, Marianne; Turner, William
2018-03-01
This article is based on a review of 60 evaluations (published and unpublished) of European domestic violence perpetrator programmes, involving 7,212 programme participants across 12 countries. The purpose of the review, part of the "IMPACT: Evaluation of European Perpetrator Programmes" project funded by the European Commission (Daphne III Programme), was to provide detailed knowledge about the range of European evaluation studies, with particular emphasis on the design, methods, input, output, and outcome measures used, in order to identify the possibilities and challenges of a multicountry, Europe-wide evaluation methodology that could be used to assess perpetrator programmes in the future. We provide a model to standardise the reporting of evaluation studies and to ensure attention is paid to the information collected at different time points, so as to understand what changes in perpetrators' behaviour and attitudes occur over the course of a programme, and how.
Tseng, Zhijie Jack; Mcnitt-Gray, Jill L.; Flashner, Henryk; Wang, Xiaoming; Enciso, Reyes
2011-01-01
Finite Element Analysis (FEA) is a powerful tool gaining use in studies of biological form and function. This method is particularly conducive to studies of extinct and fossilized organisms, as models can be assigned properties that approximate living tissues. In disciplines where model validation is difficult or impossible, the choice of model parameters and their effects on the results become increasingly important, especially in comparing outputs to infer function. To evaluate the extent to which performance measures are affected by initial model input, we tested the sensitivity of bite force, strain energy, and stress to changes in seven parameters that are required in testing craniodental function with FEA. Simulations were performed on FE models of a Gray Wolf (Canis lupus) mandible. Results showed that unilateral bite force outputs are least affected by the relative ratios of the balancing and working muscles, but only ratios above 0.5 provided balancing-working side joint reaction force relationships that are consistent with experimental data. The constraints modeled at the bite point had the greatest effect on bite force output, but the most appropriate constraint may depend on the study question. Strain energy is least affected by variation in bite point constraint, but larger variations in strain energy values are observed in models with different numbers of tetrahedral elements, masticatory muscle ratios and muscle subgroups present, and numbers of material properties. These findings indicate that performance measures are differentially affected by variation in initial model parameters. In the absence of validated input values, FE models can nevertheless provide robust comparisons if these parameters are standardized within a given study to minimize variation that arises during the model-building process.
Sensitivity tests incorporated into the study design not only aid in the interpretation of simulation results, but can also provide additional insights on form and function. PMID:21559475
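The kind of parameter-sensitivity test described above can be sketched as a simple one-at-a-time (OAT) sweep. The toy response function and the parameter names below are purely illustrative stand-ins for an actual FE solver run; only the sweep structure reflects the approach.

```python
# One-at-a-time (OAT) parameter sensitivity sweep, as used to probe FE model
# inputs. The "model" below is a stand-in for an FEA solver call; the
# parameter names and response function are invented for illustration.

def bite_force_model(params):
    # Placeholder response; in practice this would run an FE simulation.
    return params["muscle_force"] * params["lever_ratio"] / params["element_scale"]

baseline = {"muscle_force": 100.0, "lever_ratio": 0.4, "element_scale": 1.0}

def oat_sensitivity(model, baseline, perturbation=0.10):
    """Relative output spread when each parameter is varied by
    +/- `perturbation` while all others are held at baseline."""
    base_out = model(baseline)
    sensitivity = {}
    for name in baseline:
        outs = []
        for factor in (1.0 - perturbation, 1.0 + perturbation):
            trial = dict(baseline)
            trial[name] = baseline[name] * factor
            outs.append(model(trial))
        sensitivity[name] = (max(outs) - min(outs)) / base_out
    return sensitivity

print(oat_sensitivity(bite_force_model, baseline))
```

Standardizing such a sweep within a study, as the abstract recommends, makes the resulting sensitivities comparable across models even when absolute input values are unvalidated.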
NASA Astrophysics Data System (ADS)
Scherstjanoi, M.; Kaplan, J. O.; Thürig, E.; Lischke, H.
2013-02-01
Models of vegetation dynamics that are designed for application at spatial scales larger than individual forest gaps suffer from several limitations. Typically, either a population average approximation is used that results in unrealistic tree allometry and forest stand structure, or models have a high computational demand because they need to simulate both a series of age-based cohorts and a number of replicate patches to account for stochastic gap-scale disturbances. The detail required by the latter method increases the number of calculations by two to three orders of magnitude compared to the less realistic population average approach. In an effort to increase the efficiency of dynamic vegetation models without sacrificing realism, and to explore patterns of spatial scaling in forests, we developed a new method for simulating stand-replacing disturbances that is both accurate and 10-50x faster than approaches that use replicate patches. The GAPPARD (approximating GAP model results with a Probabilistic Approach to account for stand Replacing Disturbances) method works by postprocessing the output of deterministic, undisturbed simulations of a cohort-based vegetation model by deriving the distribution of patch ages at any point in time on the basis of a disturbance probability. With this distribution, the expected value of any output variable can be calculated from the output values of the deterministic undisturbed run at the time corresponding to the patch age. To account for temporal changes in model forcing, e.g., as a result of climate change, GAPPARD performs a series of deterministic simulations and interpolates between the results in the postprocessing step. We integrated the GAPPARD method in the forest models LPJ-GUESS and TreeM-LPJ, and evaluated these in a series of simulations along an altitudinal transect of an inner-alpine valley. 
With GAPPARD applied to LPJ-GUESS, results were not significantly different from the output of the original LPJ-GUESS model using 100 replicate patches, but simulation time was reduced by approximately a factor of 10. Our new method is therefore highly suited to rapidly approximating LPJ-GUESS results; it provides the opportunity for future studies over large spatial domains, allows easier parameterization of tree species, enables faster identification of areas with interesting simulation results, and facilitates comparisons with large-scale datasets and forest models.
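The core post-processing idea can be sketched in a few lines: weight the output trajectory of a single deterministic, undisturbed run by a patch-age distribution implied by the disturbance probability. The geometric age distribution and the toy biomass curve below are illustrative assumptions, not the published GAPPARD implementation.

```python
# Sketch of GAPPARD-style post-processing: approximate the landscape mean of
# a variable under stand-replacing disturbances by weighting an undisturbed
# trajectory with a patch-age distribution. Toy numbers throughout.

def patch_age_weights(p_disturb, max_age):
    # Probability a randomly chosen patch has age a, assuming a constant
    # annual stand-replacing disturbance probability (geometric distribution).
    w = [p_disturb * (1.0 - p_disturb) ** a for a in range(max_age)]
    total = sum(w)
    return [x / total for x in w]  # renormalise the truncated tail

def expected_output(undisturbed_series, p_disturb):
    """Expected landscape value: undisturbed_series[a] is the deterministic
    model output for a patch of age a (years since disturbance)."""
    weights = patch_age_weights(p_disturb, len(undisturbed_series))
    return sum(w * v for w, v in zip(weights, undisturbed_series))

# Toy undisturbed biomass trajectory saturating with stand age:
biomass = [200.0 * (1.0 - 0.97 ** a) for a in range(300)]
print(expected_output(biomass, p_disturb=0.01))
```

Because the expected value is computed after the fact, the expensive stochastic replicate patches are never simulated, which is where the reported speed-up comes from.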
Pérez-López, Paula; Montazeri, Mahdokht; Feijoo, Gumersindo; Moreira, María Teresa; Eckelman, Matthew J
2018-06-01
The economic and environmental performance of microalgal processes has been widely analyzed in recent years. However, few studies propose an integrated process-based approach to evaluate economic and environmental indicators simultaneously. Biodiesel is usually the single product considered, and the effect of environmental benefits of co-products obtained in the process is rarely discussed. In addition, there is wide variation in the results due to inherent variability of some parameters as well as different assumptions in the models and limited knowledge about the processes. In this study, two standardized models were combined to provide an integrated simulation tool allowing the simultaneous estimation of economic and environmental indicators from a unique set of input parameters. First, a harmonized scenario was assessed to validate the joint environmental and techno-economic model. The findings were consistent with previous assessments. In a second stage, a Monte Carlo simulation was applied to evaluate the influence of variable and uncertain parameters on the model output, as well as the correlations between the different outputs. The simulation showed a high probability of achieving favorable environmental performance for the evaluated categories and a minimum selling price ranging from $11 to $106 per gallon. Greenhouse gas emissions and minimum selling price were found to have the strongest positive linear relationship, whereas eutrophication showed weak correlations with the other indicators (namely greenhouse gas emissions, cumulative energy demand, and minimum selling price). Process parameters (especially biomass productivity and lipid content) were the main source of variation, whereas uncertainties linked to the characterization methods and economic parameters had limited effect on the results.
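The Monte Carlo step described above, propagating uncertain process parameters through coupled economic and environmental indicators and then correlating the outputs, can be sketched as follows. The linear indicator formulas, parameter ranges, and coefficients are hypothetical placeholders, not the study's actual models.

```python
# Monte Carlo propagation of process-parameter uncertainty through toy
# economic (minimum selling price) and environmental (GHG) indicators.
import random

random.seed(42)

def sample_run():
    productivity = random.uniform(15.0, 35.0)   # g m^-2 d^-1 (assumed range)
    lipid_content = random.uniform(0.15, 0.45)  # mass fraction (assumed range)
    fuel_yield = productivity * lipid_content
    msp = 250.0 / fuel_yield           # minimum selling price, $/gal (toy)
    ghg = 50.0 + 150.0 / fuel_yield    # GHG indicator (toy)
    return msp, ghg

runs = [sample_run() for _ in range(10000)]
msps = [r[0] for r in runs]
ghgs = [r[1] for r in runs]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(f"MSP range: {min(msps):.1f}-{max(msps):.1f} $/gal")
print(f"corr(GHG, MSP) = {pearson(msps, ghgs):.3f}")
```

In this toy both indicators depend on the same fuel yield, so the positive GHG-MSP correlation emerges mechanically, mirroring (in caricature) the strong linear relationship the study reports.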
NASA Astrophysics Data System (ADS)
Jasinski, M. F.; Arsenault, K. R.; Beaudoing, H. K.; Bolten, J. D.; Borak, J.; Kumar, S.; Peters-Lidard, C. D.; Li, B.; Liu, Y.; Mocko, D. M.; Rodell, M.
2014-12-01
An Integrated Terrestrial Water Analysis System, or NCA-LDAS, has been created to enable development, evaluation, and dissemination of terrestrial hydrologic climate indicators focusing on the continental U.S. The purpose is to provide quantifiable indicators of states and estimated trends in our nation's water stores and fluxes over a wide range of scales and locations, to support improved understanding and management of water resources and numerous related sectors such as agriculture and energy. NCA-LDAS relies on improved modeling of terrestrial hydrology through assimilation of satellite imagery, building upon the legacy of the Land Information System modeling framework (Kumar et al, 2006; Peters-Lidard et al, 2007). It currently employs the Noah or Catchment Land Surface Model, run with a number of satellite data assimilation scenarios. The domain for NCA-LDAS is the continental U.S. at a 1/8 degree grid for the period 1979 to present. Satellite-based variables that are assimilated are soil moisture and snow water equivalent, principally from microwave sensors such as SMMR, SSM/I, and AMSR; snow covered area from multispectral sensors such as AVHRR and MODIS; and terrestrial water storage from GRACE. Once simulated, outputs are evaluated against independent datasets using a variety of metrics in the Land Surface Verification Toolkit (LVT). LVT schemes within NCA-LDAS also include routines for computing standard time-series statistics, such as means, maxima, and linear trends, at various scales. The NCA-LDAS products, including model descriptions, forcings, parameters, daily outputs, indicator results, and LVT tools, have been made available to the public through NASA GES-DISC.
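The standard time-series statistics mentioned for LVT (means, maxima, linear trends) reduce to simple computations per grid cell. A minimal sketch, with synthetic data in place of actual NCA-LDAS output:

```python
# Mean, maximum, and least-squares linear trend of an indicator time series,
# the kind of per-cell statistics an LVT-style routine computes. The soil
# moisture anomaly values below are synthetic.

def linear_trend(values):
    """Least-squares slope of values against their index (per time step)."""
    n = len(values)
    t_mean = (n - 1) / 2.0
    v_mean = sum(values) / n
    num = sum((t - t_mean) * (v - v_mean) for t, v in enumerate(values))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den

# Synthetic annual series with a drying trend (e.g., 1979 onward):
series = [0.30 - 0.002 * year for year in range(36)]
print("mean:", sum(series) / len(series))
print("max:", max(series))
print("trend per year:", linear_trend(series))
```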
Hu, E; Liao, T. W.; Tiersch, T. R.
2013-01-01
Emerging commercial-level technology for aquatic sperm cryopreservation has not been modeled by computer simulation. Commercially available software (ARENA, Rockwell Automation, Inc., Milwaukee, WI) was applied to simulate high-throughput sperm cryopreservation of blue catfish (Ictalurus furcatus) based on existing processing capabilities. The goal was to develop a simulation model suitable for production planning and decision making. The objectives were to: 1) predict the maximum output for an 8-hr workday; 2) analyze the bottlenecks within the process, and 3) estimate operational costs when run for daily maximum output. High-throughput cryopreservation was divided into six major steps modeled with time, resources, and logic structures. The modeled production processed 18 fish and produced 1164 ± 33 (mean ± SD) 0.5-ml straws containing one billion cryopreserved sperm. Two such production lines could support all hybrid catfish production in the US, and 15 such lines could support the entire channel catfish industry if it were to adopt artificial spawning techniques. Evaluations were made to improve efficiency, such as increasing scale, optimizing resources, and eliminating underutilized equipment. This model can serve as a template for other aquatic species and assist decision making in industrial application of aquatic germplasm in aquaculture, stock enhancement, conservation, and biomedical model fishes. PMID:25580079
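The bottleneck analysis at the heart of such a line simulation can be approximated without a full discrete-event model: the step with the lowest effective rate (parallel stations divided by cycle time) bounds steady-state throughput. The six step names, cycle times, and station counts below are invented for illustration, not taken from the ARENA model.

```python
# Back-of-the-envelope bottleneck analysis for a serial high-throughput line,
# of the kind the ARENA simulation formalizes. All numbers are illustrative.

steps = {                      # (minutes per batch, parallel stations)
    "collection":    (10.0, 3),
    "dilution":      (5.0, 1),
    "quality_check": (8.0, 2),
    "straw_filling": (12.0, 3),
    "freezing":      (20.0, 5),
    "storage":       (4.0, 1),
}

def bottleneck(steps):
    """Effective rate of each step is stations / cycle_time; the slowest
    effective rate bounds steady-state throughput of the whole line."""
    rates = {name: n / t for name, (t, n) in steps.items()}
    name = min(rates, key=rates.get)
    return name, rates[name]

name, rate = bottleneck(steps)
workday_min = 8 * 60
print(f"bottleneck: {name}, max output ~ {rate * workday_min:.0f} batches/day")
```

A discrete-event simulation adds what this sketch omits, such as queueing, variability, and resource contention, but the bottleneck it identifies is usually the same place to start when "eliminating underutilized equipment."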
Flightdeck Automation Problems (FLAP) Model for Safety Technology Portfolio Assessment
NASA Technical Reports Server (NTRS)
Ancel, Ersin; Shih, Ann T.
2014-01-01
NASA's Aviation Safety Program (AvSP) develops and advances methodologies and technologies to improve air transportation safety. The Safety Analysis and Integration Team (SAIT) conducts a safety technology portfolio assessment (PA) to analyze the program content, to examine the benefits and risks of products with respect to program goals, and to support programmatic decision making. The PA process includes systematic identification of current and future safety risks as well as tracking several quantitative and qualitative metrics to ensure the program goals are addressing prominent safety risks accurately and effectively. One of the metrics within the PA process involves using quantitative aviation safety models to gauge the impact of the safety products. This paper demonstrates the role of aviation safety modeling by providing model outputs and evaluating a sample of portfolio elements using the Flightdeck Automation Problems (FLAP) model. The model enables not only ranking of the quantitative relative risk reduction impact of all portfolio elements, but also highlighting the areas with high potential impact via sensitivity and gap analyses in support of the program office. Although the model outputs are preliminary and products are notional, the process shown in this paper is essential to a comprehensive PA of NASA's safety products in the current program and future programs/projects.
NASA Astrophysics Data System (ADS)
Pervez, M. S.; McNally, A.; Arsenault, K. R.
2017-12-01
Convergence of evidence from different agro-hydrologic sources is particularly important for drought monitoring in data-sparse regions. In Africa, a combination of remote sensing and land surface modeling experiments are used to evaluate past, present, and future drought conditions. The Famine Early Warning Systems Network (FEWS NET) Land Data Assimilation System (FLDAS) routinely simulates daily soil moisture, evapotranspiration (ET), and other variables over Africa using multiple models and inputs. We found that Noah 3.3, Variable Infiltration Capacity (VIC) 4.1.2, and Catchment Land Surface Model based FLDAS simulations of monthly soil moisture percentile maps captured concurrent drought and water surplus episodes effectively over East Africa. However, the results are sensitive to the selection of land surface model and hydrometeorological forcings. We seek to identify sources of uncertainty (input, model, parameter) to eventually improve the accuracy of FLDAS outputs. In the absence of in situ data, previous work used European Space Agency Climate Change Initiative Soil Moisture (CCI-SM) data derived from merged active-passive microwave remote sensing to evaluate FLDAS soil moisture, and found that during the high-rainfall months of April-May and November-December, Noah-based soil moisture correlates well with CCI-SM over the Greater Horn of Africa region. We have found good correlations (r>0.6) for FLDAS Noah 3.3 ET anomalies and Operational Simplified Surface Energy Balance (SSEBop) ET over East Africa. Recently, SSEBop ET estimates (version 4) were improved by implementing a land surface temperature correction factor. We re-evaluate the correlations between FLDAS ET and version 4 SSEBop ET. To further investigate the reasons for differences between models, we evaluate FLDAS soil moisture against Advanced Scatterometer and SMAP soil moisture, and FLDAS outputs against MODIS and AVHRR normalized difference vegetation index.
By exploring longer historical time series and near-real-time products, we will aid convergence of evidence for a better understanding of historic drought, improved monitoring and forecasting, and a better understanding of uncertainties in water availability estimation over Africa.
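The monthly soil-moisture percentile maps referred to above boil down to ranking each grid cell's current value against that cell's climatology for the same calendar month. A minimal per-cell sketch, with synthetic integer units to keep the arithmetic exact:

```python
# Empirical percentile of a current monthly value within its climatology,
# the per-cell operation behind an FLDAS-style percentile map. Values are
# synthetic (e.g., column soil moisture in mm).

def percentile_of(value, climatology):
    """Empirical percentile (0-100) of `value` within the climatology sample."""
    below = sum(1 for c in climatology if c <= value)
    return 100.0 * below / len(climatology)

# 30-year April climatology for one grid cell:
climatology = list(range(200, 350, 5))   # 200, 205, ..., 345 mm
current = 210

pct = percentile_of(current, climatology)
print(f"percentile: {pct:.0f}")   # a low percentile flags a drought signal
```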
NASA Astrophysics Data System (ADS)
Khan, Sahubar Ali Mohd. Nadhar; Ramli, Razamin; Baten, M. D. Azizul
2017-11-01
In recent years, eco-efficiency, which considers the effect of the production process on the environment in determining the efficiency of firms, has gained traction and considerable attention. Rice farming is one such production process, typically producing two types of outputs: economically desirable as well as environmentally undesirable. In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in the model to obtain an accurate estimate of a firm's efficiency. Numerous approaches have been used in the data envelopment analysis (DEA) literature to account for undesirable outputs, of which the directional distance function (DDF) approach is the most widely used, as it allows for a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, slack-based DDF DEA approaches consider output shortfalls and input excesses in determining efficiency. When data uncertainty is present, a deterministic DEA model is not suitable because the effects of uncertain data are not considered. In this case, the interval data approach is suitable for handling data uncertainty, as it is much simpler to model and requires less information about the underlying data distribution and membership function. The proposed model uses an enhanced DEA formulation based on the DDF approach that incorporates a slack-based measure to determine efficiency in the presence of undesirable factors and data uncertainty. The interval data approach was used to estimate the values of inputs, undesirable outputs, and desirable outputs. Two separate slack-based interval DEA models were constructed for optimistic and pessimistic scenarios. The developed model was used to determine the efficiency of rice farmers from Kepala Batas, Kedah. The obtained results were then compared to results from a deterministic DDF DEA model. The study found that 15 out of 30 farmers are efficient in all cases.
It is also found that the average efficiency value of all farmers in the deterministic case is always lower than in the optimistic scenario and higher than in the pessimistic scenario. The results are consistent with the hypothesis, since the optimistic scenario represents the best production situation and the pessimistic scenario the worst. The results show that the proposed model can be applied when data uncertainty is present in the production environment.
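The deterministic DDF building block that the interval, slack-based model extends can be written as a small linear program. This is a minimal sketch with two toy DMUs (one input, one desirable output, one undesirable output held under weak disposability), not the paper's formulation or the Kedah farm data.

```python
# Deterministic directional distance function (DDF) DEA as a linear program.
# For DMU o we solve:  max beta  s.t.
#   sum_j l_j x_j <= x_o - beta*g_x   (inputs)
#   sum_j l_j y_j >= y_o + beta*g_y   (desirable outputs)
#   sum_j l_j b_j  = b_o - beta*g_b   (undesirable outputs, weak disposability)
#   l_j >= 0,  with direction g = the DMU's own observation. Toy data only.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0], [2.0]])   # inputs, one row per DMU
Y = np.array([[4.0], [2.0]])   # desirable outputs
B = np.array([[1.0], [1.0]])   # undesirable outputs

def ddf_efficiency(o, X, Y, B):
    n = X.shape[0]
    gx, gy, gb = X[o], Y[o], B[o]           # direction vector
    c = np.zeros(1 + n); c[0] = -1.0        # minimise -beta
    # inputs:  beta*gx + sum l_j x_j <= x_o
    A_ub = [np.concatenate(([gx[i]], X[:, i])) for i in range(X.shape[1])]
    b_ub = [X[o, i] for i in range(X.shape[1])]
    # outputs: beta*gy - sum l_j y_j <= -y_o
    A_ub += [np.concatenate(([gy[r]], -Y[:, r])) for r in range(Y.shape[1])]
    b_ub += [-Y[o, r] for r in range(Y.shape[1])]
    # undesirable outputs: beta*gb + sum l_j b_j = b_o
    A_eq = [np.concatenate(([gb[k]], B[:, k])) for k in range(B.shape[1])]
    b_eq = [B[o, k] for k in range(B.shape[1])]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (1 + n), method="highs")
    return res.x[0]   # beta = 0 means the DMU is on the frontier

print([round(ddf_efficiency(o, X, Y, B), 3) for o in range(2)])
```

The interval version of the paper solves two such programs per DMU, one with the most favorable (optimistic) and one with the least favorable (pessimistic) realization of each interval datum.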
Wildhaber, Mark L.; Wikle, Christopher K.; Moran, Edward H.; Anderson, Christopher J.; Franz, Kristie J.; Dey, Rima
2017-01-01
We present a hierarchical series of spatially decreasing and temporally increasing models to evaluate the uncertainty in the atmosphere-ocean global climate model (AOGCM) and the regional climate model (RCM) relative to the uncertainty in the somatic growth of the endangered pallid sturgeon (Scaphirhynchus albus). For effects on fish populations of riverine ecosystems, climate output simulated by coarse-resolution AOGCMs and RCMs must be downscaled to basins to river hydrology to population response. One needs to transfer the information from these climate simulations down to the individual scale in a way that minimizes extrapolation and can account for spatio-temporal variability in the intervening stages. The goal is a framework to determine whether, given uncertainties in the climate models and the biological response, meaningful inference can still be made. The non-linear downscaling of climate information to the river scale requires that one realistically account for spatial and temporal variability across scale. Our downscaling procedure includes the use of fixed/calibrated hydrological flow and temperature models coupled with a stochastically parameterized sturgeon bioenergetics model. We show that, although there is a large amount of uncertainty associated with both the climate model output and the fish growth process, one can establish significant differences in fish growth distributions between models, and between future and current climates for a given model.
A memory-mapped output interface: Omega navigation output data from the JOLT (TM) microcomputer
NASA Technical Reports Server (NTRS)
Lilley, R. W.
1976-01-01
A hardware interface which allows both digital and analog data output from the JOLT microcomputer is described in the context of a software-based Omega Navigation receiver. The interface hardware described is designed for output of six (or eight with simple extensions) bits of binary output in response to a memory store command from the microcomputer. The interface was produced in breadboard form and is operational as an evaluation aid for the software Omega receiver.
Supermodeling by Synchronization of Alternative SPEEDO Models
NASA Astrophysics Data System (ADS)
Duane, Gregory; Selten, Frank
2016-04-01
The supermodeling approach, wherein different imperfect models of the same objective process are dynamically combined in run-time to reduce systematic error, is tested using SPEEDO - a primitive equation atmospheric model coupled to the CLIO ocean model. Three versions of SPEEDO are defined by parameters that differ in a range that arguably mimics differences among state-of-the-art climate models. A fourth model is taken to represent truth. The "true" ocean drives all three model atmospheres. The three models are also connected to one another at every level, with spatially uniform nudging coefficients that are trained so that the three models, which synchronize with one another, also synchronize with truth when data is continuously assimilated, as in weather prediction. The SPEEDO supermodel is evaluated in weather-prediction mode, with nudging to truth. It is found that the supermodel performs better than any of the three models and marginally better than the best weighted average of the outputs of the three models run separately. To evaluate the utility for climate projection, parameters corresponding to greenhouse gas levels are changed in truth and in the three models. The supermodel formed with inter-model connections from the present-CO2 runs no longer gives the optimal configuration in the doubled-CO2 realm, but the supermodel with the previously trained connections is still useful as compared to the separate models or averages of their outputs. In ongoing work, a training algorithm is examined that attempts to match the blocked-zonal index cycle of the SPEEDO model atmosphere to truth, rather than simply minimizing the RMS error in the various fields. Such an approach comes closer to matching the model attractor to the true attractor - the desired effect in climate projection - rather than matching instantaneous states. Gradient descent in a cost function defined over a finite temporal window can indeed be done efficiently.
Preliminary results are presented for a crudely defined index cycle.
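The synchronization-by-nudging mechanism underlying supermodeling can be illustrated with a much smaller chaotic system. Below, Lorenz-63 stands in for SPEEDO: an imperfect model (wrong rho parameter) is nudged toward a "true" trajectory and its state error shrinks far below the initial offset. The parameter values, nudging strength, and step sizes are illustrative choices, not the SPEEDO configuration.

```python
# Minimal illustration of synchronization by nudging, Lorenz-63 in place
# of SPEEDO. All parameters are illustrative.

def lorenz(state, sigma, rho, beta):
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def step(state, deriv, dt):
    # Forward-Euler step, adequate for this short demonstration.
    return tuple(s + dt * d for s, d in zip(state, deriv))

dt, n_steps, K = 0.002, 5000, 40.0
truth = (1.0, 1.0, 20.0)
model = (6.0, -4.0, 30.0)   # start far from truth

for _ in range(n_steps):
    d_truth = lorenz(truth, 10.0, 28.0, 8.0 / 3.0)
    d_model = lorenz(model, 10.0, 30.0, 8.0 / 3.0)   # imperfect rho
    # nudge every model component toward the corresponding truth component
    d_model = tuple(dm - K * (m - t) for dm, m, t in zip(d_model, model, truth))
    truth = step(truth, d_truth, dt)
    model = step(model, d_model, dt)

err = sum((m - t) ** 2 for m, t in zip(model, truth)) ** 0.5
print(f"final state error: {err:.3f}")  # far below the initial offset of ~12
```

In a supermodel the nudging runs between the imperfect models themselves (and the coefficients are trained), but the same mechanism keeps the coupled states close despite parameter error.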
Using a data base management system for modelling SSME test history data
NASA Technical Reports Server (NTRS)
Abernethy, K.
1985-01-01
The usefulness of a data base management system (DBMS) for modelling historical test data for the complete series of static test firings for the Space Shuttle Main Engine (SSME) was assessed. From an analysis of user data base query requirements, it became clear that a relational DBMS which included a relationally complete query language would permit a model satisfying the query requirements. Representative models and sample queries are discussed. A list of environment-particular evaluation criteria for the desired DBMS was constructed; these criteria include requirements in the areas of user-interface complexity, program independence, flexibility, modifiability, and output capability. The evaluation process included the construction of several prototype data bases for user assessment. The systems studied, representing the three major DBMS conceptual models, were: MIRADS, a hierarchical system; DMS-1100, a CODASYL-based network system; ORACLE, a relational system; and DATATRIEVE, a relational-type system.
Evaluation of Wind Energy Production in Texas using Geographic Information Systems (GIS)
NASA Astrophysics Data System (ADS)
Ferrer, L. M.
2017-12-01
Texas has the highest installed wind capacity in the United States. The purpose of this research was to estimate the theoretical wind turbine energy production and the utilization ratio of wind turbines in Texas. Windfarm data were combined using Geographic Information System (GIS) methodology to create an updated GIS wind turbine database, including location and technical specifications. Applying diverse GIS tools, the windfarm data were spatially joined with National Renewable Energy Laboratory (NREL) wind data to calculate the wind speed at each turbine hub. The power output for each turbine at the hub wind speed was evaluated by the GIS system according to the respective turbine model's power curve. In total, over 11,700 turbines are installed in Texas, with an estimated energy output of 60 GWh per year and an average utilization ratio of 0.32. This research indicates that applying GIS methodologies will be crucial to the growth of wind energy and efficiency in Texas.
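The per-turbine step, reading output power off a power curve at the hub wind speed, is simple piecewise-linear interpolation. The generic 2-MW-class curve below is invented for illustration; it is not any specific manufacturer's curve, and a real utilization ratio would average over the full wind-speed distribution rather than a single speed.

```python
# Evaluate turbine power output from a power curve at the hub wind speed.
# The power curve and hub speed are illustrative, not real turbine data.

def power_at(speed, curve):
    """Piecewise-linear interpolation of a power curve given as sorted
    (wind speed m/s, output kW) points; 0 below cut-in or above cut-out."""
    speeds = [s for s, _ in curve]
    if speed < speeds[0] or speed > speeds[-1]:
        return 0.0
    for (s0, p0), (s1, p1) in zip(curve, curve[1:]):
        if s0 <= speed <= s1:
            return p0 + (p1 - p0) * (speed - s0) / (s1 - s0)

curve = [(3, 0), (6, 400), (9, 1300), (12, 2000), (25, 2000)]  # kW, generic
rated_kw = 2000.0

hub_speed = 7.5   # m/s at this turbine hub (example value)
p = power_at(hub_speed, curve)
annual_mwh = p * 8760 / 1000.0        # energy if this speed held all year
utilization = p / rated_kw            # capacity-factor-style ratio
print(f"{p:.0f} kW -> {annual_mwh:.0f} MWh/yr, utilization {utilization:.2f}")
```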
Comparing Internet Probing Methodologies Through an Analysis of Large Dynamic Graphs
2014-06-01
comparable Internet topologies in less time. We compare these by modeling the union of traceroute outputs as graphs, and study the graphs using standard graph-theoretical measurements such as vertex and edge count and average vertex degree.
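Building the union-of-traceroutes graph and computing those measurements is straightforward: each traceroute path contributes consecutive-hop edges, and the deduplicated union is summarized by vertex count, edge count, and average degree. The hop addresses below are made up.

```python
# Union of traceroute outputs as an undirected graph, summarized with the
# vertex/edge counts and average degree used to compare probing methods.

paths = [
    ["10.0.0.1", "10.0.1.1", "10.0.2.1", "192.0.2.7"],
    ["10.0.0.1", "10.0.1.1", "10.0.3.1", "192.0.2.7"],
    ["10.0.0.1", "10.0.4.1", "10.0.2.1"],
]

vertices = set()
edges = set()
for path in paths:
    vertices.update(path)
    for a, b in zip(path, path[1:]):
        edges.add(frozenset((a, b)))   # undirected edge, deduplicated

avg_degree = 2 * len(edges) / len(vertices)
print(len(vertices), len(edges), round(avg_degree, 2))
```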