2017-05-01
ERDC/EL TR-17-7, Environmental Security Technology Certification Program (ESTCP), May 2017. Evaluation of Uncertainty in Constituent Input Parameters... The ...Environmental Evaluation and Characterization System (TREECS™) was applied to a groundwater site and a surface water site to evaluate the sensitivity
Guidance for selecting and preparing input values for OPP's aquatic exposure models, intended to improve the consistency of modeling the fate of pesticides in the environment and the quality of OPP's aquatic risk assessments.
Fichez, R; Chifflet, S; Douillet, P; Gérard, P; Gutierrez, F; Jouon, A; Ouillon, S; Grenz, C
2010-01-01
Considering the growing concern about the impact of anthropogenic inputs on coral reefs and coral reef lagoons, surprisingly little attention has been given to the relationship between those inputs and the trophic status of lagoon waters. The present paper describes the distribution of biogeochemical parameters in the coral reef lagoon of New Caledonia, where environmental conditions allegedly range from pristine oligotrophic to anthropogenically influenced. The study objectives were to: (i) identify terrigenous and anthropogenic inputs and propose a typology of lagoon waters, and (ii) determine the temporal variability of water biogeochemical parameters at time scales ranging from hours to seasons. Combined PCA-cluster analyses revealed that over the 2000 km² lagoon area around the city of Nouméa, "natural" terrigenous versus oceanic influences affecting all stations accounted for less than 20% of the spatial variability, whereas 60% of that spatial variability could be attributed to significant eutrophication of a limited number of inshore stations. PCA allowed unambiguous discrimination between the natural trophic enrichment along the offshore-inshore gradient and anthropogenically induced eutrophication. High temporal variability in dissolved inorganic nutrient concentrations strongly hindered their use as indicators of environmental status. Due to its longer turnover time, particulate organic material, and more specifically chlorophyll a, appeared to be a more reliable nonconservative tracer of trophic status. Results further provided evidence that ENSO occurrences might temporarily lower the trophic status of the New Caledonia lagoon. It is concluded that, given such high-frequency temporal variability, the use of biogeochemical parameters in environmental surveys requires adapted sampling strategies, data management and environmental alert methods. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Parker, C. D.; Tommerdahl, J. B.
1972-01-01
The instrumentation requirements for a regenerative life support system were studied to provide the earliest possible indication of a malfunction that could permit degradation of the environment. Four categories of parameters were investigated: environmental parameters that directly and immediately influence the health and safety of the cabin crew; subsystem inputs to the cabin that directly maintain the cabin environmental parameters; indications for maintenance or repair; and parameters useful as diagnostic indicators. A data-averager concept is introduced that provides a moving average of parameter values, is not influenced by spurious changes, and is convenient for detecting parameter rates of change. A system is included to provide alarms at preselected parameter levels.
Testing the robustness of management decisions to uncertainty: Everglades restoration scenarios.
Fuller, Michael M; Gross, Louis J; Duke-Sylvester, Scott M; Palmer, Mark
2008-04-01
To effectively manage large natural reserves, resource managers must prepare for future contingencies while balancing the often conflicting priorities of different stakeholders. To deal with these issues, managers routinely employ models to project the response of ecosystems to different scenarios that represent alternative management plans or environmental forecasts. Scenario analysis is often used to rank such alternatives to aid the decision making process. However, model projections are subject to uncertainty in assumptions about model structure, parameter values, environmental inputs, and subcomponent interactions. We introduce an approach for testing the robustness of model-based management decisions to the uncertainty inherent in complex ecological models and their inputs. We use relative assessment to quantify the relative impacts of uncertainty on scenario ranking. To illustrate our approach we consider uncertainty in parameter values and uncertainty in input data, with specific examples drawn from the Florida Everglades restoration project. Our examples focus on two alternative 30-year hydrologic management plans that were ranked according to their overall impacts on wildlife habitat potential. We tested the assumption that varying the parameter settings and inputs of habitat index models does not change the rank order of the hydrologic plans. We compared the average projected index of habitat potential for four endemic species and two wading-bird guilds to rank the plans, accounting for variations in parameter settings and water level inputs associated with hypothetical future climates. Indices of habitat potential were based on projections from spatially explicit models that are closely tied to hydrology. For the American alligator, the rank order of the hydrologic plans was unaffected by substantial variation in model parameters. 
By contrast, simulated major shifts in water levels led to reversals in the ranks of the hydrologic plans in 24.1-30.6% of the projections for the wading bird guilds and several individual species. By exposing the differential effects of uncertainty, relative assessment can help resource managers assess the robustness of scenario choice in model-based policy decisions.
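The relative-assessment idea in the abstract above — checking whether uncertainty can flip the rank order of two management plans — can be sketched with a toy Monte Carlo experiment. The habitat-index functions and the parameter distribution below are invented for illustration; they are not taken from the Everglades models.

```python
import numpy as np

def rank_reversal_rate(index_a, index_b, sampler, n=20_000, seed=0):
    """Fraction of Monte Carlo draws in which plan B outranks plan A."""
    rng = np.random.default_rng(seed)
    reversals = 0
    for _ in range(n):
        p = sampler(rng)                 # one draw of the uncertain parameter
        reversals += index_b(p) > index_a(p)
    return reversals / n

# Invented habitat indices: plan A is better on average, but the
# uncertain parameter p can flip the ranking in individual projections.
sampler = lambda rng: rng.normal(0.0, 1.0)
index_a = lambda p: 1.0 + 0.5 * p        # mean index 1.0
index_b = lambda p: 0.8 - 0.5 * p        # mean index 0.8
rate = rank_reversal_rate(index_a, index_b, sampler)
```

Here plan A wins on average, yet roughly 40% of draws reverse the ranking — the kind of diagnostic the authors report for the wading-bird guilds.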
USDA-ARS's Scientific Manuscript database
Accurate determination of predicted environmental concentrations (PECs) is a continuing and often elusive goal of pesticide risk assessment. PECs are typically derived using simulation models that depend on laboratory generated data for key input parameters (t1/2, Koc, etc.). Model flexibility in ...
'spup' - an R package for uncertainty propagation in spatial environmental modelling
NASA Astrophysics Data System (ADS)
Sawicka, Kasia; Heuvelink, Gerard
2016-04-01
Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition as universally applicable, including for case studies with spatial models and spatial model inputs. Given the growing popularity and applicability of the open-source R programming language, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining uncertainty propagation from input data and model parameters, via the environmental model, to model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes are also accommodated. For uncertainty propagation the package implements the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as input to environmental models called from R, or externally.
Selected static and interactive visualization methods that are understandable by non-experts with limited background in statistics can be used to summarize and visualize uncertainty about the measured input, model parameters and output of the uncertainty propagation. We demonstrate that the 'spup' package is an effective and easy tool to apply and can be used in multi-disciplinary research and model-based decision support.
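The core workflow described above — draw stratified samples of the uncertain inputs, run the model on each realization, and summarize the output distribution — can be sketched outside R as well. The following is a minimal Python sketch; a toy exponential-decay model stands in for a real environmental model, and the Gaussian input distributions are invented.

```python
import numpy as np
from statistics import NormalDist

def latin_hypercube(n, d, rng):
    """LHS on [0, 1)^d: each column hits each of the n equal strata exactly once."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        rng.shuffle(u[:, j])             # decouple the strata across dimensions
    return u

def propagate(model, means, sds, n=2000, seed=42):
    """Monte Carlo propagation of independent Gaussian input uncertainty."""
    rng = np.random.default_rng(seed)
    u = latin_hypercube(n, len(means), rng)
    to_normal = np.vectorize(NormalDist().inv_cdf)   # uniform -> standard normal
    x = np.asarray(means) + np.asarray(sds) * to_normal(u)
    y = np.apply_along_axis(model, 1, x)
    return y.mean(), y.std()

# Toy stand-in for an environmental model: first-order decay after 5 days.
model = lambda p: p[0] * np.exp(-p[1] * 5.0)
m, s = propagate(model, means=[10.0, 0.2], sds=[1.0, 0.02])
```

Latin hypercube sampling stratifies each marginal so the input space is covered more evenly than with simple random sampling, which typically tightens the MC estimates for a fixed ensemble size.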
'spup' - an R package for uncertainty propagation analysis in spatial environmental modelling
NASA Astrophysics Data System (ADS)
Sawicka, Kasia; Heuvelink, Gerard
2017-04-01
Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition as universally applicable or able to deal with case studies involving spatial models and spatial model inputs. Given the growing popularity and applicability of the open-source R programming language, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining uncertainty propagation from input data and model parameters, via the environmental model, to model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes are also accommodated. For uncertainty propagation the package implements the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as input to environmental models called from R, or externally.
Selected visualization methods that are understandable by non-experts with limited background in statistics can be used to summarize and visualize uncertainty about the measured input, model parameters and output of the uncertainty propagation. We demonstrate that the 'spup' package is an effective and easy tool to apply and can be used in multi-disciplinary research and model-based decision support.
Sparse Polynomial Chaos Surrogate for ACME Land Model via Iterative Bayesian Compressive Sensing
NASA Astrophysics Data System (ADS)
Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Debusschere, B.; Najm, H. N.; Thornton, P. E.
2015-12-01
For computationally expensive climate models, Monte Carlo approaches to exploring the input parameter space are often prohibitive due to slow convergence with respect to ensemble size. To alleviate this, we build inexpensive surrogates using uncertainty quantification (UQ) methods employing Polynomial Chaos (PC) expansions that approximate the input-output relationships using as few model evaluations as possible. However, when many uncertain input parameters are present, such UQ studies suffer from the curse of dimensionality. In particular, for 50-100 input parameters, non-adaptive PC representations have infeasible numbers of basis terms. To this end, we develop and employ Weighted Iterative Bayesian Compressive Sensing to learn the most important input parameter relationships for efficient, sparse PC surrogate construction, with the posterior uncertainty due to insufficient data quantified. Besides drastic dimensionality reduction, the uncertain surrogate can efficiently replace the model in computationally intensive studies such as forward uncertainty propagation and variance-based sensitivity analysis, as well as design optimization and parameter estimation using observational data. We applied the surrogate construction and variance-based uncertainty decomposition to the Accelerated Climate Model for Energy (ACME) Land Model for several output QoIs at nearly 100 FLUXNET sites covering multiple plant functional types and climates, varying 65 input parameters over broad ranges of possible values. This work is supported by the U.S. Department of Energy, Office of Science, Biological and Environmental Research, Accelerated Climate Modeling for Energy (ACME) project. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
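The surrogate idea can be illustrated at miniature scale: project a model's outputs onto an orthogonal polynomial basis and keep only the significant coefficients. The one-parameter Python sketch below is hypothetical — ordinary least squares with hard thresholding stands in for the paper's weighted iterative Bayesian compressive sensing, and the quadratic "model" is invented.

```python
import numpy as np

def legendre_basis(x, order):
    """Evaluate 1-D Legendre polynomials P_0..P_order at points x in [-1, 1]."""
    P = [np.ones_like(x), x]
    for k in range(1, order):
        # Bonnet recurrence: (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}
        P.append(((2 * k + 1) * x * P[k] - k * P[k - 1]) / (k + 1))
    return np.column_stack(P[: order + 1])

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 50)          # 50 "model runs" of the uncertain input
y = 3.0 * x**2 + 0.5 * x                # toy model output
A = legendre_basis(x, 4)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
# Hard threshold -> sparse surrogate; under a uniform input on [-1, 1] the
# output variance decomposes as sum_{k>=1} coef_k**2 / (2k + 1).
sparse = np.where(np.abs(coef) > 1e-8, coef, 0.0)
```

The recovered spectrum is exactly [1, 0.5, 2, 0, 0] here (the toy model lies in the span of P0..P2), so only three of five basis terms survive thresholding — a miniature version of the sparsity the paper exploits for 50-100 parameters.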
Department of Defense meteorological and environmental inputs to aviation systems
NASA Technical Reports Server (NTRS)
Try, P. D.
1983-01-01
Recommendations based on need, cost, and the achievement of flight safety are offered, and a re-evaluation of the weather parameters needed for safe landing operations, leading to reliable and consistent automated observation capabilities, is considered.
MOVES2010a regional level sensitivity analysis
DOT National Transportation Integrated Search
2012-12-10
This document discusses the sensitivity of emission rates to various input parameters using the US Environmental Protection Agency's (EPA's) MOVES2010a model at the regional level. Pollutants included in the study are carbon monoxide (CO),...
A Bayesian approach to model structural error and input variability in groundwater modeling
NASA Astrophysics Data System (ADS)
Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.
2015-12-01
Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises because some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty in spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative priors for the error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface water-groundwater interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has a substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.
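A minimal sketch of the core idea — augmenting the simulation model with an explicit error term and sampling the joint posterior — is given below. This is Python with a plain random-walk Metropolis sampler and a constant bias term standing in for DREAM-ZS and the kernel-based error models; all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
# Synthetic observations: the "true" system has an offset that a
# slope-only model would miss (a stand-in for structural error).
obs = 2.0 * x + 0.3 + rng.normal(0.0, 0.05, x.size)

def log_post(theta):
    slope, bias = theta
    resid = obs - (slope * x + bias)     # bias term absorbs structural error
    return -0.5 * np.sum(resid**2) / 0.05**2 - 0.5 * np.sum(np.square(theta)) / 10.0

theta = np.polyfit(x, obs, 1)            # start at the least-squares fit
lp = log_post(theta)
chain = []
for _ in range(5000):                    # random-walk Metropolis
    prop = theta + rng.normal(0.0, 0.03, 2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
post = np.asarray(chain)[1000:]          # discard burn-in
```

Without the bias term, the slope estimate would absorb the structural offset — exactly the parameter-compensation bias the abstract describes the error models as reducing.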
Parameterisation of Biome BGC to assess forest ecosystems in Africa
NASA Astrophysics Data System (ADS)
Gautam, Sishir; Pietsch, Stephan A.
2010-05-01
African forest ecosystems are an important environmental and economic resource. Several studies show that tropical forests are critical to society as economic, environmental and societal resources. Tropical forests are carbon dense and thus play a key role in climate change mitigation. Unfortunately, the response of tropical forests to environmental change is largely unknown owing to insufficient spatially extensive observations. In developing regions like Africa, where long-term records of forest management are unavailable, the process-based ecosystem simulation model BIOME-BGC could be a suitable tool to explain forest ecosystem dynamics. This ecosystem simulation model uses descriptive input parameters to establish the physiology, biochemistry, structure, and allocation patterns within vegetation functional types, or biomes. Undocumented parameters are currently the major limitation to regional modelling of African forest ecosystems. This study was conducted to document input parameters for BIOME-BGC for major natural tropical forests in the Congo basin. Based on available literature and field measurements, updated values were assigned for turnover and mortality, allometry, carbon-to-nitrogen ratios, allocation of plant material to labile, cellulose, and lignin pools, tree morphology and other relevant factors. Daily climate input data for the model applications were generated using the statistical weather generator MarkSim. The forest was inventoried at various sites, and soil samples of corresponding stands across Gabon were collected. Carbon and nitrogen in the collected soil samples were determined by soil analysis. The observed tree volume, soil carbon and soil nitrogen were then compared with the simulated model outputs to evaluate model performance.
Furthermore, simulations using Congo Basin-specific parameters and generalised BIOME-BGC parameters for tropical evergreen broadleaved tree species were also executed and the simulated results compared. Once the model was optimised for forests in the Congo basin, it was validated against observed tree volume, soil carbon and soil nitrogen from a set of independent plots.
Optimal nonlinear codes for the perception of natural colours.
von der Twer, T; MacLeod, D I
2001-08-01
We discuss how visual nonlinearity can be optimized for the precise representation of environmental inputs. Such optimization leads to neural signals with a compressively nonlinear input-output function, the gradient of which is matched to the cube root of the probability density function (PDF) of the environmental input values (and not to the PDF directly, as in histogram equalization). Comparisons between theory and psychophysical and electrophysiological data are roughly consistent with the idea that parvocellular (P) cells are optimized for precise representation of colour: their contrast-response functions span a range appropriately matched to the environmental distribution of natural colours along each dimension of colour space. Thus P cell codes for colour may have been selected to minimize error in the perceptual estimation of stimulus parameters for natural colours. But magnocellular (M) cells have a much stronger than expected saturating nonlinearity; this supports the view that the function of M cells is mainly to detect boundaries rather than to specify contrast or lightness.
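The matching rule can be made concrete in a few lines: integrate a gain proportional to the cube root of the input PDF to obtain the optimal compressive input-output function. In this Python sketch a standard normal stands in for the environmental distribution along a colour-space dimension — an assumption for illustration, not the paper's measured distribution.

```python
import numpy as np

x = np.linspace(-4.0, 4.0, 2001)
pdf = np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)
gain = np.cbrt(pdf)                       # d(response)/dx proportional to pdf**(1/3)
response = np.cumsum(gain) * (x[1] - x[0])
response /= response[-1]                  # normalise the output range to [0, 1]
```

The resulting curve is steepest where inputs are most probable (x = 0) and saturates in the tails; histogram equalization would instead use a gain proportional to the PDF itself, giving a more sharply compressive function.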
Combined PEST and Trial-Error approach to improve APEX calibration
USDA-ARS's Scientific Manuscript database
The Agricultural Policy Environmental eXtender (APEX), a physically based hydrologic model that simulates management impacts on the environment for small watersheds, requires improved understanding of its input parameters for accurate simulation. However, most previously published studies used the ...
Statistical Accounting for Uncertainty in Modeling Transport in Environmental Systems
Models frequently are used to predict the future extent of ground-water contamination, given estimates of their input parameters and forcing functions. Although models have a well established scientific basis for understanding the interactions between complex phenomena and for g...
Watershed scale rainfall‐runoff models are used for environmental management and regulatory modeling applications, but their effectiveness is limited by predictive uncertainties associated with model input data. This study evaluated the effect of temporal and spatial rainfall re...
NASA Astrophysics Data System (ADS)
Safaei, S.; Haghnegahdar, A.; Razavi, S.
2016-12-01
Complex environmental models are now the primary tool for informing decision makers about the current and future management of environmental resources under climate and environmental change. These complex models often contain a large number of parameters that must be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding model behaviour, but also helps reduce the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding when generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in reducing computational burden when applied to systems with an extra-large number of input factors (~100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method to reduce problem dimensionality by identifying important vs. unimportant input factors.
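The variogram concept behind VARS can be illustrated with a toy directional variogram of a model response: γ(h) along one input dimension measures how much the output changes over perturbations of size h, so a more influential input shows a larger variogram. The Python sketch below is only this building block, not the VARS algorithm itself, and the two-input linear model is invented.

```python
import numpy as np

def directional_variogram(f, d, dim, h, n=500, seed=0):
    """gamma(h) = 0.5 * E[(f(x + h * e_dim) - f(x))**2] over the unit hypercube."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0 - h, (n, d))    # base points kept inside the domain
    xh = x.copy()
    xh[:, dim] += h                          # perturb only the chosen dimension
    fx = np.apply_along_axis(f, 1, x)
    fxh = np.apply_along_axis(f, 1, xh)
    return 0.5 * np.mean((fxh - fx) ** 2)

f = lambda z: 5.0 * z[0] + 0.5 * z[1]        # toy model: input 0 dominates
g0 = directional_variogram(f, 2, 0, 0.1)     # 0.5 * (5.0 * 0.1)**2 = 0.125
g1 = directional_variogram(f, 2, 1, 0.1)     # 0.5 * (0.5 * 0.1)**2 = 0.00125
```

Ranking inputs by γ(h) correctly flags input 0 as the important factor here; VARS builds on such variograms across a range of h to get a multi-scale global sensitivity metric.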
Padilla, Lauren; Winchell, Michael; Peranginangin, Natalia; Grant, Shanique
2017-11-01
Wheat crops and the major wheat-growing regions of the United States are not included in the 6 crop- and region-specific scenarios developed by the US Environmental Protection Agency (USEPA) for exposure modeling with the Pesticide Root Zone Model conceptualized for groundwater (PRZM-GW). The present work augments the current scenarios by defining suitably vulnerable PRZM-GW scenarios for high-producing spring and winter wheat-growing regions, for use in refined pesticide exposure assessments. Initial screening-level modeling was conducted for all wheat areas across the conterminous United States as defined by multiple years of the Cropland Data Layer land-use data set. Soil, weather, groundwater temperature, evaporation depth, and crop growth and management practices were characterized for each wheat area from publicly and nationally available data sets and converted to input parameters for PRZM. Approximately 150 000 unique combinations of weather, soil, and input parameters were simulated with PRZM for an herbicide applied for postemergence weed control in wheat. The resulting postbreakthrough average herbicide concentrations in a theoretical shallow aquifer were ranked to identify states with the largest regions of relatively vulnerable wheat areas. For these states, input parameters resulting in near-90th-percentile postbreakthrough average concentrations corresponding to significant wheat areas with shallow depth to groundwater formed the basis for 4 new spring wheat scenarios and 4 new winter wheat scenarios to be used in PRZM-GW simulations. Spring wheat scenarios were identified in North Dakota, Montana, Washington, and Texas. Winter wheat scenarios were identified in Oklahoma, Texas, Kansas, and Colorado.
Compared with the USEPA's original 6 scenarios, postbreakthrough average herbicide concentrations in the new scenarios were lower than in all but the Florida Potato and Georgia Coastal Peanuts scenarios, and the new scenarios better represented regions dominated by wheat crops. Integr Environ Assess Manag 2017;13:992-1006. © 2017 The Authors. Integrated Environmental Assessment and Management published by Wiley Periodicals, Inc. on behalf of Society of Environmental Toxicology & Chemistry (SETAC).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Groen, E.A.; Heijungs, R. (Leiden University, Einsteinweg 2, Leiden 2333 CC)
Life cycle assessment (LCA) is an established tool to quantify the environmental impact of a product. A good assessment of uncertainty is important for making well-informed decisions in comparative LCA, as well as for correctly prioritising data collection efforts. Under- or overestimation of output uncertainty (e.g. output variance) will lead to incorrect decisions in such matters. The presence of correlations between input parameters during uncertainty propagation can increase or decrease the output variance. However, most LCA studies that include uncertainty analysis ignore correlations between input parameters during uncertainty propagation, which may lead to incorrect conclusions. Two approaches to including correlations between input parameters during uncertainty propagation and global sensitivity analysis were studied: an analytical approach and a sampling approach. The use of both approaches is illustrated for an artificial case study of electricity production. Results demonstrate that both approaches yield approximately the same output variance and sensitivity indices for this specific case study. Furthermore, we demonstrate that the analytical approach can be used to quantify the risk of ignoring correlations between input parameters during uncertainty propagation in LCA. We demonstrate that: (1) we can predict whether including correlations among input parameters in uncertainty propagation will increase or decrease output variance; and (2) we can quantify the effect of ignoring correlations on the output variance and the global sensitivity indices. Moreover, this procedure requires only little data. Highlights: • Ignoring correlation leads to under- or overestimation of the output variance. • We demonstrate that the risk of ignoring correlation can be quantified. • The procedure proposed is generally applicable in life cycle assessment. • In some cases, ignoring correlation has a minimal effect on decision-making tools.
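The analytical approach contrasted with sampling in this abstract is, in essence, first-order (Taylor) error propagation with a full input covariance matrix. A hypothetical two-parameter Python sketch follows; the product model and all numbers are invented for illustration.

```python
import numpy as np

mu = np.array([2.0, 3.0])                # mean input values
sd = np.array([0.1, 0.2])                # input standard deviations
rho = 0.8                                # correlation between the two inputs
cov = np.array([[sd[0]**2,            rho * sd[0] * sd[1]],
                [rho * sd[0] * sd[1], sd[1]**2]])

f = lambda a, b: a * b                   # toy output model
grad = np.array([mu[1], mu[0]])          # df/da = b, df/db = a, at the mean

# Analytical (first-order) output variance, with and without correlation.
var_analytic = grad @ cov @ grad
var_nocorr = np.sum(grad**2 * sd**2)

# Sampling approach: Monte Carlo with correlated draws.
rng = np.random.default_rng(3)
s = rng.multivariate_normal(mu, cov, 200_000)
var_sampled = f(s[:, 0], s[:, 1]).var()
```

For this case the analytical and sampled variances agree (about 0.44), while dropping the positive correlation underestimates the variance at 0.25 — a numerical illustration of the first highlight above.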
Using global sensitivity analysis of demographic models for ecological impact assessment.
Aiello-Lammens, Matthew E; Akçakaya, H Resit
2017-02-01
Population viability analysis (PVA) is widely used to assess population-level impacts of environmental changes on species. When combined with sensitivity analysis, PVA yields insights into the effects of parameter and model structure uncertainty. This helps researchers prioritize efforts for further data collection so that model improvements are efficient and helps managers prioritize conservation and management actions. Usually, sensitivity is analyzed by varying one input parameter at a time and observing the influence that variation has over model outcomes. This approach does not account for interactions among parameters. Global sensitivity analysis (GSA) overcomes this limitation by varying several model inputs simultaneously. Then, regression techniques allow measuring the importance of input-parameter uncertainties. In many conservation applications, the goal of demographic modeling is to assess how different scenarios of impact or management cause changes in a population. This is challenging because the uncertainty of input-parameter values can be confounded with the effect of impacts and management actions. We developed a GSA method that separates model outcome uncertainty resulting from parameter uncertainty from that resulting from projected ecological impacts or simulated management actions, effectively separating the 2 main questions that sensitivity analysis asks. We applied this method to assess the effects of predicted sea-level rise on Snowy Plover (Charadrius nivosus). A relatively small number of replicate models (approximately 100) resulted in consistent measures of variable importance when not trying to separate the effects of ecological impacts from parameter uncertainty. However, many more replicate models (approximately 500) were required to separate these effects. These differences are important to consider when using demographic models to estimate ecological impacts of management actions. © 2016 Society for Conservation Biology.
USDA-ARS's Scientific Manuscript database
Irradiance, CO2, and temperature are critical inputs for photosynthesis and crop growth. They are also environmental parameters which growers can control in protected horticulture production systems. We evaluated the photosynthetic response of 13 herbaceous ornamentals (Begonia × hiemalis, Begonia...
James, Kevin R; Dowling, David R
2008-09-01
In underwater acoustics, the accuracy of computational field predictions is commonly limited by uncertainty in environmental parameters. An approximate technique for determining the probability density function (PDF) of computed field amplitude, A, from known environmental uncertainties is presented here. The technique can be applied to several (N) uncertain parameters simultaneously, requires N+1 field calculations, and can be used with any acoustic field model. The technique implicitly assumes independent input parameters and is based on finding the optimum spatial shift between field calculations completed at two different values of each uncertain parameter. This shift information is used to convert uncertain-environmental-parameter distributions into PDF(A). The technique's accuracy is good when the shifted fields match well. Its accuracy is evaluated in range-independent underwater sound channels via an L1 error-norm defined between approximate and numerically converged results for PDF(A). In 50-m- and 100-m-deep sound channels with 0.5% uncertainty in depth (N=1) at frequencies between 100 and 800 Hz, and for ranges from 1 to 8 km, 95% of the approximate field-amplitude distributions generated L1 values less than 0.52 using only two field calculations. Obtaining comparable accuracy from traditional methods requires of order 10 field calculations, and up to 10^N when N>1.
Langlois, C; Simon, L; Lécuyer, Ch
2003-12-01
A time-dependent box model is developed to calculate oxygen isotope compositions of bone phosphate as a function of environmental and physiological parameters. Input and output oxygen fluxes related to the body water and bone reservoirs are scaled to body mass. The oxygen fluxes are evaluated by stoichiometric scaling to the calcium accretion and resorption rates, assuming a pure hydroxylapatite composition for the bone and tooth mineral. The model shows how diet composition, body mass, ambient relative humidity and temperature may control the oxygen isotope composition of bone phosphate. The model also computes how bones and teeth record short-term variations in relative humidity, air temperature and δ18O of drinking water, depending on body mass. The documented diversity of oxygen isotope fractionation equations for vertebrates is accounted for by our model when, for each specimen, the physiological and diet parameters are adjusted within the living range of environmental conditions.
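A one-reservoir version of such a box model can be sketched as a single ordinary differential equation integrated by forward Euler. In this Python sketch the fluxes, reservoir size and the absence of isotopic fractionation on the output flux are all simplifying assumptions for illustration, not values from the paper.

```python
import numpy as np

# One-box body-water balance: d(delta)/dt = F * (delta_in - delta) / M,
# i.e. input flux carries drinking-water composition, output flux carries
# body-water composition (no fractionation in this toy version).
M = 40.0           # body-water oxygen reservoir (mol O, toy value)
F = 2.0            # input flux = output flux at steady state (mol O / day)
delta_in = -6.0    # drinking-water delta18O (permil, toy value)
delta = 0.0        # initial body-water composition (permil)
dt = 0.1           # time step (days)

trajectory = []
for _ in range(2000):                    # 200 days of forward Euler
    delta += dt * F * (delta_in - delta) / M
    trajectory.append(delta)
```

Body water relaxes toward the drinking-water value with time constant M/F (20 days here); since M scales with body mass, larger animals damp short-term environmental variations more strongly — the body-mass dependence the abstract highlights.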
Multi-criteria evaluation of wastewater treatment plant control strategies under uncertainty.
Flores-Alsina, Xavier; Rodríguez-Roda, Ignasi; Sin, Gürkan; Gernaey, Krist V
2008-11-01
The evaluation of activated sludge control strategies in wastewater treatment plants (WWTPs) via mathematical modelling is a complex activity because several objectives (economic, environmental, technical, and legal) must be taken into account at the same time, i.e. the evaluation of the alternatives is a multi-criteria problem. Activated sludge models are not well characterized, and some of their parameters can present uncertainty, e.g. the influent fractions arriving at the facility and the effect of temperature or toxic compounds on the kinetic parameters; this uncertainty strongly influences the model predictions used during the evaluation of the alternatives and affects the resulting rank of preferences. Using a simplified version of the IWA Benchmark Simulation Model No. 2 as a case study, this article shows how decision making changes when the uncertainty in activated sludge model (ASM) parameters is either included or excluded during the evaluation of WWTP control strategies. The paper comprises two main sections. First, six WWTP control strategies are evaluated using multi-criteria decision analysis with the ASM parameters set at their default values. In the second section, input uncertainty is introduced, characterized by probability distribution functions based on the available process knowledge. Monte Carlo simulations are then run to propagate the input uncertainty through the model into the different outcomes. Thus (i) the variation in the overall degree of satisfaction of the control objectives for the generated WWTP control strategies is quantified, (ii) the contributions of environmental, legal, technical, and economic objectives to the resulting variance are identified, and (iii) the influence of the relative importance of the control objectives on the selection of alternatives is analyzed.
The results show that the control strategies with an external carbon source reduce the output uncertainty in the criteria used to quantify the degree of satisfaction of environmental, technical, and legal objectives, but increase economic costs and their variability as a trade-off. It is also shown how a preliminarily selected alternative with a cascade ammonium controller becomes less desirable when input uncertainty is included, while simpler alternatives gain a greater chance of success.
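The Monte Carlo propagation step described above can be sketched with a toy model. Everything here (the two uncertain inputs, their distributions, the plant model, and the strategy "gains") is a hypothetical stand-in, not the BSM2 benchmark or its ASM parameters.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Hypothetical uncertain inputs in ASM style (distributions are illustrative).
influent_fraction = rng.uniform(0.10, 0.30, n)  # readily biodegradable COD fraction
temp_factor = rng.normal(1.00, 0.08, n)         # temperature effect on kinetics

def evaluate(strategy_gain):
    """Toy plant model: returns (effluent quality index, operating cost).
    Lower quality index is better; cost and its noise grow with control effort."""
    quality = influent_fraction / (strategy_gain * temp_factor)
    cost = 1.0 + 0.8 * strategy_gain + rng.normal(0.0, 0.02 * strategy_gain, n)
    return quality, cost

# Propagate the same input samples through two control strategies and compare
# the spread (uncertainty) of each criterion, not just its mean.
for name, gain in [("baseline", 1.0), ("external carbon source", 1.5)]:
    quality, cost = evaluate(gain)
    print(f"{name}: quality {quality.mean():.3f} +/- {quality.std():.3f}, "
          f"cost {cost.mean():.2f} +/- {cost.std():.2f}")
```

In this sketch the stronger strategy shrinks the spread of the quality criterion while raising mean cost and its variability, mirroring the trade-off reported in the abstract.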
1982-11-01
band, due to higher molecular absorption by water vapor in the 8-12 um band. On the other hand, aerosol extinction may affect the shorter wavelengths ... precipitation, and aerosol growth. While relative humidity is a LOWTRAN 5 model input, single-parameter sensitivity analysis indicates that this fact alone does ...
Khanali, Majid; Mobli, Hossein; Hosseinzadeh-Bandbafha, Homa
2017-12-01
In this study, an artificial neural network (ANN) model was developed for predicting the yield and life cycle environmental impacts of tea processing based on the energy inputs required for processing black tea, green tea, and oolong tea in Guilan province, Iran. A life cycle assessment (LCA) approach was used to investigate the environmental impact categories of processed tea on a cradle-to-gate basis, i.e., from the production of input materials to the gate of the tea processing units (packaged tea). Thus, all the tea processing operations, such as withering, rolling, fermentation, drying, and packaging, were considered in the analysis. The initial data were obtained from tea processing units, while the required data about the background system were extracted from the EcoInvent 2.2 database. LCA results indicated that the diesel fuel and corrugated paper boxes used in the drying and packaging operations, respectively, were the main hotspots. The black tea processing unit caused the highest pollution among the three processing units. Three feed-forward back-propagation ANN models, based on the Levenberg-Marquardt training algorithm with two hidden layers with sigmoid activation functions and a linear transfer function in the output layer, were applied to the three types of processed tea. The neural networks were developed from the energy equivalents of eight input parameters (fresh tea leaves, human labor, diesel fuel, electricity, adhesive, carton, corrugated paper box, and transportation) and 11 output parameters (yield, global warming, abiotic depletion, acidification, eutrophication, ozone layer depletion, human toxicity, freshwater aquatic ecotoxicity, marine aquatic ecotoxicity, terrestrial ecotoxicity, and photochemical oxidation). The results showed that the developed ANN models, with R2 values in the range of 0.878 to 0.990, had excellent performance in predicting all the output variables from the inputs.
Energy consumption for processing of green tea, oolong tea, and black tea were calculated as 58,182, 60,947, and 66,301 MJ per ton of dry tea, respectively.
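A network with the layout named in the abstract (8 inputs, two sigmoid hidden layers, linear output layer, 11 outputs) can be sketched as follows. The data are synthetic, and training uses plain batch gradient descent rather than the Levenberg-Marquardt algorithm the study used, purely to keep the example short and dependency-free.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the study's setup: 8 energy-equivalent inputs -> 11 outputs
# (yield + 10 impact categories). Targets are synthetic, not farm records.
X = rng.uniform(0.0, 1.0, (200, 8))
Y = np.tanh(X @ rng.normal(0.0, 1.0, (8, 11)))

# Two sigmoid hidden layers and a linear output layer.
sizes = [8, 16, 16, 11]
Ws = [rng.normal(0.0, 0.5, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros(b) for b in sizes[1:]]
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h1 = sigmoid(x @ Ws[0] + bs[0])
    h2 = sigmoid(h1 @ Ws[1] + bs[1])
    return h1, h2, h2 @ Ws[2] + bs[2]   # linear output layer

mse0 = np.mean((forward(X)[2] - Y) ** 2)

# Batch gradient descent on mean squared error (a simpler substitute for
# Levenberg-Marquardt, used here only for brevity).
lr = 0.02
for _ in range(3000):
    h1, h2, out = forward(X)
    d_out = (out - Y) / len(X)
    d_h2 = (d_out @ Ws[2].T) * h2 * (1.0 - h2)
    d_h1 = (d_h2 @ Ws[1].T) * h1 * (1.0 - h1)
    Ws[2] -= lr * h2.T @ d_out; bs[2] -= lr * d_out.sum(0)
    Ws[1] -= lr * h1.T @ d_h2;  bs[1] -= lr * d_h2.sum(0)
    Ws[0] -= lr * X.T @ d_h1;   bs[0] -= lr * d_h1.sum(0)

mse = np.mean((forward(X)[2] - Y) ** 2)  # training error, well below mse0
```

The hidden-layer widths (16, 16) are an assumption; the abstract specifies only the number of layers and the activation functions.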
A thermal vacuum test optimization procedure
NASA Technical Reports Server (NTRS)
Kruger, R.; Norris, H. P.
1979-01-01
An analytical model was developed that can be used to establish certain parameters of a thermal vacuum environmental test program based on an optimization of program costs. The model takes the form of a computer program that interacts with the user for the input of certain parameters. The program provides the user a list of pertinent information regarding an optimized test program and graphs of some of the parameters. The model is a first attempt in this area and includes numerous simplifications. It appears useful as a general guide and provides a way of extrapolating past performance to future missions.
Development and weighting of a life cycle assessment screening model
NASA Astrophysics Data System (ADS)
Bates, Wayne E.; O'Shaughnessy, James; Johnson, Sharon A.; Sisson, Richard
2004-02-01
Nearly all life cycle assessment tools available today are high priced, comprehensive and quantitative models requiring a significant amount of data collection and data input. In addition, most of the available software packages require a great deal of training time to learn how to operate the model software. Even after this time investment, results are not guaranteed because of the number of estimations and assumptions often necessary to run the model. As a result, product development, design teams and environmental specialists need a simplified tool that will allow for the qualitative evaluation and "screening" of various design options. This paper presents the development and design of a generic, qualitative life cycle screening model and demonstrates its applicability and ease of use. The model uses qualitative environmental, health and safety factors, based on site or product-specific issues, to sensitize the overall results for a given set of conditions. The paper also evaluates the impact of different population input ranking values on model output. The final analysis is based on site or product-specific variables. The user can then evaluate various design changes and the apparent impact or improvement on the environment, health and safety, compliance cost and overall corporate liability. Major input parameters can be varied, and factors such as materials use, pollution prevention, waste minimization, worker safety, product life, environmental impacts, return of investment, and recycle are evaluated. The flexibility of the model format will be discussed in order to demonstrate the applicability and usefulness within nearly any industry sector. Finally, an example using audience input value scores will be compared to other population input results.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-25
AGENCY: Nuclear Regulatory Commission. ACTION: Notice of availability. The licensee submitted information to the NRC to demonstrate that the Building 1103A Area meets the criteria in Subpart E of 10 CFR... The licensee conducted site-specific dose modeling using input parameters specific to the Building 1103A area... FOR FURTHER INFORMATION CONTACT...
Water Energy Simulation Toolset
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, Thuy; Jeffers, Robert
The Water-Energy Simulation Toolset (WEST) is an interactive simulation model that helps visualize the impacts of different stakeholders on the water quantity and quality of a watershed. The case study applies the model to the Snake River Basin, given the fictional name Cutthroat River Basin. Four groups of stakeholders are of interest: hydropower, agriculture, flood control, and environmental protection. Currently, the water quality component depicts nitrate-nitrogen contamination. Users can easily interact with the model by changing certain inputs (climate change, fertilizer inputs, etc.) to observe changes across the entire system, and can also change certain parameters to test their management policies.
Pérez-López, Paula; Montazeri, Mahdokht; Feijoo, Gumersindo; Moreira, María Teresa; Eckelman, Matthew J
2018-06-01
The economic and environmental performance of microalgal processes has been widely analyzed in recent years. However, few studies propose an integrated process-based approach to evaluate economic and environmental indicators simultaneously. Biodiesel is usually the single product and the effect of environmental benefits of co-products obtained in the process is rarely discussed. In addition, there is wide variation of the results due to inherent variability of some parameters as well as different assumptions in the models and limited knowledge about the processes. In this study, two standardized models were combined to provide an integrated simulation tool allowing the simultaneous estimation of economic and environmental indicators from a unique set of input parameters. First, a harmonized scenario was assessed to validate the joint environmental and techno-economic model. The findings were consistent with previous assessments. In a second stage, a Monte Carlo simulation was applied to evaluate the influence of variable and uncertain parameters in the model output, as well as the correlations between the different outputs. The simulation showed a high probability of achieving favorable environmental performance for the evaluated categories and a minimum selling price ranging from $11 to $106 per gallon. Greenhouse gas emissions and minimum selling price were found to have the strongest positive linear relationship, whereas eutrophication showed weak correlations with the other indicators (namely greenhouse gas emissions, cumulative energy demand and minimum selling price). Process parameters (especially biomass productivity and lipid content) were the main source of variation, whereas uncertainties linked to the characterization methods and economic parameters had limited effect on the results. Copyright © 2018 Elsevier B.V. All rights reserved.
Predicted carbonation of existing concrete building based on the Indonesian tropical micro-climate
NASA Astrophysics Data System (ADS)
Hilmy, M.; Prabowo, H.
2018-03-01
This paper aims to predict carbonation progress based on a previously published mathematical model. It briefly explains the nature of carbonation, including its processes and effects. The environmental humidity and temperature of an existing concrete building are measured and compared to data from the local Meteorological, Climatological, and Geophysical Agency. The data obtained are expressed as annual hygrothermal values, which are used as input parameters in the carbonation model. The physical properties of the observed building, such as its location, dimensions, and structural materials, are quantified. These data are then used as important input parameters for the carbonation coefficients. The relationships between relative humidity and the rate of carbonation are established. The results can provide a basis for the repair and maintenance of existing concrete buildings and for their service-life analysis.
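The abstract does not give the carbonation model itself; the standard square-root-of-time form x = K * sqrt(t), with the coefficient K absorbing the hygrothermal and material inputs, can be sketched as follows. The coefficient value below is hypothetical, not taken from the paper.

```python
import math

def carbonation_depth_mm(k_mm_per_sqrt_year: float, years: float) -> float:
    """Square-root-of-time carbonation model: x = K * sqrt(t)."""
    return k_mm_per_sqrt_year * math.sqrt(years)

def years_to_reach(cover_mm: float, k_mm_per_sqrt_year: float) -> float:
    """Time for the carbonation front to reach the reinforcement cover,
    i.e. the service-life criterion: t = (cover / K)^2."""
    return (cover_mm / k_mm_per_sqrt_year) ** 2

# Hypothetical coefficient for a humid tropical exposure (illustrative value).
k = 4.0                                    # mm per sqrt(year)
depth_25y = carbonation_depth_mm(k, 25.0)  # -> 20.0 mm after 25 years
service_life = years_to_reach(30.0, k)     # -> 56.25 years for 30 mm cover
```

In practice K is fitted from measured carbonation depths or estimated from humidity, CO2 exposure, and concrete quality, which is where the hygrothermal inputs described in the abstract enter.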
Mapping the Risks of Malaria, Dengue and Influenza Using Satellite Data
NASA Astrophysics Data System (ADS)
Kiang, R. K.; Soebiyanto, R. P.
2012-07-01
It has long been recognized that environment and climate may affect the transmission of infectious diseases. The effects are most obvious for vector-borne diseases, such as malaria and dengue, but less so for airborne and contact diseases, such as seasonal influenza. In this paper, we examine the meteorological and environmental parameters that influence the transmission of malaria, dengue, and seasonal influenza, and discuss the remotely sensed data products that provide these parameters. Both statistical and biologically inspired, process-based models can be used to model the transmission of these diseases using the remotely sensed parameters as input. Examples are given for modelling malaria in Thailand, dengue in Indonesia, and seasonal influenza in Hong Kong.
NASA Astrophysics Data System (ADS)
Wang, Guodong; Dong, Shuanglin; Tian, Xiangli; Gao, Qinfeng; Wang, Fang
2015-06-01
Emergy analysis is effective for analyzing ecological economic systems. However, the accuracy of the approach is affected by the diversity of economic levels and of meteorological and hydrological parameters in different regions. The present study evaluated the economic benefits, environmental impact, and sustainability of indoor, semi-intensive, and extensive farming systems for sea cucumber (Apostichopus japonicus) in the same region. The results showed that the indoor farming system was high in input and output (yield), whereas the extensive pond farming system was low in both. The output/input ratio of the indoor farming system was lower than that of the extensive pond system, with the semi-intensive system falling in between on both measures. The environmental loading ratio of the extensive farming system was lower than that of the indoor farming system. In addition, the emergy yield ratio, emergy exchange ratio, emergy sustainability index, and emergy index for sustainable development were higher in the extensive farming system than in the indoor farming system. These results indicate that the extensive farming system exerts less negative influence on the environment, makes more efficient use of available resources, and better meets sustainable development requirements than the indoor farming system. A. japonicus farming systems showed more emergy benefits than fish farming systems. The pond farming systems of A. japonicus exploited more free local environmental resources for production, caused less potential pressure on the local environment, and achieved higher sustainability than the indoor farming system.
Atmospheric radiance interpolation for the modeling of hyperspectral data
NASA Astrophysics Data System (ADS)
Fuehrer, Perry; Healey, Glenn; Rauch, Brian; Slater, David; Ratkowski, Anthony
2008-04-01
The calibration of data from hyperspectral sensors to spectral radiance enables the use of physical models to predict measured spectra. Since environmental conditions are often unknown, material detection algorithms have emerged that utilize predicted spectra over ranges of environmental conditions. The predicted spectra are typically generated by a radiative transfer (RT) code such as MODTRAN. Such techniques require the specification of a set of environmental conditions. This is particularly challenging in the LWIR, for which temperature and atmospheric constituent profiles are required as inputs to the RT codes. We have developed an automated method for generating environmental conditions to obtain a desired sampling of spectra in the sensor radiance domain. Our method eliminates the problems that otherwise arise when model conditions are specified by a uniform sampling of environmental parameters, since sensor radiance spectra depend nonlinearly on those parameters. It uses an initial set of radiance vectors concatenated over a set of conditions to define the mapping from environmental conditions to sensor spectral radiance. This approach enables a given number of model conditions to span the space of desired radiance spectra and improves both the accuracy and efficiency of detection algorithms that rely on predicted spectra.
Environmental Impact of Buildings--What Matters?
Heeren, Niko; Mutel, Christopher L; Steubing, Bernhard; Ostermeyer, York; Wallbaum, Holger; Hellweg, Stefanie
2015-08-18
The goal of this study was to identify drivers of environmental impact and quantify their influence on the environmental performance of wooden and massive residential and office buildings. We performed a life cycle assessment and used thermal simulation to quantify operational energy demand and to account for differences in the thermal inertia of the building mass. Twenty-eight input parameters, affecting operation, design, material, and exogenic building properties, were sampled in a Monte Carlo analysis. To determine sensitivity, we calculated the correlation between each parameter and the resulting life cycle inventory and impact assessment scores. Parameters affecting operational energy demand and energy conversion are the most influential for the building's total environmental performance. For climate change, electricity mix, ventilation rate, heating system, and construction material rank the highest. Thermal inertia results in an average 2-6% difference in heat demand. The nonrenewable cumulative energy demand of wooden buildings is 18% lower compared to a massive variant; total cumulative energy demand is comparable. The median climate change impact is 25% lower when end-of-life material credits are included and 22% lower when credits are excluded. The findings are valid for small offices and residential buildings in Switzerland and regions with similar building culture, construction material production, and climate.
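The correlation-based sensitivity step described above (correlate each Monte Carlo input sample with the resulting impact score) can be sketched with a toy three-parameter building model. The input names, ranges, and the impact model itself are illustrative assumptions, not the study's 28 sampled parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10000

# Hypothetical building inputs (names and ranges are illustrative).
ventilation_rate = rng.uniform(0.3, 1.2, n)    # air changes per hour
envelope_u_value = rng.uniform(0.1, 0.4, n)    # W/(m2 K)
elec_co2_factor = rng.uniform(0.05, 0.60, n)   # kg CO2e per kWh

# Toy impact model: heat demand drives electricity use, which drives CO2.
heat_demand = 40.0 * envelope_u_value + 15.0 * ventilation_rate + rng.normal(0, 1, n)
climate_impact = heat_demand * elec_co2_factor

# Sensitivity = absolute correlation between each sampled input and the score.
inputs = {"ventilation_rate": ventilation_rate,
          "envelope_u_value": envelope_u_value,
          "elec_co2_factor": elec_co2_factor}
sensitivity = {k: abs(np.corrcoef(v, climate_impact)[0, 1])
               for k, v in inputs.items()}
ranking = sorted(sensitivity, key=sensitivity.get, reverse=True)
```

In this toy setup the energy-conversion parameter dominates the ranking, echoing the abstract's finding that the electricity mix ranks highest for climate change.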
A model of objective weighting for EIA.
Ying, L G; Liu, Y C
1995-06-01
In spite of progress in research on environmental impact assessment (EIA), the problem of distributing weights over a set of parameters has not yet been properly solved. This paper presents an approach to objective weighting using a Pij principal component-factor analysis (Pij PCFA) procedure, which specifically suits parameters measured directly on physical scales. The Pij PCFA weighting procedure reforms conventional weighting practice in two respects: first, expert subjective judgment is replaced by the standardized measure Pij as the original input to weight processing and, second, principal component-factor analysis is introduced to assess the respective contributions of the environmental parameters to the totality of the regional ecosystem. Not only is Pij PCFA weighting logically sound, it also suits practically all levels of professional routine in natural environmental assessment and impact analysis. Having demonstrated its objectivity and accuracy in an EIA case study of Chuansha County in Shanghai, China, the Pij PCFA weighting procedure has the potential to be applied in other geographical fields that require assigning weights to parameters measured on physical scales.
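The abstract does not give the Pij PCFA formulas. One common objective-weighting variant built from principal components of the correlation matrix can be sketched as follows; the data matrix, the eigenvalue-greater-than-one retention rule, and the parameter labels are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical monitoring matrix: 30 sites x 4 physically measured parameters
# (e.g. SO2, dust, noise, a water-quality index -- labels are illustrative).
X = rng.normal(0.0, 1.0, (30, 4)) @ rng.normal(0.0, 1.0, (4, 4))

# Standardize, then diagonalize the correlation matrix.
Z = (X - X.mean(0)) / X.std(0)
eigval, eigvec = np.linalg.eigh(np.corrcoef(Z.T))
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

# Objective weights: keep components with eigenvalue > 1, credit each
# parameter with its squared loadings scaled by explained variance, and
# normalize so the weights sum to one.
keep = eigval > 1.0
contrib = (eigvec[:, keep] ** 2) * eigval[keep]
weights = contrib.sum(axis=1) / contrib.sum()
```

The retention rule matters: using all components of a correlation matrix would return uniform weights (each diagonal entry is 1), so schemes of this kind weight only the dominant components.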
2012-08-01
Calculation of the erosion rate is based on the United States Department of Agriculture (USDA) Universal Soil Loss Equation (USLE). In addition to specifying the USLE input parameters, the user must select which method to use for computing the soil loss (i.e., "SDR" or "Without SDR").
Uncertainty quantification in Rothermel's Model using an efficient sampling method
Edwin Jimenez; M. Yousuff Hussaini; Scott L. Goodrick
2007-01-01
The purpose of the present work is to quantify parametric uncertainty in Rothermel's wildland fire spread model (implemented in software such as BehavePlus3 and FARSITE), which is undoubtedly among the most widely used fire spread models in the United States. This model consists of a nonlinear system of equations that relates environmental variables (input parameter...
NASA Technical Reports Server (NTRS)
Khorram, S.; Smith, H. G.
1979-01-01
A remote sensing-aided procedure was applied to the watershed-wide estimation of water loss to the atmosphere (evapotranspiration, ET). The approach involved a spatially referenced databank based on both remotely sensed and ground-acquired information. Physical models for both estimation of ET and quantification of input parameters are specified, and results of the investigation are outlined.
Prioritizing Risks and Uncertainties from Intentional Release of Selected Category A Pathogens
Hong, Tao; Gurian, Patrick L.; Huang, Yin; Haas, Charles N.
2012-01-01
This paper synthesizes available information on five Category A pathogens (Bacillus anthracis, Yersinia pestis, Francisella tularensis, Variola major and Lassa) to develop quantitative guidelines for how environmental pathogen concentrations may be related to human health risk in an indoor environment. An integrated model of environmental transport and human health exposure to biological pathogens is constructed which 1) includes the effects of environmental attenuation, 2) considers fomite contact exposure as well as inhalational exposure, and 3) includes an uncertainty analysis to identify key input uncertainties, which may inform future research directions. The findings provide a framework for developing the many different environmental standards that are needed for making risk-informed response decisions, such as when prophylactic antibiotics should be distributed, and whether or not a contaminated area should be cleaned up. The approach is based on the assumption of uniform mixing in environmental compartments and is thus applicable to areas sufficiently removed in time and space from the initial release that mixing has produced relatively uniform concentrations. Results indicate that when pathogens are released into the air, risk from inhalation is the main component of the overall risk, while risk from ingestion (dermal contact for B. anthracis) is the main component when pathogens are present on surfaces. Concentrations sampled from untracked floor, walls, and the filter of the heating, ventilation, and air conditioning (HVAC) system are proposed as indicators of previous exposure risk, while samples taken from touched surfaces are proposed as indicators of future risk if the building is reoccupied. A Monte Carlo uncertainty analysis is conducted, and input-output correlations are used to identify important parameter uncertainties.
An approach is proposed for integrating these quantitative assessments of parameter uncertainty with broader, qualitative considerations to identify future research priorities. PMID:22412915
Developing Dynamic Field Theory Architectures for Embodied Cognitive Systems with cedar.
Lomp, Oliver; Richter, Mathis; Zibner, Stephan K U; Schöner, Gregor
2016-01-01
Embodied artificial cognitive systems, such as autonomous robots or intelligent observers, connect cognitive processes to sensory and effector systems in real time. Prime candidates for such embodied intelligence are neurally inspired architectures. While components such as forward neural networks are well established, designing pervasively autonomous neural architectures remains a challenge. This includes the problem of tuning the parameters of such architectures so that they deliver specified functionality under variable environmental conditions and retain these functions as the architectures are expanded. The scaling and autonomy problems are solved, in part, by dynamic field theory (DFT), a theoretical framework for the neural grounding of sensorimotor and cognitive processes. In this paper, we address how to efficiently build DFT architectures that control embodied agents and how to tune their parameters so that the desired cognitive functions emerge while such agents are situated in real environments. In DFT architectures, dynamic neural fields or nodes are assigned dynamic regimes, that is, attractor states and their instabilities, from which cognitive function emerges. Tuning thus amounts to determining values of the dynamic parameters for which the components of a DFT architecture are in the specified dynamic regime under the appropriate environmental conditions. The process of tuning is facilitated by the software framework cedar, which provides a graphical interface to build and execute DFT architectures. It enables users to change dynamic parameters online and to visualize the activation states of any component while the agent is receiving sensory inputs in real time. Using a simple example, we take the reader through the workflow of conceiving of DFT architectures, implementing them on embodied agents, tuning their parameters, and assessing performance while the system is coupled to real sensory inputs.
Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation
NASA Astrophysics Data System (ADS)
Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten
2015-04-01
Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. There is, however, little guidance available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
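The bootstrap convergence check described above can be sketched with a toy model: resampling the existing model runs yields confidence bounds on the sensitivity indices and a ranking-stability test without any additional model evaluations. The model and the correlation-based index below are illustrative stand-ins for the hydrological models and the sensitivity analysis methods named in the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)

def model(x):
    # Toy 3-parameter model standing in for a hydrological model: x0 is highly
    # influential, x1 moderately influential, x2 nearly insensitive.
    return 4.0 * x[:, 0] + 1.0 * x[:, 1] + 0.1 * x[:, 2] ** 2

def sensitivity(X, y):
    # Simple correlation-based indices (a stand-in for EET/RSA/variance-based).
    return np.array([abs(np.corrcoef(X[:, i], y)[0, 1]) for i in range(X.shape[1])])

n = 2000
X = rng.uniform(0.0, 1.0, (n, 3))
y = model(X)
S = sensitivity(X, y)

# Bootstrap the *same* model runs: confidence bounds on each index and a check
# of whether the parameter ranking is already stable at this sample size.
B = 500
boot = np.empty((B, 3))
for b in range(B):
    idx = rng.integers(0, n, n)
    boot[b] = sensitivity(X[idx], y[idx])

ci_width = np.percentile(boot, 97.5, axis=0) - np.percentile(boot, 2.5, axis=0)
rankings = np.argsort(-boot, axis=1)
ranking_converged = bool((rankings == rankings[0]).all())
```

Wide confidence intervals or unstable rankings across resamples signal that more model runs are needed; stable rankings with still-wide index intervals illustrate the abstract's point that ranking can converge before the index values do.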
Effects of control inputs on the estimation of stability and control parameters of a light airplane
NASA Technical Reports Server (NTRS)
Cannaday, R. L.; Suit, W. T.
1977-01-01
The maximum likelihood parameter estimation technique was used to determine the values of stability and control derivatives from flight test data for a low-wing, single-engine, light airplane. Several input forms were used during the tests to investigate the consistency of parameter estimates as it relates to inputs. These consistencies were compared by using the ensemble variance and the estimated Cramer-Rao lower bound. In addition, the relationship between inputs and parameter correlations was investigated. Results from the stabilator inputs are inconclusive, but the sequence of rudder input followed by aileron input, or aileron followed by rudder, gave more consistent estimates than rudder or aileron inputs individually. Also, square-wave inputs appeared to provide slightly improved consistency in the parameter estimates when compared to sine-wave inputs.
NASA Astrophysics Data System (ADS)
Kuo, L.; Louchouarn, P.; Herbert, B.
2008-12-01
Chars/charcoals are solid combustion residues derived from biomass burning. They represent one of the major classes of pyrogenic organic residues, the so-called black carbon (BC), and are highly heterogeneous owing to the highly variable combustion conditions during biomass burning. Increasing attention has been given to characterizing and quantifying the inputs of charcoals to different environmental compartments, since charcoals share the common features of BC, such as a recalcitrant nature and a strong sorption capacity for hydrophobic organic pollutants. Moreover, such inputs also imply the thermal alteration of terrestrial organic matter and of corresponding biomarkers such as lignin. Lignin is considered to be among the best-preserved components of vascular plants after deposition, owing to its relative stability against biodegradation. This macropolymer is an important contributor to soil organic matter (SOM), and its presence in aquatic environments helps trace the input of terrigenous organic matter to such systems. The yields and specific ratios of lignin oxidation products (LOPs) from the alkaline cupric oxide (CuO) oxidation method have been extensively used to identify the structure of plant lignin, estimate inputs of plant carbon to soils and aquatic systems, and evaluate the diagenetic status of lignin. Although the fate of lignin under microbiological and photochemical degradation pathways has been thoroughly addressed in the literature, studies assessing the impact of thermal degradation on lignin structure and signature are scarce. In the present study, we used three suites of laboratory-made chars (honey mesquite, cordgrass, and loblolly pine) to study the impact of combustion on lignin and on commonly used lignin parameters. Our results show that combustion can greatly decrease the yields of the eight major lignin phenols (vanillyl, syringyl, and cinnamyl phenols), with no lignin phenols detected in any synthetic char produced at ≥400°C.
With increasing combustion temperature, internal phenol ratios (S/V and C/V) show a two-stage change, with an initial increase at low temperatures followed by marked and rapid decreases when temperatures reach 200-250°C. The acid/aldehyde ratios of vanillyl phenols ((Ad/Al)v) and syringyl phenols ((Ad/Al)s) all increase with increasing combustion temperature and duration, reaching maximum values at 300-350°C regardless of plant species. The highly elevated acid/aldehyde ratios in some cases exceed the values reported for humic and fulvic acids extracted from soils and sediments. We applied these empirical data in mixing models to estimate the potential effects of charcoal inputs on the observed lignin signatures in environmental mixtures. The shifts in lignin signatures are strongly influenced both by the characteristics of the charcoal incorporated and by the proportion of charcoal in the mixture. We validated our observations with two sets of environmental samples, including soils from controlled-burning sites and a sediment core from a wetland with evidence of charcoal inputs, showing that the presence of charcoal does alter the observed lignin signals in these samples. Such thermal "interference" with lignin parameters should thus be considered in environmental mixtures with recognized char input.
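The two-end-member mixing calculation described above can be sketched as a simple mass balance. All end-member values below (phenol yields, (Ad/Al)v ratios, the char fraction) are illustrative placeholders, not the study's measured data:

```python
# Two-end-member mixing sketch: how charcoal input shifts bulk lignin
# parameters. End-member values are hypothetical, chosen only to mimic
# the direction of the reported shifts (depleted yield, elevated Ad/Al
# in char).

def mixed_signature(f_char, yield_fresh, yield_char, adal_fresh, adal_char):
    """Mass-balance mixture of lignin phenol yield and (Ad/Al)v ratio
    for a fresh-plant / char mixture.

    f_char: mass fraction of charcoal in the organic matter (0..1).
    The mixture's Ad/Al is weighted by each end member's contribution
    to the total phenol pool, not by mass alone.
    """
    total_yield = (1 - f_char) * yield_fresh + f_char * yield_char
    # weight Ad/Al by each end member's share of the phenol yield
    w_fresh = (1 - f_char) * yield_fresh / total_yield
    adal = w_fresh * adal_fresh + (1 - w_fresh) * adal_char
    return total_yield, adal

# Illustrative end members: fresh plant tissue vs. a low-temperature
# char with depleted yield but elevated acid/aldehyde ratio.
for f in (0.0, 0.25, 0.5):
    y, r = mixed_signature(f, yield_fresh=8.0, yield_char=1.0,
                           adal_fresh=0.2, adal_char=1.2)
    print(f"char fraction {f:.2f}: yield {y:.2f}, (Ad/Al)v {r:.2f}")
```

Even a modest char fraction drags the bulk yield down while pushing the apparent Ad/Al ratio toward a more "degraded" signature, which is the interference effect the abstract warns about.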
Zhu, Ying; Price, Oliver R; Tao, Shu; Jones, Kevin C; Sweetman, Andy J
2014-08-01
We present a new multimedia chemical fate model (SESAMe), developed to assess chemical fate and behaviour across China. We apply the model to quantify the influence of environmental parameters on chemical overall persistence (POV) and long-range transport potential (LRTP) in China, which has extreme diversity in environmental conditions. Sobol sensitivity analysis was used to identify the relative importance of input parameters. Physicochemical properties were identified as more influential than environmental parameters on model output. Interactive effects of environmental parameters on POV and LRTP occur mainly in combination with chemical properties. Hypothetical chemicals and emission data were used to model POV and LRTP for neutral and acidic chemicals with different KOW/DOW, vapour pressure and pKa under different precipitation, wind speed, temperature and soil organic carbon content (fOC). Generally, for POV, precipitation was more influential than the other environmental parameters, whilst temperature and wind speed did not contribute significantly to POV variation; for LRTP, wind speed was more influential than the other environmental parameters, whilst the effects of the others depended on specific chemical properties. fOC had a slight effect on POV and LRTP; higher fOC always increased POV and decreased LRTP. Example case studies were performed on real test chemicals using SESAMe to explore the spatial variability of model output and how environmental properties affect POV and LRTP. Dibenzofuran released to multiple media had higher POV in northwestern Xinjiang, parts of Gansu, northeastern Inner Mongolia, Heilongjiang and Jilin. Benzo[a]pyrene released to air had higher LRTP in southern Xinjiang and western Inner Mongolia, whilst acenaphthene had higher LRTP in Tibet and western Inner Mongolia. TCS (triclosan) released into water had higher LRTP in the Yellow River and Yangtze River catchments.
The initial case studies demonstrated that SESAMe performs well in comparing the POV and LRTP of chemicals in different regions across China, helping to identify the most sensitive regions. The model can be used not only to estimate POV and LRTP for chemical screening and risk assessment, but potentially also to help design chemical monitoring programmes across China in the future. Copyright © 2014 Elsevier Ltd. All rights reserved.
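The Sobol ranking step referred to above can be illustrated with a minimal Saltelli-style first-order index estimator. The toy "fate model" below is a stand-in with made-up coefficients, not the SESAMe equations:

```python
import numpy as np

# Minimal Saltelli-style estimator of first-order Sobol indices, used to
# rank the relative importance of uncertain inputs.

def sobol_first_order(model, n_params, n=20000, rng=None):
    rng = np.random.default_rng(rng)
    A = rng.uniform(size=(n, n_params))
    B = rng.uniform(size=(n, n_params))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    s1 = []
    for i in range(n_params):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # swap column i only
        s1.append(np.mean(fB * (model(ABi) - fA)) / var)
    return np.array(s1)

# Toy persistence "model": strongly driven by x0 (think precipitation),
# weakly by x1 (wind speed), not at all by x2.
toy = lambda X: 5.0 * X[:, 0] + 0.5 * X[:, 1] + 0.0 * X[:, 2]

S = sobol_first_order(toy, 3, rng=1)
print(S)   # first index dominates; the inert input scores ~0
```

For screening studies like this one, the indices directly answer "which inputs are worth refining": inputs with near-zero first-order indices can be fixed at nominal values without changing the output distribution much.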
USDA-ARS?s Scientific Manuscript database
The USEPA Office of Pesticide Programs (OPP) reviewed most of its human and ecological exposure assessment models for conventional pesticides to evaluate which inputs and parameters may be affected by changing climate conditions. To illustrate the approach used for considering potential effects of c...
Control and optimization system
Xinsheng, Lou
2013-02-12
A system for optimizing a power plant includes a chemical loop having an input for receiving an input parameter (270) and an output for outputting an output parameter (280), and a control system operably connected to the chemical loop and having a multiple controller part (230) comprising a model-free controller. The control system receives the output parameter (280), optimizes the input parameter (270) based on the received output parameter (280), and outputs an optimized input parameter (270) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.
System and method for motor parameter estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luhrs, Bin; Yan, Ting
2014-03-18
A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters for the electric motor and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.
Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1997-01-01
A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.
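The conventional 3-2-1-1 multistep input mentioned above, the baseline the optimal designs were compared against, is easy to sketch. The base interval, amplitude, sign convention, and sample rate below are illustrative choices, not the HARV flight-test values:

```python
import numpy as np

# Sketch of a conventional 3-2-1-1 multistep input for parameter
# estimation: four pulses with widths 3, 2, 1, 1 times a base interval
# dt, alternating sign each segment.

def multistep_3211(amplitude=1.0, dt=0.5, rate=50):
    """Return (t, u) for a 3-2-1-1 input sampled at `rate` Hz."""
    widths = [3, 2, 1, 1]            # pulse widths, in multiples of dt
    signs = [1, -1, 1, -1]           # alternating control deflection
    t_total = sum(widths) * dt
    t = np.arange(0.0, t_total, 1.0 / rate)
    u = np.zeros_like(t)
    edge = 0.0
    for w, s in zip(widths, signs):
        u[(t >= edge) & (t < edge + w * dt)] = s * amplitude
        edge += w * dt
    return t, u

t, u = multistep_3211(amplitude=2.0)
```

The pulse-width pattern spreads input energy over roughly a decade of frequencies, which is why 3-2-1-1 inputs are a common general-purpose choice before a model-based optimal design is available.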
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehman, A.T.; Khanna, R.
1996-11-01
Diverse environmental reviews and approvals are required by both Government and non-government organizations (NGOs) for licensing or permitting of major thermal power plants in Asia; specifically, India and Philippines. The number and type of approvals required for a specific project vary depending on site characteristics, fuel source, project-specific design and operating parameters as well as type of project financing. A model 400 MW coal-fired project located in Asia is presented to illustrate the various lender and host country environmental guidelines. A case study of the environmental reviews and approvals for Ogden Quezon Power, Inc. Project (Quezon Province, Republic of the Philippines) is also included. A list of acronyms is provided at the paper's end. As independent power project (IPP) developers seek financing for these capital-intensive infrastructure projects, a number of international finance/lending institutions are likely to become involved. Each lender considers different environmental aspects of a project. This paper compares relevant environmental requirements of various lenders which finance IPPs and their interest in a project's environmental review. Finally, the authors of this paper believe that the environmental review process can bring together many parties involved with IPP development, including local and central governments, non government organizations, various lenders (such as multilateral and export credit agencies) as well as project proponents. Environmental review provides input opportunity for interested and affected parties. Airing environmental issues in open forums such as public hearings or meetings helps ensure projects are not evaluated without public input.
NASA Technical Reports Server (NTRS)
Sarma, Garimella R.; Barranger, John P.
1992-01-01
The analysis and prototype results of a dual-amplifier circuit for measuring blade-tip clearance in turbine engines are presented. The capacitance between the blade tip and mounted capacitance electrode within a guard ring of a probe forms one of the feedback elements of an operational amplifier (op amp). The differential equation governing the circuit taking into consideration the nonideal features of the op amp was formulated and solved for two types of inputs (ramp and dc) that are of interest for the application. Under certain time-dependent constraints, it is shown that (1) with a ramp input the circuit has an output voltage proportional to the static tip clearance capacitance, and (2) with a dc input, the output is proportional to the derivative of the clearance capacitance, and subsequent integration recovers the dynamic capacitance. The technique accommodates long cable lengths and environmentally induced changes in cable and probe parameters. System implementation for both static and dynamic measurements having the same high sensitivity is also presented.
Animal population dynamics: Identification of critical components
Emlen, J.M.; Pikitch, E.K.
1989-01-01
There is a growing interest in the use of population dynamics models in environmental risk assessment and the promulgation of environmental regulatory policies. Unfortunately, because of species and areal differences in the physical and biotic influences on population dynamics, such models must almost inevitably be both complex and species- or site-specific. Given the enormous variety of species and sites of potential concern, this fact presents a problem; it simply is not possible to construct models for all species and circumstances. Therefore, it is useful, before building predictive population models, to discover what input parameters are of critical importance to the desired output. This information should enable the construction of simpler and more generalizable models. As a first step, it is useful to consider population models as composed of two, partly separable, classes: one comprising the purely mechanical descriptors of dynamics from given demographic parameter values, and the other describing the modulation of the demographic parameters by environmental factors (changes in physical environment, species interactions, pathogens, xenobiotic chemicals). This division permits sensitivity analyses to be run on the first of these classes, providing guidance for subsequent model simplification. We here apply such a sensitivity analysis to network models of mammalian and avian population dynamics.
Determination of nitrogen balance in agroecosystems.
Sainju, Upendra M
2017-01-01
Nitrogen balance in agroecosystems provides a quantitative framework of N inputs, outputs, and retention in the soil that can be used to examine the sustainability of agricultural productivity and soil and environmental quality. Nitrogen inputs include N additions from manures and fertilizers, atmospheric deposition (wet and dry), irrigation water, and biological N fixation. Nitrogen outputs include N removal in crop grain and biomass and N losses through leaching, denitrification, volatilization, surface runoff, erosion, gas emissions, and plant senescence. Nitrogen balance, the difference between N inputs and outputs, is reflected in changes in soil total (organic + inorganic) N over the duration of the experiment due to N immobilization and mineralization. While increased soil N retention and mineralization can enhance crop yields and decrease the N fertilization rate, reduced N losses through leaching and gas emissions (primarily NH4 and NOx emissions, of which N2O is a potent greenhouse gas) can improve water and air quality.
•This paper discusses measurements and estimations (for parameters that cannot be measured directly, due to complexity) of all N inputs and outputs, as well as changes in soil N storage over the course of the experiment, to calculate the N balance.
•The method shows N flows, retention in the soil, and losses to the environment from agroecosystems.
•The method can be used to measure agroecosystem performance and soil and environmental quality under agricultural practices.
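The bookkeeping described above reduces to summing the input and output fluxes. The flux names and numbers below are generic placeholders (units assumed kg N/ha/yr), not the paper's measured terms:

```python
# Hedged sketch of the N balance calculation: balance = sum(inputs) -
# sum(outputs). Flux names are generic placeholders.

N_INPUTS = ("fertilizer", "manure", "atmospheric_deposition",
            "irrigation_water", "biological_fixation")
N_OUTPUTS = ("grain_removal", "biomass_removal", "leaching",
             "denitrification", "volatilization", "runoff_erosion",
             "gas_emission", "senescence")

def nitrogen_balance(fluxes):
    """A positive balance implies net retention in (or unaccounted
    addition to) the soil N pool; a negative balance implies depletion."""
    inputs = sum(fluxes.get(k, 0.0) for k in N_INPUTS)
    outputs = sum(fluxes.get(k, 0.0) for k in N_OUTPUTS)
    return inputs - outputs

example = {"fertilizer": 120.0, "biological_fixation": 15.0,
           "grain_removal": 90.0, "leaching": 12.0, "volatilization": 8.0}
print(nitrogen_balance(example))  # 120+15-90-12-8 = 25.0 kg N/ha/yr
```

In practice the residual term absorbs both genuine soil N storage change and the accumulated error of the estimated (non-measurable) fluxes, which is why it is checked against measured soil total N change.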
A Methodology for Robust Comparative Life Cycle Assessments Incorporating Uncertainty.
Gregory, Jeremy R; Noshadravan, Arash; Olivetti, Elsa A; Kirchain, Randolph E
2016-06-21
We propose a methodology for conducting robust comparative life cycle assessments (LCA) by leveraging uncertainty. The method evaluates a broad range of the possible scenario space in a probabilistic fashion while simultaneously considering uncertainty in input data. The method is intended to ascertain which scenarios have a definitive environmentally preferable choice among the alternatives being compared, how significant the differences are given uncertainty in the parameters, which parameters have the most influence on this difference, and how the resolvable scenarios (those where one alternative has a clearly lower environmental impact) can be identified. This is accomplished via an aggregated probabilistic scenario-aware analysis, followed by an assessment of which scenarios have resolvable alternatives. Decision-tree partitioning algorithms are used to isolate meaningful scenario groups. In instances where the alternatives cannot be resolved for scenarios of interest, influential parameters are identified using sensitivity analysis. If those parameters can be refined, the process can be iterated using the refined parameters. We also present definitions of uncertainty quantities that have not been applied in the field of LCA and approaches for characterizing uncertainty in those quantities. We then demonstrate the methodology through a case study of pavements.
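The comparative core of such a method can be sketched as follows: for each scenario, draw impact scores under parameter uncertainty and record how often alternative A has the lower impact; a scenario is "resolved" when that frequency is near 0 or 1. The lognormal distributions, spreads, and scenario values below are illustrative assumptions, not the pavement case-study data:

```python
import numpy as np

# Probabilistic comparison of two alternatives under parameter
# uncertainty, per scenario. Impact scores are drawn from assumed
# lognormal distributions.

def prob_a_preferred(mu_a, mu_b, gsd=1.2, n=20000, rng=None):
    """Fraction of Monte Carlo draws in which alternative A has the
    lower impact. gsd is an assumed geometric standard deviation."""
    rng = np.random.default_rng(rng)
    a = rng.lognormal(np.log(mu_a), np.log(gsd), n)
    b = rng.lognormal(np.log(mu_b), np.log(gsd), n)
    return float(np.mean(a < b))

# Hypothetical scenarios: (median impact of A, median impact of B)
scenarios = {"short service life": (6.0, 10.0),
             "long service life": (9.8, 10.0)}
for name, (mu_a, mu_b) in scenarios.items():
    p = prob_a_preferred(mu_a, mu_b, rng=0)
    verdict = "resolved" if p > 0.95 or p < 0.05 else "unresolved"
    print(f"{name}: P(A < B) = {p:.2f} ({verdict})")
```

Unresolved scenarios are exactly the ones the methodology routes to sensitivity analysis and parameter refinement.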
Using aerial images for establishing a workflow for the quantification of water management measures
NASA Astrophysics Data System (ADS)
Leuschner, Annette; Merz, Christoph; van Gasselt, Stephan; Steidl, Jörg
2017-04-01
Quantified landscape characteristics, such as morphology, land use or hydrological conditions, play an important role in hydrological investigations, as landscape parameters directly control the overall water balance. Powerful assimilation and geospatial analysis of remote sensing datasets, in combination with hydrological modeling, allows landscape parameters and water balances to be quantified efficiently. This study focuses on the development of a workflow to extract hydrologically relevant data from aerial image datasets and derived products in order to allow an effective parametrization of a hydrological model. Consistent and self-contained data sources are indispensable for achieving reasonable modeling results. To minimize uncertainties and inconsistencies, input parameters for modeling should, where possible, be extracted mainly from a single remote-sensing dataset. Here, aerial images were chosen because their high spatial and spectral resolution permits the extraction of various model-relevant parameters, such as morphology, land use or artificial drainage systems. The methodological repertoire for extracting environmental parameters ranges from analyses of digital terrain models to multispectral classification and segmentation of land use distribution maps and mapping of artificial drainage systems based on spectral and visual inspection. The workflow was tested for a mesoscale catchment area which forms a characteristic hydrological system of a young moraine landscape located in the state of Brandenburg, Germany. These datasets were used as input for multi-temporal hydrological modelling of water balances to detect and quantify anthropogenic and meteorological impacts. ArcSWAT, a GIS-implemented extension and graphical user interface for the Soil and Water Assessment Tool (SWAT), was chosen.
The results of this modeling approach provide the basis for anticipating future development of the hydrological system and for adapting water resource management decisions to system changes.
NASA Astrophysics Data System (ADS)
Xiong, Wei; Skalský, Rastislav; Porter, Cheryl H.; Balkovič, Juraj; Jones, James W.; Yang, Di
2016-09-01
Understanding the interactions between agricultural production and climate is necessary for sound decision-making in climate policy. Gridded, high-resolution crop simulation has emerged as a useful tool for building this understanding, but large uncertainty exists in its application, limiting its capacity as a tool to devise adaptation strategies. Increasing attention has been given to uncertainties arising from climate scenarios, input data, and model structure, but uncertainties due to model parameters or calibration remain poorly characterized. Here, we use publicly available geographical data sets as input to the Environmental Policy Integrated Climate model (EPIC) to simulate global gridded maize yield. Impacts of climate change are assessed up to the year 2099 under a climate scenario generated by HadGEM2-ES under RCP 8.5. We apply five strategies, shifting one specific parameter in each simulation, to calibrate the model and understand the effects of calibration. Regionalizing crop phenology or harvest index appears effective for calibrating the model globally, but using different phenology values generates pronounced differences in the estimated climate impact. However, projected impacts of climate change on global maize production are consistently negative regardless of the parameter being adjusted. Different model parameter values result in a modest uncertainty at the global level, with differences in projected global yield change of less than 30% by the 2080s. This uncertainty decreases if model calibration or input-data quality control is applied. Calibration has a larger effect at local scales, with implications for the possible types and locations of adaptation.
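The calibration-uncertainty bookkeeping described above amounts to comparing one projection per parameter strategy and summarizing the ensemble spread. The strategy names and yield-change numbers below are invented placeholders, not EPIC output:

```python
import numpy as np

# Ensemble-spread summary across calibration strategies. Values are
# hypothetical percent changes in global maize yield by the 2080s, one
# per parameter (calibration) strategy.

yield_change = {"default": -18.0, "phenology": -32.0,
                "harvest_index": -25.0, "potential_HI": -22.0,
                "regional_phenology": -28.0}

vals = np.array(list(yield_change.values()))
spread = vals.max() - vals.min()
all_negative = bool(np.all(vals < 0))
print(f"all strategies negative: {all_negative}, spread: {spread:.0f}%")
```

Two separate conclusions fall out of this summary, mirroring the abstract: the sign of the impact is robust to the calibration choice (all entries negative), while the magnitude carries a calibration uncertainty bounded by the spread.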
Impact of environmental inputs on reverse-engineering approach to network structures.
Wu, Jianhua; Sinfield, James L; Buchanan-Wollaston, Vicky; Feng, Jianfeng
2009-12-04
Uncovering complex network structures from a biological system is one of the main topics in systems biology. Network structures can be inferred with dynamic Bayesian networks or Granger causality, but neither technique has seriously taken into account the impact of environmental inputs. Considering the natural rhythmic dynamics of biological data, we propose a systems biology approach to reveal the impact of environmental inputs on network structures. We first represent the environmental inputs by a harmonic oscillator and combine them with Granger causality to identify environmental inputs and then uncover the causal network structures. We also generalize the approach to multiple harmonic oscillators to represent various exogenous influences. The approach is extensively tested with toy models and successfully applied to a real biological network using microarray data for the flowering genes of the model plant Arabidopsis thaliana. The aim is to identify those genes that are directly affected by the presence of sunlight and to uncover the interactive network structures associated with flowering metabolism. We demonstrate that environmental inputs are crucial for correctly inferring network structures. The harmonic causality method proves to be a powerful technique for detecting environmental inputs and uncovering network structures, especially when the biological data exhibit periodic oscillations.
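The core idea can be illustrated with a toy version of the approach: include sin/cos regressors at a known environmental period in the Granger regression, so a shared rhythmic driver is not mistaken for a causal gene-gene link. The data, period, lag, and noise level below are synthetic assumptions, not the paper's model:

```python
import numpy as np

# Toy "harmonic Granger causality" sketch: two series are both driven by
# a periodic environmental input (light), with no direct link between
# them. Without harmonic regressors, lagged x spuriously "explains" y;
# with them, its added explanatory power collapses.

rng = np.random.default_rng(0)
n, period = 400, 24.0
t = np.arange(n)
light = np.sin(2 * np.pi * t / period)                 # environmental input
x = light + 0.3 * rng.standard_normal(n)               # gene driven by light
y = np.roll(light, 3) + 0.3 * rng.standard_normal(n)   # also light-driven

def rss(target, X):
    """Residual sum of squares of an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    r = target - X @ beta
    return float(r @ r)

lag = 1
Y = y[lag:]
base = np.column_stack([np.ones(n - lag), y[:-lag]])   # AR(1) baseline
harm = np.column_stack([base,
                        np.sin(2 * np.pi * t[lag:] / period),
                        np.cos(2 * np.pi * t[lag:] / period)])

# RSS reduction from adding lagged x, with and without harmonic terms
gain_plain = rss(Y, base) - rss(Y, np.column_stack([base, x[:-lag]]))
gain_harm = rss(Y, harm) - rss(Y, np.column_stack([harm, x[:-lag]]))
print(gain_plain, gain_harm)
```

The spurious "causal" gain shrinks dramatically once the harmonic oscillator is in the regression, which is the mechanism by which the method separates environmental driving from genuine network links.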
Computer control of a microgravity mammalian cell bioreactor
NASA Technical Reports Server (NTRS)
Hall, William A.
1987-01-01
The initial steps taken in developing a completely menu-driven and totally automated computer control system for a bioreactor are discussed. This bioreactor is an electro-mechanical cell growth system requiring rigorous control of slowly changing parameters, many of which are so dynamically interactive that computer control is a necessity. The process computer will have two main functions. First, it will provide continuous environmental control, utilizing low-signal-level transducers as inputs and high-powered control devices such as solenoids and motors as outputs. Second, it will provide continuous environmental monitoring, including mass data storage and periodic data dumps to a supervisory computer.
Modeling the atmospheric chemistry of TICs
NASA Astrophysics Data System (ADS)
Henley, Michael V.; Burns, Douglas S.; Chynwat, Veeradej; Moore, William; Plitz, Angela; Rottmann, Shawn; Hearn, John
2009-05-01
An atmospheric chemistry model that describes the behavior and disposition of environmentally hazardous compounds discharged into the atmosphere was coupled with the transport and diffusion model, SCIPUFF. The atmospheric chemistry model was developed by reducing a detailed atmospheric chemistry mechanism to a simple empirical effective degradation rate term (keff) that is a function of important meteorological parameters such as solar flux, temperature, and cloud cover. Empirically derived keff functions that describe the degradation of target toxic industrial chemicals (TICs) were derived by statistically analyzing data generated from the detailed chemistry mechanism run over a wide range of (typical) atmospheric conditions. To assess and identify areas to improve the developed atmospheric chemistry model, sensitivity and uncertainty analyses were performed to (1) quantify the sensitivity of the model output (TIC concentrations) with respect to changes in the input parameters and (2) improve, where necessary, the quality of the input data based on sensitivity results. The model predictions were evaluated against experimental data. Chamber data were used to remove the complexities of dispersion in the atmosphere.
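The keff approach described above can be sketched as first-order decay with an empirical rate that depends on meteorology. The functional form and all coefficients below are made-up placeholders meant only to show the structure, not the study's fitted functions:

```python
import math

# Illustrative empirical effective degradation rate: an Arrhenius-like
# thermal term plus a photolysis term scaled by solar flux and
# attenuated by cloud cover. Coefficients are hypothetical.

def k_eff(temp_K, solar_flux, cloud_fraction,
          a=1e-5, ea_over_r=3000.0, b=2e-4):
    """Return keff [1/s] as a function of meteorological inputs."""
    thermal = a * math.exp(-ea_over_r / temp_K)
    photo = b * solar_flux * (1.0 - 0.75 * cloud_fraction)
    return thermal + photo

def concentration(c0, temp_K, solar_flux, cloud_fraction, t_s):
    """First-order decay: C(t) = C0 * exp(-keff * t)."""
    return c0 * math.exp(-k_eff(temp_K, solar_flux, cloud_fraction) * t_s)

# Daytime clear sky degrades the TIC faster than overcast night.
day = concentration(1.0, 298.0, 1.0, 0.0, 3600.0)
night = concentration(1.0, 288.0, 0.0, 1.0, 3600.0)
print(day, night)
```

Collapsing the full mechanism into one keff(meteorology) term is what lets the chemistry ride along inside a transport model like SCIPUFF without re-solving the detailed mechanism at every puff step.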
Schuwirth, Nele; Reichert, Peter
2013-02-01
For the first time, we combine concepts of theoretical food web modeling, the metabolic theory of ecology, and ecological stoichiometry with the use of functional trait databases to predict the coexistence of invertebrate taxa in streams. We developed a mechanistic model that describes the growth, death, and respiration of different taxa as functions of various environmental factors to estimate survival or extinction. Parameter and input uncertainty is propagated to model results. Such a model is needed to test our current quantitative understanding of ecosystem structure and function and to predict the effects of anthropogenic impacts and restoration efforts. The model was tested using macroinvertebrate monitoring data from a catchment of the Swiss Plateau. Even without fitting model parameters, the model is able to represent key patterns of the coexistence structure of invertebrates at sites varying in external conditions (litter input, shading, water quality). This confirms the suitability of the model concept. More comprehensive testing and resulting model adaptations will further increase the predictive accuracy of the model.
R2 Water Quality Portal Monitoring Stations
The Water Quality Portal (WQP) provides an easy way to access data stored in various large water quality databases. The WQP provides various input parameters on the form, including location, site, sampling, and date parameters, to filter and customize the returned results. The WQP is a cooperative service sponsored by the United States Geological Survey (USGS), the Environmental Protection Agency (EPA) and the National Water Quality Monitoring Council (NWQMC) that integrates publicly available water quality data from the USGS National Water Information System (NWIS), the EPA STOrage and RETrieval (STORET) Data Warehouse, and the USDA ARS Sustaining The Earth's Watersheds - Agricultural Research Database System (STEWARDS).
Zajac, Zuzanna; Stith, Bradley M.; Bowling, Andrea C.; Langtimm, Catherine A.; Swain, Eric D.
2015-01-01
Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. 
Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust decisions.
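The UA half of the GSA/UA framework can be sketched by propagating uncertain environmental inputs through a simple HSI built as a geometric mean of suitability indices. The triangular suitability curves, parameter values, and input distributions below are generic placeholders, not the Everglades SAV models:

```python
import numpy as np

# Uncertainty-analysis sketch: Monte Carlo draws of uncertain hydrology
# inputs are pushed through a toy HSI model, yielding a distribution of
# habitat scores instead of a single value.

def suitability(x, lo, opt, hi):
    """Triangular suitability index: 0 at lo and hi, 1 at opt."""
    x = np.asarray(x, dtype=float)
    up = np.clip((x - lo) / (opt - lo), 0.0, 1.0)
    down = np.clip((hi - x) / (hi - opt), 0.0, 1.0)
    return np.minimum(up, down)

def hsi(salinity, depth):
    s1 = suitability(salinity, 0.0, 15.0, 35.0)   # hypothetical curve
    s2 = suitability(depth, 0.2, 1.0, 3.0)        # hypothetical curve
    return np.sqrt(s1 * s2)                        # geometric mean

rng = np.random.default_rng(0)
n = 10000
sal = rng.normal(12.0, 4.0, n)    # uncertain salinity input
dep = rng.normal(1.2, 0.3, n)     # uncertain depth input
scores = hsi(sal, dep)
print(np.percentile(scores, [5, 50, 95]))   # uncertainty band on HSI
```

The GSA step would then decompose the variance of `scores` by input (e.g., with Sobol indices), which is where the spatially varying sensitivity rankings reported above come from.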
Optimization Under Uncertainty for Electronics Cooling Design
NASA Astrophysics Data System (ADS)
Bodla, Karthik K.; Murthy, Jayathi Y.; Garimella, Suresh V.
Optimization under uncertainty is a powerful methodology used in design to produce robust, reliable designs. Such a methodology, employed when the input quantities of interest are uncertain, produces output uncertainties, helping the designer choose input parameters that will result in satisfactory thermal solutions. Apart from providing basic statistical information, such as the mean and standard deviation of the output quantities, auxiliary data from an uncertainty-based optimization, such as local and global sensitivities, help the designer decide which input parameter(s) the output quantity of interest is most sensitive to. This in turn aids the design of experiments based on the most sensitive input parameter(s). A further crucial output of such a methodology is the solution to the inverse problem: finding the allowable uncertainty range in the input parameter(s), given an acceptable uncertainty range in the output quantity of interest...
Huijbregts, Mark A J; Gilijamse, Wim; Ragas, Ad M J; Reijnders, Lucas
2003-06-01
The evaluation of uncertainty is relatively new in environmental life-cycle assessment (LCA). It provides useful information to assess the reliability of LCA-based decisions and to guide future research toward reducing uncertainty. Most uncertainty studies in LCA quantify only one type of uncertainty, i.e., uncertainty due to input data (parameter uncertainty). However, LCA outcomes can also be uncertain due to normative choices (scenario uncertainty) and the mathematical models involved (model uncertainty). The present paper outlines a new methodology that quantifies parameter, scenario, and model uncertainty simultaneously in environmental life-cycle assessment. The procedure is illustrated in a case study that compares two insulation options for a Dutch one-family dwelling. Parameter uncertainty was quantified by means of Monte Carlo simulation. Scenario and model uncertainty were quantified by resampling different decision scenarios and model formulations, respectively. Although scenario and model uncertainty were not quantified comprehensively, the results indicate that both types of uncertainty influence the case study outcomes. This stresses the importance of quantifying parameter, scenario, and model uncertainty simultaneously. The two insulation options studied were found to have significantly different impact scores for global warming, stratospheric ozone depletion, and eutrophication. The thickest insulation option has the lowest impact on global warming and eutrophication, and the highest impact on stratospheric ozone depletion.
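The simultaneous treatment described above can be sketched as nested Monte Carlo sampling: each iteration draws parameter values and also resamples a discrete scenario choice and model formulation. The impact function, distributions, and scenario/model options below are illustrative placeholders, not the dwelling case study:

```python
import numpy as np

# Sampling parameter, scenario, and model uncertainty together: every
# Monte Carlo draw combines continuous parameter values with randomly
# resampled discrete scenario and model-formulation choices.

rng = np.random.default_rng(0)
n = 20000

def impact(params, scenario, model):
    """Toy impact score; structure is a placeholder."""
    base = params["material"] * scenario["lifetime_factor"]
    return base * (1.1 if model == "cutoff" else 1.0)

scenarios = [{"lifetime_factor": 1.0}, {"lifetime_factor": 1.3}]
models = ["cutoff", "substitution"]   # hypothetical allocation choices

scores = np.empty(n)
for i in range(n):
    params = {"material": rng.lognormal(np.log(10.0), 0.2)}  # parameter
    scenario = scenarios[rng.integers(2)]                    # scenario
    model = models[rng.integers(2)]                          # model
    scores[i] = impact(params, scenario, model)

print(np.percentile(scores, [2.5, 97.5]))  # combined uncertainty interval
```

Comparing this combined interval with a parameter-only interval (fixing one scenario and one model) shows how much of the total spread the normative and structural choices contribute, which is the paper's central point.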
Dynamic Probabilistic Modeling of Environmental Emissions of Engineered Nanomaterials.
Sun, Tian Yin; Bornhöft, Nikolaus A; Hungerbühler, Konrad; Nowack, Bernd
2016-05-03
The need for an environmental risk assessment for engineered nanomaterials (ENM) necessitates the knowledge about their environmental concentrations. Despite significant advances in analytical methods, it is still not possible to measure the concentrations of ENM in natural systems. Material flow and environmental fate models have been used to provide predicted environmental concentrations. However, almost all current models are static and consider neither the rapid development of ENM production nor the fact that many ENM are entering an in-use stock and are released with a lag phase. Here we use dynamic probabilistic material flow modeling to predict the flows of four ENM (nano-TiO2, nano-ZnO, nano-Ag and CNT) to the environment and to quantify their amounts in (temporary) sinks such as the in-use stock and ("final") environmental sinks such as soil and sediment. Caused by the increase in production, the concentrations of all ENM in all compartments are increasing. Nano-TiO2 had far higher concentrations than the other three ENM. Sediment showed in our worst-case scenario concentrations ranging from 6.7 μg/kg (CNT) to about 40 000 μg/kg (nano-TiO2). In most cases the concentrations in waste incineration residues are at the "mg/kg" level. The flows to the environment that we provide will constitute the most accurate and reliable input of masses for environmental fate models which are using process-based descriptions of the fate and behavior of ENM in natural systems and rely on accurate mass input parameters.
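The dynamic stock-and-flow idea, growing production entering an in-use stock and being released with a lag into a final sink, can be sketched deterministically (the actual study samples these flows probabilistically). The growth rate, lifetime, and initial production below are illustrative, not the study's inputs:

```python
import numpy as np

# Minimal dynamic material-flow sketch for an ENM: production grows each
# year, enters the in-use stock, and is released with an exponential lag
# into a "final" environmental sink (e.g., soil/sediment).

def simulate(years=20, p0=100.0, growth=0.15, mean_lifetime=5.0):
    """Return (in_use_stock, cumulative_sink) yearly time series."""
    release_rate = 1.0 / mean_lifetime   # exponential-lag approximation
    stock, sink = 0.0, 0.0
    stocks, sinks = [], []
    for yr in range(years):
        production = p0 * (1.0 + growth) ** yr
        released = stock * release_rate  # lagged release from use phase
        stock += production - released
        sink += released                 # accumulates, never degrades here
        stocks.append(stock)
        sinks.append(sink)
    return np.array(stocks), np.array(sinks)

stocks, sinks = simulate()
```

Because the sink only accumulates, its concentration rises even in years when production stalls, which is why static release-equals-production models underestimate the lagged build-up the abstract describes.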
NASA Astrophysics Data System (ADS)
Clark, Michael; Tilman, David
2017-06-01
Global agriculture feeds over 7 billion people, but is also a leading cause of environmental degradation. Understanding how alternative agricultural production systems, agricultural input efficiency, and food choice drive environmental degradation is necessary for reducing agriculture's environmental impacts. A meta-analysis of life cycle assessments that includes 742 agricultural systems and over 90 unique foods produced primarily in high-input systems shows that, per unit of food, organic systems require more land, cause more eutrophication, use less energy, but emit similar greenhouse gas emissions (GHGs) as conventional systems; that grass-fed beef requires more land and emits similar GHG emissions as grain-fed beef; and that low-input aquaculture and non-trawling fisheries have much lower GHG emissions than trawling fisheries. In addition, our analyses show that increasing agricultural input efficiency (the amount of food produced per input of fertilizer or feed) would have environmental benefits for both crop and livestock systems. Further, for all environmental indicators and nutritional units examined, plant-based foods have the lowest environmental impacts; eggs, dairy, pork, poultry, non-trawling fisheries, and non-recirculating aquaculture have intermediate impacts; and ruminant meat has impacts ∼100 times those of plant-based foods. Our analyses show that dietary shifts towards low-impact foods and increases in agricultural input use efficiency would offer larger environmental benefits than would switches from conventional agricultural systems to alternatives such as organic agriculture or grass-fed beef.
NASA Technical Reports Server (NTRS)
Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.
1991-01-01
A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to the optimization and space-searching capabilities of genetic algorithms, they were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
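Input selection with a genetic algorithm can be sketched as evolving a binary mask over candidate inputs. The fitness function below is a toy stand-in for the expensive step of training and scoring a network on each candidate input list; the "truly relevant" channels are assumed known here only so the sketch is checkable:

```python
import random

rng = random.Random(0)
N_INPUTS = 12
TRUE_INPUTS = {0, 3, 5, 9}   # hypothetical "truly relevant" sensor channels

def fitness(mask):
    # Stand-in for "train a network on the selected inputs and score it":
    # reward relevant inputs, penalize extras that would add noise and cost.
    selected = {i for i, bit in enumerate(mask) if bit}
    return len(selected & TRUE_INPUTS) - 0.25 * len(selected - TRUE_INPUTS)

def mutate(mask, p=0.1):
    # Flip each bit with probability p
    return [bit ^ (rng.random() < p) for bit in mask]

def crossover(a, b):
    cut = rng.randrange(1, N_INPUTS)
    return a[:cut] + b[cut:]

pop = [[rng.randint(0, 1) for _ in range(N_INPUTS)] for _ in range(30)]
for gen in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]   # truncation selection with elitism
    pop = parents + [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                     for _ in range(20)]

best = max(pop, key=fitness)
print(sorted(i for i, bit in enumerate(best) if bit))
```

In the real application the fitness evaluation is the dominant cost, which is why the search strategy matters.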
Cortical Specializations Underlying Fast Computations
Volgushev, Maxim
2016-01-01
The time course of behaviorally relevant environmental events sets temporal constraints on neuronal processing. How does the mammalian brain make use of the increasingly complex networks of the neocortex, while making decisions and executing behavioral reactions within a reasonable time? The key parameter determining the speed of computations in neuronal networks is the time interval that neuronal ensembles need to process changes at their input and communicate the results of this processing to downstream neurons. Theoretical analysis identified basic requirements for fast processing: use of neuronal populations for encoding, background activity, and fast onset dynamics of action potentials in neurons. Experimental evidence shows that populations of neocortical neurons fulfil these requirements. Indeed, they can change firing rate in response to input perturbations very quickly, within 1 to 3 ms, and encode high-frequency components of the input by phase-locking their spiking to frequencies up to 300 to 1000 Hz. This implies that the time unit of computations by cortical ensembles is only a few milliseconds (1 to 3 ms), considerably shorter than the membrane time constant of individual neurons. The ability of cortical neuronal ensembles to communicate on a millisecond time scale allows for complex, multiple-step processing and precise coordination of neuronal activity in parallel processing streams, while keeping the speed of behavioral reactions within environmentally set temporal constraints. PMID:25689988
Dealing with uncertainties in environmental burden of disease assessment
2009-01-01
Disability Adjusted Life Years (DALYs) combine the number of people affected by disease or mortality in a population and the duration and severity of their condition into one number. The environmental burden of disease is the number of DALYs that can be attributed to environmental factors. Environmental burden of disease estimates enable policy makers to evaluate, compare and prioritize dissimilar environmental health problems or interventions. These estimates often have various uncertainties and assumptions which are not always made explicit. Besides statistical uncertainty in input data and parameters – which is commonly addressed – a variety of other types of uncertainties may substantially influence the results of the assessment. We have reviewed how different types of uncertainties affect environmental burden of disease assessments, and we give suggestions as to how researchers could address these uncertainties. We propose the use of an uncertainty typology to identify and characterize uncertainties. Finally, we argue that uncertainties need to be identified, assessed, reported and interpreted in order for assessment results to adequately support decision making. PMID:19400963
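The DALY arithmetic the assessment builds on can be illustrated directly: DALYs combine years of life lost (YLL) and years lived with disability (YLD), and the environmental burden is the fraction attributable to an environmental factor. All figures below are hypothetical:

```python
def dalys(deaths, life_expectancy_remaining, cases, disability_weight, duration):
    yll = deaths * life_expectancy_remaining      # years of life lost (mortality)
    yld = cases * disability_weight * duration    # years lived with disability
    return yll + yld

def attributable_burden(total_dalys, population_attributable_fraction):
    # Share of the disease burden attributable to the environmental factor
    return total_dalys * population_attributable_fraction

total = dalys(deaths=200, life_expectancy_remaining=30,
              cases=10_000, disability_weight=0.2, duration=1.5)
print(attributable_burden(total, 0.4))  # → 3600.0
```

Each input here (disability weight, attributable fraction, duration) carries its own uncertainty, which is exactly the kind the abstract argues should be made explicit rather than hidden in point estimates.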
Section 4. The GIS Weasel User's Manual
Viger, Roland J.; Leavesley, George H.
2007-01-01
The GIS Weasel was designed to aid in the preparation of spatial information for input to lumped and distributed parameter hydrologic or other environmental models. The GIS Weasel provides geographic information system (GIS) tools to help create maps of geographic features relevant to a user's model and to generate parameters from those maps. The operation of the GIS Weasel does not require the user to be a GIS expert, only that the user have an understanding of the spatial information requirements of the environmental simulation model being used. The GIS Weasel software system uses a GIS-based graphical user interface (GUI), the C programming language, and external scripting languages. The software will run on any computing platform where ArcInfo Workstation (version 8.0.2 or later) and the GRID extension are accessible. The user controls the processing of the GIS Weasel by interacting with menus, maps, and tables. The purpose of this document is to describe the operation of the software. This document is not intended to describe the usage of this software in support of any particular environmental simulation model. Such guides are published separately.
NASA Astrophysics Data System (ADS)
Hemmings, J. C. P.; Challenor, P. G.
2012-04-01
A wide variety of different plankton system models have been coupled with ocean circulation models, with the aim of understanding and predicting aspects of environmental change. However, an ability to make reliable inferences about real-world processes from the model behaviour demands a quantitative understanding of model error that remains elusive. Assessment of coupled model output is inhibited by relatively limited observing system coverage of biogeochemical components. Any direct assessment of the plankton model is further inhibited by uncertainty in the physical state. Furthermore, comparative evaluation of plankton models on the basis of their design is inhibited by the sensitivity of their dynamics to many adjustable parameters. Parameter uncertainty has been widely addressed by calibrating models at data-rich ocean sites. However, relatively little attention has been given to quantifying uncertainty in the physical fields required by the plankton models at these sites, and tendencies in the biogeochemical properties due to the effects of horizontal processes are often neglected. Here we use model twin experiments, in which synthetic data are assimilated to estimate a system's known "true" parameters, to investigate the impact of error in a plankton model's environmental input data. The experiments are supported by a new software tool, the Marine Model Optimization Testbed, designed for rigorous analysis of plankton models in a multi-site 1-D framework. Simulated errors are derived from statistical characterizations of the mixed layer depth, the horizontal flux divergence tendencies of the biogeochemical tracers and the initial state. Plausible patterns of uncertainty in these data are shown to produce strong temporal and spatial variability in the expected simulation error variance over an annual cycle, indicating variation in the significance attributable to individual model-data differences. 
An inverse scheme using ensemble-based estimates of the simulation error variance to allow for this environment error performs well compared with weighting schemes used in previous calibration studies, giving improved estimates of the known parameters. The efficacy of the new scheme in real-world applications will depend on the quality of statistical characterizations of the input data. Practical approaches towards developing reliable characterizations are discussed.
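The ensemble-based weighting idea can be sketched as a weighted least-squares cost function, where the variance across ensemble members (each perturbing the environmental inputs) sets the expected simulation error variance at each observation. The ensemble values below are hypothetical:

```python
import statistics

def ensemble_error_variance(ensemble_runs):
    # Spread across ensemble members at each observation time gives the
    # expected simulation error variance for that model-data difference.
    return [statistics.pvariance(values) for values in zip(*ensemble_runs)]

def weighted_cost(model, data, error_variance):
    # Down-weight misfits where environmental-input error alone produces
    # large spread: those differences carry little information.
    return sum((m - d) ** 2 / v for m, d, v in zip(model, data, error_variance))

# Hypothetical: 4 ensemble members (perturbed mixed-layer depth etc.), 3 obs times
ensemble = [[1.0, 2.0, 3.0],
            [1.2, 2.4, 2.8],
            [0.8, 1.6, 3.2],
            [1.0, 2.0, 3.0]]
var = ensemble_error_variance(ensemble)
print(weighted_cost([1.1, 2.1, 2.9], [1.0, 2.0, 3.0], var))
```

Minimizing such a cost over the plankton model's parameters is what the twin experiments evaluate against schemes with fixed weights.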
Practical input optimization for aircraft parameter estimation experiments. Ph.D. Thesis, 1990
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1993-01-01
The object of this research was to develop an algorithm for the design of practical, optimal flight test inputs for aircraft parameter estimation experiments. A general, single-pass technique was developed which allows global optimization of the flight test input design for parameter estimation using the principles of dynamic programming, with the input forms limited to square waves only. Provision was made for practical constraints on the input, including amplitude constraints, control system dynamics, and selected input frequency range exclusions. In addition, the input design was accomplished while imposing output amplitude constraints required by model validity and considerations of safety during the flight test. The algorithm has multiple input design capability, with optional inclusion of a constraint that only one control moves at a time, so that a human pilot can implement the inputs. It is shown that the technique can be used to design experiments for estimation of open loop model parameters from closed loop flight test data. The report includes a new formulation of the optimal input design problem, a description of a new approach to the solution, and a summary of the characteristics of the algorithm, followed by three example applications of the new technique which demonstrate the quality and expanded capabilities of the input designs produced. In all cases, the new input design approach showed significant improvement over previous input design methods in terms of achievable parameter accuracies.
NASA Astrophysics Data System (ADS)
Vrugt, J. A.
2012-12-01
In the past decade much progress has been made in the treatment of uncertainty in earth systems modeling. Whereas initial approaches focused mostly on quantification of parameter and predictive uncertainty, recent methods attempt to disentangle the effects of parameter, forcing (input) data, model structural, and calibration data errors. In this talk I will highlight some of our recent work involving theory, concepts and applications of Bayesian parameter and/or state estimation. In particular, new methods for sequential Monte Carlo (SMC) and Markov Chain Monte Carlo (MCMC) simulation will be presented, with emphasis on massively parallel distributed computing and quantification of model structural errors. The theoretical and numerical developments will be illustrated using model-data synthesis problems in hydrology, hydrogeology and geophysics.
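A minimal sketch of Bayesian parameter estimation with MCMC is a random-walk Metropolis sampler. The one-parameter model and the data below are made up for illustration; real applications replace `log_posterior` with a full model-data misfit plus priors:

```python
import math
import random

def log_posterior(theta, data, sigma=1.0):
    # Gaussian likelihood around the (toy) model prediction y = theta,
    # with a flat prior: log p(theta | data) ∝ -0.5 * Σ (d - theta)² / σ²
    return -0.5 * sum((d - theta) ** 2 for d in data) / sigma ** 2

def metropolis(data, n_steps=20000, step=0.5, seed=3):
    rng = random.Random(seed)
    theta = 0.0
    lp = log_posterior(theta, data)
    chain = []
    for _ in range(n_steps):
        prop = theta + rng.gauss(0.0, step)        # random-walk proposal
        lp_prop = log_posterior(prop, data)
        if math.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain[n_steps // 2:]                    # discard burn-in

data = [2.1, 1.9, 2.3, 2.0, 1.7]
chain = metropolis(data)
print(round(sum(chain) / len(chain), 1))  # posterior mean ≈ sample mean of data
```

Methods such as DREAM extend this basic scheme with multiple interacting chains, which is what enables the massively parallel computation mentioned in the abstract.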
Flexible Environmental Modeling with Python and Open - GIS
NASA Astrophysics Data System (ADS)
Pryet, Alexandre; Atteia, Olivier; Delottier, Hugo; Cousquer, Yohann
2015-04-01
Numerical modeling now represents a prominent task in environmental studies. During the last decades, numerous commercial programs have been made available to environmental modelers. These software applications offer user-friendly graphical user interfaces that allow efficient management of many case studies. However, they suffer from a lack of flexibility, and closed-source policies impede source code review and enhancement for original studies. Advanced modeling studies require flexible tools capable of managing thousands of model runs for parameter optimization, uncertainty and sensitivity analysis. In addition, there is a growing need for the coupling of various numerical models, associating, for instance, groundwater flow modeling with multi-species geochemical reactions. Researchers have produced hundreds of powerful open-source command line programs. However, there is a need for a flexible graphical user interface allowing efficient processing of the geospatial data that accompanies any environmental study. Here, we present the advantages of using the free and open-source QGIS platform and the Python scripting language for conducting environmental modeling studies. The interactive graphical user interface is first used for the visualization and pre-processing of input geospatial datasets. The Python scripting language is then employed for further input data processing, calls to one or several models, and post-processing of model outputs. Model results are eventually sent back to the GIS program, processed and visualized. This approach combines the advantages of interactive graphical interfaces and the flexibility of the Python scripting language for data processing and model calls. The numerous Python modules available facilitate geospatial data processing and numerical analysis of model outputs. Once input data has been prepared with the graphical user interface, models may be run thousands of times from the command line with sequential or parallel calls.
We illustrate this approach with several case studies in groundwater hydrology and geochemistry and provide links to several python libraries that facilitate pre- and post-processing operations.
Antanasijević, Davor Z; Pocajt, Viktor V; Povrenović, Dragan S; Ristić, Mirjana Đ; Perić-Grujić, Aleksandra A
2013-01-15
This paper describes the development of an artificial neural network (ANN) model for the forecasting of annual PM(10) emissions at the national level, using widely available sustainability and economical/industrial parameters as inputs. The inputs for the model were selected and optimized using a genetic algorithm and the ANN was trained using the following variables: gross domestic product, gross inland energy consumption, incineration of wood, motorization rate, production of paper and paperboard, sawn wood production, production of refined copper, production of aluminum, production of pig iron and production of crude steel. The wide availability of the input parameters used in this model can overcome a lack of data and basic environmental indicators in many countries, which can prevent or seriously impede PM emission forecasting. The model was trained and validated with the data for 26 EU countries for the period from 1999 to 2006. PM(10) emission data, collected through the Convention on Long-range Transboundary Air Pollution - CLRTAP and the EMEP Programme or as emission estimations by the Regional Air Pollution Information and Simulation (RAINS) model, were obtained from Eurostat. The ANN model has shown very good performance and demonstrated that the forecast of PM(10) emission up to two years can be made successfully and accurately. The mean absolute error for two-year PM(10) emission prediction was only 10%, which is more than three times better than the predictions obtained from the conventional multi-linear regression and principal component regression models that were trained and tested using the same datasets and input variables. Copyright © 2012 Elsevier B.V. All rights reserved.
Zhang, Bo; Peng, Beihua; Liu, Mingchu
2012-01-01
This paper presents an overview of the resource use and environmental impact of Chinese industry during 1997-2006. For the purpose of this analysis, the thermodynamic concept of exergy has been employed both to quantify and to aggregate the resource inputs and the environmental emissions arising from the sector. The resource inputs and environmental emissions show an increasing trend in this period. Compared with 47568.7 PJ in 1997, resource input in 2006 increased by 75.4% and reached 83437.9 PJ, of which 82.5% came from nonrenewable resources, mainly coal and other energy minerals. Furthermore, the total exergy of environmental emissions was estimated to be 3499.3 PJ in 2006, 1.7 times that of 1997, of which 93.4% was from GHG emissions and only 6.6% from "three wastes" emissions. A rapid increase in nonrenewable resource inputs and GHG emissions over 2002-2006 can be found, owing to the excessive expansion of resource- and energy-intensive subsectors. Exergy intensities in terms of resource input intensity and environmental emission intensity are also calculated as time series; the trends are evidently influenced by the macroeconomic situation, particularly by the investment-driven economic development of recent years. Corresponding policy implications to guide a more sustainable industrial system are addressed.
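Exergy-based aggregation reduces to expressing every input and emission in a common unit and summing. A toy sketch, with all figures hypothetical rather than taken from the paper:

```python
# Heterogeneous resource inputs and emissions, all already converted to a
# common exergy basis (hypothetical values, PJ of exergy).
resource_inputs = {"coal": 45000.0, "oil": 12000.0, "iron ore": 9000.0,
                   "biomass": 4000.0}
emissions = {"GHG": 3268.0, "three wastes": 231.0}

total_in = sum(resource_inputs.values())
nonrenewable = total_in - resource_inputs["biomass"]  # biomass taken as renewable
ghg_share = 100 * emissions["GHG"] / sum(emissions.values())
print(round(100 * nonrenewable / total_in, 1), round(ghg_share, 1))
```

The common exergy basis is what lets a single intensity indicator (e.g. emissions exergy per unit of economic output) be tracked as a time series.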
A New Strategy in Observer Modeling for Greenhouse Cucumber Seedling Growth
Qiu, Quan; Zheng, Chenfei; Wang, Wenping; Qiao, Xiaojun; Bai, He; Yu, Jingquan; Shi, Kai
2017-01-01
The state observer is an essential component in computerized control loops for greenhouse-crop systems. However, current accomplishments in observer modeling for greenhouse-crop systems mainly focus on mass/energy balance, ignoring the physiological responses of crops. As a result, state observers for crop physiological responses are rarely developed, and control operations are typically made based on experience rather than actual crop requirements. In addition, existing observer models require a large number of parameters, leading to heavy computational load and poor application feasibility. To address these problems, we present a new state observer modeling strategy that takes both environmental information and crop physiological responses into consideration during the observer modeling process. Using greenhouse cucumber seedlings as an example, we sample 10 physiological parameters of cucumber seedlings at different time points during the exponential growth stage, and employ them to build growth state observers together with 8 environmental parameters. A support vector machine (SVM) acts as the mathematical tool for observer modeling. Canonical correlation analysis (CCA) is used to select the dominant environmental and physiological parameters in the modeling process. With the dominant parameters, simplified observer models are built and tested. We conduct contrast experiments with different input parameter combinations on simplified and un-simplified observers. Experimental results indicate that physiological information can improve the prediction accuracies of the growth state observers. Furthermore, the simplified observer models can give equivalent or even better performance than the un-simplified ones, which verifies the feasibility of CCA.
The current study can enable state observers to reflect crop requirements and make them feasible for applications with simplified shapes, which is significant for developing intelligent greenhouse control systems for modern greenhouse production. PMID:28848565
Linking 1D coastal ocean modelling to environmental management: an ensemble approach
NASA Astrophysics Data System (ADS)
Mussap, Giulia; Zavatarelli, Marco; Pinardi, Nadia
2017-12-01
The use of a one-dimensional interdisciplinary numerical model of the coastal ocean as a tool contributing to the formulation of ecosystem-based management (EBM) is explored. The focus is on the definition of an experimental design based on ensemble simulations, integrating variability linked to scenarios (characterised by changes in the system forcing) with the concurrent variation of selected, and poorly constrained, model parameters. The modelling system used was designed specifically for use in "data-rich" areas, so that horizontal dynamics can be resolved by a diagnostic approach and external inputs can be parameterised by properly calibrated nudging schemes. Ensembles determined by changes in the simulated environmental (physical and biogeochemical) dynamics, under joint forcing and parameterisation variations, highlight the uncertainties associated with the application of specific scenarios relevant to EBM, providing an assessment of the reliability of the predicted changes. The work has been carried out by implementing the coupled modelling system BFM-POM1D in an area of the Gulf of Trieste (northern Adriatic Sea) considered homogeneous from the point of view of hydrological properties, and forcing it with changing climatic (warming) and anthropogenic (reduction of the land-based nutrient input) pressures. Model parameters affected by considerable uncertainty (due to the lack of relevant observations) were varied jointly with the scenarios of change. The resulting large set of ensemble simulations provided a general estimate of the model uncertainties related to the joint variation of pressures and model parameters. The variability of the model results is presented so as to convey, efficiently and comprehensibly, information on the uncertainty and reliability of the model results to non-technical EBM planners and stakeholders, so that model-based information can contribute effectively to EBM.
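The experimental design (forcing scenarios crossed with perturbed values of a poorly constrained parameter) can be sketched as a small ensemble loop. The model function and all numbers below are stand-ins, not BFM-POM1D:

```python
import statistics

def run_model(warming, nutrient_reduction, grazing_rate):
    # Stand-in for a full coupled run: returns an annual-mean
    # chlorophyll-like index responding to forcing and one parameter.
    return 1.0 + 0.3 * warming - 0.5 * nutrient_reduction - 0.2 * grazing_rate

scenarios = [(0.0, 0.0), (1.0, 0.0), (1.0, 0.3)]  # (warming °C, nutrient cut)
grazing_values = [0.5, 1.0, 1.5]                  # poorly constrained parameter

for warming, cut in scenarios:
    outputs = [run_model(warming, cut, g) for g in grazing_values]
    mean = statistics.mean(outputs)
    spread = max(outputs) - min(outputs)          # parameter-driven uncertainty
    print(f"scenario ({warming}, {cut}): mean={mean:.2f} spread={spread:.2f}")
```

The spread within each scenario is the quantity the paper argues should be reported alongside the scenario mean, so that planners can judge the reliability of a predicted change.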
Environmental efficiency of energy, materials, and emissions.
Yagi, Michiyuki; Fujii, Hidemichi; Hoang, Vincent; Managi, Shunsuke
2015-09-15
This study estimates the environmental efficiency of international listed firms in 10 worldwide sectors from 2007 to 2013 by applying an order-m method, a non-parametric approach based on free disposal hull with subsampling bootstrapping. Using a conventional output of gross profit and two conventional inputs of labor and capital, this study examines the order-m environmental efficiency accounting for the presence of each of 10 undesirable inputs/outputs and measures the shadow prices of each undesirable input and output. The results show that there is greater potential for the reduction of undesirable inputs than of bad outputs. On average, total energy, electricity, or water usage has the potential to be reduced by 50%. The median shadow prices of undesirable inputs, however, are much higher than the surveyed representative market prices. Approximately 10% of the firms in the sample appear to be potential sellers or production reducers in terms of undesirable inputs/outputs, which implies that the price of each item at the current level has little impact on most of the firms. Moreover, this study shows that the environmental, social, and governance activities of a firm do not considerably affect environmental efficiency. Copyright © 2015 Elsevier Ltd. All rights reserved.
Covey, Curt; Lucas, Donald D.; Tannahill, John; ...
2013-07-01
Modern climate models contain numerous input parameters, each with a range of possible values. Since the volume of parameter space increases exponentially with the number of parameters N, it is generally impossible to directly evaluate a model throughout this space even if just 2-3 values are chosen for each parameter. Sensitivity screening algorithms, however, can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. This can aid both model development and the uncertainty quantification (UQ) process. Here we report results from a parameter sensitivity screening algorithm hitherto untested in climate modeling, the Morris one-at-a-time (MOAT) method. This algorithm drastically reduces the computational cost of estimating sensitivities in a high-dimensional parameter space because the sample size grows linearly rather than exponentially with N. It nevertheless samples over much of the N-dimensional volume and allows assessment of parameter interactions, unlike traditional elementary one-at-a-time (EOAT) parameter variation. We applied both EOAT and MOAT to the Community Atmosphere Model (CAM), assessing CAM's behavior as a function of 27 uncertain input parameters related to the boundary layer, clouds, and other subgrid-scale processes. For radiation balance at the top of the atmosphere, EOAT and MOAT rank most input parameters similarly, but MOAT identifies a sensitivity that EOAT underplays for two convection parameters that operate nonlinearly in the model. MOAT's ranking of input parameters is robust to modest algorithmic variations, and it is qualitatively consistent with model development experience.
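The Morris one-at-a-time method can be sketched in a few lines: each trajectory perturbs one parameter at a time from a random starting point and records "elementary effects", whose mean absolute value (mu*) ranks sensitivity. The three-parameter toy model below stands in for CAM; its third parameter is deliberately near-inert:

```python
import random

def model(x):
    # Toy model: nonlinear interaction between x[0] and x[1];
    # x[2] is nearly inert, mimicking a low-sensitivity input parameter.
    return x[0] * x[1] + 0.01 * x[2]

def morris_screening(model, n_params, n_trajectories=50, delta=0.1, seed=0):
    rng = random.Random(seed)
    effects = [[] for _ in range(n_params)]
    for _ in range(n_trajectories):
        x = [rng.random() for _ in range(n_params)]   # random start point
        base = model(x)
        for i in rng.sample(range(n_params), n_params):  # one move per parameter
            x[i] += delta
            new = model(x)
            effects[i].append((new - base) / delta)      # elementary effect
            base = new
    # mu* (mean |EE|) ranks importance; cost is (n_params+1) runs per trajectory,
    # i.e. linear in N rather than exponential.
    return [sum(abs(e) for e in es) / len(es) for es in effects]

mu_star = morris_screening(model, n_params=3)
print([round(m, 2) for m in mu_star])
```

The spread of the elementary effects per parameter (not computed here) is what flags nonlinearity and interactions, which is how MOAT caught the two convection parameters that EOAT underplayed.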
Reconstruction of neuronal input through modeling single-neuron dynamics and computations
NASA Astrophysics Data System (ADS)
Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok
2016-06-01
Mathematical models provide a mathematical description of neuron activity, which helps to better understand and quantify neural computations and the corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of the neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking event is treated as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are, respectively, defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that under three different frequencies of acupuncture stimulus, the estimated input parameters show a clear difference. The higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.
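The forward half of the reconstruction, a leaky integrate-and-fire neuron driven by temporal input parameters, can be sketched as follows. The constant-plus-sinusoidal drive and all parameter values are assumed stand-ins for the reconstructed stimulus, not the paper's conversion formulas:

```python
import math

def lif_spike_count(drive, mod_amp, t_end=1.0, dt=0.001, tau=0.02,
                    v_thresh=1.0, v_reset=0.0):
    # Leaky integrate-and-fire: tau dV/dt = -V + I(t); spike and reset
    # whenever V crosses threshold. I(t) is a constant drive plus a
    # 5 Hz sinusoidal modulation (a hypothetical reconstructed stimulus).
    v, spikes, t = 0.0, 0, 0.0
    while t < t_end:
        i_t = drive + mod_amp * math.sin(2 * math.pi * 5 * t)
        v += dt / tau * (-v + i_t)   # forward-Euler integration step
        if v >= v_thresh:
            spikes += 1
            v = v_reset
        t += dt
    return spikes

# A stronger reconstructed drive should produce a higher firing rate
print(lif_spike_count(1.2, 0.2), lif_spike_count(2.0, 0.2))
```

Inverting this mapping, from observed spike statistics back to the drive parameters, is the step the paper addresses with the Gamma-process characteristics and conversion formulas.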
Integrated controls design optimization
Lou, Xinsheng; Neuschaefer, Carl H.
2015-09-01
A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225), and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant; others are related to the cost of the plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.
Veselá, S; Kingma, B R M; Frijns, A J H
2017-03-01
Local thermal sensation modeling gained importance due to developments in personalized and locally applied heating and cooling systems in office environments. The accuracy of these models depends on skin temperature prediction by thermophysiological models, which in turn rely on accurate environmental and personal input data. Environmental parameters are measured or prescribed, but personal factors such as clothing properties and metabolic rates have to be estimated. Data for estimating the overall values of clothing properties and metabolic rates are available in several papers and standards. However, local values are more difficult to retrieve. For local clothing, this study revealed that full and consistent data sets are not available in the published literature for typical office clothing sets. Furthermore, the values for local heat production were not verified for characteristic office activities, but were adapted empirically. Further analyses showed that variations in input parameters can lead to local skin temperature differences (∆T skin,loc = 0.4-4.4°C). These differences can affect the local sensation output, where ∆T skin,loc = 1°C is approximately one step on a 9-point thermal sensation scale. In conclusion, future research should include a systematic study of local clothing properties and the development of feasible methods for measuring and validating local heat production. © 2016 The Authors. Indoor Air published by John Wiley & Sons Ltd.
Non-input analysis for incomplete trapping irreversible tracer with PET.
Ohya, Tomoyuki; Kikuchi, Tatsuya; Fukumura, Toshimitsu; Zhang, Ming-Rong; Irie, Toshiaki
2013-07-01
When using metabolic trapping type tracers, the tracers are not always trapped in the target tissue; i.e., some are completely trapped in the target, but others can be eliminated from the target tissue at a measurable rate. The tracers that can be eliminated are termed 'incomplete trapping irreversible tracers'. These incomplete trapping irreversible tracers may be clinically useful when the tracer β-value, the ratio of the tracer (metabolite) elimination rate to the tracer efflux rate, is under approximately 0.1. In this study, we propose a non-input analysis for incomplete trapping irreversible tracers based on the shape analysis (Shape), a non-input analysis used for irreversible tracers. A Monte Carlo simulation study based on experimental monkey data with two actual PET tracers (a complete trapping irreversible tracer [(11)C]MP4A and an incomplete trapping irreversible tracer [(18)F]FEP-4MA) was performed to examine the effects of the environmental error and the tracer elimination rate on the estimation of the k3-parameter (corresponds to metabolic rate) using Shape (original) and modified Shape (M-Shape) analysis. The simulation results were also compared with the experimental results obtained with the two PET tracers. When the tracer β-value was over 0.03, the M-Shape method was superior to the Shape method for the estimation of the k3-parameter. The simulation results were also in reasonable agreement with the experimental ones. M-Shape can be used as the non-input analysis of incomplete trapping irreversible tracers for PET study. Copyright © 2013 Elsevier Inc. All rights reserved.
Zhang, Bo; Peng, Beihua; Liu, Mingchu
2012-01-01
This paper presents an overview of the resource use and environmental impact of Chinese industry during 1997–2006. For this analysis, the thermodynamic concept of exergy has been employed both to quantify and to aggregate the resource inputs and the environmental emissions arising from the sector. The resource inputs and environmental emissions show an increasing trend over this period. Compared with 47568.7 PJ in 1997, resource input in 2006 increased by 75.4% and reached 83437.9 PJ, of which 82.5% came from nonrenewable resources, mainly coal and other energy minerals. Furthermore, the total exergy of environmental emissions was estimated to be 3499.3 PJ in 2006, 1.7 times that of 1997, of which 93.4% was from GHG emissions and only 6.6% from "three wastes" emissions. A rapid increase in nonrenewable resource inputs and GHG emissions over 2002–2006 can be observed, owing to the excessive expansion of resource- and energy-intensive subsectors. Exergy intensities, in terms of resource input intensity and environmental emission intensity, are also calculated as time series; the trends are evidently influenced by the macroeconomic situation, particularly by the investment-driven economic development of recent years. Corresponding policy implications to guide a more sustainable industrial system are addressed. PMID:22973176
NASA Astrophysics Data System (ADS)
Singh, Jagdeep; Sharma, Rajiv Kumar
2016-12-01
Electrical discharge machining (EDM) is a well-known nontraditional manufacturing process used to machine difficult-to-machine (DTM) materials with exceptional hardness. Researchers have successfully hybridized this process by incorporating powders into the dielectric, known as the powder-mixed EDM process. This hybridization drastically improves process efficiency by increasing the material removal rate and micro-hardness, as well as reducing the tool wear rate and surface roughness. EDM has several input parameters, including pulse-on time, dielectric level and type, current setting, and flushing pressure, which have a significant effect on EDM performance. However, despite their positive influence on machining, the effects of these parameters on environmental conditions must also be investigated. Most studies demonstrate the use of kerosene oil as the dielectric fluid. In this work, however, the authors highlight findings for three different dielectric fluids, kerosene oil, EDM oil, and distilled water, using a one-variable-at-a-time approach, for both machining and environmental aspects. Hazard and operability analysis is employed to identify the inherent safety factors associated with powder-mixed EDM of WC-Co.
Servien, Rémi; Mamy, Laure; Li, Ziang; Rossard, Virginie; Latrille, Eric; Bessac, Fabienne; Patureau, Dominique; Benoit, Pierre
2014-09-01
Under current legislation, the assessment of the environmental risks of 30,000-100,000 chemical substances is required for their registration dossiers. However, their behavior in the environment and their transfer to environmental compartments such as water or the atmosphere are studied for only a very small proportion of these chemicals, in laboratory tests or monitoring studies, because such work is time-consuming and/or cost prohibitive. Therefore, the objective of this work was to develop a new methodology, TyPol, to classify organic compounds, and their degradation products, according to both their behavior in the environment and their molecular properties. The strategy relies on partial least squares analysis and hierarchical clustering. The calculation of molecular descriptors is based on an in silico approach, and the environmental endpoints (i.e., environmental parameters) are extracted from several available databases and the literature. The classification of the 215 organic compounds inputted into TyPol for this proof-of-concept study showed that the combination of some specific molecular descriptors could be related to a particular behavior in the environment. TyPol also provided an analysis of similarities (or dissimilarities) between organic compounds and their degradation products. Among the 24 degradation products that were inputted, 58% were found in the same cluster as their parents. The robustness of the method was tested and shown to be good. TyPol could help to predict the environmental behavior of a "new" compound (parent compound or degradation product) from its affiliation to one cluster, but also to select representative substances from a large data set in order to answer specific questions regarding their behavior in the environment. Copyright © 2014 Elsevier Ltd. All rights reserved.
Fuzzy logic controller optimization
Sepe, Jr., Raymond B; Miller, John Michael
2004-03-23
A method is provided for optimizing a rotating induction machine system fuzzy logic controller. The fuzzy logic controller has at least one input and at least one output. Each input accepts a machine system operating parameter. Each output produces at least one machine system control parameter. The fuzzy logic controller generates each output based on at least one input and on fuzzy logic decision parameters. Optimization begins by obtaining a set of data relating each control parameter to at least one operating parameter for each machine operating region. A model is constructed for each machine operating region based on the machine operating region data obtained. The fuzzy logic controller is simulated with at least one created model in a feedback loop from a fuzzy logic output to a fuzzy logic input. Fuzzy logic decision parameters are optimized based on the simulation.
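A minimal sketch of this kind of model-based tuning: a one-input fuzzy controller with triangular membership functions is simulated in a feedback loop around a toy first-order plant model, and a single fuzzy decision parameter (the membership width) is optimized against a tracking cost. The plant, rule base, and cost function are all hypothetical stand-ins for the induction-machine models described in the patent:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def controller(err, width):
    """One-input/one-output fuzzy controller; 'width' is the tuned decision parameter."""
    mu_neg = tri(err, -2 * width, -width, 0.0)
    mu_zero = tri(err, -width, 0.0, width)
    mu_pos = tri(err, 0.0, width, 2 * width)
    den = mu_neg + mu_zero + mu_pos
    if den == 0.0:                      # no rule fires outside the covered range
        return 0.0
    return (-1.0 * mu_neg + 1.0 * mu_pos) / den   # centroid of singletons -1, 0, +1

def cost(width):
    """Closed-loop tracking cost on a toy first-order plant (stand-in model)."""
    x, dt, c = 0.0, 0.05, 0.0
    for _ in range(400):
        u = controller(1.0 - x, width)  # track setpoint 1.0
        x += dt * (-x + 2.0 * u)
        c += dt * (1.0 - x) ** 2
    return c

widths = np.linspace(0.2, 3.0, 15)
best = widths[np.argmin([cost(w) for w in widths])]
print(best)
```

The simulated cost has an interior optimum: too narrow a width leaves large errors outside the covered range (no rule fires), while too wide a width lowers the effective gain and increases the steady tracking error.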
Bizios, Dimitrios; Heijl, Anders; Hougaard, Jesper Leth; Bengtsson, Boel
2010-02-01
To compare the performance of two machine learning classifiers (MLCs), artificial neural networks (ANNs) and support vector machines (SVMs), with input based on retinal nerve fibre layer thickness (RNFLT) measurements by optical coherence tomography (OCT), on the diagnosis of glaucoma, and to assess the effects of different input parameters. We analysed Stratus OCT data from 90 healthy persons and 62 glaucoma patients. Performance of the MLCs was compared using conventional OCT RNFLT parameters plus novel parameters such as minimum RNFLT values, the 10th and 90th percentiles of measured RNFLT, and transformations of A-scan measurements. For each input parameter and MLC, the area under the receiver operating characteristic curve (AROC) was calculated. There were no statistically significant differences between ANNs and SVMs. The best AROCs for both the ANN (0.982, 95% CI: 0.966-0.999) and the SVM (0.989, 95% CI: 0.979-1.0) were based on input of transformed A-scan measurements. Our SVM trained on this input performed better than ANNs or SVMs trained on any of the single RNFLT parameters (p ≤ 0.038). ANNs and SVMs trained on minimum thickness values and on the 10th and 90th percentiles performed at least as well as ANNs and SVMs with input based on the conventional RNFLT parameters. No differences between ANN and SVM were observed in this study. Both MLCs performed very well, with similar diagnostic performance. Input parameters have a larger impact on diagnostic performance than the type of machine classifier. Our results suggest that parameters based on transformed A-scan thickness measurements of the RNFL processed by machine classifiers can improve OCT-based glaucoma diagnosis.
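The AROC comparison at the heart of this study can be sketched with synthetic data. The thickness values below are invented for illustration, and the AUC is computed via the Mann-Whitney rank identity rather than any OCT-specific tooling:

```python
import numpy as np

rng = np.random.default_rng(1)

def auroc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Synthetic RNFL thickness: glaucomatous eyes thinner on average (invented numbers)
healthy = rng.normal(100, 10, 90)     # microns, 90 healthy subjects
glaucoma = rng.normal(75, 12, 62)     # 62 patients
scores = np.concatenate([-healthy, -glaucoma])    # thinner -> higher risk score
labels = np.concatenate([np.zeros(90), np.ones(62)]).astype(int)
auc = auroc(scores, labels)
print(round(auc, 3))
```

A comparison of input parameters then amounts to computing this AUC for each candidate feature (or transformed feature set) and comparing the areas.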
An improved state-parameter analysis of ecosystem models using data assimilation
Chen, M.; Liu, S.; Tieszen, L.L.; Hollinger, D.Y.
2008-01-01
Much of the effort spent in developing data assimilation methods for carbon dynamics analysis has focused on estimating optimal values for either model parameters or state variables. The main weakness of estimating parameter values alone (i.e., without considering state variables) is that all errors from input, output, and model structure are attributed to model parameter uncertainties. On the other hand, the accuracy of estimating state variables may be lowered if the temporal evolution of parameter values is not incorporated. This research develops a smoothed ensemble Kalman filter (SEnKF) by combining the ensemble Kalman filter with a kernel smoothing technique. The SEnKF has the following characteristics: (1) it estimates model states and parameters simultaneously by concatenating unknown parameters and state variables into a joint state vector; (2) it mitigates dramatic, sudden changes of parameter values during parameter sampling and evolution, and controls the narrowing of parameter variance (which results in filter divergence) by adjusting the smoothing factor in the kernel smoothing algorithm; (3) it assimilates data into the model recursively and thus detects possible time variation of parameters; and (4) it properly addresses the various sources of uncertainty stemming from input, output, and parameter uncertainties. The SEnKF is tested by assimilating observed fluxes of carbon dioxide and environmental driving factor data from an AmeriFlux forest station located near Howland, Maine, USA, into a partition eddy flux model. Our analysis demonstrates that model parameters, such as light use efficiency, respiration coefficients, minimum and optimum temperatures for photosynthetic activity, and others, are highly constrained by eddy flux data at daily-to-seasonal time scales. The SEnKF stabilizes parameter values quickly regardless of the initial values of the parameters. Potential ecosystem light use efficiency demonstrates a strong seasonality.
Results show that the simultaneous parameter estimation procedure significantly improves model predictions. Results also show that the SEnKF can dramatically reduce the variance in state variables stemming from the uncertainty of parameters and driving variables. The SEnKF is a robust and effective algorithm in evaluating and developing ecosystem models and in improving the understanding and quantification of carbon cycle parameters and processes. © 2008 Elsevier B.V.
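The joint state-parameter idea of the SEnKF can be sketched numerically with a scalar AR(1) "carbon pool" standing in for the partition eddy flux model; the dynamics, noise levels, and smoothing factor below are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "ecosystem": AR(1) carbon pool with unknown decay parameter a_true
a_true, q, r, T, N = 0.8, 0.05, 0.1, 200, 100
x_true, obs = 1.0, []
for _ in range(T):
    x_true = a_true * x_true + 1.0 + rng.normal(0, q)   # +1.0: constant input flux
    obs.append(x_true + rng.normal(0, r))

h = 0.98                                # kernel smoothing (shrinkage) factor
X = np.column_stack([rng.normal(1, 1, N), rng.uniform(0.2, 1.0, N)])  # [state, param]
for y in obs:
    # forecast: each ensemble member propagates with its own parameter value
    X[:, 0] = X[:, 1] * X[:, 0] + 1.0 + rng.normal(0, q, N)
    # kernel smoothing: shrink parameters toward the mean, then re-disperse slightly
    X[:, 1] = h * X[:, 1] + (1 - h) * X[:, 1].mean() + rng.normal(0, 0.01, N)
    # EnKF analysis on the joint state-parameter vector (perturbed observations)
    C = np.cov(X.T)
    K = C[:, 0] / (C[0, 0] + r**2)      # gain for a scalar observation of the state
    X += np.outer(y + rng.normal(0, r, N) - X[:, 0], K)
a_hat = X[:, 1].mean()
print(round(a_hat, 3))
```

The smoothing factor h plays the role described in the abstract: it prevents both sudden parameter jumps and collapse of the parameter ensemble variance, which would otherwise cause filter divergence.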
NASA Astrophysics Data System (ADS)
Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke
2017-04-01
Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input has been represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied in an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty does have a considerable effect on the model predictions and parameter distributions. Additionally, our approach also provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. It can be concluded from our approach that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.
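The multiplier idea can be sketched in a few lines: a single multiplier scales an uncertain distributed recharge input to a toy linear head model, and its posterior is sampled with plain random-walk Metropolis (standing in for DREAM, which is an adaptive multi-chain variant, and with the toy model standing in for MODFLOW). All numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "distributed model": head at cell i responds linearly to recharge r_i
r = rng.uniform(0.5, 1.5, 20)                 # reported recharge (uncertain)
m_true = 1.3                                  # true multiplier on recharge
heads = 10.0 + 2.0 * m_true * r + rng.normal(0, 0.2, 20)

def log_post(m):
    """Gaussian likelihood times an N(1, 0.25) prior on the recharge multiplier."""
    resid = heads - (10.0 + 2.0 * m * r)
    return -0.5 * np.sum(resid**2) / 0.2**2 - 0.5 * (m - 1.0)**2 / 0.25**2

chain, m = [], 1.0
lp = log_post(m)
for _ in range(5000):                          # random-walk Metropolis sampler
    prop = m + rng.normal(0, 0.05)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        m, lp = prop, lp_prop
    chain.append(m)
posterior = np.array(chain[1000:])             # discard burn-in
print(posterior.mean(), posterior.std())
```

Representing a spatially distributed input by one (or a few) multipliers keeps the dimension of the estimation problem manageable, which is the over-parameterization concern the abstract raises.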
Fire Risk Assessment of Some Indian Coals Using Radial Basis Function (RBF) Technique
NASA Astrophysics Data System (ADS)
Nimaje, Devidas; Tripathy, Debi Prasad
2017-04-01
Fires, whether surface or underground, pose serious safety and environmental problems in the global coal mining industry. They cause huge losses of coal through burning, loss of life, sterilization of coal reserves, and environmental pollution. Most instances of coal mine fires worldwide are mainly due to spontaneous combustion. Hence, attention must be paid to appropriate measures to prevent the occurrence and spread of fire. In this paper, to evaluate the different properties of coals for fire risk assessment, forty-nine in situ coal samples were collected from major coalfields of India. Intrinsic properties, viz. proximate and ultimate analysis, and susceptibility indices, namely crossing point temperature, flammability temperature, Olpinski index, and the wet oxidation potential method, were determined for the Indian coals to ascertain their liability to spontaneous combustion. Statistical regression analysis showed that the parameters of ultimate analysis correlate significantly with all investigated susceptibility indices, compared to the parameters of proximate analysis. The best-correlated parameters (ultimate analysis) were used as inputs to the radial basis function network model. The model revealed that the Olpinski index can be used as a reliable method to assess the liability of Indian coals to spontaneous combustion.
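The RBF modelling step can be sketched with invented stand-ins for the ultimate-analysis inputs and the susceptibility index: Gaussian units at fixed centres feed a linear output layer solved by least squares:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical data: two scaled ultimate-analysis inputs -> susceptibility index
X = rng.uniform(0, 1, (49, 2))                  # e.g. scaled C and H content
y = np.sin(3 * X[:, 0]) + X[:, 1]**2            # stand-in target (Olpinski-like)

# RBF network: Gaussian units at fixed centres, linear output layer
centres = X[rng.choice(49, 10, replace=False)]  # centres picked from the data
sigma = 0.4

def design(X):
    """Matrix of Gaussian RBF activations for each sample/centre pair."""
    d2 = ((X[:, None, :] - centres[None, :, :])**2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

Phi = np.column_stack([design(X), np.ones(49)])  # add a bias column
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # linear output weights
pred = Phi @ w
rmse = np.sqrt(np.mean((pred - y)**2))
print(round(rmse, 4))
```

In practice the inputs would be the best-correlated ultimate-analysis parameters and the target one of the measured susceptibility indices; the in-sample RMSE here only illustrates the fitting mechanics.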
Yang, Jian-Feng; Zhao, Zhen-Hua; Zhang, Yu; Zhao, Li; Yang, Li-Ming; Zhang, Min-Ming; Wang, Bo-Yin; Wang, Ting; Lu, Bao-Chun
2016-04-07
To investigate the feasibility of a dual-input two-compartment tracer kinetic model for evaluating tumorous microvascular properties in advanced hepatocellular carcinoma (HCC). From January 2014 to April 2015, we prospectively measured and analyzed pharmacokinetic parameters [transfer constant (Ktrans), plasma flow (Fp), permeability-surface area product (PS), efflux rate constant (kep), extravascular extracellular space volume ratio (ve), blood plasma volume ratio (vp), and hepatic perfusion index (HPI)] using dual-input two-compartment tracer kinetic models [a dual-input extended Tofts model and a dual-input two-compartment exchange model (2CXM)] in 28 consecutive HCC patients. The well-known consensus that HCC is a hypervascular tumor supplied by the hepatic artery and the portal vein was used as a reference standard. A paired Student's t-test and a nonparametric paired Wilcoxon rank sum test were used to compare the equivalent pharmacokinetic parameters derived from the two models, and Pearson correlation analysis was applied to observe the correlations among all equivalent parameters. Tumor size and pharmacokinetic parameters were tested by Pearson correlation analysis, while correlations among stage, tumor size, and all pharmacokinetic parameters were assessed by Spearman correlation analysis. The Fp value was greater than the PS value (Fp = 1.07 mL/mL per minute, PS = 0.19 mL/mL per minute) in the dual-input 2CXM; HPI was 0.66 and 0.63 in the dual-input extended Tofts model and the dual-input 2CXM, respectively. There were no significant differences in kep, vp, or HPI between the dual-input extended Tofts model and the dual-input 2CXM (P = 0.524, 0.569, and 0.622, respectively).
All equivalent pharmacokinetic parameters, except for ve, were correlated between the two dual-input two-compartment pharmacokinetic models; both Fp and PS in the dual-input 2CXM were correlated with Ktrans derived from the dual-input extended Tofts model (P = 0.002, r = 0.566; P = 0.002, r = 0.570); kep, vp, and HPI between the two kinetic models were positively correlated (P = 0.001, r = 0.594; P = 0.0001, r = 0.686; P = 0.04, r = 0.391, respectively). In the dual-input extended Tofts model, ve was significantly less than that in the dual-input 2CXM (P = 0.004), and the ve estimates from the two tracer kinetic models were not significantly correlated (P = 0.156, r = 0.276). Neither tumor size nor tumor stage was significantly correlated with any of the pharmacokinetic parameters obtained from the two models (P > 0.05). A dual-input two-compartment pharmacokinetic model (a dual-input extended Tofts model or a dual-input 2CXM) can be used to assess microvascular physiopathological properties before the treatment of advanced HCC. The dual-input extended Tofts model may be more stable in measuring ve; however, the dual-input 2CXM may be more detailed and accurate in measuring microvascular permeability.
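The forward form of the dual-input extended Tofts model can be sketched numerically: the tissue curve is a vp-weighted copy of the dual (arterial plus portal-venous) input plus a Ktrans-scaled convolution with an exponential kep kernel. The input functions and parameter values below are illustrative, not patient data:

```python
import numpy as np

t = np.arange(0, 300, 1.0)                     # time (s), 1-s sampling
dt = 1.0

def gamma_variate(t, t0, alpha, beta):
    """Simple gamma-variate bolus shape, a common stand-in for input functions."""
    tt = np.clip(t - t0, 0, None)
    return (tt**alpha) * np.exp(-tt / beta)

ca = gamma_variate(t, 10, 3.0, 6.0)            # arterial input (earlier, sharper)
ca /= ca.max()
cv = gamma_variate(t, 25, 3.0, 12.0)           # portal-venous input (later, broader)
cv /= cv.max()

# Dual-input extended Tofts model; parameter values are illustrative only
hpi, ktrans, kep, vp = 0.66, 0.003, 0.01, 0.05  # per-second rates
cp = hpi * ca + (1 - hpi) * cv                  # dual blood-supply plasma input
kernel = np.exp(-kep * t)
ct = vp * cp + ktrans * np.convolve(cp, kernel)[:len(t)] * dt
print(round(ct.max(), 4))
```

Fitting such a forward model to measured tissue curves (by nonlinear least squares) is what yields the Ktrans, kep, vp, ve, and HPI estimates compared in the study; the 2CXM differs in using a two-compartment exchange kernel instead.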
NASA Astrophysics Data System (ADS)
Arhonditsis, George B.; Papantou, Dimitra; Zhang, Weitao; Perhar, Gurbir; Massos, Evangelia; Shi, Molu
2008-09-01
Aquatic biogeochemical models have been an indispensable tool for addressing pressing environmental issues, e.g., understanding oceanic response to climate change, elucidation of the interplay between plankton dynamics and atmospheric CO 2 levels, and examination of alternative management schemes for eutrophication control. Their ability to form the scientific basis for environmental management decisions can be undermined by the underlying structural and parametric uncertainty. In this study, we outline how we can attain realistic predictive links between management actions and ecosystem response through a probabilistic framework that accommodates rigorous uncertainty analysis of a variety of error sources, i.e., measurement error, parameter uncertainty, discrepancy between model and natural system. Because model uncertainty analysis essentially aims to quantify the joint probability distribution of model parameters and to make inference about this distribution, we believe that the iterative nature of Bayes' Theorem is a logical means to incorporate existing knowledge and update the joint distribution as new information becomes available. The statistical methodology begins with the characterization of parameter uncertainty in the form of probability distributions, then water quality data are used to update the distributions, and yield posterior parameter estimates along with predictive uncertainty bounds. Our illustration is based on a six state variable (nitrate, ammonium, dissolved organic nitrogen, phytoplankton, zooplankton, and bacteria) ecological model developed for gaining insight into the mechanisms that drive plankton dynamics in a coastal embayment; the Gulf of Gera, Island of Lesvos, Greece. The lack of analytical expressions for the posterior parameter distributions was overcome using Markov chain Monte Carlo simulations; a convenient way to obtain representative samples of parameter values. 
The Bayesian calibration resulted in realistic reproduction of the key temporal patterns of the system, offered insights into the degree of information the data contain about model inputs, and also allowed the quantification of the dependence structure among the parameter estimates. Finally, our study uses two synthetic datasets to examine the ability of the updated model to provide estimates of predictive uncertainty for water quality variables of environmental management interest.
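The calibration machinery described here (MCMC sampling of a parameter posterior, then predictive uncertainty bounds) can be sketched on a deliberately tiny stand-in: a one-parameter exponential plankton-growth model with lognormal observation error, rather than the six-state-variable model of the study:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy plankton model: P(t) = P0 * exp(mu * t), observed with lognormal error
t = np.linspace(0, 10, 15)
mu_true = 0.25
obs = 1.0 * np.exp(mu_true * t) * rng.lognormal(0, 0.1, t.size)

def log_post(mu):
    """Log-posterior: Gaussian errors on log-scale plus an N(0.2, 0.1) prior."""
    if mu <= 0:
        return -np.inf
    resid = np.log(obs) - mu * t
    return -0.5 * np.sum(resid**2) / 0.1**2 - 0.5 * (mu - 0.2)**2 / 0.1**2

mu, lp, samples = 0.2, log_post(0.2), []
for _ in range(4000):                           # Metropolis updates of the posterior
    prop = mu + rng.normal(0, 0.01)
    lpp = log_post(prop)
    if np.log(rng.uniform()) < lpp - lp:
        mu, lp = prop, lpp
    samples.append(mu)
post = np.array(samples[500:])                  # discard burn-in

# Predictive uncertainty band at t = 12 from the posterior sample
pred = np.exp(post * 12) * rng.lognormal(0, 0.1, post.size)
lo, hi = np.percentile(pred, [2.5, 97.5])
print(post.mean(), lo, hi)
```

The same ingredients scale up to the six-state-variable case: a prior per parameter, a likelihood from the water quality data, and posterior draws propagated through the model to get predictive bounds.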
NASA Astrophysics Data System (ADS)
Sawicka, K.; Breuer, L.; Houska, T.; Santabarbara Ruiz, I.; Heuvelink, G. B. M.
2016-12-01
Computer models have become a crucial tool in engineering and the environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Advances in uncertainty propagation analysis and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition as universally applicable, including for case studies with spatial models and spatial model inputs. Owing to the growing popularity and applicability of the open-source R programming language, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining uncertainty propagation from input data and model parameters, via the environmental model, to model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo techniques, as well as several uncertainty visualization functions. Here we demonstrate that the 'spup' package is an effective and easy-to-use tool that can be applied even in a very complex case study, and that it can be used in multi-disciplinary research and model-based decision support. As an example, we use the ecological LandscapeDNDC model to analyse the propagation of uncertainties associated with spatial variability of the model driving forces such as rainfall, nitrogen deposition, and fertilizer inputs. The uncertainty propagation is analysed for the prediction of N2O and CO2 emissions for a low-mountainous, agriculturally developed catchment in Germany. The study tests the effect of spatial correlations on spatially aggregated model outputs, and could serve as advice for developing best management practices and model improvement strategies.
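The spup package is R-based, but the core Monte Carlo propagation idea it implements can be sketched in a few lines of Python. Here an uncertain rainfall field drives a toy linear emission model, and a simple exchangeable correlation between cells stands in for spup's spatial correlation models (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(6)

n_cells, n_sim = 100, 2000
mean_rain, sd_rain = 800.0, 80.0          # mm/yr per cell, hypothetical

def simulate(rho):
    """MC propagation with exchangeable spatial correlation rho between cells."""
    common = rng.normal(0, 1, (n_sim, 1))             # shared error component
    local = rng.normal(0, 1, (n_sim, n_cells))        # cell-specific component
    rain = mean_rain + sd_rain * (np.sqrt(rho) * common + np.sqrt(1 - rho) * local)
    emission = 0.002 * rain                # toy linear emission model per cell
    return emission.sum(axis=1)            # spatially aggregated output

sd_uncorr = simulate(0.0).std()
sd_corr = simulate(0.9).std()
print(round(sd_corr / sd_uncorr, 2))
```

This reproduces the qualitative effect the study tests: spatially correlated input errors do not average out under aggregation, so the uncertainty of the catchment total is much larger than under independence.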
Parametric analysis of parameters for electrical-load forecasting using artificial neural networks
NASA Astrophysics Data System (ADS)
Gerber, William J.; Gonzalez, Avelino J.; Georgiopoulos, Michael
1997-04-01
Accurate total system electrical load forecasting is a necessary part of resource management for power generation companies. The better the hourly load forecast, the more closely the power generation assets of the company can be configured to minimize the cost. Automating this process is a profitable goal and neural networks should provide an excellent means of doing the automation. However, prior to developing such a system, the optimal set of input parameters must be determined. The approach of this research was to determine what those inputs should be through a parametric study of potentially good inputs. Input parameters tested were ambient temperature, total electrical load, the day of the week, humidity, dew point temperature, daylight savings time, length of daylight, season, forecast light index and forecast wind velocity. For testing, a limited number of temperatures and total electrical loads were used as a basic reference input parameter set. Most parameters showed some forecasting improvement when added individually to the basic parameter set. Significantly, major improvements were exhibited with the day of the week, dew point temperatures, additional temperatures and loads, forecast light index and forecast wind velocity.
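The parametric approach — add a candidate input to a basic reference set and check the forecasting improvement — can be sketched with a synthetic load series and an ordinary least-squares "forecaster" in place of the neural network; all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic hourly-load data: temperature plus a day-of-week effect (toy numbers)
n = 24 * 7 * 8                                  # 8 weeks of hourly samples
temp = 20 + 8 * np.sin(np.arange(n) * 2 * np.pi / 24) + rng.normal(0, 1, n)
dow = (np.arange(n) // 24) % 7
load = 500 + 6 * temp - 40 * (dow >= 5) + rng.normal(0, 5, n)   # weekend drop

def fit_rmse(X, y):
    """Least-squares fit with intercept; in-sample RMSE for a given input set."""
    X1 = np.column_stack([X, np.ones(len(y))])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return np.sqrt(np.mean((X1 @ beta - y)**2))

base = fit_rmse(temp[:, None], load)                      # reference: temperature only
extended = fit_rmse(np.column_stack([temp, dow >= 5]), load)  # + day-of-week input
print(round(base, 1), round(extended, 1))
```

Repeating this for each candidate (humidity, dew point, daylight length, and so on) ranks the inputs by the error reduction they contribute over the basic reference set.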
Techniques to Access Databases and Integrate Data for Hydrologic Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.
2009-06-17
This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools.
The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and retrieve the required data, and their ability to integrate the data into environmental models using the FRAMES environment.
NASA Technical Reports Server (NTRS)
Hughes, D. L.; Ray, R. J.; Walton, J. T.
1985-01-01
The calculated value of net thrust of an aircraft powered by a General Electric F404-GE-400 afterburning turbofan engine was evaluated for its sensitivity to various input parameters. The effects of a 1.0-percent change in each input parameter on the calculated value of net thrust with two calculation methods are compared. This paper presents the results of these comparisons and also gives the estimated accuracy of the overall net thrust calculation as determined from the influence coefficients and estimated parameter measurement accuracies.
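The influence-coefficient calculation can be sketched as follows. The thrust relation, nominal values, and measurement accuracies below are generic placeholders, not the F404-GE-400 equations or data:

```python
import numpy as np

# Toy net-thrust relation (illustrative, not the actual engine equations):
# Fn = mdot * (Vj - V0) + (Pj - P0) * Aj
def net_thrust(mdot, vj, v0, pj, p0, aj):
    return mdot * (vj - v0) + (pj - p0) * aj

nominal = dict(mdot=70.0, vj=600.0, v0=250.0, pj=1.1e5, p0=1.0e5, aj=0.3)
f0 = net_thrust(**nominal)

# Influence coefficient: % change in thrust per 1% change in each input
ics = {name: (net_thrust(**dict(nominal, **{name: val * 1.01})) - f0) / f0 / 0.01
       for name, val in nominal.items()}
for name, ic in ics.items():
    print(f"{name}: {ic:+.2f}")

# Root-sum-square of (influence x measurement accuracy) -> overall accuracy (%)
accuracies = dict(mdot=0.5, vj=0.3, v0=0.2, pj=0.4, p0=0.4, aj=0.1)  # assumed %
overall = np.sqrt(sum((ics[n] * accuracies[n])**2 for n in accuracies))
print(round(overall, 2))
```

Combining influence coefficients with estimated measurement accuracies in this root-sum-square fashion is the standard way to roll up an overall calculation accuracy, which is the approach the paper describes.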
Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models
NASA Astrophysics Data System (ADS)
Rothenberger, Michael J.
This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. 
The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input-output measurements, and is the approach used in this dissertation. Research in the literature studies optimal current input shaping for high-order electrochemical battery models and focuses on offline laboratory cycling. While this body of research highlights improvements in identifiability through optimal input shaping, each optimal input is a function of nominal parameters, which creates a tautology. The parameter values must be known a priori to determine the optimal input for maximizing estimation speed and accuracy. The system identification literature presents multiple studies containing methods that avoid the challenges of this tautology, but these methods are absent from the battery parameter estimation domain. The gaps in the above literature are addressed in this dissertation through the following five novel and unique contributions. First, this dissertation optimizes the parameter identifiability of a thermal battery model, which Sergio Mendoza experimentally validates through a close collaboration with this dissertation's author. Second, this dissertation extends input-shaping optimization to a linear and nonlinear equivalent-circuit battery model and illustrates the substantial improvements in Fisher identifiability for a periodic optimal signal when compared against automotive benchmark cycles. Third, this dissertation presents an experimental validation study of the simulation work in the previous contribution. The estimation study shows that the automotive benchmark cycles either converge slower than the optimized cycle, or not at all for certain parameters. Fourth, this dissertation examines how automotive battery packs with additional power electronic components that dynamically route current to individual cells/modules can be used for parameter identifiability optimization. 
While the user and vehicle supervisory controller dictate the current demand for these packs, the optimized internal allocation of current still improves identifiability. Finally, this dissertation presents a robust Bayesian sequential input shaping optimization study to maximize the conditional Fisher information of the battery model parameters without prior knowledge of the nominal parameter set. This iterative algorithm only requires knowledge of the prior parameter distributions to converge to the optimal input trajectory.
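The Fisher-information comparison of input profiles described above can be sketched numerically. The snippet below is an illustrative toy, not the dissertation's models: it uses a hypothetical one-RC equivalent-circuit cell with assumed parameter values, finite-difference output sensitivities, and a D-optimality (determinant) metric to compare a constant-current profile against a square-wave profile.

```python
import math

C1 = 2000.0                      # hypothetical RC capacitance [F]

def simulate(current, r0, r1, c1=C1, dt=1.0):
    """Forward-Euler response of a 1-RC equivalent-circuit cell.
    Returns the terminal-voltage deviation from OCV at each step."""
    v1, out = 0.0, []
    for i in current:
        v1 += dt * (-v1 / (r1 * c1) + i / c1)
        out.append(-i * r0 - v1)
    return out

def fisher_info(current, theta, sigma=0.01, eps=1e-6):
    """2x2 Fisher information for theta = (r0, r1) from finite-difference
    output sensitivities, assuming i.i.d. Gaussian voltage noise."""
    base = simulate(current, *theta)
    sens = []
    for k in range(2):
        pert = list(theta)
        pert[k] += eps
        up = simulate(current, *pert)
        sens.append([(u - b) / eps for u, b in zip(up, base)])
    return [[sum(a * b for a, b in zip(sens[j], sens[k])) / sigma ** 2
             for k in range(2)] for j in range(2)]

det = lambda m: m[0][0] * m[1][1] - m[0][1] * m[1][0]
theta = (0.05, 0.03)                         # assumed R0, R1 [ohm]
const = [1.0] * 200                          # constant 1 A discharge
pulse = [1.0 if (k // 20) % 2 == 0 else -1.0 for k in range(200)]
fim_c, fim_p = fisher_info(const, theta), fisher_info(pulse, theta)
print(det(fim_c), det(fim_p))    # larger determinant = better identifiability
```

A D-optimal input-shaping study would maximize this determinant over a family of admissible current profiles rather than merely comparing two candidates.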
Bernard, Pierre-Yves; Benoît, Marc; Roger-Estrade, Jean; Plantureux, Sylvain
2016-12-01
The objectives of this comparison of two biophysical models of nitrogen losses were to evaluate, first, whether their results were similar and, second, whether both were equally practical for non-scientist users. Results were obtained with the crop model STICS and the environmental model AGRIFLUX, based on nitrogen loss simulations across a small groundwater catchment area (<1 km(2)) located in the Lorraine region of France. Both models simulate the influences of leaching and cropping systems on nitrogen losses in a relevant manner. The authors conclude that limiting the simulations to areas where soils with a greater risk of leaching cover a significant spatial extent would likely yield acceptable results, because those soils have more predictable nitrogen leaching. In addition, an environmental model such as AGRIFLUX, which requires fewer parameters and input variables, seems more user-friendly for agro-environmental assessment. The authors then discuss additional challenges for non-scientists, such as the lack of parameter optimization, which is essential both for accurately assessing nitrogen fluxes and for not limiting the diversity of uses of the simulated results. Despite current restrictions, with some improvement, biophysical models could become useful environmental assessment tools for non-scientists. Copyright © 2016 Elsevier Ltd. All rights reserved.
A new method for testing the scale-factor performance of fiber optical gyroscope
NASA Astrophysics Data System (ADS)
Zhao, Zhengxin; Yu, Haicheng; Li, Jing; Li, Chao; Shi, Haiyang; Zhang, Bingxin
2015-10-01
The fiber optic gyroscope (FOG) is a solid-state optical gyroscope with good environmental adaptability, which has been widely used in national defense, aviation, aerospace and other civilian areas. In some applications, a FOG experiences environmental conditions such as vacuum, radiation and vibration, and its scale-factor performance is an important accuracy indicator. However, the scale-factor performance of a FOG under these environmental conditions is difficult to test using conventional methods, because a turntable cannot operate under such conditions. Based on the phenomenon that the physical effect produced in a FOG by a sawtooth voltage signal under static conditions is consistent with the physical effect produced by a turntable in uniform rotation, a new method for testing the scale-factor performance of a FOG without a turntable is proposed in this paper. In this method, the scale-factor test system consists of an external operational amplifier circuit and a FOG in which the modulation signal and the Y-waveguide are disconnected. The external operational amplifier circuit superimposes the externally generated sawtooth voltage signal on the modulation signal of the FOG and applies the superimposed signal to the Y-waveguide of the FOG. The test system can produce different equivalent angular velocities by changing the period of the sawtooth signal during the scale-factor performance test. In this paper, the system model of a FOG superimposed with an externally generated sawtooth is analyzed, leading to the conclusion that the equivalent input angular velocity produced by the sawtooth voltage signal is consistent with the input angular velocity produced by the turntable.
The relationship between the equivalent angular velocity and parameters such as the sawtooth period is presented, and a correction method for the equivalent angular velocity is derived by analyzing the influence of each parameter error on the equivalent angular velocity. A comparative experiment between the proposed method and turntable calibration was conducted, and the scale-factor performance test results of the same FOG using the two methods were consistent. With the proposed method, the input angular velocity is the equivalent effect produced by a sawtooth voltage signal, and no turntable is needed to produce mechanical rotation, so the method can be used to test the performance of a FOG under environmental conditions in which a turntable cannot operate.
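The inverse relationship between the equivalent angular velocity and the sawtooth period can be illustrated with a simplified serrodyne model: equating the serrodyne phase difference 2π·τ/T (τ = nL/c is the fiber transit time) with the Sagnac phase (2πLD/λc)·Ω gives Ω ≈ nλ/(D·T). This derivation and the coil geometry, fiber index, and wavelength values below are illustrative assumptions, not the paper's hardware or exact model.

```python
# Illustrative constants (assumed, not from the paper):
n_fiber = 1.46        # effective refractive index of the fiber
diameter = 0.1        # coil diameter D [m]
wavelength = 1.55e-6  # source wavelength [m]

def equivalent_rate(period_s):
    """Equivalent rotation rate [rad/s] produced by a 2*pi sawtooth
    (serrodyne) phase ramp of the given period, under the simplified
    model Omega = n * lambda / (D * T)."""
    return n_fiber * wavelength / (diameter * period_s)

# halving the sawtooth period doubles the equivalent input rate
w1 = equivalent_rate(1e-3)
w2 = equivalent_rate(5e-4)
print(w1, w2)
```

Note the coil length cancels in this simplified form; a correction method like the paper's would additionally propagate errors in each of these parameters into the equivalent rate.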
Leistra, Minze; Wolters, André; van den Berg, Frederik
2008-06-01
Volatilisation of pesticides from crop canopies can be an important emission pathway. In addition to pesticide properties, competing processes in the canopy and environmental conditions play a part. A computation model is being developed to simulate the processes, but only some of the input data can be obtained directly from the literature. Three well-defined experiments on the volatilisation of radiolabelled parathion-methyl (as example compound) from plants in a wind tunnel system were simulated with the computation model. Missing parameter values were estimated by calibration against the experimental results. The resulting thickness of the air boundary layer, rate of plant penetration and rate of phototransformation were compared with a diversity of literature data. The sequence of importance of the canopy processes was: volatilisation > plant penetration > phototransformation. Computer simulation of wind tunnel experiments, with radiolabelled pesticide sprayed on plants, yields values for the rate coefficients of processes at the plant surface. As some input data for simulations are not required in the framework of registration procedures, attempts to estimate missing parameter values on the basis of divergent experimental results have to be continued. Copyright (c) 2008 Society of Chemical Industry.
Analysis of uncertainties in Monte Carlo simulated organ dose for chest CT
NASA Astrophysics Data System (ADS)
Muryn, John S.; Morgan, Ashraf G.; Segars, W. P.; Liptak, Chris L.; Dong, Frank F.; Primak, Andrew N.; Li, Xiang
2015-03-01
In Monte Carlo simulation of organ dose for a chest CT scan, many input parameters are required (e.g., half-value layer of the x-ray energy spectrum, effective beam width, and anatomical coverage of the scan). The input parameter values are provided by the manufacturer, measured experimentally, or determined based on typical clinical practices. The goal of this study was to assess the uncertainties in Monte Carlo simulated organ dose as a result of using input parameter values that deviate from the truth (clinical reality). Organ dose from a chest CT scan was simulated for a standard-size female phantom using a set of reference input parameter values (treated as the truth). To emulate the situation in which the input parameter values used by the researcher may deviate from the truth, additional simulations were performed in which errors were purposefully introduced into the input parameter values, the effects of which on organ dose per CTDIvol were analyzed. Our study showed that when errors in half-value layer were within ± 0.5 mm Al, the errors in organ dose per CTDIvol were less than 6%. Errors in effective beam width of up to 3 mm had negligible effect (< 2.5%) on organ dose. In contrast, when the assumed anatomical center of the patient deviated from the true anatomical center by 5 cm, organ dose errors of up to 20% were introduced. Lastly, when the assumed extra scan length was longer by 4 cm than the true value, dose errors of up to 160% were found. The results answer the important question of the level of accuracy to which each input parameter must be determined in order to obtain accurate organ dose results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bethel, W.
Building something which could be called "virtual reality" (VR) is something of a challenge, particularly when nobody really seems to agree on a definition of VR. The author wanted to combine scientific visualization with VR, resulting in an environment useful for assisting scientific research. He demonstrates the combination of VR and scientific visualization in a prototype application. The VR application constructed consists of a dataflow-based system for performing scientific visualization (AVS), extensions to the system to support VR input devices, and a numerical simulation ported into the dataflow environment. The VR system includes two inexpensive, off-the-shelf VR devices and some custom code. A working system was assembled with about two man-months of effort. The system allows the user to specify parameters for a chemical flooding simulation as well as some viewing parameters using VR input devices, and to view the output using VR output devices. In chemical flooding, there is a subsurface region that contains chemicals which are to be removed. Secondary oil recovery and environmental remediation are typical applications of chemical flooding. The process assumes one or more injection wells, and one or more production wells. Chemicals or water are pumped into the ground, mobilizing and displacing hydrocarbons or contaminants. The placement of the production and injection wells, and other parameters of the wells, are the most important variables in the simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprung, J.L.; Jow, H-N; Rollstin, J.A.
1990-12-01
Estimation of offsite accident consequences is the customary final step in a probabilistic assessment of the risks of severe nuclear reactor accidents. Recently, the Nuclear Regulatory Commission reassessed the risks of severe accidents at five US power reactors (NUREG-1150). Offsite accident consequences for NUREG-1150 source terms were estimated using the MELCOR Accident Consequence Code System (MACCS). Before these calculations were performed, most MACCS input parameters were reviewed, and for each parameter reviewed, a best-estimate value was recommended. This report presents the results of these reviews. Specifically, recommended values and the basis for their selection are presented for MACCS atmospheric and biospheric transport, emergency response, food pathway, and economic input parameters. Dose conversion factors and health effect parameters are not reviewed in this report. 134 refs., 15 figs., 110 tabs.
Westenbroek, Stephen M.; Doherty, John; Walker, John F.; Kelson, Victor A.; Hunt, Randall J.; Cera, Timothy B.
2012-01-01
The TSPROC (Time Series PROCessor) computer software uses a simple scripting language to process and analyze time series. It was developed primarily to assist in the calibration of environmental models. The software is designed to perform calculations on time-series data commonly associated with surface-water models, including calculation of flow volumes, transformation by means of basic arithmetic operations, and generation of seasonal and annual statistics and hydrologic indices. TSPROC can also be used to generate some of the key input files required to perform parameter optimization by means of the PEST (Parameter ESTimation) computer software. Through the use of TSPROC, the objective function for use in the model-calibration process can be focused on specific components of a hydrograph.
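The core operations TSPROC performs, integrating flow volumes and computing seasonal statistics over a time series, can be sketched in a few lines. The record below is synthetic and the function names are hypothetical illustrations, not TSPROC's actual scripting commands.

```python
from datetime import date, timedelta

# Synthetic daily streamflow record (hypothetical values, m^3/s):
# elevated flow for the first ~120 days of each year, base flow otherwise.
start = date(2020, 1, 1)
flows = [5.0 + 3.0 * ((d % 365) < 120) for d in range(730)]
series = [(start + timedelta(days=d), q) for d, q in enumerate(flows)]

def flow_volume(series, dt_seconds=86400):
    """Total flow volume [m^3]: rate times interval, summed over the record."""
    return sum(q * dt_seconds for _, q in series)

def seasonal_mean(series, months):
    """Mean flow over the calendar months in the given set."""
    vals = [q for t, q in series if t.month in months]
    return sum(vals) / len(vals)

print(flow_volume(series), seasonal_mean(series, {12, 1, 2}))
```

In a PEST workflow, statistics like these would be computed identically from observed and simulated series so the calibration objective compares like with like.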
Using model order tests to determine sensory inputs in a motion study
NASA Technical Reports Server (NTRS)
Repperger, D. W.; Junker, A. M.
1977-01-01
In the study of motion effects on tracking performance, a problem of interest is determining what sensory inputs a human uses in controlling the tracking task. In the approach presented here a simple canonical model (a PID, or proportional-integral-derivative, structure) is used to model the human's input-output time series. A study of significant changes in the reduction of the output error loss functional is conducted as different permutations of parameters are considered. Since this canonical model includes parameters which are related to inputs to the human (such as the error signal, its derivative, and its integral), the study of model order is equivalent to the study of which sensory inputs are being used by the tracker. The parameters which have the greatest effect on significantly reducing the loss function are obtained. In this manner the identification procedure converts the problem of testing for model order into the problem of determining sensory inputs.
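The model-order test described above can be mimicked with nested least-squares fits: each candidate sensory input (error, its derivative, its integral) adds a regressor, and the drop in the residual loss indicates whether the operator plausibly uses that input. The data below are synthetic, with a simulated operator who uses only the proportional and derivative channels.

```python
import math, random

random.seed(0)
dt = 0.1
t = [k * dt for k in range(300)]
e = [math.sin(0.7 * tk) for tk in t]                  # tracking error signal
de = [(e[k + 1] - e[k - 1]) / (2 * dt) for k in range(1, len(e) - 1)]
ie, acc = [], 0.0
for k in range(1, len(e) - 1):
    acc += e[k] * dt
    ie.append(acc)
ec = e[1:-1]
# simulated operator output: proportional + derivative channels + noise
u = [2.0 * ec[k] + 0.5 * de[k] + random.gauss(0.0, 0.05)
     for k in range(len(ec))]

def loss(cols, rhs):
    """Residual sum of squares of the least-squares fit (normal equations,
    solved by Gaussian elimination; fine for these small systems)."""
    m = len(cols)
    A = [[sum(ci * cj for ci, cj in zip(cols[i], cols[j])) for j in range(m)]
         for i in range(m)]
    b = [sum(ci * yi for ci, yi in zip(cols[i], rhs)) for i in range(m)]
    for i in range(m):
        for j in range(i + 1, m):
            f = A[j][i] / A[i][i]
            A[j] = [ajk - f * aik for ajk, aik in zip(A[j], A[i])]
            b[j] -= f * b[i]
    x = [0.0] * m
    for i in reversed(range(m)):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, m))) / A[i][i]
    r = [yi - sum(x[i] * cols[i][k] for i in range(m))
         for k, yi in enumerate(rhs)]
    return sum(v * v for v in r)

loss_p = loss([ec], u)               # proportional term only
loss_pd = loss([ec, de], u)          # + derivative term
loss_pid = loss([ec, de, ie], u)     # + integral term
print(loss_p, loss_pd, loss_pid)     # large drop at PD, little beyond
```

The derivative term produces a large loss reduction while the integral term does not, correctly flagging which sensory inputs the simulated operator used.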
NASA Astrophysics Data System (ADS)
Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Zachara, John M.
2017-05-01
Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study, we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multilayer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed input variables.
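The variance-based first-order index at the heart of such a method can be sketched with a pick-freeze Monte Carlo estimator. The three-input model below is a toy stand-in, not the Hanford flow-and-transport model.

```python
import random

random.seed(1)

def model(x1, x2, x3):
    """Toy response; stands in for the flow-and-transport simulator."""
    return 4.0 * x1 + 0.5 * x2 + 0.1 * x3 ** 2

N = 20000
A = [[random.random() for _ in range(3)] for _ in range(N)]
B = [[random.random() for _ in range(3)] for _ in range(N)]
yA = [model(*a) for a in A]
f0 = sum(yA) / N
varY = sum(y * y for y in yA) / N - f0 * f0

def first_order(i):
    """Pick-freeze estimate of S_i: keep x_i from sample A, redraw the
    remaining inputs from sample B, and take Cov(Y, Y_i) / Var(Y)."""
    yAB = [model(*[A[k][j] if j == i else B[k][j] for j in range(3)])
           for k in range(N)]
    cov = sum(ya * yb for ya, yb in zip(yA, yAB)) / N - f0 * (sum(yAB) / N)
    return cov / varY

S = [first_order(i) for i in range(3)]
print(S)   # the x1 index dominates
```

Grouping correlated inputs, as the hierarchical method does, amounts to computing such indices for blocks of inputs rather than individual ones, shrinking the number of model runs required.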
Bender, David A.; Asher, William E.; Zogorski, John S.
2003-01-01
This report documents LakeVOC, a model to estimate volatile organic compound (VOC) concentrations in lakes and reservoirs. LakeVOC represents the lake or reservoir as a two-layer system and estimates VOC concentrations in both the epilimnion and hypolimnion. The air-water flux of a VOC is characterized in LakeVOC in terms of the two-film model of air-water exchange. LakeVOC solves the system of coupled differential equations for the VOC concentration in the epilimnion, the VOC concentration in the hypolimnion, the total mass of the VOC in the lake, the volume of the epilimnion, and the volume of the hypolimnion. A series of nine simulations were conducted to verify LakeVOC representation of mixing, dilution, and gas exchange characteristics in a hypothetical lake, and two additional estimates of lake volume and MTBE concentrations were done in an actual reservoir under environmental conditions. These 11 simulations showed that LakeVOC correctly handled mixing, dilution, and gas exchange. The model also adequately estimated VOC concentrations within the epilimnion in an actual reservoir with daily input parameters. As the parameter-input time scale increased (from daily to weekly to monthly, for example), the differences between the measured-averaged concentrations and the model-estimated concentrations generally increased, especially for the hypolimnion. This may be because as the time scale is increased from daily to weekly to monthly, the averaging of model inputs may cause a loss of detail in the model estimates.
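The two-layer structure of a LakeVOC-style model can be sketched as a pair of coupled mass balances: a two-film volatilization flux removes VOC from the epilimnion (with near-zero air concentration) while an exchange flow couples the two layers. All coefficients below are illustrative assumptions, not LakeVOC defaults.

```python
# Two-box (epilimnion/hypolimnion) VOC mass balance, forward Euler.
dt = 3600.0                    # time step [s]
area = 1.0e6                   # lake surface area [m^2]
v_epi, v_hyp = 5.0e6, 1.0e7    # layer volumes [m^3]
kl = 2.0e-5                    # two-film air-water transfer velocity [m/s]
kex = 1.0                      # epilimnion-hypolimnion exchange flow [m^3/s]

c_epi = c_hyp = 10.0           # initial concentration [g/m^3]
m0 = c_epi * v_epi + c_hyp * v_hyp
for _ in range(24 * 30):       # 30 days of hourly steps
    volat = kl * area * c_epi          # loss to atmosphere [g/s]
    mix = kex * (c_hyp - c_epi)        # net inter-layer transport [g/s]
    c_epi += dt * (mix - volat) / v_epi
    c_hyp += dt * (-mix) / v_hyp
print(c_epi, c_hyp)   # surface layer vents quickly; bottom layer lags
```

The lag of the hypolimnion behind the epilimnion in this sketch mirrors the report's finding that coarse parameter-input time scales degrade hypolimnion estimates most.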
Modal Parameter Identification of a Flexible Arm System
NASA Technical Reports Server (NTRS)
Barrington, Jason; Lew, Jiann-Shiun; Korbieh, Edward; Wade, Montanez; Tantaris, Richard
1998-01-01
In this paper an experiment is designed for the modal parameter identification of a flexible arm system. This experiment uses a function generator to provide input signal and an oscilloscope to save input and output response data. For each vibrational mode, many sets of sine-wave inputs with frequencies close to the natural frequency of the arm system are used to excite the vibration of this mode. Then a least-squares technique is used to analyze the experimental input/output data to obtain the identified parameters for this mode. The identified results are compared with the analytical model obtained by applying finite element analysis.
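The least-squares step can be illustrated directly: the steady response to a sine input is fit with the linear model y = a·sin(ωt) + b·cos(ωt) + c, from which amplitude and phase follow. The drive frequency, amplitude, and noise level below are assumptions, not the flexible-arm data.

```python
import math, random

random.seed(2)
# steady response of a mode driven near resonance: y = A*sin(w*t + phi)
w = 2 * math.pi * 1.5            # drive frequency [rad/s] (assumed)
t = [0.01 * k for k in range(500)]
true_amp, true_phase = 0.8, 0.6  # the "unknowns" to recover
y = [true_amp * math.sin(w * tk + true_phase) + random.gauss(0.0, 0.02)
     for tk in t]

def lstsq(cols, rhs):
    """Solve the normal equations by Gaussian elimination (small systems)."""
    m = len(cols)
    A = [[sum(ci * cj for ci, cj in zip(cols[i], cols[j])) for j in range(m)]
         for i in range(m)]
    b = [sum(ci * yi for ci, yi in zip(cols[i], rhs)) for i in range(m)]
    for i in range(m):
        for j in range(i + 1, m):
            f = A[j][i] / A[i][i]
            A[j] = [ajk - f * aik for ajk, aik in zip(A[j], A[i])]
            b[j] -= f * b[i]
    x = [0.0] * m
    for i in reversed(range(m)):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, m))) / A[i][i]
    return x

# A*sin(wt + phi) = A*cos(phi)*sin(wt) + A*sin(phi)*cos(wt)
a, b_, c = lstsq([[math.sin(w * tk) for tk in t],
                  [math.cos(w * tk) for tk in t],
                  [1.0] * len(t)], y)
amp = math.hypot(a, b_)
phase = math.atan2(b_, a)
print(amp, phase)
```

Repeating this fit at several drive frequencies around each resonance, as the experiment does, yields the frequency-response points from which the modal parameters are identified.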
NASA Astrophysics Data System (ADS)
Nguyen, Tuyen Van; Cho, Woon-Seok; Kim, Hungsoo; Jung, Il Hyo; Kim, YongKuk; Chon, Tae-Soo
2014-03-01
Definition of ecological integrity based on community analysis has long been a critical issue in risk assessment for sustainable ecosystem management. In this work, two indices (the Shannon index and exergy) were selected for the analysis of community properties of the benthic macroinvertebrate community in streams in Korea. For this purpose, the means and variances of both indices were analyzed. The results revealed an additional scope of structural and functional properties in communities in response to environmental variability and anthropogenic disturbance. The combination of these two parameters (four indices) was feasible for identifying disturbance agents (e.g., industrial pollution or organic pollution) and specifying the states of communities. The four aforementioned parameters (means and variances of the Shannon index and exergy) were further used as input data in a self-organizing map for the characterization of water quality. Our results suggested that the Shannon index and exergy in combination could be utilized as a suitable reference system and would be an efficient tool for assessing the health of aquatic ecosystems exposed to environmental disturbances.
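The Shannon index used above is straightforward to compute, and an exergy-style index can be sketched as biomass weighted by taxon-specific quality factors. The beta weights below are placeholders, not the published exergy coefficients.

```python
import math

def shannon(abundances):
    """Shannon diversity H' = -sum(p_i * ln p_i)."""
    total = float(sum(abundances))
    ps = [a / total for a in abundances if a > 0]
    return -sum(p * math.log(p) for p in ps)

def exergy_index(biomass, beta):
    """Exergy-style index: biomass weighted by quality factors."""
    return sum(b * w for b, w in zip(biomass, beta))

even, skewed = [25, 25, 25, 25], [85, 5, 5, 5]   # hypothetical communities
beta = [1.0, 30.0, 50.0, 70.0]                   # placeholder quality factors
print(shannon(even), shannon(skewed))
print(exergy_index(even, beta), exergy_index(skewed, beta))
```

An even community maximizes the Shannon index at ln(S), while the exergy-style index responds instead to which taxa carry the biomass, which is why the two indices carry complementary information.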
Certification Testing Methodology for Composite Structure. Volume 2. Methodology Development
1986-10-01
parameter, sample size and fatigue test duration. The required inputs are: 1. Residual strength Weibull shape parameter (ALPR); 2. Fatigue life Weibull shape... The excerpted FORTRAN input routine reads these values interactively:
      WRITE(*,1)
    1 FORMAT(2X,'PLEASE INPUT STRENGTH ALPHA')
      READ(*,*) ALPR
      ALPRI = 1.0/ALPR
      WRITE(*,2)
    2 FORMAT(2X,'PLEASE INPUT LIFE ALPHA')
      READ(*,*) ALPL
      ALPLI = 1.0/ALPL
      WRITE(*,3)
    3 FORMAT(2X,'PLEASE INPUT SAMPLE SIZE')
      READ(*,*) N
      AN = N
      WRITE(*,4)
    4 FORMAT(2X,'PLEASE INPUT TEST DURATION')
      READ(*,*) T
      RALP = ALPL/ALPR
      ARGR = 1...
NASA Technical Reports Server (NTRS)
Kanning, G.
1975-01-01
A digital computer program written in FORTRAN is presented that implements the system identification theory for deterministic systems using input-output measurements. The user supplies programs simulating the mathematical model of the physical plant whose parameters are to be identified. The user may choose any one of three options. The first option allows for a complete model simulation for fixed input forcing functions. The second option identifies up to 36 parameters of the model from wind tunnel or flight measurements. The third option performs a sensitivity analysis for up to 36 parameters. The use of each option is illustrated with an example using input-output measurements for a helicopter rotor tested in a wind tunnel.
NASA Astrophysics Data System (ADS)
Villas Boas, M. D.; Olivera, F.; Azevedo, J. S.
2013-12-01
The evaluation of water quality through 'indexes' is widely used in environmental sciences. There are a number of methods available for calculating water quality indexes (WQI), usually based on site-specific parameters. In Brazil, WQI were initially used in the 1970s and were adapted from the methodology developed in association with the National Science Foundation (Brown et al., 1970). Specifically, the WQI 'IQA/SCQA', developed by the Institute of Water Management of Minas Gerais (IGAM), is estimated based on nine parameters: Temperature Range, Biochemical Oxygen Demand, Fecal Coliforms, Nitrate, Phosphate, Turbidity, Dissolved Oxygen, pH and Electrical Conductivity. The goal of this study was to develop a model for calculating the IQA/SCQA for the Piabanha River basin in the State of Rio de Janeiro (Brazil), using only the parameters measurable by a Multiparameter Water Quality Sonde (MWQS) available in the study area. These parameters are: Dissolved Oxygen, pH and Electrical Conductivity. The use of this model will make it possible to expand the water quality monitoring network in the basin without requiring significant additional resources, as water quality measurement with a MWQS is less expensive than the laboratory analysis required for the other parameters. The water quality data used in the study were obtained by the Geological Survey of Brazil in partnership with other public institutions (i.e. universities and environmental institutes) as part of the project "Integrated Studies in Experimental and Representative Watersheds". Two models were developed to correlate the values of the three measured parameters and the IQA/SCQA values calculated based on all nine parameters. The results were evaluated according to the following validation statistics: coefficient of determination (R2), Root Mean Square Error (RMSE), Akaike information criterion (AIC) and Final Prediction Error (FPE).
The first model was a linear stepwise regression between three independent variables (input) and one dependent variable (output) to establish an equation relating input to output. This model produced the following statistics: R2 = 0.85, RMSE = 6.19, AIC =0.65 and FPE = 1.93. The second model was a Feedforward Neural Network with one tan-sigmoid hidden layer (4 neurons) and one linear output layer. The neural network was trained based on a backpropagation algorithm using the input as predictors and the output as target. The following statistics were found: R2 = 0.95, RMSE = 4.86, AIC= 0.33 and FPE = 1.39. The second model produced a better fit than the first one, having a greater R2 and smaller RMSE, AIC and FPE. The best performance of the second method can be attributed to the fact that the water quality parameters often exhibit nonlinear behaviors and neural networks are capable of representing nonlinear relationship efficiently, while the regression is limited to linear relationships. References: Brown, R.M., McLelland, N.I., Deininger, R.A., Tozer, R.G.1970. A Water Quality Index-Do we dare? Water & Sewage Works, October: 339-343.
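The first model's validation statistics can be reproduced in miniature: fit an ordinary least-squares model to synthetic sonde-like data and compute R2 and RMSE. The generating coefficients and noise level are assumptions, and the mild nonlinearity in the pH term mimics why a purely linear fit leaves residual error that a neural network could capture.

```python
import math, random

random.seed(3)
n = 200
# synthetic sonde measurements (all values assumed)
do_ = [random.uniform(4.0, 10.0) for _ in range(n)]    # dissolved oxygen
ph = [random.uniform(6.0, 9.0) for _ in range(n)]
ec = [random.uniform(50.0, 500.0) for _ in range(n)]
# synthetic nine-parameter index with a mild pH nonlinearity and noise
iqa = [5.0 * d - 2.0 * abs(p - 7.0) - 0.01 * e + random.gauss(0.0, 2.0)
       for d, p, e in zip(do_, ph, ec)]

def fit_stats(cols, rhs):
    """OLS via normal equations (Gaussian elimination); returns (R2, RMSE)."""
    m = len(cols)
    A = [[sum(ci * cj for ci, cj in zip(cols[i], cols[j])) for j in range(m)]
         for i in range(m)]
    b = [sum(ci * yi for ci, yi in zip(cols[i], rhs)) for i in range(m)]
    for i in range(m):
        for j in range(i + 1, m):
            f = A[j][i] / A[i][i]
            A[j] = [ajk - f * aik for ajk, aik in zip(A[j], A[i])]
            b[j] -= f * b[i]
    x = [0.0] * m
    for i in reversed(range(m)):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, m))) / A[i][i]
    pred = [sum(x[i] * cols[i][k] for i in range(m)) for k in range(len(rhs))]
    mean = sum(rhs) / len(rhs)
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(rhs, pred))
    ss_tot = sum((yi - mean) ** 2 for yi in rhs)
    return 1.0 - ss_res / ss_tot, math.sqrt(ss_res / len(rhs))

r2, rmse = fit_stats([do_, ph, ec, [1.0] * n], iqa)
print(r2, rmse)
```

The linear fit cannot drive the residual below the sum of the noise floor and the unexplained nonlinear pH component, which is the gap a nonlinear model closes.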
NASA Astrophysics Data System (ADS)
Majumder, Himadri; Maity, Kalipada
2018-03-01
Shape memory alloy has a unique capability to return to its original shape after physical deformation upon application of heat, thermo-mechanical load, or magnetic load. In this experimental investigation, desirability function analysis (DFA), a multi-attribute decision-making method, was utilized to find the optimum input parameter setting during wire electrical discharge machining (WEDM) of Ni-Ti shape memory alloy. Four critical machining parameters, namely pulse on time (TON), pulse off time (TOFF), wire feed (WF) and wire tension (WT), were taken as machining inputs for the experiments to optimize three interconnected responses: cutting speed, kerf width, and surface roughness. The input parameter combination TON = 120 μs, TOFF = 55 μs, WF = 3 m/min and WT = 8 kg-F was found to produce the optimum results. The optimum process parameters for each desired response were also attained using Taguchi's signal-to-noise ratio. A confirmation test was conducted to validate the optimum machining parameter combination, which affirmed that DFA is a competent approach for selecting optimum input parameters for the desired response quality in WEDM of Ni-Ti shape memory alloy.
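Desirability function analysis reduces to mapping each response onto [0, 1] and combining the individual desirabilities with a geometric mean. The run data and desirability bounds below are hypothetical, not the paper's WEDM measurements.

```python
# Desirability function analysis (DFA) sketch with linear desirabilities.
def d_larger_better(y, lo, hi):   # e.g., cutting speed
    return min(1.0, max(0.0, (y - lo) / (hi - lo)))

def d_smaller_better(y, lo, hi):  # e.g., kerf width, surface roughness
    return min(1.0, max(0.0, (hi - y) / (hi - lo)))

def composite(ds):
    """Composite desirability: geometric mean of individual desirabilities."""
    p = 1.0
    for d in ds:
        p *= d
    return p ** (1.0 / len(ds))

runs = {  # hypothetical (cutting speed, kerf width, roughness) per setting
    "A": (2.1, 0.32, 2.6),
    "B": (2.8, 0.30, 2.2),
    "C": (1.9, 0.36, 3.1),
}
scores = {k: composite([d_larger_better(cs, 1.5, 3.0),
                        d_smaller_better(kf, 0.28, 0.40),
                        d_smaller_better(ra, 1.8, 3.5)])
          for k, (cs, kf, ra) in runs.items()}
best = max(scores, key=scores.get)
print(best, scores[best])
```

Because the geometric mean is zero whenever any single desirability is zero, DFA automatically rejects settings that are unacceptable on even one response, which a simple average would not.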
A Comparative Analysis of Life-Cycle Assessment Tools for ...
We identified and evaluated five life-cycle assessment tools that community decision makers can use to assess the environmental and economic impacts of end-of-life (EOL) materials management options. The tools evaluated in this report are waste reduction model (WARM), municipal solid waste-decision support tool (MSW-DST), solid waste optimization life-cycle framework (SWOLF), environmental assessment system for environmental technologies (EASETECH), and waste and resources assessment for the environment (WRATE). WARM, MSW-DST, and SWOLF were developed for US-specific materials management strategies, while WRATE and EASETECH were developed for European-specific conditions. All of the tools (with the exception of WARM) allow specification of a wide variety of parameters (e.g., materials composition and energy mix) to a varying degree, thus allowing users to model specific EOL materials management methods even outside the geographical domain they are originally intended for. The flexibility to accept user-specified input for a large number of parameters increases the level of complexity and the skill set needed for using these tools. The tools were evaluated and compared based on a series of criteria, including general tool features, the scope of the analysis (e.g., materials and processes included), and the impact categories analyzed (e.g., climate change, acidification). A series of scenarios representing materials management problems currently relevant to c
Comparison of U.S. Environmental Protection Agency’s CAP88 PC versions 3.0 and 4.0
Jannik, Tim; Farfan, Eduardo B.; Dixon, Ken; ...
2015-08-01
The Savannah River National Laboratory (SRNL), with the assistance of Georgia Regents University, completed a comparison of the U.S. Environmental Protection Agency's (EPA) environmental dosimetry code CAP88 PC V3.0 with the recently developed V4.0. CAP88 is a set of computer programs and databases used for estimation of dose and risk from radionuclide emissions to air. At the U.S. Department of Energy's Savannah River Site, CAP88 is used by SRNL for determining compliance with EPA's National Emission Standards for Hazardous Air Pollutants (40 CFR 61, Subpart H) regulations. Using standardized input parameters, individual runs were conducted for each radionuclide within its corresponding database. Some radioactive decay constants, human usage parameters, and dose coefficients changed between the two versions, directly causing a proportional change in the total effective dose; a comparison for key radionuclides (137Cs, 3H, 129I, 239Pu, and 90Sr) is provided. In general, the total effective doses will decrease for alpha/beta emitters because of reduced inhalation and ingestion rates in V4.0. However, for gamma emitters, such as 60Co and 137Cs, the total effective doses will increase because of changes EPA made in the external ground shine calculations.
NASA Astrophysics Data System (ADS)
Koliopoulos, T. C.; Koliopoulou, G.
2007-10-01
We present an input-output solution for simulating the behavior and optimized physical needs of an environmental system. The simulations and numerical analysis determined the boundary loads and areas that must interact for proper physical operation of a complicated environmental system. A case study was conducted to simulate the optimum balance of an environmental system based on an artificial-intelligence multi-interacting input-output numerical scheme. The numerical results focused on probable further environmental management techniques, with the objective of minimizing risks and associated environmental impacts to protect public health and the environment. Our conclusions make it possible to minimize the associated risks, focusing on probable emergency cases, to protect the surrounding anthropogenic or natural environment. Therefore, the lining magnitude could be determined for any useful associated technical works to support the environmental system under examination, taking into account its particular boundary necessities and constraints.
Meshkat, Nicolette; Anderson, Chris; Distefano, Joseph J
2011-09-01
When examining the structural identifiability properties of dynamic system models, some parameters can take on an infinite number of values and yet yield identical input-output data. These parameters and the model are then said to be unidentifiable. Finding identifiable combinations of parameters with which to reparameterize the model provides a means for quantitatively analyzing the model and computing solutions in terms of the combinations. In this paper, we revisit and explore the properties of an algorithm for finding identifiable parameter combinations using Gröbner Bases and prove useful theoretical properties of these parameter combinations. We prove a set of M algebraically independent identifiable parameter combinations can be found using this algorithm and that there exists a unique rational reparameterization of the input-output equations over these parameter combinations. We also demonstrate application of the procedure to a nonlinear biomodel. Copyright © 2011 Elsevier Inc. All rights reserved.
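The notion of an identifiable parameter combination can be seen numerically without Gröbner bases: for the textbook model y(t) = exp(-(k1 + k2)·t), only the sum k1 + k2 is identifiable, so any two parameter sets with the same sum produce identical input-output data. This standard example is an illustration, not the biomodel from the paper.

```python
import math

def output(k1, k2, times):
    """Input-output response y(t) = exp(-(k1 + k2) * t); only the
    combination k1 + k2 appears, so k1 and k2 are individually
    unidentifiable."""
    return [math.exp(-(k1 + k2) * t) for t in times]

times = [0.1 * k for k in range(50)]
y_a = output(0.25, 0.75, times)   # k1 + k2 = 1.0
y_b = output(0.50, 0.50, times)   # different parameters, same sum
mismatch = max(abs(a - b) for a, b in zip(y_a, y_b))
print(mismatch)   # 0.0: the two parameter sets are indistinguishable
```

Reparameterizing the model in terms of s = k1 + k2, as the algorithm does systematically for the identifiable combinations it finds, restores a unique, estimable parameterization.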
Optimization of a Thermodynamic Model Using a Dakota Toolbox Interface
NASA Astrophysics Data System (ADS)
Cyrus, J.; Jafarov, E. E.; Schaefer, K. M.; Wang, K.; Clow, G. D.; Piper, M.; Overeem, I.
2016-12-01
Scientific modeling of the Earth's physical processes is an important driver of modern science. The behavior of these scientific models is governed by a set of input parameters. It is crucial to choose accurate input parameters that also preserve the physics being simulated in the model. To simulate real-world processes effectively, the model's outputs must be close to the observed measurements. To achieve this, input parameters are tuned until the objective function, the error between the simulation outputs and the observed measurements, is minimized. We developed an auxiliary package which serves as a Python interface between the user and DAKOTA. The package makes it easy for the user to conduct parameter space explorations, parameter optimization, and sensitivity analysis while tracking and storing results in a database. The ability to perform these analyses via a Python library also allows users to combine analysis techniques, for example finding an approximate equilibrium with optimization and then immediately exploring the space around it. We used the interface to calibrate input parameters for a heat flow model commonly used in permafrost science. We performed optimization on the first three layers of the permafrost model, each with two thermal-conductivity input parameters. Results of the parameter space explorations indicate that the objective function does not always have a unique minimum. We found that gradient-based optimization works best for objective functions with a single minimum. Otherwise, we employ more advanced Dakota methods, such as genetic optimization and mesh-based convergence, to find the optimal input parameters. We were able to recover six initially unknown thermal conductivity parameters to within 2% of their known values.
Our initial tests indicate that the developed interface for the Dakota toolbox could be used to perform analysis and optimization on a `black box' scientific model more efficiently than using just Dakota.
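The tune-until-minimized loop described above can be sketched in a few lines. The one-layer heat model, the numbers, and the sweep strategy here are illustrative stand-ins, not the authors' permafrost model or the Dakota toolbox itself:

```python
def simulate(k, heat_flux=0.05, thickness=2.0):
    """Toy 'model': steady-state temperature drop across one layer, dT = q*L/k."""
    return heat_flux * thickness / k

def objective(k, observed_dt):
    """Objective function: squared error between simulated output and observation."""
    return (simulate(k) - observed_dt) ** 2

# Synthetic "observation" produced with a known conductivity.
k_true = 2.2
observed = simulate(k_true)

# Parameter-space exploration: sweep candidate conductivities and keep the
# one that minimizes the objective function.
candidates = [0.5 + 0.01 * i for i in range(400)]
k_best = min(candidates, key=lambda k: objective(k, observed))
```

Because this toy objective has a single minimum, the sweep recovers `k_true` to within the grid spacing; objectives with multiple minima are what push the authors toward genetic and mesh-based methods.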
Zhang, Z. Fred; White, Signe K.; Bonneville, Alain; ...
2014-12-31
Numerical simulations have been used for estimating CO2 injectivity, CO2 plume extent, pressure distribution, and Area of Review (AoR), and for the design of CO2 injection operations and monitoring networks for the FutureGen project. The simulation results are affected by uncertainties associated with numerous input parameters, the conceptual model, initial and boundary conditions, and factors related to injection operations. Furthermore, the uncertainties in the simulation results also vary in space and time. The key need is to identify those uncertainties that critically impact the simulation results and to quantify their impacts. We introduce an approach to determine the local sensitivity coefficient (LSC), defined as the response of the output in percent, to rank the importance of model inputs on outputs. The uncertainty of an input with higher sensitivity has a larger impact on the output. The LSC is scalable by the error of an input parameter. The composite sensitivity of an output to a subset of inputs can be calculated by summing the individual LSC values. We applied the local sensitivity coefficient method to the FutureGen 2.0 Site in Morgan County, Illinois, USA, to investigate the sensitivity of input parameters and initial conditions. The conceptual model for the site consists of 31 layers, each of which has a unique set of input parameters. The sensitivity of 11 parameters for each layer and 7 inputs as initial conditions was then investigated. For CO2 injectivity and plume size, about half of the uncertainty is due to only 4 or 5 of the 348 inputs, and three quarters of the uncertainty is due to about 15 of the inputs. The initial conditions and the properties of the injection layer and its neighbouring layers contribute most of the sensitivity. Overall, the simulation outputs are very sensitive to only a small fraction of the inputs.
However, the parameters that are important for controlling CO2 injectivity are not the same as those controlling the plume size. The three most sensitive inputs for injectivity were the horizontal permeability of Mt Simon 11 (the injection layer), the initial fracture-pressure gradient, and the residual aqueous saturation of Mt Simon 11, while those for the plume area were the initial salt concentration, the initial pressure, and the initial fracture-pressure gradient. The advantages of requiring only a single set of simulation results, scalability to the proper parameter errors, and easy calculation of composite sensitivities make this approach very cost-effective for estimating AoR uncertainty and for guiding cost-effective site characterization, injection well design, and monitoring network design for CO2 storage projects.
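A minimal sketch of a local sensitivity coefficient computed by finite differences, assuming a toy injectivity function; the real FutureGen model and its 348 inputs are not reproduced here:

```python
def local_sensitivity(model, params, name, rel_step=0.01):
    """LSC: percent change in model output per percent change in one input."""
    base = model(params)
    bumped = dict(params)
    bumped[name] *= 1.0 + rel_step
    return ((model(bumped) - base) / base) / rel_step

def toy_injectivity(p):
    """Stand-in output: injectivity rising with permeability, falling with viscosity."""
    return p["perm"] ** 0.8 / p["visc"]

params = {"perm": 100.0, "visc": 0.5}
s_perm = local_sensitivity(toy_injectivity, params, "perm")
s_visc = local_sensitivity(toy_injectivity, params, "visc")

# Composite sensitivity to this subset of inputs: sum of the individual LSCs.
composite = abs(s_perm) + abs(s_visc)
```

Only one base run plus one perturbed run per input is needed, which is the "single set of simulation results" economy the abstract emphasizes.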
NASA Technical Reports Server (NTRS)
Frost, W.; Long, B. H.; Turner, R. E.
1978-01-01
The guidelines are given in the form of design criteria relative to wind speed, wind shear, turbulence, wind direction, ice and snow loading, and other climatological parameters which include rain, hail, thermal effects, abrasive and corrosive effects, and humidity. This report is a presentation of design criteria in an engineering format which can be directly input to wind turbine generator design computations. Guidelines are also provided for developing specialized wind turbine generators or for designing wind turbine generators which are to be used in a special region of the United States.
NASA Astrophysics Data System (ADS)
Astroza, Rodrigo; Ebrahimian, Hamed; Li, Yong; Conte, Joel P.
2017-09-01
A methodology is proposed to update mechanics-based nonlinear finite element (FE) models of civil structures subjected to unknown input excitation. The approach jointly estimates the unknown time-invariant model parameters of a nonlinear FE model of the structure and the unknown time histories of the input excitations, using spatially sparse output response measurements recorded during an earthquake event. The unscented Kalman filter, which circumvents the computation of FE response sensitivities with respect to the unknown model parameters and input excitations by using a deterministic sampling approach, is employed as the estimation tool. The use of measurement data obtained from arrays of heterogeneous sensors, including accelerometers, displacement sensors, and strain gauges, is investigated. Based on the estimated FE model parameters and input excitations, the updated nonlinear FE model can be interrogated to detect, localize, classify, and assess damage in the structure. Numerically simulated response data of a three-dimensional 4-story 2-by-1 bay steel frame structure with six unknown model parameters, and of a three-dimensional 5-story 2-by-1 bay reinforced concrete frame structure with nine unknown model parameters, each subjected to unknown bi-directional horizontal seismic excitation, are used to illustrate and validate the proposed methodology. The results of the validation studies show the excellent performance and robustness of the proposed algorithm in jointly estimating the unknown FE model parameters and input excitations.
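The "deterministic sampling approach" the unscented Kalman filter uses in place of response sensitivities can be illustrated in the scalar case. This is a generic unscented transform sketch (kappa parameterization, covariance correction terms omitted), not the authors' FE model updating code:

```python
import math

def unscented_transform(mean, var, f, kappa=2.0):
    """Propagate a scalar Gaussian (mean, var) through a nonlinear function f
    using deterministically chosen sigma points instead of derivatives."""
    n = 1
    spread = math.sqrt((n + kappa) * var)
    sigmas = [mean, mean + spread, mean - spread]
    weights = [kappa / (n + kappa), 1 / (2 * (n + kappa)), 1 / (2 * (n + kappa))]
    ys = [f(s) for s in sigmas]
    y_mean = sum(w * y for w, y in zip(weights, ys))
    y_var = sum(w * (y - y_mean) ** 2 for w, y in zip(weights, ys))
    return y_mean, y_var

# For a linear map the transform is exact, and it needs no Jacobian of f,
# which is the property that lets a UKF handle a black-box FE model.
m_lin, v_lin = unscented_transform(3.0, 4.0, lambda x: 2.0 * x + 1.0)
m_sq, _ = unscented_transform(3.0, 4.0, lambda x: x * x)
```

In a full UKF the state vector is augmented with the unknown parameters and input excitations, and this transform replaces the linearization step of an extended Kalman filter.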
A mixed-unit input-output model for environmental life-cycle assessment and material flow analysis.
Hawkins, Troy; Hendrickson, Chris; Higgins, Cortney; Matthews, H Scott; Suh, Sangwon
2007-02-01
Materials flow analysis models have traditionally been used to track the production, use, and consumption of materials. Economic input-output modeling has been used for environmental systems analysis, with a primary benefit being the capability to estimate direct and indirect economic and environmental impacts across the entire supply chain of production in an economy. We combine these two types of models to create a mixed-unit input-output model that is able to better track economic transactions and material flows throughout the economy associated with changes in production. A 13-by-13 economic input-output direct requirements matrix developed by the U.S. Bureau of Economic Analysis is augmented with material flow data derived from those published by the U.S. Geological Survey to formulate illustrative mixed-unit input-output models for lead and cadmium. The resulting model provides the capabilities of both material flow and input-output models, with detailed material tracking through entire supply chains in response to any monetary or material demand. Examples of these models are provided along with a discussion of uncertainty and extensions to these models.
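The input-output core of such a model is the Leontief relation x = (I - A)^-1 y; augmenting it with a physical intensity row gives the mixed-unit behavior. The 2-sector matrix and the lead intensities below are invented for illustration only:

```python
# 2-sector direct requirements matrix A[i][j]: dollars of sector-i input
# needed per dollar of sector-j output (numbers invented).
A = [[0.1, 0.2],
     [0.3, 0.1]]
final_demand = [100.0, 50.0]

# Total output must satisfy x = A x + y, i.e. (I - A) x = y; solve the
# 2x2 system directly by Cramer's rule.
a, b = 1 - A[0][0], -A[0][1]
c, d = -A[1][0], 1 - A[1][1]
det = a * d - b * c
x = [(d * final_demand[0] - b * final_demand[1]) / det,
     (a * final_demand[1] - c * final_demand[0]) / det]

# Mixed-unit augmentation: a physical intensity row (kg of lead per dollar
# of output, invented) converts total output into material flows.
lead_intensity = [0.04, 0.01]
lead_flows = [m * xi for m, xi in zip(lead_intensity, x)]
```

Changing either a monetary demand entry or a material intensity propagates through the whole supply chain, which is the tracking capability the abstract describes.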
López-Sabirón, Ana M; Fleiger, Kristina; Schäfer, Stefan; Antoñanzas, Javier; Irazustabarrena, Ane; Aranda-Usón, Alfonso; Ferreira, Germán A
2015-08-01
Plasma torch gasification (PTG) is currently being researched as a technology for solid waste recovery. However, scientific studies evaluating its environmental implications with the life cycle assessment (LCA) methodology are lacking. Therefore, this work focuses on comparing the environmental effect of the emissions from combustion of syngas produced from refuse derived fuel (RDF) by PTG, used as an alternative fuel, with that of fossil fuel combustion in the cement industry. To obtain real data, a semi-industrial-scale pilot plant was used to perform experimental trials on RDF PTG. The results highlight that PTG for waste-to-energy recovery in the cement industry is environmentally feasible at its current state of development. A reduction in every impact category was found when a total or partial substitution of alternative fuel for conventional fuel in the calciner firing (60% of total thermal energy input) was performed. Furthermore, the results revealed that electrical energy consumption in PTG is also an important parameter from the LCA perspective. © The Author(s) 2015.
Library Construction from Subnanogram DNA for Pelagic Sea Water and Deep-Sea Sediments
Hirai, Miho; Nishi, Shinro; Tsuda, Miwako; Sunamura, Michinari; Takaki, Yoshihiro; Nunoura, Takuro
2017-01-01
Shotgun metagenomics is a low-bias technology for assessing environmental microbial diversity and function. However, the requirement for a sufficient amount of DNA and the contamination of inhibitors in environmental DNA lead to difficulties in constructing a shotgun metagenomic library. We herein examined metagenomic library construction from subnanogram amounts of input environmental DNA from subarctic surface water and deep-sea sediments using two library construction kits, the KAPA Hyper Prep Kit and the Nextera XT DNA Library Preparation Kit, with several modifications. The influence of chemical contaminants associated with these environmental DNA samples on library construction was also investigated. Overall, shotgun metagenomic libraries were constructed from 1 pg to 1 ng of input DNA using both kits without severe microbial contamination of the libraries. However, the libraries constructed from 1 pg of input DNA exhibited larger biases in GC content, k-mers, or small subunit (SSU) rRNA gene compositions than those constructed from 10 pg to 1 ng of DNA. The lower limit of input DNA for low-bias library construction in this study was 10 pg. Moreover, we revealed that technology-dependent biases (physical fragmentation and linker ligation vs. tagmentation) were larger than those due to the amount of input DNA. PMID:29187708
Environmental statistics and optimal regulation
NASA Astrophysics Data System (ADS)
Sivak, David; Thomson, Matt
2015-03-01
The precision with which an organism can detect its environment, and the timescale for and statistics of environmental change, will affect the suitability of different strategies for regulating protein levels in response to environmental inputs. We propose a general framework--here applied to the enzymatic regulation of metabolism in response to changing nutrient concentrations--to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and measurement apparatus, and the costs associated with enzyme production. We find: (i) relative convexity of enzyme expression cost and benefit influences the fitness of thresholding or graded responses; (ii) intermediate levels of measurement uncertainty call for a sophisticated Bayesian decision rule; and (iii) in dynamic contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones.
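A minimal sketch of the kind of Bayesian decision rule the framework predicts at intermediate measurement uncertainty: express the enzyme only when the posterior probability of a nutrient-rich environment makes the expected benefit exceed the production cost. The two-state environment, Gaussian noise model, and all costs are assumptions for illustration:

```python
import math

def posterior_rich(measurement, mu_rich, mu_poor, sigma, prior_rich=0.5):
    """P(nutrient-rich | noisy measurement) for a two-state environment with
    Gaussian measurement noise (Bayes' rule; normalization constants cancel)."""
    def likelihood(mu):
        return math.exp(-((measurement - mu) ** 2) / (2 * sigma ** 2))
    num = prior_rich * likelihood(mu_rich)
    return num / (num + (1 - prior_rich) * likelihood(mu_poor))

def express_enzyme(measurement, benefit=5.0, cost=2.0, **env):
    """Decision rule: express only when the expected benefit (realized only
    if nutrients are actually present) exceeds the fixed production cost."""
    return posterior_rich(measurement, **env) * benefit > cost

env = dict(mu_rich=10.0, mu_poor=0.0, sigma=2.0)
```

At high measurement certainty this collapses to a simple threshold on the measurement itself; the Bayesian machinery matters precisely at the intermediate uncertainty levels the abstract highlights.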
NASA Technical Reports Server (NTRS)
Glasser, M. E.; Rundel, R. D.
1978-01-01
A method for formulating these changes into the model input parameters using a preprocessor program run on a programmed data processor was implemented. The results indicate that any changes in the input parameters are small enough to be negligible in comparison to meteorological inputs and the limitations of the model, and that such changes will not substantially increase the number of meteorological cases for which the model will predict surface hydrogen chloride concentrations exceeding public safety levels.
Validating Performance Level Descriptors (PLDs) for the AP® Environmental Science Exam
ERIC Educational Resources Information Center
Reshetar, Rosemary; Kaliski, Pamela; Chajewski, Michael; Lionberger, Karen
2012-01-01
This presentation summarizes a pilot study conducted after the May 2011 administration of the AP Environmental Science Exam. The study used analytical methods based on scaled anchoring as input to a Performance Level Descriptor validation process that solicited systematic input from subject matter experts.
NASA Astrophysics Data System (ADS)
Coelho, Susana; Pérez-Ruzafa, Angel; Gamito, Sofia
2015-12-01
Benthic macroinvertebrate communities and environmental conditions were studied in two intermittently closed and open coastal lakes and lagoons (ICOLLs) located in the southern Algarve (Foz do Almargem and Salgados), with the purpose of evaluating the effects of organic pollution, originating mainly from wastewater discharges, and of the physical stress caused by the irregular opening of the lagoons. For most of the year, the lagoons were isolated from the sea, receiving freshwater inputs from small rivers and, in Salgados, also from the effluents of a wastewater plant. According to environmental and biotic conditions, Foz do Almargem presented a greater marine influence and a lower trophic state (mesotrophic) than Salgados (hypereutrophic). Benthic macroinvertebrate communities in the two lagoons were distinct, as were their relations with environmental parameters. Mollusca were the most abundant macroinvertebrates in Foz do Almargem, while Insecta, Oligochaeta and Crustacea were more relevant in Salgados. Corophium multisetosum occurred exclusively in Salgados stations and, like Chironomus sp., other Insecta and Oligochaeta, its densities were positively related to total phosphorus, clay content and chlorophyll a concentration in the sediment, chlorophyll a concentration in water, and total dissolved inorganic nitrogen. Abra segmentum, Cerastoderma glaucum, Peringia ulvae and Ecrobia ventrosa occurred only in Foz do Almargem, with lower values of the above-mentioned parameters. Both lagoons were dominated by deposit feeders and taxa tolerant to environmental stress, although in Salgados there was a greater occurrence of opportunistic taxa associated with pronounced unbalanced situations due to excess organic matter enrichment.
Extension of the PC version of VEPFIT with input and output routines running under Windows
NASA Astrophysics Data System (ADS)
Schut, H.; van Veen, A.
1995-01-01
The fitting program VEPFIT has been extended with applications running under the Microsoft Windows environment, facilitating the input and output of the VEPFIT fitting module. We have exploited the Microsoft Windows graphical user interface by making use of dialog windows, scrollbars, command buttons, etc. The user communicates with the program simply by clicking and dragging with the mouse; keyboard actions are kept to a minimum. Upon changing one or more input parameters, the results of the modeling of the S-parameter and Ps fractions versus positron implantation energy are updated and displayed. This action can be considered the first step in the fitting procedure, upon which the user can decide to further adapt the input parameters or to forward these parameters as initial values to the fitting routine. The modeling step has proven helpful for designing positron beam experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamp, F.; Brueningk, S.C.; Wilkens, J.J.
Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation, and on the dose per fraction. The needed biological parameters, as well as their dependency on ion species and ion energy, are typically subject to large relative uncertainties of up to 20-40% or even more. Therefore it is necessary to estimate the resulting uncertainties in, e.g., RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10^4 to 10^6 times, each run with a different set of input parameters, randomly varied according to their assigned distributions. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing the dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result, and the input parameters for which an uncertainty reduction is most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact.
The method is very flexible, model independent, and enables a broad assessment of uncertainties. Supported by DFG grant WI 3745/1-1 and DFG cluster of excellence: Munich-Centre for Advanced Photonics.
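The Monte Carlo workflow described in Methods can be sketched as follows; the toy dose-effect function, the uniform input distributions, and the binning estimator of Var(E[Y|X])/Var(Y) are illustrative assumptions, not the clinical model:

```python
import random
import statistics

random.seed(0)

def effect_toy(alpha, beta, dose=2.0):
    """Illustrative linear-quadratic style response, NOT the clinical model."""
    return alpha * dose + beta * dose ** 2

# Input uncertainties modeled as random distributions, sampled once per run.
n = 20000
alphas = [random.uniform(0.1, 0.3) for _ in range(n)]
betas = [random.uniform(0.02, 0.04) for _ in range(n)]
ys = [effect_toy(a, b) for a, b in zip(alphas, betas)]

def first_order_sensitivity(xs, ys, bins=20):
    """Crude variance-based index Var(E[Y|X]) / Var(Y): sort on X, average Y
    within equal-count bins, compare the bin-mean variance to the total."""
    pairs = sorted(zip(xs, ys))
    size = len(pairs) // bins
    cond_means = [
        statistics.mean(y for _, y in pairs[i * size:(i + 1) * size])
        for i in range(bins)
    ]
    return statistics.pvariance(cond_means) / statistics.pvariance(ys)

s_alpha = first_order_sensitivity(alphas, ys)
s_beta = first_order_sensitivity(betas, ys)
```

With these assumed distributions the α uncertainty dominates the output variance, which is the kind of "where is uncertainty reduction most rewarding" ranking the abstract describes.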
Insolation-oriented model of photovoltaic module using Matlab/Simulink
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsai, Huan-Liang
2010-07-15
This paper presents a novel model of a photovoltaic (PV) module which is implemented and analyzed using the Matlab/Simulink software package. Accounting for the effect of sunlight irradiance on the cell temperature, the proposed model takes ambient temperature as a reference input and uses the solar insolation as the unique varying parameter. The cell temperature is then explicitly affected by the sunlight intensity. The output current and power characteristics are simulated and analyzed using the proposed PV model. The model has been verified through experimental measurement. The impact of solar irradiation on cell temperature makes the output characteristic more practical. In addition, the insolation-oriented PV model enables the dynamics of a PV power system to be analyzed and optimized more easily by applying the environmental parameters of ambient temperature and solar irradiance.
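The insolation-to-cell-temperature coupling the abstract emphasizes is commonly approximated with an NOCT (nominal operating cell temperature) relation. This sketch uses that generic approximation with invented module constants, not the paper's exact Simulink model:

```python
def cell_temperature(t_ambient, insolation, noct=45.0):
    """Generic NOCT approximation: the cell runs hotter than ambient in
    proportion to insolation (W/m^2). Constants are typical, not the paper's."""
    return t_ambient + (noct - 20.0) / 800.0 * insolation

def photocurrent(insolation, t_cell, i_sc=8.21, k_i=0.0032, t_ref=25.0):
    """Light-generated current: scales linearly with insolation, with a small
    positive temperature coefficient (invented module constants)."""
    return (i_sc + k_i * (t_cell - t_ref)) * insolation / 1000.0

# Insolation is the single varying parameter; ambient temperature is the
# reference input, and cell temperature follows from both.
t_cell = cell_temperature(t_ambient=20.0, insolation=1000.0)
i_ph = photocurrent(1000.0, t_cell)
```

Feeding `i_ph` and `t_cell` into a single-diode I-V equation would complete a module model of the kind described; only the environmental front end is sketched here.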
NASA Astrophysics Data System (ADS)
Stockton, T. B.; Black, P. K.; Catlett, K. M.; Tauxe, J. D.
2002-05-01
Environmental modeling is an essential component in the evaluation of regulatory compliance of radioactive waste management sites (RWMSs) at the Nevada Test Site in southern Nevada, USA. For those sites that are currently operating, further goals are to support integrated decision analysis for the development of acceptance criteria for future wastes, as well as site maintenance, closure, and monitoring. At these RWMSs, the principal pathways for release of contamination to the environment are upward towards the ground surface rather than downwards towards the deep water table. Biotic processes, such as burrow excavation and plant uptake and turnover, dominate this upward transport. A combined multi-pathway contaminant transport and risk assessment model was constructed using the GoldSim modeling platform. This platform facilitates probabilistic analysis of environmental systems, and is especially well suited for assessments involving radionuclide decay chains. The model employs probabilistic definitions of key parameters governing contaminant transport, with the goals of quantifying cumulative uncertainty in the estimation of performance measures and providing information necessary to perform sensitivity analyses. This modeling differs from previous radiological performance assessments (PAs) in that the modeling parameters are intended to be representative of the current knowledge, and the uncertainty in that knowledge, of parameter values rather than reflective of a conservative assessment approach. While a conservative PA may be sufficient to demonstrate regulatory compliance, a parametrically honest PA can also be used for more general site decision-making. In particular, a parametrically honest probabilistic modeling approach allows both uncertainty and sensitivity analyses to be explicitly coupled to the decision framework using a single set of model realizations. 
For example, sensitivity analysis provides a guide for analyzing the value of collecting more information by quantifying the relative importance of each input parameter in predicting the model response. However, in these complex, high dimensional eco-system models, represented by the RWMS model, the dynamics of the systems can act in a non-linear manner. Quantitatively assessing the importance of input variables becomes more difficult as the dimensionality, the non-linearities, and the non-monotonicities of the model increase. Methods from data mining such as Multivariate Adaptive Regression Splines (MARS) and the Fourier Amplitude Sensitivity Test (FAST) provide tools that can be used in global sensitivity analysis in these high dimensional, non-linear situations. The enhanced interpretability of model output provided by the quantitative measures estimated by these global sensitivity analysis tools will be demonstrated using the RWMS model.
Aircraft Hydraulic Systems Dynamic Analysis Component Data Handbook
1980-04-01
[Table-of-contents and figure-list fragments; recoverable entries: Sec. 13, Quincke Tube (a means to dampen acoustic noise at resonance); Sec. 14, Heat Exchanger; figures for Quincke tube input parameters with hole locations, prototype Quincke tube data, and HSFR input data for a PULSCO-type acoustic filter.]
Functional Differences between Statistical Learning with and without Explicit Training
ERIC Educational Resources Information Center
Batterink, Laura J.; Reber, Paul J.; Paller, Ken A.
2015-01-01
Humans are capable of rapidly extracting regularities from environmental input, a process known as statistical learning. This type of learning typically occurs automatically, through passive exposure to environmental input. The presumed function of statistical learning is to optimize processing, allowing the brain to more accurately predict and…
Analysis and selection of optimal function implementations in massively parallel computer
Archer, Charles Jens [Rochester, MN; Peters, Amanda [Rochester, MN; Ratterman, Joseph D [Rochester, MN
2011-05-31
An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.
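The collect-then-generate-selector idea can be sketched as a profiling pass that records the fastest implementation at each sampled input point, plus a dispatcher that calls it. The two summation implementations are placeholders, not the patent's functions:

```python
import time

def slow_sum(n):
    """O(n) reference implementation of the function."""
    total = 0
    for i in range(n + 1):
        total += i
    return total

def fast_sum(n):
    """Closed-form implementation of the same function."""
    return n * (n + 1) // 2

implementations = {"loop": slow_sum, "formula": fast_sum}

def profile(impls, sample_inputs):
    """Collect performance data: which implementation is fastest at each
    sampled point in the input dimension."""
    table = {}
    for x in sample_inputs:
        timings = {}
        for name, fn in impls.items():
            t0 = time.perf_counter()
            fn(x)
            timings[name] = time.perf_counter() - t0
        table[x] = min(timings, key=timings.get)
    return table

def make_selector(impls, table):
    """Generated 'selection code': dispatch each call to the implementation
    that profiled fastest at the nearest recorded input size."""
    def selector(x):
        nearest = min(table, key=lambda k: abs(k - x))
        return impls[table[nearest]](x)
    return selector

table = profile(implementations, [10, 10_000, 1_000_000])
best_sum = make_selector(implementations, table)
```

Whichever implementation wins the timing at each point, the selector's result is identical, so correctness is preserved while the dispatch adapts to the measured performance data.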
Agronomic and environmental implications of enhanced s-triazine degradation
Krutz, L. J.; Dale L. Shaner,; Mark A. Weaver,; Webb, Richard M.; Zablotowicz, Robert M.; Reddy, Krishna N.; Huang, Y.; Thompson, S. J.
2010-01-01
Novel catabolic pathways enabling rapid detoxification of s-triazine herbicides have been elucidated and detected at a growing number of locations. The genes responsible for s-triazine mineralization, i.e. atzABCDEF and trzNDF, occur in at least four bacterial phyla and are implicated in the development of enhanced degradation in agricultural soils from all continents except Antarctica. Enhanced degradation occurs in at least nine crops and six crop rotation systems that rely on s-triazine herbicides for weed control, and, with the exception of acidic soil conditions and s-triazine application frequency, adaptation of the microbial population is independent of soil physiochemical properties and cultural management practices. From an agronomic perspective, residual weed control could be reduced tenfold in s-triazine-adapted relative to non-adapted soils. From an environmental standpoint, the off-site loss of total s-triazine residues could be overestimated 13-fold in adapted soils if altered persistence estimates and metabolic pathways are not reflected in fate and transport models. Empirical models requiring soil pH and s-triazine use history as input parameters predict atrazine persistence more accurately than historical estimates, thereby allowing practitioners to adjust weed control strategies and model input values when warranted.
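A hedged sketch of how an empirical persistence model with those two inputs might be structured: first-order decay with a half-life that collapses roughly tenfold in adapted (previously treated, non-acidic) soils, as the abstract describes. The base half-life and pH cutoff are invented, not the authors' fitted values:

```python
import math

def atrazine_remaining(days, half_life_days):
    """First-order decay: fraction of applied atrazine remaining after `days`."""
    return math.exp(-math.log(2) * days / half_life_days)

def half_life(soil_ph, striazine_history):
    """Illustrative rule of thumb, NOT the authors' fitted model: enhanced
    degradation in adapted (previously treated, non-acidic) soils shortens
    persistence roughly tenfold."""
    base = 60.0  # days, non-adapted soil (assumed)
    adapted = striazine_history and soil_ph >= 6.5
    return base / 10.0 if adapted else base

r_adapted = atrazine_remaining(30.0, half_life(7.0, True))
r_non_adapted = atrazine_remaining(30.0, half_life(7.0, False))
```

The tenfold half-life difference is what drives both the reduced residual weed control and the 13-fold overestimate of off-site loss when fate models assume non-adapted persistence.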
Suggestions for CAP-TSD mesh and time-step input parameters
NASA Technical Reports Server (NTRS)
Bland, Samuel R.
1991-01-01
Suggestions for some of the input parameters used in the CAP-TSD (Computational Aeroelasticity Program-Transonic Small Disturbance) computer code are presented. These parameters include those associated with the mesh design and time step. The guidelines are based principally on experience with a one-dimensional model problem used to study wave propagation in the vertical direction.
NASA Astrophysics Data System (ADS)
Shamkhali Chenar, S.; Deng, Z.
2017-12-01
Pathogenic viruses pose a significant public health threat and cause economic losses to the shellfish industry in the coastal environment. Norovirus is a contagious virus and the leading cause of epidemic gastroenteritis following consumption of oysters harvested from sewage-contaminated waters. While it is challenging to detect noroviruses in coastal waters due to the lack of sensitive and routine diagnostic methods, machine learning techniques allow us to prevent, or at least reduce, the risks by developing effective predictive models. This study attempts to develop an algorithm linking historical norovirus outbreak reports to environmental parameters including water temperature, solar radiation, water level, salinity, precipitation, and wind. For this purpose, the Random Forests statistical technique was utilized to select the relevant environmental parameters, and their various combinations with different time lags, controlling the virus distribution in oyster harvesting areas along the Louisiana Coast. An Artificial Neural Network (ANN) approach was then used to predict the outbreaks from a final set of input variables. Finally, a sensitivity analysis was conducted to evaluate the relative importance and contribution of the input variables to the model output. Findings demonstrated that the developed model was capable of reproducing historical oyster norovirus outbreaks along the Louisiana Coast with an overall accuracy of 99.83%, demonstrating the efficacy of the model. Moreover, according to the sensitivity analysis results, increases in water temperature, solar radiation, water level, and salinity, and decreases in wind and rainfall, are associated with a reduction in the model-predicted risk of a norovirus outbreak.
In conclusion, the presented machine learning approach provides reliable tools for predicting potential norovirus outbreaks and could be used for early detection of possible outbreaks, reducing the risk of norovirus to public health and the seafood industry.
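The study's Random Forests selection and multi-layer ANN are beyond a short sketch, but the train-then-perturb sensitivity workflow can be illustrated with a single logistic neuron on synthetic data. The temperature-risk relationship is seeded to match the sign reported above; everything else is invented:

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic stand-in data: outbreaks (label 1) occur at low water temperature,
# matching the sign of the relationship reported in the abstract.
temps = [random.uniform(0.0, 25.0) for _ in range(200)]
data = [(t, 1.0 if t < 12.0 else 0.0) for t in temps]

# One-neuron "network" trained by gradient descent on cross-entropy loss.
w, b = 0.0, 0.0
for _ in range(2000):
    gw = gb = 0.0
    for t, y in data:
        err = sigmoid(w * t + b) - y
        gw += err * t
        gb += err
    w -= 0.01 * gw / len(data)
    b -= 0.01 * gb / len(data)

# One-at-a-time sensitivity: perturb the input and watch the predicted risk.
risk_cold = sigmoid(w * 5.0 + b)
risk_warm = sigmoid(w * 20.0 + b)
```

The learned negative weight reproduces, in miniature, the sensitivity-analysis finding that warmer water lowers the model-predicted outbreak risk.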
NASA Astrophysics Data System (ADS)
Muranaka, Noriaki; Date, Kei; Tokumaru, Masataka; Imanishi, Shigeru
In recent years, traffic accidents have occurred frequently as traffic density has grown explosively. We therefore believe that a safe and comfortable transportation system that protects pedestrians, the most vulnerable road users, is necessary. First, we detect and recognize the pedestrian (the crossing person) by image processing. Next, we inform every driver turning right or left that a pedestrian is present, using sound, images, and similar cues. By prompting drivers to drive safely in this way, accidents involving pedestrians can be reduced. In this paper, we use a background subtraction method to detect moving objects. In background subtraction, the background update method is important; conventionally, the threshold values for the subtraction processing and for the background update were identical. That is, the mixing rate of the input image into the background image during the update was a fixed value, and fine tuning in response to environmental changes in the weather was difficult. We therefore propose a background-image update method in which estimation mistakes are difficult to amplify. We experiment with and compare five conditions: sunshine, cloud, evening, rain, and changing sunlight, excluding night. This technique can set the threshold values for the subtraction processing and the background update processing separately, suited to environmental conditions such as the weather, so fine tuning of the mixing rate of the input image into the background image becomes freely possible. Because setting parameters suited to the environmental conditions is important for minimizing the error rate, we also examine how these parameters should be set.
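The running-average background update and separately thresholded subtraction discussed above can be sketched directly; the pixel values and the mixing rate alpha are illustrative:

```python
def update_background(background, frame, alpha):
    """Running-average update: B <- (1 - alpha) * B + alpha * I, where alpha is
    the mixing rate of the input image into the background image."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(background, frame)]

def foreground_mask(background, frame, threshold):
    """Subtraction processing: a pixel is foreground if it differs from the
    background estimate by more than the threshold."""
    return [abs(f - b) > threshold for b, f in zip(background, frame)]

# Four-pixel toy image: a bright pedestrian appears at pixel 1.
background = [100.0, 100.0, 100.0, 100.0]
frame = [100.0, 230.0, 100.0, 100.0]

mask = foreground_mask(background, frame, threshold=30.0)
# A small mixing rate (e.g. tuned for stable sunshine) absorbs the change
# slowly, so a transient object does not corrupt the background estimate.
background = update_background(background, frame, alpha=0.05)
```

Keeping `threshold` and `alpha` as independent knobs is exactly the separation from the conventional single-threshold scheme that the paper proposes; each can then be tuned to the weather condition.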
Unsteady hovering wake parameters identified from dynamic model tests, part 1
NASA Technical Reports Server (NTRS)
Hohenemser, K. H.; Crews, S. T.
1977-01-01
The development of a 4-bladed model rotor that can be excited with a simple eccentric mechanism in progressing and regressing modes, with either harmonic or transient inputs, is reported. Parameter identification methods were applied to the problem of extracting parameters for linear perturbation models, including rotor dynamic inflow effects, from the measured blade flapping responses to transient pitch stirring excitations. These perturbation models were then used to predict blade flapping response to other pitch stirring transient inputs, and rotor wake and blade flapping responses to harmonic inputs. The viability and utility of using parameter identification methods for extracting the perturbation models from transients are demonstrated through these combined analytical and experimental studies.
NASA Astrophysics Data System (ADS)
Aitken, M.; Yelverton, W. H.; Dodder, R. S.; Loughlin, D. H.
2014-12-01
Among the diverse menu of technologies for reducing greenhouse gas (GHG) emissions, one option involves pairing carbon capture and storage (CCS) with the generation of synthetic fuels and electricity from co-processed coal and biomass. In this scheme, the feedstocks are first converted to syngas, from which a Fischer-Tropsch (FT) process reactor and combined cycle turbine produce liquid fuels and electricity, respectively. With low concentrations of sulfur and other contaminants, the synthetic fuels are expected to be cleaner than conventional crude oil products. And with CO2 as an inherent byproduct of the FT process, most of the GHG emissions can be eliminated by simply compressing the CO2 output stream for pipeline transport. In fact, the incorporation of CCS at such facilities can result in very low—or perhaps even negative—net GHG emissions, depending on the fraction of biomass as input and its CO2 signature. To examine the potential market penetration and environmental impact of coal and biomass to liquids and electricity (CBtLE), which encompasses various possible combinations of input and output parameters within the overall energy landscape, a system-wide analysis is performed using the MARKet ALlocation (MARKAL) model. With resource supplies, energy conversion technologies, end-use demands, costs, and pollutant emissions as user-defined inputs, MARKAL calculates—using linear programming techniques—the least-cost set of technologies that satisfy the specified demands subject to environmental and policy constraints. In this framework, the U.S. Environmental Protection Agency (EPA) has developed both national and regional databases to characterize assorted technologies in the industrial, commercial, residential, transportation, and generation sectors of the U.S. energy system. Here, the EPA MARKAL database is updated to include the costs and emission characteristics of CBtLE using figures from the literature. 
Nested sensitivity analysis is then carried out to investigate the impact of various assumptions and scenarios, such as the plant capacity factor, capital costs, CO2 mitigation targets, oil prices, and CO2 storage costs.
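MARKAL itself solves a full linear program, but the core idea — selecting the least-cost set of technologies that meets demand subject to an emissions constraint — can be sketched in a few lines. The sketch below uses brute-force enumeration in place of an LP solver, and the technology names, costs, and CO2 intensities are invented for illustration only (note the negative intensity mimicking a biomass-plus-CCS option):

```python
from itertools import product

# Hypothetical technologies: (name, cost in $/MWh, CO2 in t/MWh).
# The negative CO2 intensity mimics a biomass+CCS option.
TECHS = [("coal", 40.0, 1.0), ("gas_cc", 55.0, 0.4), ("cbtle_ccs", 70.0, -0.2)]

def least_cost_mix(demand_mwh, co2_cap_t, step=10):
    """Enumerate generation levels (in `step` MWh increments) per technology
    and return (cost, mix, co2) for the cheapest mix that meets demand
    without exceeding the CO2 cap -- brute force standing in for MARKAL's LP."""
    levels = range(0, demand_mwh + step, step)
    best = None
    for mix in product(levels, repeat=len(TECHS)):
        if sum(mix) != demand_mwh:
            continue
        co2 = sum(q * t[2] for q, t in zip(mix, TECHS))
        if co2 > co2_cap_t:
            continue
        cost = sum(q * t[1] for q, t in zip(mix, TECHS))
        if best is None or cost < best[0]:
            best = (cost, mix, co2)
    return best

result = least_cost_mix(demand_mwh=100, co2_cap_t=50.0)
```

Tightening `co2_cap_t` pushes the dispatch away from coal toward the more expensive low-carbon option, which is the qualitative behavior the MARKAL scenarios probe.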
Timing resolution and time walk in SLiK SPAD: measurement and optimization
NASA Astrophysics Data System (ADS)
Fong, Bernicy S.; Davies, Murray; Deschamps, Pierre
2017-08-01
Timing resolution (or timing jitter) and time walk are separate parameters associated with a detector's response time. Studies to date have focused mostly on the timing resolution of various single-photon detectors [1]. As the designer and manufacturer of the ultra-low-noise (low k-factor) silicon avalanche photodiode known as the SLiK SPAD, which is used in many single-photon counting applications, we often receive inquiries from customers seeking to better understand how this detector behaves under different operating conditions. Here we therefore focus on these time-related parameters specifically for the SLiK SPAD, providing the most direct information to help users apply the detector more efficiently and effectively. We present data on how these parameters are affected by temperature (both intrinsic to the detector chip and imposed by the operating environment), operating voltage, photon wavelength, and light spot size. We also discuss how these parameters can be optimized and the trade-offs that such optimization imposes on the desired performance.
LADTAP XL Version 2017: A Spreadsheet for Estimating Dose Resulting from Aqueous Releases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minter, K.; Jannik, T.
LADTAP XL© is an EXCEL© spreadsheet used to estimate dose to offsite individuals and populations resulting from routine and accidental releases of radioactive materials to the Savannah River. LADTAP XL© contains two worksheets: LADTAP and IRRIDOSE. The LADTAP worksheet estimates dose for environmental pathways, including external exposure resulting from recreational activities on the Savannah River and internal exposure resulting from ingestion of water, fish, and invertebrates originating from the Savannah River. IRRIDOSE estimates offsite dose to individuals and populations from irrigation of foodstuffs with contaminated water from the Savannah River. In 2004, a complete description of the LADTAP XL© code and an associated user's manual was documented in LADTAP XL©: A Spreadsheet for Estimating Dose Resulting from Aqueous Release (WSRC-TR-2004-00059), and revised input parameters, dose coefficients, and radionuclide decay constants were incorporated into LADTAP XL© Version 2013 (SRNL-STI-2011-00238). LADTAP XL© Version 2017 is a slight modification of Version 2013, with minor changes for more user-friendly parameter inputs and organization, updates to the time conversion factors used within the dose calculations, and a fix for an issue with the expected time build-up parameter referenced within the population shoreline dose calculations. This manual has been produced to update the code description, verify the models, and provide an updated user's manual. LADTAP XL© Version 2017 has been verified by Minter (2017) and is ready for use at the Savannah River Site (SRS).
EnviroNET: An on-line environment data base for LDEF data
NASA Technical Reports Server (NTRS)
Lauriente, Michael
1992-01-01
EnviroNET is an on-line, free-form data base intended to provide a centralized depository for a wide range of technical information on environmentally induced interactions of use to Space Shuttle customers and spacecraft designers. It provides a user-friendly, menu-driven format on networks that are connected globally and is available twenty-four hours a day, every day. The information, updated regularly, includes expository text, tabular numerical data, charts and graphs, and models. The system pools space data collected over the years by NASA, USAF, other government facilities, industry, universities, and ESA. The models accept parameter input from the user and calculate and display the derived values corresponding to that input. In addition to the archive, interactive graphics programs are also available on space debris, the neutral atmosphere, radiation, the magnetic field, and the ionosphere. A user-friendly, informative interface is standard for all the models, with a pop-up help window giving information on inputs, outputs, and caveats. The system will eventually simplify mission analysis with analytical tools and deliver solutions for computationally intense graphical applications to run 'what if' scenarios. A proposed plan for developing a repository of LDEF information for a user group concludes the presentation.
Post-Launch Calibration and Testing of Space Weather Instruments on GOES-R Satellite
NASA Technical Reports Server (NTRS)
Tadikonda, Sivakumara S. K.; Merrow, Cynthia S.; Kronenwetter, Jeffrey A.; Comeyne, Gustave J.; Flanagan, Daniel G.; Todirita, Monica
2016-01-01
The Geostationary Operational Environmental Satellite - R (GOES-R) is the first of a series of satellites to be launched, with the first launch scheduled for October 2016. The three instruments - Solar Ultraviolet Imager (SUVI), Extreme ultraviolet and X-ray Irradiance Sensor (EXIS), and Space Environment In-Situ Suite (SEISS) - provide the data needed as inputs for the product updates the National Oceanic and Atmospheric Administration (NOAA) provides to the public. SUVI is a full-disk extreme ultraviolet imager enabling Active Region characterization, filament eruption, and flare detection. EXIS provides inputs to solar backgrounds/events impacting climate models. SEISS provides particle measurements over a wide energy-and-flux range that varies by several orders of magnitude, and these data enable updates to spacecraft charge models for electrostatic discharge. EXIS and SEISS have been tested and calibrated end-to-end in ground test facilities around the United States. Due to the complexity of the SUVI design, data from component tests were used in a model to predict on-orbit performance. The ground tests and model updates provided inputs for designing the on-orbit calibration tests. A series of such tests have been planned for the Post-Launch Testing (PLT) of each of these instruments, and specific parameters have been identified that will be updated in the Ground Processing Algorithms, on-orbit parameter tables, or both. Some of the SUVI and EXIS calibrations require slewing them off the Sun, while no such maneuvers are needed for SEISS. After a six-month PLT period the GOES-R is expected to be operational. The calibration details are presented in this paper.
Psychoacoustical evaluation of natural and urban sounds in soundscapes.
Yang, Ming; Kang, Jian
2013-07-01
Among various sounds in the environment, natural sounds, such as water sounds and birdsongs, have proven to be highly preferred by humans, but the reasons for these preferences have not been thoroughly researched. This paper explores differences between various natural and urban environmental sounds from the viewpoint of objective measures, especially psychoacoustical parameters. The sound samples used in this study include the recordings of single sound source categories of water, wind, birdsongs, and urban sounds including street music, mechanical sounds, and traffic noise. The samples are analyzed with a number of existing psychoacoustical parameter algorithmic models. Based on hierarchical cluster and principal components analyses of the calculated results, a series of differences has been shown among different sound types in terms of key psychoacoustical parameters. While different sound categories cannot be identified using any single acoustical and psychoacoustical parameter, identification can be made with a group of parameters, as analyzed with artificial neural networks and discriminant functions in this paper. For artificial neural networks, correlations between network predictions and targets using the average and standard deviation data of psychoacoustical parameters as inputs are above 0.95 for the three natural sound categories and above 0.90 for the urban sound category. For sound identification/classification, key parameters are fluctuation strength, loudness, and sharpness.
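The classification idea — that no single psychoacoustical parameter separates the categories, but a group of them jointly can — can be illustrated with a minimal nearest-centroid sketch. This is far simpler than the paper's neural networks and discriminant functions, and every feature value below is invented for illustration:

```python
import math

# Hypothetical (fluctuation strength, loudness, sharpness) feature vectors;
# real values would come from psychoacoustic analysis of recordings.
TRAIN = {
    "water":    [(0.2, 4.0, 1.2), (0.3, 4.5, 1.3)],
    "birdsong": [(1.1, 2.0, 2.5), (1.0, 2.2, 2.7)],
    "traffic":  [(0.4, 8.0, 1.8), (0.5, 8.5, 1.7)],
}

def centroid(vectors):
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

CENTROIDS = {label: centroid(vs) for label, vs in TRAIN.items()}

def classify(features):
    """Assign the category whose centroid is nearest in Euclidean distance."""
    return min(CENTROIDS, key=lambda label: math.dist(features, CENTROIDS[label]))
```

Classification only works here because the three parameters are used together; projecting onto any single axis would leave the categories overlapping.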
Sensitivity analysis and nonlinearity assessment of steam cracking furnace process
NASA Astrophysics Data System (ADS)
Rosli, M. N.; Sudibyo; Aziz, N.
2017-11-01
In this paper, sensitivity analysis and a nonlinearity assessment of a steam cracking furnace process are presented. For the sensitivity analysis, the fractional factorial design method is employed to analyze the effect of the input parameters, which consist of four manipulated variables and two disturbance variables, on the output variables and to identify the interactions between parameters. The result of the factorial design method is used for screening to reduce the number of parameters and, subsequently, the complexity of the model. It shows that of the six input parameters, four are significant. After the screening is completed, step tests are performed on the significant input parameters to assess the degree of nonlinearity of the system. The result shows that the system is highly nonlinear with respect to changes in the air-to-fuel ratio (AFR) and feed composition.
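The screening logic can be sketched with a two-level design: each main effect is the mean response at a factor's high level minus the mean at its low level. The toy response below is hypothetical (the paper's model is a furnace simulation, and it uses a fractional rather than full factorial to cut the number of runs):

```python
from itertools import product

def main_effects(factors, response):
    """Two-level main effects from a full factorial design:
    effect_i = mean(y | factor i at +1) - mean(y | factor i at -1).
    (A fractional factorial would evaluate only a subset of these runs.)"""
    runs = list(product((-1, 1), repeat=factors))
    ys = [response(r) for r in runs]
    effects = []
    for i in range(factors):
        hi = [y for r, y in zip(runs, ys) if r[i] == 1]
        lo = [y for r, y in zip(runs, ys) if r[i] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

def demo(x):
    # Hypothetical furnace response: factor 0 (say, AFR) dominates,
    # factor 1 (feed composition) matters, factor 2 is inert.
    return 5.0 * x[0] + 2.0 * x[1] + 0.5 * x[0] * x[1]

effects = main_effects(3, demo)
```

A screening step would then keep only the factors whose effects stand out (here, the first two) and drop the inert one.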
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1996-01-01
Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for closed loop parameter identification purposes, specifically for longitudinal and lateral linear model parameter estimation at 5, 20, 30, 45, and 60 degrees angle of attack, using the NASA 1A control law. Each maneuver is to be realized by the pilot applying square wave inputs to specific pilot station controls. Maneuver descriptions and complete specifications of the time/amplitude points defining each input are included, along with plots of the input time histories.
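A multistep input of this kind is just a piecewise-constant time history defined by time/amplitude breakpoints. A minimal sketch, using a hypothetical doublet rather than any actual HARV maneuver specification:

```python
def square_wave_input(points, dt, t_end):
    """Piecewise-constant input time history from (time, amplitude)
    breakpoints: hold each amplitude until the next breakpoint time."""
    times, u = [], []
    t = 0.0
    while t <= t_end + 1e-9:
        amp = 0.0
        for t_k, a_k in points:      # breakpoints assumed sorted by time
            if t >= t_k - 1e-9:
                amp = a_k
        times.append(round(t, 10))
        u.append(amp)
        t += dt
    return times, u

# Hypothetical doublet (not an actual HARV maneuver): +1 for 2 s, -1 for 2 s, then 0.
t, u = square_wave_input([(0.0, 1.0), (2.0, -1.0), (4.0, 0.0)], dt=1.0, t_end=5.0)
```

In practice the pilot approximates such a sequence on a specific cockpit control, and the recorded time histories feed the parameter estimation.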
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenthal, William Steven; Tartakovsky, Alex; Huang, Zhenyu
2017-10-31
State and parameter estimation of power transmission networks is important for monitoring power grid operating conditions and analyzing transient stability. Wind power generation depends on fluctuating input power levels, which are correlated in time and contribute to uncertainty in turbine dynamical models. The ensemble Kalman filter (EnKF), a standard state estimation technique, uses a deterministic forecast and does not explicitly model time-correlated noise in parameters such as mechanical input power. However, this uncertainty affects the probability of fault-induced transient instability and increases prediction bias. Here a novel approach is to model input power noise with time-correlated stochastic fluctuations, and integrate them with the network dynamics during the forecast. While the EnKF has been used to calibrate constant parameters in turbine dynamical models, the calibration of a statistical model for a time-correlated parameter has not been investigated. In this study, twin experiments on a standard transmission network test case are used to validate our time-correlated noise model framework for state estimation of unsteady operating conditions and transient stability analysis, and a methodology is proposed for the inference of the mechanical input power time-correlation length parameter using time-series data from PMUs monitoring power dynamics at generator buses.
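Time-correlated fluctuations of the kind described here are commonly modeled as a discrete Ornstein-Uhlenbeck (AR(1)) process. A minimal sketch of such a noise generator follows, with illustrative parameter values; the paper's actual noise model and network dynamics are not reproduced:

```python
import math, random

def ar1_noise(n, dt, tau, sigma, seed=0):
    """Discrete Ornstein-Uhlenbeck (AR(1)) sequence: exponentially
    time-correlated noise with correlation length tau and stationary
    standard deviation sigma:
        x[k+1] = phi*x[k] + sqrt(1 - phi^2)*sigma*w[k],  phi = exp(-dt/tau)."""
    rng = random.Random(seed)
    phi = math.exp(-dt / tau)
    x = [0.0]
    for _ in range(n - 1):
        x.append(phi * x[-1]
                 + math.sqrt(1.0 - phi * phi) * sigma * rng.gauss(0.0, 1.0))
    return x

# One trajectory like this could perturb each ensemble member's mechanical
# input power during the EnKF forecast step; all values are illustrative.
noise = ar1_noise(n=1000, dt=0.1, tau=1.0, sigma=0.05)
```

Inferring the correlation length, as the paper proposes, amounts to estimating `tau` from measured time series rather than fixing it a priori.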
INDES User's guide multistep input design with nonlinear rotorcraft modeling
NASA Technical Reports Server (NTRS)
1979-01-01
The INDES computer program, a multistep input design program used as part of a data processing technique for rotorcraft systems identification, is described. Flight test inputs based on INDES improve the accuracy of parameter estimates. The input design algorithm, program input, and program output are presented.
Incorporating uncertainty in RADTRAN 6.0 input files.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dennis, Matthew L.; Weiner, Ruth F.; Heames, Terence John
Uncertainty may be introduced into RADTRAN analyses by distributing input parameters. The MELCOR Uncertainty Engine (Gauntt and Erickson, 2004) has been adapted for use in RADTRAN to determine the parameter shape and the minimum and maximum of the distribution, to sample on the distribution, and to create an appropriate RADTRAN batch file. Coupling input parameters is not possible in this initial application. It is recommended that the analyst be very familiar with RADTRAN and able to edit or create a RADTRAN input file using a text editor before implementing the RADTRAN Uncertainty Analysis Module. Installation of the MELCOR Uncertainty Engine is required for incorporation of uncertainty into RADTRAN. Gauntt and Erickson (2004) provides installation instructions as well as a description and user guide for the uncertainty engine.
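The basic mechanism — replacing fixed input values with draws from per-parameter distributions, sampled independently because coupling is not supported — can be sketched as follows. The parameter names and ranges are invented, not RADTRAN's:

```python
import random

def sample_inputs(spec, n, seed=42):
    """Draw n independent input sets from per-parameter distributions.
    spec maps parameter name -> ("uniform", min, max) or
    ("triangular", min, max, mode).  Parameters are sampled independently,
    mirroring the module's initial no-coupling limitation."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        row = {}
        for name, d in spec.items():
            if d[0] == "uniform":
                row[name] = rng.uniform(d[1], d[2])
            elif d[0] == "triangular":
                row[name] = rng.triangular(d[1], d[2], d[3])
        draws.append(row)
    return draws

# Hypothetical transport parameters -- names and ranges are illustrative only.
spec = {"shipment_speed_kmh": ("uniform", 40.0, 90.0),
        "stop_time_h": ("triangular", 0.5, 4.0, 1.0)}
batch = sample_inputs(spec, n=100)
```

Each sampled row would then be written into a batch input file and run through the transport-risk code, yielding a distribution of outputs rather than a single point estimate.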
Comparison of U.S. Environmental Protection Agency's CAP88 PC Versions 3.0 and 4.0.
Jannik, Tim; Farfan, Eduardo B; Dixon, Ken; Newton, Joseph; Sailors, Christopher; Johnson, Levi; Moore, Kelsey; Stahman, Richard
2015-08-01
The Savannah River National Laboratory (SRNL) with the assistance of Georgia Regents University, completed a comparison of the U.S. Environmental Protection Agency's (U.S. EPA) environmental dosimetry code CAP88 PC V3.0 with the recently developed V4.0. CAP88 is a set of computer programs and databases used for estimation of dose and risk from radionuclide emissions to air. At the U.S. Department of Energy's Savannah River Site, CAP88 is used by SRNL for determining compliance with U.S. EPA's National Emission Standards for Hazardous Air Pollutants (40 CFR 61, Subpart H) regulations. Using standardized input parameters, individual runs were conducted for each radionuclide within its corresponding database. Some radioactive decay constants, human usage parameters, and dose coefficients changed between the two versions, directly causing a proportional change in the total effective dose. A detailed summary for select radionuclides of concern at the Savannah River Site (60Co, 137Cs, 3H, 129I, 239Pu, and 90Sr) is provided. In general, the total effective doses will decrease for alpha/beta emitters because of reduced inhalation and ingestion rates in V4.0. However, for gamma emitters, such as 60Co and 137Cs, the total effective doses will increase because of changes U.S. EPA made in the external ground shine calculations.
NASA Astrophysics Data System (ADS)
El Rahman Hassoun, Abed
2017-04-01
Aiming to evaluate the effects of organic pollution, environmental parameters and the phytoplankton community were monitored over a two-year period (April 2010 to March 2012) on the central coast of Lebanon in the Levantine Sub-basin. Data were collected for hydrological (temperature and salinity), chemical (nitrites, nitrates, and phosphates), and biological (chlorophyll-a and phytoplankton populations) parameters. Our results show that temperature follows the normal seasonal and annual cycles usually noted in Lebanese coastal waters. Salinity presents spatial and temporal variations, with low values (19.07 - 39.6) in the areas affected by continental inputs. Significant fluctuations (P < 0.05) of nutrients, Chl-a concentrations, and density of total phytoplanktonic cells were observed between the sites and through the years. Moreover, a perturbation of the natural phytoplanktonic succession and an occurrence of toxic or potentially harmful algae were noticed in the polluted sites, reflecting the influence of wastewater effluents on the coastal seawater equilibrium and thus on Lebanese marine biodiversity. This study sheds light on the current environmental condition of a few coastal areas, which could facilitate the management of their pollution sources. Keywords: Organic pollution, phytoplankton community, toxic algae, coastal water quality, Lebanon, Mediterranean Sea.
Assessment of impact of acoustic and nonacoustic parameters on performance and well-being
NASA Astrophysics Data System (ADS)
Mellert, Volker; Weber, Reinhard; Nocke, Christian
2004-05-01
It is of interest to estimate the influence of the environment in a specific work place area on the performance and well-being of people. Investigations have been carried out for the cabin environment of an airplane and for classrooms. Acoustics is only one of a variety of environmental factors; therefore the combined impact of temperature, humidity, air quality, lighting, vibration, etc. on human perception is the subject of psychophysical research. Methods for the objective assessment of subjective impressions have long been developed for applications in acoustics, e.g., for concert hall acoustics, noise evaluation, and sound design. The methodology relies on questionnaires, measurement of acoustic parameters, ear-related signal processing and analysis, and correlation of the physical input with subjective output. Methodology and results are presented from measurements of noise and vibration, temperature and humidity in aircraft simulators, of reverberation, coloring, and lighting in a primary school, and of the resulting environmental perception. [The work includes research with M. Klatte and A. Schick from the Psychology Department of Oldenburg University, with M. Meis from Hoerzentrum Oldenburg GmbH, and with the European Project HEACE (for partners see www.heace.org).]
NASA Astrophysics Data System (ADS)
Ferreyra, R.; Stockle, C. O.; Huggins, D. R.
2014-12-01
Soil water storage and dynamics are of critical importance for a variety of processes in terrestrial ecosystems, including agriculture. Many of those systems are under significant pressure in terms of water availability and use. Therefore, assessing alternative scenarios through hydrological models is an increasingly valuable exercise. Soil water holding capacity is defined by the concepts of soil field capacity and plant available water, which are directly related to soil physical properties. Both concepts define the energy status of water in the root system and closely interact with plant physiological processes. Furthermore, these concepts play a key role in the environmental transport of nutrients and pollutants. Soil physical parameters (e.g. saturated hydraulic conductivity, total porosity and water release curve) are required as input for field-scale soil water redistribution models. These parameters are normally not easy to measure or monitor, and estimation through pedotransfer functions is often inadequate. Our objectives are to improve field-scale hydrological modeling by: (1) assessing new undisturbed methodologies for determining important soil physical parameters necessary for model inputs; and (2) evaluating model outputs, making a detailed specification of soil parameters and the particular boundary condition that are driving water movement under two contrasting environments. Soil physical properties (saturated hydraulic conductivity and determination of water release curves) were quantified using undisturbed laboratory methodologies for two different soil textural classes (silt loam and sandy loam) and used to evaluate two soil water redistribution models (finite difference solution and hourly cascade approach). We will report on model corroboration results performed using in situ, continuous, field measurements with soil water content capacitance probes and digital tensiometers. 
Here, natural drainage and water redistribution were monitored following a controlled water application where the study areas were isolated from other water inputs and outputs. We will also report on the assessment of two soil water sensors (Decagon Devices 5TM capacitance probe and UMS T4 tensiometers) for the two soil textural classes in terms of consistency and replicability.
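Of the two redistribution schemes evaluated, the hourly cascade approach is simple enough to sketch: each soil layer fills to its holding capacity and passes the excess downward. The layer capacities and infiltration pulse below are hypothetical:

```python
def cascade_redistribute(water_mm, capacity_mm, infiltration_mm):
    """Cascade ('tipping bucket') redistribution: water fills each layer up
    to its holding capacity; the excess spills to the layer below; whatever
    leaves the bottom layer is deep drainage."""
    water = list(water_mm)
    excess = infiltration_mm
    for i, cap in enumerate(capacity_mm):
        water[i] += excess
        excess = max(0.0, water[i] - cap)
        water[i] = min(water[i], cap)
    return water, excess

# Hypothetical 3-layer profile: current storage, capacities (mm), 35 mm infiltration.
state, drainage = cascade_redistribute([10.0, 20.0, 5.0], [30.0, 40.0, 25.0], 35.0)
```

A finite-difference (Richards-type) solution would instead move water according to hydraulic gradients and conductivity, which is where the measured water release curves and saturated conductivities enter.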
NASA Astrophysics Data System (ADS)
Hameed, M.; Demirel, M. C.; Moradkhani, H.
2015-12-01
The Global Sensitivity Analysis (GSA) approach helps identify the effectiveness of model parameters or inputs and thus provides essential information about model performance. In this study, the effects of the Sacramento Soil Moisture Accounting (SAC-SMA) model parameters, forcing data, and initial conditions are analysed by using two GSA methods: Sobol' and the Fourier Amplitude Sensitivity Test (FAST). The simulations are carried out over five sub-basins within the Columbia River Basin (CRB) for three different periods: one year, four years, and seven years. Four factors are considered and evaluated by using the two sensitivity analysis methods: the simulation length, parameter range, model initial conditions, and the reliability of the global sensitivity analysis methods. The reliability of the sensitivity analysis results is compared based on 1) the agreement between the two sensitivity analysis methods (Sobol' and FAST) in highlighting the same parameters or inputs as the most influential and 2) how coherently the methods rank these sensitive parameters under the same conditions (sub-basins and simulation length). The results show coherence between the Sobol' and FAST sensitivity analysis methods. Additionally, it is found that the FAST method is sufficient to evaluate the main effects of the model parameters and inputs. Another conclusion of this study is that the smaller the parameter or initial-condition ranges, the more consistent and coherent the results of the two sensitivity analysis methods.
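Both Sobol' and FAST estimate, by different routes, the first-order index S_i = Var(E[Y|X_i]) / Var(Y). A crude sketch of the same quantity via binning, applied to a toy model (not SAC-SMA), shows how an influential input is separated from an inert one:

```python
import random

def first_order_index(samples, ys, col, bins=20):
    """Crude first-order sensitivity index S_i = Var(E[Y|X_i]) / Var(Y),
    estimated by binning X_i and averaging Y within each bin (Sobol' and
    FAST estimate the same quantity with more careful sampling designs)."""
    n = len(ys)
    mean_y = sum(ys) / n
    var_y = sum((y - mean_y) ** 2 for y in ys) / n
    lo = min(s[col] for s in samples)
    width = (max(s[col] for s in samples) - lo) / bins
    groups = [[] for _ in range(bins)]
    for s, y in zip(samples, ys):
        k = min(int((s[col] - lo) / width), bins - 1)
        groups[k].append(y)
    var_cond = sum(len(g) / n * (sum(g) / len(g) - mean_y) ** 2
                   for g in groups if g)
    return var_cond / var_y

# Toy model (not SAC-SMA): Y depends strongly on x0, weakly on x1, not on x2.
rng = random.Random(1)
X = [(rng.random(), rng.random(), rng.random()) for _ in range(20000)]
Y = [4.0 * x0 + 1.0 * x1 for x0, x1, _ in X]
S = [first_order_index(X, Y, i) for i in range(3)]
```

The ranking S[0] > S[1] > S[2] is the kind of output both GSA methods produce; the paper's reliability question is whether two such methods agree on that ranking across basins and simulation lengths.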
Karmakar, Chandan; Udhayakumar, Radhagayathri K; Li, Peng; Venkatesh, Svetha; Palaniswami, Marimuthu
2017-01-01
Distribution entropy (DistEn) is a recently developed measure of complexity that is used to analyse heart rate variability (HRV) data. Its calculation requires two input parameters: the embedding dimension m, and the number of bins M, which replaces the tolerance parameter r used by the existing approximation entropy (ApEn) and sample entropy (SampEn) measures. The performance of DistEn can also be affected by the data length N. In our previous studies, we have analyzed the stability and performance of DistEn with respect to one parameter (m or M) or a combination of two parameters (N and M). However, the impact of varying all three input parameters on DistEn has not yet been studied. Since DistEn is predominantly aimed at analysing short-length heart rate variability (HRV) signals, it is important to comprehensively study the stability, consistency, and performance of the measure using multiple case studies. In this study, we examined the impact of changing input parameters on DistEn for synthetic and physiological signals. We also compared the variations of DistEn, and its performance in distinguishing physiological (Elderly from Young) and pathological (Healthy from Arrhythmia) conditions, with ApEn and SampEn. The results showed that DistEn values are minimally affected by variations of the input parameters compared to ApEn and SampEn. DistEn also showed the most consistent and the best performance in differentiating physiological and pathological conditions across varying input parameters among the reported complexity measures. In conclusion, DistEn is found to be the best measure for analysing short-length HRV time series.
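A minimal sketch of the DistEn calculation — embed, take all pairwise Chebyshev distances, histogram them into M bins, and normalize the Shannon entropy — is shown below; it follows the published definition but is not the authors' code:

```python
import math

def dist_en(x, m=2, M=64):
    """Distribution entropy: build m-dimensional embedding vectors, take all
    pairwise Chebyshev distances, estimate their distribution with an M-bin
    histogram, and return the normalized Shannon entropy of that histogram."""
    vecs = [x[i:i + m] for i in range(len(x) - m + 1)]
    dists = []
    for i in range(len(vecs)):
        for j in range(i + 1, len(vecs)):
            dists.append(max(abs(a - b) for a, b in zip(vecs[i], vecs[j])))
    d_max = max(dists)
    if d_max == 0:          # identical vectors: a single bin, zero entropy
        return 0.0
    counts = [0] * M
    for d in dists:
        counts[min(int(d / d_max * M), M - 1)] += 1
    n = len(dists)
    p = [c / n for c in counts if c]
    return -sum(pi * math.log2(pi) for pi in p) / math.log2(M)
```

Note that m and M are exactly the two input parameters whose variation the paper studies, alongside the record length N, which enters through the number of embedding vectors.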
NASA Astrophysics Data System (ADS)
El Mountassir, M.; Yaacoubi, S.; Dahmene, F.
2015-07-01
Intelligent feature extraction and advanced signal processing techniques are necessary for a better interpretation of ultrasonic guided wave signals, whether in structural health monitoring (SHM) or in nondestructive testing (NDT). Such signals are characterized by, at the least, multi-modal and dispersive components. In addition, in SHM these signals are closely tied to environmental and operational conditions (EOCs) and can be severely affected by them. In this paper we investigate the use of an Artificial Neural Network (ANN) to overcome these effects and to provide a reliable damage detection method with a minimum of false indications. An experimental case study (a full-scale pipe) is presented. Damage sizes were increased and their shapes modified in successive steps. Various parameters, such as the number of inputs and the number of hidden neurons, were studied to find the optimal configuration of the neural network.
Modeling diffuse sources of surface water contamination with plant protection products
NASA Astrophysics Data System (ADS)
Wendland, Sandra; Bock, Michael; Böhner, Jürgen; Lembrich, David
2015-04-01
Entries of chemical pollutants into surface waters are a serious environmental problem. Among water pollutants, plant protection products (ppp) from farming practice are of major concern not only for water suppliers and environmental agencies, but also for farmers and industrial manufacturers. Lost chemicals no longer fulfill their original purpose on the field, but lead to severe damage of the environment and surface waters. Besides point-source inputs of chemical pollutants, diffuse-source inputs from agricultural procedures play an important and not yet sufficiently studied role in water quality. The two most important factors for diffuse inputs are erosion and runoff. The latter usually occurs before erosion begins, and is thus often not visible in hindsight; only once erosion has occurred is it obvious to expect runoff in the same area. In addition to numerous erosion models, a few applications for modeling runoff processes are also available. However, these conventional models utilize approximations of catchment parameters based on long-term average values or theoretically calculated concentration peaks, which can only provide indications of relative amounts. Our study aims to develop and validate a simplified, spatially explicit dynamic model with high spatiotemporal resolution that enables measuring current, and forecasting future, runoff potential not only at the catchment scale but differentiated by field. This method allows very precise estimation of runoff risks and supports risk reduction measures being targeted before fields are treated. By focusing on water pathways occurring on arable land, targeted risk reduction measures like buffer strips at certain points and adapted ppp use can be taken early, and pollution of rivers and other surface waters by transported pesticides, fertilizers, and their products could be nearly avoided or largely minimized.
Using a SAGA-based physical-parametric modeling approach, the major factors influencing runoff (relief, soil properties, weather conditions, and crop coverage) are represented. Water balance parameters are modeled in daily steps, taking into account relief-determined discharge pathways, runoff velocity, and the number of field boundaries passed until receiving streams are reached. Model development is based on a comprehensive monitoring campaign at three smaller catchments in North Rhine-Westphalia (Germany), each equipped with two gauges, upstream and downstream, an optical TriOS probe, and four Isco samplers. The temporally high-resolution monitoring of discharge, ppp, orthophosphate, and nitrate-nitrogen enables an evaluation of runoff simulations in relation to rain events. First model results suggest that the simulation of surface runoff pathways enables a spatially explicit identification of fields contributing to pollutant inputs. We assume that targeted actions on a few fields will help solve the problem of diffuse inputs of ppp into our surface waters to a considerable extent.
Assessing contributory risk using economic input-output life-cycle analysis.
Miller, Ian; Shelly, Michael; Jonmaire, Paul; Lee, Richard V; Harbison, Raymond D
2005-04-01
The contribution of consumer purchases of non-essential products to environmental pollution is characterized. Purchase decisions by consumers induce a complex sequence of economy-wide production interactions that influence the production and consumption of chemicals and subsequent exposure and possible public health risks. An economic input-output life-cycle analysis (EIO-LCA) was used to link resource consumption and production by manufacturers to corresponding environmental impacts. Using the US Department of Commerce's input-output tables together with the US Environmental Protection Agency's Toxics Release Inventory and AIRData databases, the economy-wide air discharges resulting from purchases of household appliances, motor homes, and games and toys were quantified. The economic and environmental impacts generated from a hypothetical 10,000 US dollar purchase for selected consumer items were estimated. The analysis shows how purchases of seemingly benign consumer products increase the output of air pollutants along the supply chain and contribute to the potential risks associated with environmental chemical exposures to both consumers and non-consumers alike.
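The EIO-LCA machinery rests on the Leontief model: total sector output x = (I − A)⁻¹d for a final demand d, with environmental burdens e = Rx. A two-sector sketch with invented coefficients follows (the real analysis uses the full US input-output tables together with TRI/AIRData emission factors):

```python
def eio_lca(A, R, demand):
    """Two-sector Leontief input-output LCA, solved in closed form:
    total output x = (I - A)^-1 d, economy-wide burden e = R . x.
    A[i][j] = dollars of sector-i input per dollar of sector-j output;
    R[i] = emissions per dollar of sector-i output."""
    a, b = 1.0 - A[0][0], -A[0][1]
    c, d = -A[1][0], 1.0 - A[1][1]
    det = a * d - b * c
    x0 = (d * demand[0] - b * demand[1]) / det
    x1 = (a * demand[1] - c * demand[0]) / det
    return [x0, x1], R[0] * x0 + R[1] * x1

# Invented 2-sector economy ("toys" and "plastics") and emission factors.
A = [[0.1, 0.3],
     [0.2, 0.1]]
R = [0.5, 2.0]                      # kg of air emissions per dollar of output
x, emissions = eio_lca(A, R, demand=[10000.0, 0.0])
```

Here a $10,000 final demand for toys induces $12,000 of toy output plus roughly $2,667 of plastics output, so total emissions exceed what the direct emission factor alone (0.5 × 10,000 = 5,000 kg) would suggest — the supply-chain amplification the study quantifies.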
Thermal sensation prediction by soft computing methodology.
Jović, Srđan; Arsić, Nebojša; Vilimonović, Jovana; Petković, Dalibor
2016-12-01
Thermal comfort in open urban areas is an important factor from an environmental point of view, so demands for suitable thermal comfort need to be met during urban planning and design. Thermal comfort can be modeled from climatic parameters and other factors, but these variables change throughout the year and the day, so an algorithm is needed that predicts thermal comfort from the input variables. The predictions could be used to plan when urban areas are used. Because the task is highly nonlinear, this investigation applied a soft computing methodology to predict thermal comfort. The main goal was to apply an extreme learning machine (ELM) to forecast physiological equivalent temperature (PET) values. Temperature, pressure, wind speed, and irradiance were used as inputs. The prediction results are compared with several benchmark models; based on the results, ELM can be used effectively to forecast PET. Copyright © 2016 Elsevier Ltd. All rights reserved.
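The essence of an extreme learning machine — a single hidden layer with random, fixed weights, and output weights fit in one linear least-squares step — can be sketched as follows. The synthetic PET-like target and all parameter values are invented for illustration; they are not the paper's data or configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: a smooth PET-like target from two normalized
# inputs (say, temperature and wind speed scaled to [0, 1]).
X = rng.random((60, 2))
t = 30.0 * X[:, 0] - 8.0 * np.sqrt(X[:, 1])

# ELM: hidden-layer weights are drawn at random and never trained.
n_hidden = 20
W = rng.uniform(-1.0, 1.0, size=(2, n_hidden))
b = rng.uniform(-1.0, 1.0, size=n_hidden)
H = np.tanh(X @ W + b)          # hidden-layer activations

# Only the output weights are learned, by linear least squares.
beta, *_ = np.linalg.lstsq(H, t, rcond=None)

pred = H @ beta
rmse = float(np.sqrt(np.mean((pred - t) ** 2)))
```

Because only `beta` is solved for, training reduces to a single least-squares problem, which is what makes ELM fast compared with backpropagation-trained networks.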
Application of neural based estimation algorithm for gait phases of above knee prosthesis.
Tileylioğlu, E; Yilmaz, A
2015-01-01
In this study, two gait phase estimation methods, which utilize a rule-based quantization and an artificial neural network model respectively, are developed and applied to a microcontroller-based semi-active knee prosthesis in order to respond to user demands and adapt to environmental conditions. In this context, an experimental environment was set up in which gait data were collected synchronously from both inertial and image-based measurement systems. The inertial measurement system, which incorporates MEMS accelerometers and gyroscopes, is used to perform direct motion measurement through the microcontroller, while the image-based measurement system is employed to produce the verification data and assess the success of the prosthesis. Embedded algorithms dynamically normalize the input data prior to gait phase estimation. The real-time analyses of the two methods revealed that the embedded ANN-based approach performs slightly better than the rule-based algorithm and has the advantage of being easily scalable, and is thus able to accommodate additional input parameters within the microcontroller's constraints.
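Dynamic normalization of a sensor stream, as mentioned above, can be as simple as rescaling each sample by the range observed so far. The running min-max normalizer below is a generic sketch of that idea, not the prosthesis firmware's actual algorithm.

```python
class RunningMinMaxNormalizer:
    """Map a sensor stream onto [0, 1] using the range seen so far."""

    def __init__(self):
        self.lo = float("inf")
        self.hi = float("-inf")

    def __call__(self, sample):
        # Grow the observed range, then rescale within it.
        self.lo = min(self.lo, sample)
        self.hi = max(self.hi, sample)
        if self.hi == self.lo:       # only one distinct value seen so far
            return 0.0
        return (sample - self.lo) / (self.hi - self.lo)
```

Feeding the samples 2.0, 4.0, 3.0 yields 0.0, 1.0, 0.5: each output is relative to the range accumulated up to that point, so no calibration pass over the whole gait cycle is required.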
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldstein, Peter
2014-01-24
This report describes the sensitivity of predicted nuclear fallout to a variety of model input parameters, including yield, height of burst, particle and activity size distribution parameters, wind speed, wind direction, topography, and precipitation. We investigate sensitivity over a wide but plausible range of model input parameters. In addition, we investigate a specific example with a relatively narrow range to illustrate the potential for evaluating uncertainties in predictions when there are more precise constraints on model parameters.
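A one-at-a-time sweep over plausible parameter ranges is the simplest way to expose the kind of sensitivity the report investigates. The surrogate function, baseline, and ranges below are invented placeholders for illustration, not the report's fallout model.

```python
def surrogate_dose(yield_kt, burst_height_m, wind_speed_ms):
    """Toy stand-in for a fallout prediction, NOT a physical model."""
    ground_coupling = max(0.0, 1.0 - burst_height_m / 1000.0)
    return (yield_kt ** 0.7) * ground_coupling * (1.0 + 0.1 * wind_speed_ms)

baseline = {"yield_kt": 10.0, "burst_height_m": 100.0, "wind_speed_ms": 5.0}
ranges = {"yield_kt": (1.0, 100.0),
          "burst_height_m": (0.0, 500.0),
          "wind_speed_ms": (0.0, 20.0)}

def one_at_a_time(model, baseline, ranges):
    """Vary each parameter alone over its range; report the output swing."""
    swings = {}
    for name, (lo, hi) in ranges.items():
        outputs = []
        for value in (lo, hi):
            params = dict(baseline, **{name: value})
            outputs.append(model(**params))
        swings[name] = max(outputs) - min(outputs)
    return swings

swings = one_at_a_time(surrogate_dose, baseline, ranges)
```

Ranking the swings identifies which inputs dominate the prediction uncertainty and therefore deserve the tighter constraints discussed in the report's narrow-range example.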
Water quality assessment of a small peri-urban river using low and high frequency monitoring.
Ivanovsky, A; Criquet, J; Dumoulin, D; Alary, C; Prygiel, J; Duponchel, L; Billon, G
2016-05-18
The biogeochemical behaviors of small rivers that pass through suburban areas are difficult to understand because of the multi-origin inputs that can modify their behavior. In this context, a monitoring strategy has been designed for the Marque River, located in Lille Metropolitan area of northern France, that includes both low-frequency monitoring over a one-year period (monthly sampling) and high frequency monitoring (measurements every 10 minutes) in spring and summer. Several environmental and chemical parameters are evaluated including rainfall events, river flow, temperature, dissolved oxygen, turbidity, conductivity, nutritive salts and dissolved organic matter. Our results from the Marque River show that (i) it is impacted by both urban and agricultural inputs, and as a consequence, the concentrations of phosphate and inorganic nitrogen have degraded the water quality; (ii) the classic photosynthesis/respiration processes are disrupted by the inputs of organic matter and nutritive salts; (iii) during dry periods, the urban sewage inputs (treated or not) are more important during the day, as indicated by higher river flows and maximal concentrations of ammonium; (iv) phosphate concentrations depend on oxygen contents in the river; (v) high nutrient concentrations result in eutrophication of the Marque River with lower pH and oxygen concentrations in summer. During rainfalls, additional inputs of ammonium, biodegradable organic matter as well as sediment resuspension result in anoxic events; and finally (vi) concentrations of nitrate are approximately constant over the year, except in winter when higher inputs can be recorded. Having better identified the processes responsible for the observed water quality, a more informed remediation effort can be put forward to move this suburban river to a good status of water quality.
Pressley, Joanna; Troyer, Todd W
2011-05-01
The leaky integrate-and-fire (LIF) is the simplest neuron model that captures the essential properties of neuronal signaling. Yet common intuitions are inadequate to explain basic properties of LIF responses to sinusoidal modulations of the input. Here we examine responses to low and moderate frequency modulations of both the mean and variance of the input current and quantify how these responses depend on baseline parameters. Across parameters, responses to modulations in the mean current are low pass, approaching zero in the limit of high frequencies. For very low baseline firing rates, the response cutoff frequency matches that expected from membrane integration. However, the cutoff shows a rapid, supralinear increase with firing rate, with a steeper increase in the case of lower noise. For modulations of the input variance, the gain at high frequency remains finite. Here, we show that the low-frequency responses depend strongly on baseline parameters and derive an analytic condition specifying the parameters at which responses switch from being dominated by low versus high frequencies. Additionally, we show that the resonant responses for variance modulations have properties not expected for common oscillatory resonances: they peak at frequencies higher than the baseline firing rate and persist when oscillatory spiking is disrupted by high noise. Finally, the responses to mean and variance modulations are shown to have a complementary dependence on baseline parameters at higher frequencies, resulting in responses to modulations of Poisson input rates that are independent of baseline input statistics.
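A minimal Euler-scheme LIF simulation with a sinusoidally modulated mean input illustrates the setup studied above. All parameter values (threshold, time constant, drive, noise) are illustrative choices, not those used in the paper.

```python
import math
import random

random.seed(1)

# Illustrative parameters (dimensionless voltage, seconds for time).
dt, tau = 1e-4, 20e-3          # time step, membrane time constant
v_th, v_reset = 1.0, 0.0       # spike threshold and reset value
mu0, mu1, f = 1.2, 0.2, 10.0   # baseline mean, modulation depth, freq (Hz)
sigma = 0.5                    # input noise amplitude

v, spikes = 0.0, []
duration = 2.0
for step in range(int(duration / dt)):
    t = step * dt
    mu = mu0 + mu1 * math.sin(2 * math.pi * f * t)   # modulated mean drive
    v += (dt / tau) * (mu - v) + sigma * math.sqrt(dt / tau) * random.gauss(0.0, 1.0)
    if v >= v_th:               # threshold crossing: record spike, reset
        spikes.append(t)
        v = v_reset

rate = len(spikes) / duration   # mean firing rate in Hz
```

Binning `spikes` by the phase of the modulation and fitting a sinusoid to the binned rates would give the gain and phase of the response at frequency f, which is the quantity whose low-pass behavior the paper characterizes.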
Generalized compliant motion primitive
NASA Technical Reports Server (NTRS)
Backes, Paul G. (Inventor)
1994-01-01
This invention relates to a general primitive for controlling a telerobot with a set of input parameters. The primitive includes a trajectory generator, a teleoperation sensor, a joint limit generator, a force setpoint generator, and a dither function generator, which produce telerobot motion inputs in a common coordinate frame for simultaneous combination in sensor summers. Virtual return-spring motion input is provided by a restoration spring subsystem. The novel features of this invention include the use of a single general motion primitive at a remote site to permit shared and supervisory control of the robot manipulator to perform tasks via a remotely transferred input parameter set.
Adaptive control of a quadrotor aerial vehicle with input constraints and uncertain parameters
NASA Astrophysics Data System (ADS)
Tran, Trong-Toan; Ge, Shuzhi Sam; He, Wei
2018-05-01
In this paper, we address the problem of adaptive bounded control for the trajectory tracking of a Quadrotor Aerial Vehicle (QAV) while the input saturations and uncertain parameters with the known bounds are simultaneously taken into account. First, to deal with the underactuated property of the QAV model, we decouple and construct the QAV model as a cascaded structure which consists of two fully actuated subsystems. Second, to handle the input constraints and uncertain parameters, we use a combination of the smooth saturation function and smooth projection operator in the control design. Third, to ensure the stability of the overall system of the QAV, we develop the technique for the cascaded system in the presence of both the input constraints and uncertain parameters. Finally, the region of stability of the closed-loop system is constructed explicitly, and our design ensures the asymptotic convergence of the tracking errors to the origin. The simulation results are provided to illustrate the effectiveness of the proposed method.
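One common choice of smooth saturation function — offered here as a plausible sketch, since the abstract does not reproduce the paper's exact form — is a tanh scaled to the actuator bound:

```python
import math

def smooth_sat(u, u_max):
    """Smooth, bounded stand-in for the hard saturation sat(u):
    the output never exceeds u_max in magnitude, and the function is
    differentiable everywhere, which is what a backstepping-style
    control design needs to propagate derivatives through."""
    return u_max * math.tanh(u / u_max)
```

Near the origin it matches the identity (tanh(x) ≈ x for small x), so small commands pass through almost unchanged, while large commands are compressed below the bound instead of being clipped with a non-differentiable corner.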
40 CFR 97.76 - Additional requirements to provide heat input data.
Code of Federal Regulations, 2010 CFR
2010-07-01
... heat input data. 97.76 Section 97.76 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Monitoring and Reporting § 97.76 Additional requirements to provide heat input data. The owner or operator of... a flow system shall also monitor and report heat input rate at the unit level using the procedures...
Sensory synergy as environmental input integration
Alnajjar, Fady; Itkonen, Matti; Berenz, Vincent; Tournier, Maxime; Nagai, Chikara; Shimoda, Shingo
2015-01-01
The development of a method to feed proper environmental inputs back to the central nervous system (CNS) remains one of the challenges in achieving natural movement when part of the body is replaced with an artificial device. Muscle synergies are widely accepted as a biologically plausible interpretation of the neural dynamics between the CNS and the muscular system. Yet the sensorineural dynamics of environmental feedback to the CNS has not been investigated in detail. In this study, we address this issue by exploring the concept of sensory synergy. In contrast to muscle synergy, we hypothesize that sensory synergy plays an essential role in integrating the overall environmental inputs to provide low-dimensional information to the CNS. We assume that sensory synergy and muscle synergy communicate using these low-dimensional signals. To examine our hypothesis, we conducted posture control experiments involving lateral disturbance with nine healthy participants. Proprioceptive information, represented by changes in muscle lengths, was estimated using the musculoskeletal model analysis software SIMM. The changes in muscle lengths were then used to compute sensory synergies. The experimental results indicate that the environmental inputs were translated into two-dimensional signals and used to move the upper limb to the desired position immediately after the lateral disturbance. Participants who showed high skill in posture control were found to have a strong correlation between sensory and muscle signaling as well as high coordination between the utilized sensory synergies. These results suggest the importance of integrating environmental inputs into suitable low-dimensional signals before providing them to the CNS. This mechanism should be essential when designing a prosthesis' sensory system to make the controller simpler. PMID:25628523
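Reducing many muscle-length signals to a few latent components — the computational core of extracting synergies — can be sketched with PCA on synthetic data. The two-latent-signal construction below is an invented illustration, not the study's SIMM output or its actual decomposition method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "muscle length change" data: 8 muscles driven by 2 latent
# signals plus small noise, mimicking a low-dimensional synergy structure.
n_samples, n_muscles, n_latent = 200, 8, 2
latent = rng.standard_normal((n_samples, n_latent))
mixing = rng.standard_normal((n_latent, n_muscles))
X = latent @ mixing + 0.05 * rng.standard_normal((n_samples, n_muscles))

# PCA via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)          # variance ratio per component

# Smallest number of components capturing 95% of the variance.
n_synergies = int(np.searchsorted(np.cumsum(explained), 0.95) + 1)
```

With data generated from two latent drivers, the first two components capture nearly all the variance, which is the sense in which "environmental inputs were translated into two-dimensional signals" in the experiment above.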
Translating landfill methane generation parameters among first-order decay models.
Krause, Max J; Chickering, Giles W; Townsend, Timothy G
2016-11-01
Landfill gas (LFG) generation is predicted by a first-order decay (FOD) equation that incorporates two parameters: a methane generation potential (L0) and a methane generation rate (k). Because non-hazardous waste landfills may accept many types of waste streams, multiphase models have been developed in an attempt to more accurately predict methane generation from heterogeneous waste streams. The ability of a single-phase FOD model to predict methane generation using weighted-average methane generation parameters and tonnages translated from multiphase models was assessed in two exercises. In the first exercise, waste composition from four Danish landfills represented by low-biodegradable waste streams was modeled in the Afvalzorg Multiphase Model and methane generation was compared to the single-phase Intergovernmental Panel on Climate Change (IPCC) Waste Model and LandGEM. In the second exercise, waste composition represented by IPCC waste components was modeled in the multiphase IPCC model and compared to single-phase LandGEM and Australia's Solid Waste Calculator (SWC). In both cases, weighted averaging of methane generation parameters from waste composition data in single-phase models was effective, predicting cumulative methane generation within -7% to +6% of the multiphase models. The results underscore the understanding that multiphase models will not necessarily improve LFG generation prediction because the uncertainty of the method rests largely within the input parameters. A unique method of calculating the methane generation rate constant by mass of anaerobically degradable carbon (kc) was presented and compared to existing methods, providing a better fit in 3 of 8 scenarios. Generally, single-phase models with weighted-average inputs can accurately predict methane generation from multiple waste streams with varied characteristics; weighted averages should therefore be used instead of regional default values when comparing models.
Translating multiphase first-order decay model input parameters by weighted average shows that single-phase models can predict cumulative methane generation within the level of uncertainty of many of the input parameters as defined by the Intergovernmental Panel on Climate Change (IPCC), which indicates that decreasing the uncertainty of the input parameters will make the model more accurate rather than adding multiple phases or input parameters.
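The parameter translation described above can be illustrated with a two-stream example: per-stream first-order decay versus a single phase using tonnage-weighted L0 and k. All tonnages and parameter values below are hypothetical, chosen only to show the mechanics of the comparison.

```python
import math

# (annual tonnage in Mg, L0 in m^3 CH4 per Mg, k in 1/yr) -- hypothetical
streams = [(50_000.0, 100.0, 0.05),   # e.g. mixed MSW
           (20_000.0,  20.0, 0.02)]   # e.g. low-biodegradable waste

def cumulative_multiphase(t_years):
    """Cumulative CH4 from one year's acceptance, summed per stream."""
    return sum(L0 * M * (1.0 - math.exp(-k * t_years)) for M, L0, k in streams)

# Tonnage-weighted single-phase parameters, as in the exercises above.
M_tot = sum(M for M, _, _ in streams)
L0_avg = sum(M * L0 for M, L0, _ in streams) / M_tot
k_avg = sum(M * k for M, _, k in streams) / M_tot

def cumulative_singlephase(t_years):
    return L0_avg * M_tot * (1.0 - math.exp(-k_avg * t_years))

multi = cumulative_multiphase(100.0)
single = cumulative_singlephase(100.0)
relative_diff = (single - multi) / multi
```

For this pair of streams the two predictions agree to within a fraction of a percent over 100 years, consistent with the -7% to +6% agreement reported above; the agreement is exact in the long-time limit because both formulations share the same total potential L0_avg × M_tot.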
Real-Time Ensemble Forecasting of Coronal Mass Ejections Using the WSA-ENLIL+Cone Model
NASA Astrophysics Data System (ADS)
Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; Odstrcil, D.; MacNeice, P. J.; Rastaetter, L.; LaSota, J. A.
2014-12-01
Ensemble forecasting of coronal mass ejections (CMEs) provides significant information in that it provides an estimation of the spread or uncertainty in CME arrival time predictions. Real-time ensemble modeling of CME propagation is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL+cone model available at the Community Coordinated Modeling Center (CCMC). To estimate the effect of uncertainties in determining CME input parameters on arrival time predictions, a distribution of n (routinely n=48) CME input parameter sets are generated using the CCMC Stereo CME Analysis Tool (StereoCAT) which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations yielding an ensemble of solar wind parameters at various locations of interest, including a probability distribution of CME arrival times (for hits), and geomagnetic storm strength (for Earth-directed hits). We present the results of ensemble simulations for a total of 38 CME events in 2013-2014. For 28 of the ensemble runs containing hits, the observed CME arrival was within the range of ensemble arrival time predictions for 14 runs (half). The average arrival time prediction was computed for each of the 28 ensembles predicting hits and using the actual arrival time, an average absolute error of 10.0 hours (RMSE=11.4 hours) was found for all 28 ensembles, which is comparable to current forecasting errors. Some considerations for the accuracy of ensemble CME arrival time predictions include the importance of the initial distribution of CME input parameters, particularly the mean and spread. When the observed arrivals are not within the predicted range, this still allows the ruling out of prediction errors caused by tested CME input parameters. Prediction errors can also arise from ambient model parameters such as the accuracy of the solar wind background, and other limitations. 
Additionally, the ensemble modeling system was used to complete a parametric event case study of the sensitivity of the CME arrival-time prediction to the free parameters of the ambient solar wind model and the CME. The parameter sensitivity study suggests future directions for the system, such as running ensembles using various magnetogram inputs to the WSA model.
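The basic verification arithmetic — ensemble mean, spread, and absolute arrival-time error — is simple. The hypothetical ensemble below shows a "hit" whose observed arrival falls outside the predicted range, as happened for half of the ensemble runs above.

```python
import statistics

# Hypothetical ensemble of predicted arrival times (hours after CME launch).
predicted = [52.1, 49.8, 55.0, 51.2, 53.7, 48.9, 54.4, 50.5]
observed = 47.5

mean_prediction = statistics.mean(predicted)
spread = (min(predicted), max(predicted))          # ensemble range
absolute_error = abs(mean_prediction - observed)   # error of the mean
within_spread = spread[0] <= observed <= spread[1]
```

Here the CME arrives earlier than every ensemble member, so `within_spread` is false: as the abstract notes, such cases still carry information, because they rule out the tested input parameters as the source of the error and point instead to the ambient solar wind background or other model limitations.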
From water use to water scarcity footprinting in environmentally extended input-output analysis.
Ridoutt, Bradley George; Hadjikakou, Michalis; Nolan, Martin; Bryan, Brett A
2018-05-18
Environmentally extended input-output analysis (EEIOA) supports environmental policy by quantifying how demand for goods and services leads to resource use and emissions across the economy. However, some types of resource use and emissions require spatially-explicit impact assessment for meaningful interpretation, which is not possible in conventional EEIOA. For example, water use in locations of scarcity and abundance is not environmentally equivalent. Opportunities for spatially-explicit impact assessment in conventional EEIOA are limited because official input-output tables tend to be produced at the scale of political units which are not usually well aligned with environmentally relevant spatial units. In this study, spatially-explicit water scarcity factors and a spatially disaggregated Australian water use account were used to develop water scarcity extensions that were coupled with a multi-regional input-output model (MRIO). The results link demand for agricultural commodities to the problem of water scarcity in Australia and globally. Important differences were observed between the water use and water scarcity footprint results, as well as the relative importance of direct and indirect water use, with significant implications for sustainable production and consumption-related policies. The approach presented here is suggested as a feasible general approach for incorporating spatially-explicit impact assessment in EEIOA.
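The distinction between a volumetric water footprint and a scarcity-weighted one reduces to multiplying each region's water use by a local scarcity factor before summing. The basins, volumes, and factors below are invented to illustrate the calculation; they are not the study's Australian water account.

```python
# Hypothetical direct + indirect water use (ML) attributed to a product,
# broken down by basin of withdrawal, with scarcity characterization
# factors in the style of AWARE (higher = scarcer).
water_use_ml = {"basin_A": 120.0, "basin_B": 40.0, "basin_C": 300.0}
scarcity_factor = {"basin_A": 0.9, "basin_B": 45.0, "basin_C": 2.1}

volumetric_footprint = sum(water_use_ml.values())
scarcity_footprint = sum(use * scarcity_factor[basin]
                         for basin, use in water_use_ml.items())

# Share of the scarcity footprint contributed by each basin.
shares = {basin: use * scarcity_factor[basin] / scarcity_footprint
          for basin, use in water_use_ml.items()}
```

In this example basin_B withdraws under 9% of the volume but contributes roughly 70% of the scarcity footprint: precisely the kind of divergence between the two metrics that makes spatially-explicit impact assessment matter for policy.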
Measurand transient signal suppressor
NASA Technical Reports Server (NTRS)
Bozeman, Richard J., Jr. (Inventor)
1994-01-01
A transient signal suppressor for use in a controls system which is adapted to respond to a change in a physical parameter whenever it crosses a predetermined threshold value in a selected direction of increasing or decreasing values with respect to the threshold value and is sustained for a selected discrete time interval is presented. The suppressor includes a sensor transducer for sensing the physical parameter and generating an electrical input signal whenever the sensed physical parameter crosses the threshold level in the selected direction. A manually operated switch is provided for adapting the suppressor to produce an output drive signal whenever the physical parameter crosses the threshold value in the selected direction of increasing or decreasing values. A time delay circuit is selectively adjustable for suppressing the transducer input signal for a preselected one of a plurality of available discrete suppression time and producing an output signal only if the input signal is sustained for a time greater than the selected suppression time. An electronic gate is coupled to receive the transducer input signal and the timer output signal and produce an output drive signal for energizing a control relay whenever the transducer input is a non-transient signal which is sustained beyond the selected time interval.
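The suppressor's logic — pass a threshold crossing only if it persists beyond a selected hold time — translates directly into software. The class below is a generic debounce sketch of that behavior, not a model of the patented circuit itself.

```python
class TransientSuppressor:
    """Emit a drive signal only when a threshold crossing is sustained
    for at least `hold_time` seconds; shorter transients are suppressed."""

    def __init__(self, hold_time):
        self.hold_time = hold_time
        self.crossed_since = None   # time the current crossing began

    def update(self, t, crossed):
        if not crossed:             # signal back below threshold: reset
            self.crossed_since = None
            return False
        if self.crossed_since is None:
            self.crossed_since = t  # a new crossing starts the timer
        return (t - self.crossed_since) >= self.hold_time
```

With a 0.5 s hold time, a crossing sampled at t = 0.0 and 0.3 s produces no output, but one still present at t = 0.6 s does; any dip below threshold restarts the timer, mirroring the role of the selectable delay circuit in the patent.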
A Spreadsheet Simulation Tool for Terrestrial and Planetary Balloon Design
NASA Technical Reports Server (NTRS)
Raquea, Steven M.
1999-01-01
During the early stages of new balloon design and development, it is necessary to conduct many trade studies. These trade studies are required to determine the design space, and aid significantly in determining overall feasibility. Numerous point designs then need to be generated as details of payloads, materials, mission, and manufacturing are determined. To accomplish these numerous designs, transient models are both unnecessary and time intensive. A steady state model that uses appropriate design inputs to generate system-level descriptive parameters can be very flexible and fast. Just such a steady state model has been developed and has been used during both the MABS 2001 Mars balloon study and the Ultra Long Duration Balloon Project. Using Microsoft Excel's built-in iteration routine, a model was built. Separate sheets were used for performance, structural design, materials, and thermal analysis, as well as input and output sheets. As can be seen from figure 1, the model takes basic performance requirements, weight estimates, design parameters, and environmental conditions and generates a system-level balloon design. Figure 2 shows a sample output of the model. By changing the inputs and a few of the equations in the model, balloons on Earth or other planets can be modeled. There are currently several variations of the model for terrestrial and Mars balloons, as well as versions that perform rough material design based on strength and weight requirements. To perform trade studies, the Visual Basic language built into Excel was used to create an automated matrix of designs. This trade study module allows a three-dimensional trade surface to be generated by using a series of values for any two design variables. Once the fixed and variable inputs are defined, the model automatically steps through the input matrix and fills a spreadsheet with the resulting point designs.
The proposed paper will describe the model in detail, including current variations. The assumptions, governing equations, and capabilities will be addressed. Detailed examples of the model in practice will also be used.
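The fixed-point character of such a spreadsheet model (envelope mass depends on volume, which depends on envelope mass) can be reproduced in a few lines. The densities, payload, and film weight below are rough illustrative values, not ULDB or MABS design numbers, and a real sizing model adds thermal and material detail.

```python
import math

# Rough float-altitude densities (~18 km) and assumed design inputs.
rho_air, rho_gas = 0.089, 0.014   # kg/m^3, ambient air and helium
payload_kg = 50.0                 # suspended mass
film_kg_per_m2 = 0.05             # envelope areal density (assumed)

volume_m3 = 100.0                 # initial guess
for _ in range(100):              # Excel-style fixed-point iteration
    # Spherical envelope: surface area from the current volume guess.
    radius = (3.0 * volume_m3 / (4.0 * math.pi)) ** (1.0 / 3.0)
    envelope_kg = film_kg_per_m2 * 4.0 * math.pi * radius**2
    # Buoyant lift must carry payload plus envelope at float altitude.
    volume_m3 = (payload_kg + envelope_kg) / (rho_air - rho_gas)
```

The loop converges because each added cubic meter of volume gains lift linearly while the envelope area (and hence mass) grows only as volume to the two-thirds power; this is exactly the circular dependency Excel's built-in iteration routine resolves in the spreadsheet.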
NASA Astrophysics Data System (ADS)
Capote, R.; Herman, M.; Obložinský, P.; Young, P. G.; Goriely, S.; Belgya, T.; Ignatyuk, A. V.; Koning, A. J.; Hilaire, S.; Plujko, V. A.; Avrigeanu, M.; Bersillon, O.; Chadwick, M. B.; Fukahori, T.; Ge, Zhigang; Han, Yinlu; Kailas, S.; Kopecky, J.; Maslov, V. M.; Reffo, G.; Sin, M.; Soukhovitskii, E. Sh.; Talou, P.
2009-12-01
We describe the physics and data included in the Reference Input Parameter Library, which is devoted to input parameters needed in calculations of nuclear reactions and nuclear data evaluations. Advanced modelling codes require substantial numerical input, therefore the International Atomic Energy Agency (IAEA) has worked extensively since 1993 on a library of validated nuclear-model input parameters, referred to as the Reference Input Parameter Library (RIPL). A final RIPL coordinated research project (RIPL-3) was brought to a successful conclusion in December 2008, after 15 years of challenging work carried out through three consecutive IAEA projects. The RIPL-3 library was released in January 2009, and is available on the Web through http://www-nds.iaea.org/RIPL-3/. This work and the resulting database are extremely important to theoreticians involved in the development and use of nuclear reaction modelling (ALICE, EMPIRE, GNASH, UNF, TALYS) both for theoretical research and nuclear data evaluations. The numerical data and computer codes included in RIPL-3 are arranged in seven segments: MASSES contains ground-state properties of nuclei for about 9000 nuclei, including three theoretical predictions of masses and the evaluated experimental masses of Audi et al. (2003). DISCRETE LEVELS contains 117 datasets (one for each element) with all known level schemes, electromagnetic and γ-ray decay probabilities available from ENSDF in October 2007. NEUTRON RESONANCES contains average resonance parameters prepared on the basis of the evaluations performed by Ignatyuk and Mughabghab. OPTICAL MODEL contains 495 sets of phenomenological optical model parameters defined in a wide energy range. When there are insufficient experimental data, the evaluator has to resort to either global parameterizations or microscopic approaches. Radial density distributions to be used as input for microscopic calculations are stored in the MASSES segment. 
LEVEL DENSITIES contains phenomenological parameterizations based on the modified Fermi gas and superfluid models and microscopic calculations which are based on a realistic microscopic single-particle level scheme. Partial level densities formulae are also recommended. All tabulated total level densities are consistent with both the recommended average neutron resonance parameters and discrete levels. GAMMA contains parameters that quantify giant resonances, experimental gamma-ray strength functions and methods for calculating gamma emission in statistical model codes. The experimental GDR parameters are represented by Lorentzian fits to the photo-absorption cross sections for 102 nuclides ranging from 51V to 239Pu. FISSION includes global prescriptions for fission barriers and nuclear level densities at fission saddle points based on microscopic HFB calculations constrained by experimental fission cross sections.
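The Lorentzian parameterization used for the GDR photoabsorption fits in the GAMMA segment has the standard form shown below (quoted from common usage; the RIPL-3 documentation gives the precise conventions, and deformed nuclei use a sum of two such terms):

```latex
\sigma_{\mathrm{GDR}}(E_\gamma) \;=\;
\sigma_0\,
\frac{E_\gamma^{2}\,\Gamma_0^{2}}
     {\left(E_\gamma^{2}-E_0^{2}\right)^{2} + E_\gamma^{2}\,\Gamma_0^{2}}
```

Here the three fitted parameters per Lorentzian are the peak cross section \(\sigma_0\), the resonance energy \(E_0\), and the width \(\Gamma_0\), corresponding to the tabulated GDR parameters for the 102 nuclides mentioned above.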
NASA Technical Reports Server (NTRS)
Stevenson, W. H. (Principal Investigator); Pastula, E. J., Jr.
1973-01-01
The author has identified the following significant results. This 15-month ERTS-1 investigation produced correlations between satellite, aircraft, menhaden fisheries, and environmental sea-truth data from the Mississippi Sound. Selected oceanographic, meteorological, and biological parameters were used as indirect indicators of the menhaden resource. Synoptic and near-real-time sea truth, fishery data, satellite imagery, and aircraft-acquired multispectral, photographic, and thermal IR information were acquired as data inputs. Computer programs were developed to manipulate these data according to user requirements. Preliminary results indicate a correlation of backscattered light with chlorophyll concentration and water transparency in turbid waters. Eight empirical menhaden distribution models were constructed from combinations of four fisheries-significant oceanographic parameters: water depth, transparency, color, and surface salinity. The models demonstrated their potential for management utilization in areas of resource assessment, prediction, and monitoring.
Viger, Roland J.
2008-01-01
This fact sheet provides a high-level description of the GIS Weasel, a software system designed to aid users in preparing spatial information as input to lumped and distributed parameter environmental simulation models (ESMs). The GIS Weasel provides geographic information system (GIS) tools to help create maps of geographic features relevant to the application of a user's ESM and to generate parameters from those maps. The operation of the GIS Weasel does not require a user to be a GIS expert, only that a user has an understanding of the spatial information requirements of the model. The GIS Weasel software system provides a GIS-based graphical user interface (GUI), C programming language executables, and general utility scripts. The software will run on any computing platform where ArcInfo Workstation (version 8.1 or later) and the GRID extension are accessible. The user controls the GIS Weasel by interacting with menus, maps, and tables.
Evaluation of Uncertainty in Constituent Input Parameters for Modeling the Fate of RDX
2015-07-01
exercise was to evaluate the importance of chemical-specific model input parameters, the impacts of their uncertainty, and the potential benefits of... chemical-specific inputs for RDX that were determined to be sensitive with relatively high uncertainty: these included the soil-water linear... Koc for organic chemicals. The EFS values provided for log Koc of RDX were 1.72 and 1.95. OBJECTIVE: TREECS™ (http://el.erdc.usace.army.mil/treecs
NASA Technical Reports Server (NTRS)
Wallace, Terryl A.; Bey, Kim S.; Taminger, Karen M. B.; Hafley, Robert A.
2004-01-01
A study was conducted to evaluate the relative significance of input parameters on Ti-6Al-4V deposits produced by an electron beam freeform fabrication process under development at the NASA Langley Research Center. Five input parameters were chosen (beam voltage, beam current, translation speed, wire feed rate, and beam focus), and a design of experiments (DOE) approach was used to develop a set of 16 experiments to evaluate the relative importance of these parameters on the resulting deposits. Both single-bead and multi-bead stacks were fabricated using the 16 combinations, and the resulting heights and widths of the stack deposits were measured. The resulting microstructures were also characterized to determine the impact of these parameters on the size of the melt pool and heat-affected zone. The relative importance of each input parameter on the height and width of the multi-bead stacks will be discussed.
Forty years of Fanger's model of thermal comfort: comfort for all?
van Hoof, J
2008-06-01
The predicted mean vote (PMV) model of thermal comfort, created by Fanger in the late 1960s, is used worldwide to assess thermal comfort. Fanger based his model on college-aged students for use in invariant environmental conditions in air-conditioned buildings in moderate thermal climate zones. Environmental engineering practice calls for a predictive method that is applicable to all types of people in any kind of building in every climate zone. In this publication, existing support and criticism, as well as modifications to the PMV model, are discussed in light of the requirements of environmental engineering practice in the 21st century, in order to move from a predicted mean vote to comfort for all. Improved prediction of thermal comfort can be achieved through improving the validity of the PMV model, better specification of the model's input parameters, and accounting for outdoor thermal conditions and special groups. The application range of the PMV model can be enlarged, for instance, by using the model to assess the effects of the thermal environment on productivity and behavior, and interactions with other indoor environmental parameters, as well as through the use of information and communication technologies. Even with such modifications to thermal comfort evaluation, thermal comfort for all can only be achieved when occupants have effective control over their own thermal environment. The paper treats the assessment of thermal comfort using the PMV model of Fanger, and deals with the strengths and limitations of this model. Readers are made familiar with some opportunities for use in the 21st-century information society.
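For readers unfamiliar with the model's mechanics, the PMV calculation can be sketched directly from the ISO 7730 formulation of Fanger's heat balance. The implementation below is a minimal sketch (variable names are ours), not a validated engineering tool:

```python
import math

def pmv(ta, tr, vel, rh, met=1.2, clo=0.5, wme=0.0):
    """Fanger's Predicted Mean Vote, following the ISO 7730 algorithm (sketch).

    ta, tr: air and mean radiant temperature (deg C); vel: air speed (m/s);
    rh: relative humidity (%); met: metabolic rate (met); clo: clothing
    insulation (clo); wme: external work (met), usually 0.
    """
    pa = rh * 10.0 * math.exp(16.6536 - 4030.183 / (ta + 235.0))  # vapour pressure, Pa
    icl = 0.155 * clo                     # clothing insulation, m2K/W
    m = met * 58.15                       # metabolic rate, W/m2
    mw = m - wme * 58.15                  # internal heat production
    fcl = 1.05 + 0.645 * icl if icl > 0.078 else 1.0 + 1.29 * icl
    hcf = 12.1 * math.sqrt(vel)           # forced convection coefficient
    taa, tra = ta + 273.0, tr + 273.0

    # Iterate for the clothing surface temperature
    tcla = taa + (35.5 - ta) / (3.5 * icl + 0.1)
    p1 = icl * fcl
    p2 = p1 * 3.96
    p3 = p1 * 100.0
    p4 = p1 * taa
    p5 = 308.7 - 0.028 * mw + p2 * (tra / 100.0) ** 4
    xn, xf = tcla / 100.0, tcla / 50.0
    hc = hcf
    for _ in range(150):
        if abs(xn - xf) < 0.00015:
            break
        xf = (xf + xn) / 2.0
        hcn = 2.38 * abs(100.0 * xf - taa) ** 0.25   # natural convection
        hc = max(hcf, hcn)
        xn = (p5 + p4 * hc - p2 * xf ** 4) / (100.0 + p3 * hc)
    tcl = 100.0 * xn - 273.0

    # Heat losses: skin diffusion, sweating, latent/dry respiration, radiation, convection
    hl1 = 3.05e-3 * (5733.0 - 6.99 * mw - pa)
    hl2 = 0.42 * (mw - 58.15) if mw > 58.15 else 0.0
    hl3 = 1.7e-5 * m * (5867.0 - pa)
    hl4 = 0.0014 * m * (34.0 - ta)
    hl5 = 3.96 * fcl * (xn ** 4 - (tra / 100.0) ** 4)
    hl6 = fcl * hc * (tcl - ta)
    ts = 0.303 * math.exp(-0.036 * m) + 0.028    # thermal sensitivity coefficient
    return ts * (mw - hl1 - hl2 - hl3 - hl4 - hl5 - hl6)
```

The iteration solves for the clothing surface temperature before the six heat-loss terms are evaluated; better specification of inputs matters, as the abstract argues, because PMV is sensitive to the clothing and metabolic-rate parameters.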
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1995-01-01
Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for open-loop parameter identification purposes, specifically for optimal input design validation at 5 degrees angle of attack, identification of individual strake effectiveness at 40 and 50 degrees angle of attack, and study of lateral dynamics and lateral control effectiveness at 40 and 50 degrees angle of attack. Each maneuver is to be realized by applying square wave inputs to specific control effectors using the On-Board Excitation System (OBES). Maneuver descriptions and complete specifications of the time/amplitude points that define each input are included, along with plots of the input time histories.
An Overview of the Naval Research Laboratory Ocean Surface Flux (NFLUX) System
NASA Astrophysics Data System (ADS)
May, J. C.; Rowley, C. D.; Barron, C. N.
2016-02-01
The Naval Research Laboratory (NRL) ocean surface flux (NFLUX) system is an end-to-end data processing and assimilation system used to provide near-real-time satellite-based surface heat flux fields over the global ocean. Swath-level air temperature (TA), specific humidity (QA), and wind speed (WS) estimates are produced using multiple polynomial regression algorithms with inputs from satellite sensor data records from the Special Sensor Microwave Imager/Sounder, the Advanced Microwave Sounding Unit-A, the Advanced Technology Microwave Sounder, and the Advanced Microwave Scanning Radiometer-2 sensors. Swath-level WS estimates are also retrieved from satellite environmental data records from WindSat, the MetOp scatterometers, and the Oceansat scatterometer. Swath-level solar and longwave radiative flux estimates are produced utilizing the Rapid Radiative Transfer Model for Global Circulation Models (RRTMG). Primary inputs to the RRTMG include temperature and moisture profiles and cloud liquid and ice water paths from the Microwave Integrated Retrieval System. All swath-level satellite estimates undergo an automated quality control process and are then assimilated with atmospheric model forecasts to produce 3-hourly gridded analysis fields. The turbulent heat flux fields (latent and sensible heat flux) are determined from the Coupled Ocean-Atmosphere Response Experiment (COARE) 3.0 bulk algorithms using inputs of TA, QA, WS, and a sea surface temperature model field. Quality-controlled in situ observations over a one-year time period from May 2013 through April 2014 form the reference for validating ocean surface state parameter and heat flux fields. The NFLUX fields are evaluated alongside the Navy's operational global atmospheric model, the Navy Global Environmental Model (NAVGEM). NFLUX is shown to have smaller biases and lower or similar root mean square errors compared to NAVGEM.
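The final step NFLUX performs, the turbulent flux computation, follows the standard bulk aerodynamic formulae. Below is a simplified sketch with constant exchange coefficients; the full COARE 3.0 algorithm makes these stability- and height-dependent, so the constants here are illustrative assumptions only:

```python
RHO_AIR = 1.22       # air density, kg/m3 (illustrative constant)
CP_AIR = 1004.0      # specific heat of air at constant pressure, J/(kg K)
LV = 2.45e6          # latent heat of vaporization, J/kg
CH = 1.1e-3          # bulk transfer coefficient for heat (assumed constant)
CE = 1.2e-3          # bulk transfer coefficient for moisture (assumed constant)

def sensible_heat_flux(ws, ta, sst):
    """Q_sen = rho * cp * C_H * U * (SST - TA), W/m2 (positive: ocean heats air)."""
    return RHO_AIR * CP_AIR * CH * ws * (sst - ta)

def latent_heat_flux(ws, qa, qs):
    """Q_lat = rho * Lv * C_E * U * (q_s - q_a), W/m2; specific humidities in kg/kg."""
    return RHO_AIR * LV * CE * ws * (qs - qa)

# A warm sea surface under cooler, drier air loses heat to the atmosphere
q_sen = sensible_heat_flux(ws=8.0, ta=18.0, sst=20.0)
q_lat = latent_heat_flux(ws=8.0, qa=0.010, qs=0.0145)
```

This is why the system needs TA, QA, WS, and SST as inputs: each flux is the product of an exchange coefficient, the wind speed, and an air-sea gradient.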
40 CFR 96.76 - Additional requirements to provide heat input data for allocations purposes.
Code of Federal Regulations, 2011 CFR
2011-07-01
... heat input data for allocations purposes. 96.76 Section 96.76 Protection of Environment ENVIRONMENTAL... to provide heat input data for allocations purposes. (a) The owner or operator of a unit that elects... also monitor and report heat input at the unit level using the procedures set forth in part 75 of this...
40 CFR 75.83 - Calculation of Hg mass emissions and heat input rate.
Code of Federal Regulations, 2010 CFR
2010-07-01
... heat input rate. 75.83 Section 75.83 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Calculation of Hg mass emissions and heat input rate. The owner or operator shall calculate Hg mass emissions and heat input rate in accordance with the procedures in sections 9.1 through 9.3 of appendix F to...
40 CFR 96.76 - Additional requirements to provide heat input data for allocations purposes.
Code of Federal Regulations, 2013 CFR
2013-07-01
... heat input data for allocations purposes. 96.76 Section 96.76 Protection of Environment ENVIRONMENTAL... to provide heat input data for allocations purposes. (a) The owner or operator of a unit that elects... also monitor and report heat input at the unit level using the procedures set forth in part 75 of this...
40 CFR 96.76 - Additional requirements to provide heat input data for allocations purposes.
Code of Federal Regulations, 2012 CFR
2012-07-01
... heat input data for allocations purposes. 96.76 Section 96.76 Protection of Environment ENVIRONMENTAL... to provide heat input data for allocations purposes. (a) The owner or operator of a unit that elects... also monitor and report heat input at the unit level using the procedures set forth in part 75 of this...
40 CFR 96.76 - Additional requirements to provide heat input data for allocations purposes.
Code of Federal Regulations, 2014 CFR
2014-07-01
... heat input data for allocations purposes. 96.76 Section 96.76 Protection of Environment ENVIRONMENTAL... to provide heat input data for allocations purposes. (a) The owner or operator of a unit that elects... also monitor and report heat input at the unit level using the procedures set forth in part 75 of this...
Quantifying uncertainty and sensitivity in sea ice models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urrego Blanco, Jorge Rolando; Hunke, Elizabeth Clare; Urban, Nathan Mark
The Los Alamos Sea Ice model has a number of input parameters for which accurate values are not always well established. We conduct a variance-based sensitivity analysis of hemispheric sea ice properties to 39 input parameters. The method accounts for non-linear and non-additive effects in the model.
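The variance-based method referred to can be illustrated with a minimal Monte Carlo (pick-freeze) estimator of first-order Sobol indices. The code below applies it to a toy additive function, not to the sea ice model itself:

```python
import random

def first_order_sobol(f, n_params, n_samples=20000, seed=1):
    """Estimate first-order Sobol indices S_i of f over the unit hypercube."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_params)] for _ in range(n_samples)]
    B = [[rng.random() for _ in range(n_params)] for _ in range(n_samples)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mean = sum(fA) / n_samples
    var = sum((y - mean) ** 2 for y in fA) / n_samples
    indices = []
    for i in range(n_params):
        # A_B^(i): sample matrix A with column i taken from B (pick-freeze)
        fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        vi = sum(yb * (yab - ya) for ya, yb, yab in zip(fA, fB, fABi)) / n_samples
        indices.append(vi / var)
    return indices

# Toy additive model 2*x1 + x2: analytic first-order indices are 0.8 and 0.2
S = first_order_sobol(lambda x: 2.0 * x[0] + 1.0 * x[1], 2)
```

Because the estimator works on model output samples only, the same scheme extends to non-linear, non-additive models (where higher-order and total indices become informative), which is the situation the abstract describes.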
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, W. Payton; Hokr, Milan; Shao, Hua
2016-10-19
We investigated the transit time distribution (TTD) of discharge collected from fractures in the Bedrichov Tunnel, Czech Republic, using lumped parameter models and multiple environmental tracers. We utilized time series of δ¹⁸O, δ²H, and ³H, along with CFC measurements from individual fractures, to investigate the TTD and the uncertainty in the estimated mean travel time in several fracture networks of varying length and discharge. We compared several TTDs, including the dispersion distribution, the exponential distribution, and a newly developed TTD that includes the effects of matrix diffusion. The effect of seasonal recharge was explored by comparing several seasonal weighting functions used to derive the historical recharge concentration. We identified best-fit mean ages for each TTD by minimizing the error-weighted, multi-tracer χ² residual for each seasonal weighting function. We used this methodology to test the ability of each TTD and seasonal input function to fit the observed tracer concentrations, and the effect of choosing different TTDs and seasonal recharge functions on the mean age estimate. We found that the estimated mean transit time is a function of both the assumed TTD and the seasonal weighting function. Best fits as measured by the χ² value were achieved for the dispersion model using the seasonal input function developed here at two of the three modeled sites; at the third site, equally good fits were achieved with the exponential model and with the dispersion model and our seasonal input function. The average mean transit time over all TTDs and seasonal input functions converged to similar values at each location. The sensitivity of the estimated mean transit time to the seasonal weighting function was equal to that of the TTD.
These results indicate that understanding seasonality of recharge is at least as important as the uncertainty in the flow path distribution in fracture networks, and that unique identification of the TTD and mean transit time is difficult given the uncertainty in the recharge function. The mean transit time, however, appears to be relatively robust to structural model uncertainty. The results presented here should be applicable to other studies using environmental tracers to constrain flow and transport properties in fractured rock systems.
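The lumped-parameter approach described predicts the outlet tracer concentration by convolving the recharge history with an assumed TTD, including radioactive decay for tritium. A minimal sketch with the exponential distribution and hypothetical inputs (the constant input series below is invented for illustration):

```python
import math

def exponential_ttd(tau, mean_age):
    """Exponential transit time distribution g(tau) = exp(-tau/T) / T."""
    return math.exp(-tau / mean_age) / mean_age

def convolve_output(c_in, mean_age, decay_lambda=0.0, dt=1.0):
    """c_out(t) = sum over tau of c_in(t - tau) * g(tau) * exp(-lambda*tau) * dt.

    c_in: recharge concentration history, oldest value first; returns the
    modeled concentration at the final time step.
    """
    t = len(c_in) - 1
    total = 0.0
    for k in range(len(c_in)):
        tau = k * dt
        total += (c_in[t - k] * exponential_ttd(tau, mean_age)
                  * math.exp(-decay_lambda * tau) * dt)
    return total

# Hypothetical constant unit input: the output approaches the steady state,
# and adding tritium decay (half-life ~12.32 y) lowers the modeled output
steady = convolve_output([1.0] * 400, mean_age=20.0)
decayed = convolve_output([1.0] * 400, mean_age=20.0,
                          decay_lambda=math.log(2) / 12.32)
```

Fitting then amounts to repeating this convolution over a grid of mean ages (and TTD shapes) and minimizing the error-weighted χ² residual against the observed tracers, as in the abstract.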
Karmakar, Chandan; Udhayakumar, Radhagayathri K.; Li, Peng; Venkatesh, Svetha; Palaniswami, Marimuthu
2017-01-01
Distribution entropy (DistEn) is a recently developed measure of complexity used to analyse heart rate variability (HRV) data. Its calculation requires two input parameters: the embedding dimension m, and the number of bins M, which replaces the tolerance parameter r used by the existing approximate entropy (ApEn) and sample entropy (SampEn) measures. The performance of DistEn can also be affected by the data length N. In our previous studies, we analyzed the stability and performance of DistEn with respect to one parameter (m or M) or a combination of two parameters (N and M). However, the impact of varying all three input parameters on DistEn has not yet been studied. Since DistEn is predominantly aimed at analysing short-length HRV signals, it is important to comprehensively study the stability, consistency, and performance of the measure using multiple case studies. In this study, we examined the impact of changing the input parameters on DistEn for synthetic and physiological signals. We also compared the variations of DistEn, and its performance in distinguishing physiological (elderly from young) and pathological (healthy from arrhythmia) conditions, with ApEn and SampEn. The results showed that DistEn values are minimally affected by variations of the input parameters compared to ApEn and SampEn. DistEn also showed the most consistent and the best performance in differentiating physiological and pathological conditions across variations of the input parameters among the reported complexity measures. In conclusion, DistEn is found to be the best measure for analysing short-length HRV time series. PMID:28979215
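DistEn itself can be sketched compactly: embed the series with dimension m, collect all pairwise Chebyshev distances, bin them into an M-bin histogram, and take the normalized Shannon entropy of that histogram. The code below is our own minimal reading of that procedure, not the authors' implementation:

```python
import math
import random

def dist_en(x, m=2, M=512):
    """Distribution entropy of series x: embedding dimension m, M histogram bins."""
    n = len(x)
    # Embed: overlapping vectors of length m
    vectors = [x[i:i + m] for i in range(n - m + 1)]
    # Chebyshev (max-norm) distances between all distinct vector pairs
    dists = []
    for i in range(len(vectors)):
        for j in range(i + 1, len(vectors)):
            dists.append(max(abs(a - b) for a, b in zip(vectors[i], vectors[j])))
    lo, hi = min(dists), max(dists)
    if hi == lo:
        return 0.0
    counts = [0] * M
    for d in dists:
        counts[min(int((d - lo) / (hi - lo) * M), M - 1)] += 1
    total = len(dists)
    # Normalized Shannon entropy of the empirical distance distribution
    ent = -sum((c / total) * math.log2(c / total) for c in counts if c)
    return ent / math.log2(M)

# Illustrative short series, as DistEn targets short recordings
random.seed(7)
noise = [random.random() for _ in range(200)]
v = dist_en(noise)
```

Because the histogram uses all pairwise distances rather than a single tolerance threshold, the estimate degrades gracefully for short N, which is the property the abstract emphasizes.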
Application of artificial neural networks to assess pesticide contamination in shallow groundwater
Sahoo, G.B.; Ray, C.; Mehnert, E.; Keefer, D.A.
2006-01-01
In this study, a feed-forward back-propagation neural network (BPNN) was developed and applied to predict pesticide concentrations in groundwater monitoring wells. Pesticide concentration data are challenging to analyze because they tend to be highly censored. Input data to the neural network included the categorical indices of depth to aquifer material, pesticide leaching class, aquifer sensitivity to pesticide contamination, time (month) of sample collection, well depth, depth to water from land surface, and additional travel distance in the saturated zone (i.e., distance from land surface to the midpoint of the well screen). The output of the neural network was the total pesticide concentration detected in the well. The model predictions produced good agreement with observed data in terms of the correlation coefficient (R = 0.87) and pesticide detection efficiency (E = 89%), as well as a good match between the observed and predicted "class" groups. The relative importance of input parameters to pesticide occurrence in groundwater was examined in terms of R, E, mean error (ME), root mean square error (RMSE), and pesticide occurrence "class" groups by eliminating some key input parameters from the model. Well depth and time of sample collection were the most sensitive input parameters for predicting the pesticide contamination potential of a well. This implies that wells tapping shallow aquifers are more vulnerable to pesticide contamination than wells tapping deeper aquifers. Pesticide occurrences during post-application months (June through October) were found to be 2.5 to 3 times higher than during other months (November through April). The BPNN was used to rank the input parameters with the highest potential to contaminate groundwater, including two original and five ancillary parameters. The two original parameters are depth to aquifer material and pesticide leaching class.
When these two parameters were the only inputs to the BPNN, they were not able to predict contamination potential. However, when they were used together with the other parameters, the predictive performance of the BPNN in terms of R, E, ME, RMSE, and pesticide occurrence "class" groups increased. Ancillary data include data collected during the study, such as well depth and time of sample collection. The BPNN indicated that the ancillary data had more predictive power than the original data. The BPNN results will help researchers identify parameters to improve maps of aquifer sensitivity to pesticide contamination. Copyright © 2006 Elsevier B.V. All rights reserved.
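A feed-forward back-propagation network of the kind described can be sketched in a few dozen lines. The architecture and the toy training set below (logical OR) are illustrative assumptions, not the study's data or configuration:

```python
import math
import random

class BPNN:
    """Minimal one-hidden-layer feed-forward network trained by back-propagation."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
        self.b1 = [0.0] * n_hidden
        self.w2 = [rng.uniform(-1, 1) for _ in range(n_hidden)]
        self.b2 = 0.0

    @staticmethod
    def _sig(z):
        return 1.0 / (1.0 + math.exp(-z))

    def forward(self, x):
        self._h = [self._sig(sum(w * xi for w, xi in zip(row, x)) + b)
                   for row, b in zip(self.w1, self.b1)]
        return self._sig(sum(w * h for w, h in zip(self.w2, self._h)) + self.b2)

    def train(self, data, epochs=4000, lr=0.5):
        for _ in range(epochs):
            for x, t in data:
                y = self.forward(x)
                dy = (y - t) * y * (1.0 - y)                # output-layer delta
                for j, h in enumerate(self._h):
                    dh = dy * self.w2[j] * h * (1.0 - h)    # hidden-layer delta
                    self.w2[j] -= lr * dy * h
                    for i, xi in enumerate(x):
                        self.w1[j][i] -= lr * dh * xi
                    self.b1[j] -= lr * dh
                self.b2 -= lr * dy

# Toy, linearly separable training set; purely illustrative
net = BPNN(n_in=2, n_hidden=4)
net.train([([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)])
```

The sensitivity ranking described in the abstract corresponds to retraining with individual inputs removed and comparing the resulting error metrics.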
Fuzzy/Neural Software Estimates Costs of Rocket-Engine Tests
NASA Technical Reports Server (NTRS)
Douglas, Freddie; Bourgeois, Edit Kaminsky
2005-01-01
The Highly Accurate Cost Estimating Model (HACEM) is a software system for estimating the costs of testing rocket engines and components at Stennis Space Center. HACEM is built on a foundation of adaptive-network-based fuzzy inference systems (ANFIS), a hybrid software concept that combines the adaptive capabilities of neural networks with the ease of development and additional benefits of fuzzy-logic-based systems. In ANFIS, fuzzy inference systems are trained by use of neural networks. HACEM includes selectable subsystems that utilize various numbers and types of inputs, various numbers of fuzzy membership functions, and various input-preprocessing techniques. The inputs to HACEM are parameters of specific tests or series of tests. These parameters include test type (component or engine test), number and duration of tests, and thrust level(s) (in the case of engine tests). The ANFIS in HACEM are trained by use of sets of these parameters, along with costs of past tests. Thereafter, the user feeds HACEM a simple input text file that contains the parameters of a planned test or series of tests, the user selects the desired HACEM subsystem, and the subsystem processes the parameters into an estimate of the cost(s).
[Micro-simulation of firms' heterogeneity on pollution intensity and regional characteristics].
Zhao, Nan; Liu, Yi; Chen, Ji-Ning
2009-11-01
Within the same industrial sector, firms are heterogeneous in pollution intensity. Errors arise if the sector's average pollution intensity, calculated from the limited number of firms in the environmental statistics database, is used to represent the sector's regional economic-environmental status. Based on a production function that includes environmental depletion as an input, a micro-simulation model of firms' operational decision making is proposed, which mechanistically describes the heterogeneity of firms' pollution intensity. Taking the mechanical manufacturing sector of Deyang city in 2005 as the case, the model's parameters were estimated, and the simulation properly matched the actual COD emission intensities of the firms in the environmental statistics database. The model's results also show that the regional average COD emission intensity calculated from the environmental statistics firms (0.0026 t per 10,000 yuan of fixed assets; 0.0015 t per 10,000 yuan of production value) is lower than the regional average calculated from all firms in the region (0.0030 t per 10,000 yuan of fixed assets; 0.0023 t per 10,000 yuan of production value). The differences among the average intensities of the six counties are significant as well. These regional characteristics of pollution intensity are attributable to the sector's inner structure (firms' scale distribution and technology distribution) and its spatial deviation.
Kirschner, Alexander K T; Zechmeister, Thomas C; Kavka, Gerhard G; Beiwl, Christian; Herzig, Alois; Mach, Robert L; Farnleitner, Andreas H
2004-12-01
Wild birds are an important nonpoint source of fecal contamination of surface waters, but their contribution to fecal pollution is mostly difficult to estimate. Thus, to evaluate the relation between feces production and input of fecal indicator bacteria (FIB) into aquatic environments by wild waterfowl, we introduced a new holistic approach for evaluating the performance of FIB in six shallow saline habitats. For this, we monitored bird abundance, fecal pellet production, and the abundance of FIB concomitantly with a set of environmental variables over a 9-month period. For estimating fecal pellet production, a new protocol of fecal pellet counting was introduced, which was called fecal taxation (FTX). We could show that, over the whole range of investigated habitats, bird abundance, FTX values, and FIB abundance were highly significantly correlated and could demonstrate the good applicability of the FTX as a meaningful surrogate parameter for recent bird abundances and fecal contamination by birds in shallow aquatic ecosystems. Presumptive enterococci (ENT) were an excellent surrogate parameter of recent fecal contamination in these saline environments for samples collected at biweekly to monthly sampling intervals while presumptive Escherichia coli and fecal coliforms (FC) were often undetectable. Significant negative correlations with salinity indicated that E. coli and FC survival was hampered by osmotic stress. Statistical analyses further revealed that fecal pollution-associated parameters represented one system component independent from other environmental variables and that, besides feces production, rainfall, total suspended solids (direct), and trophy (indirect) had significant positive effects on ENT concentrations. Our holistic approach of linking bird abundance, feces production, and FIB detection with environmental variables may serve as a powerful model for application to other aquatic ecosystems.
Comparisons of Solar Wind Coupling Parameters with Auroral Energy Deposition Rates
NASA Technical Reports Server (NTRS)
Elsen, R.; Brittnacher, M. J.; Fillingim, M. O.; Parks, G. K.; Germany, G. A.; Spann, J. F., Jr.
1997-01-01
Measurement of the global rate of energy deposition in the ionosphere via auroral particle precipitation is one of the primary goals of the Polar UVI program and is an important component of the ISTP program. The instantaneous rate of energy deposition for the entire month of January 1997 has been calculated by applying models to the UVI images and is presented by Fillingim et al. in this session. A number of parameters that predict the rate of coupling of solar wind energy into the magnetosphere have been proposed in the last few decades. Some of these parameters, such as the epsilon parameter of Perreault and Akasofu, depend on the instantaneous values in the solar wind. Other parameters depend on the integrated values of solar wind parameters, especially IMF Bz, e.g., the applied flux, which predicts the net transfer of magnetic flux to the tail. While these parameters have often been used successfully in substorm studies, their validity in terms of global energy input has not yet been ascertained, largely because data such as those supplied by the ISTP program were lacking. We have calculated these and other energy coupling parameters for January 1997 using solar wind data provided by WIND and other solar wind monitors. The rates of energy input predicted by these parameters are compared to those measured through UVI data, and correlations are sought. Whether these parameters are better at providing an instantaneous rate of energy input or an average input over some time period is addressed. We also study whether either type of parameter may provide better correlations if a time delay is introduced; if so, this time delay may provide a characteristic time for energy transport in the coupled solar wind-magnetosphere-ionosphere system.
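The epsilon parameter mentioned above has a standard closed form; here is a sketch in SI units with the conventional empirical scale length l₀ ≈ 7 Earth radii (the input values in the example are illustrative, not January 1997 data):

```python
import math

MU0 = 4.0e-7 * math.pi     # vacuum permeability, H/m
RE = 6.371e6               # Earth radius, m
L0 = 7.0 * RE              # empirical scale length of Perreault & Akasofu

def epsilon(v, by, bz):
    """Akasofu epsilon coupling parameter in watts.

    v: solar wind speed (m/s); by, bz: IMF components in GSM (tesla).
    epsilon = (4*pi/mu0) * v * B^2 * sin^4(theta/2) * l0^2
    """
    b = math.hypot(by, bz)
    theta = math.atan2(by, bz)          # IMF clock angle
    return (4.0 * math.pi / MU0) * v * b ** 2 * math.sin(theta / 2.0) ** 4 * L0 ** 2

# Southward IMF couples strongly; purely northward IMF gives zero coupling
p_south = epsilon(450e3, 0.0, -5e-9)
p_north = epsilon(450e3, 0.0, +5e-9)
```

The sin⁴(θ/2) factor encodes the Bz dependence discussed in the abstract: the same field magnitude delivers orders of magnitude more power when the IMF points southward.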
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sweetser, John David
2013-10-01
This report details Sculpt's implementation from a user's perspective. Sculpt is an automatic hexahedral mesh generation tool developed at Sandia National Labs by Steve Owen. 54 predetermined test cases are studied while varying the input parameters (Laplace iterations, optimization iterations, optimization threshold, number of processors) and measuring the quality of the resultant mesh. This information is used to determine the optimal input parameters to use for an unknown input geometry. The overall characteristics are covered in Chapter 1. The specific details of every case are then given in Appendix A. Finally, example Sculpt inputs are given in Appendices B.1 and B.2.
NASA Astrophysics Data System (ADS)
Ginting, N.
2018-02-01
On Samosir Island, Indonesia, pig husbandry was not environmentally friendly, as people used firewood to prepare pig feed. A series of studies was conducted from March until September 2017, preceded by a survey, which found that people cut trees for firewood. Since Samosir Island falls under the Toba Go Green Project, a tree-planting project, this method of pig feed preparation ran counter to the project. Moreover, Indonesia has committed to reducing its greenhouse gas (GHG) emissions by 26% by 2020, so any GHG mitigation is strongly recommended. One mitigation option in Samosir was to install biogas digesters for pig feed preparation. Five biodigesters of 500 liters capacity each were installed in Parbaba Village, Samosir Island, with pig manure and water hyacinth as inputs. The research design was a completely randomized design. The parameters were gas production, pH, temperature, and C/N ratio. The biogas was then used to cook pig feed. It was found that, to cook feed for 5 finisher pigs, 3 kg of firewood could be substituted by 250 liters of biogas.
Aly, Sharif S; Zhao, Jianyang; Li, Ben; Jiang, Jiming
2014-01-01
The intraclass correlation coefficient (ICC) is commonly used to estimate the similarity between quantitative measures obtained from different sources. Overdispersed data are traditionally transformed so that a linear mixed model (LMM)-based ICC can be estimated; a common transformation is the natural logarithm. The reliability of environmental sampling of fecal slurry on freestall pens has been estimated for Mycobacterium avium subsp. paratuberculosis using natural-logarithm-transformed culture results. Recently, the negative binomial ICC was defined based on a generalized linear mixed model for negative binomial distributed data. The current study reports on the negative binomial ICC estimate, which includes fixed effects, using culture results of environmental samples. Simulations using a wide variety of inputs and negative binomial distribution parameters (r; p) showed better performance of the new negative binomial ICC compared to the LMM-based ICC, even when the negative binomial data were logarithm- or square-root-transformed. A second comparison targeting a wider range of ICC values showed that the mean of the estimated ICC closely approximated the true ICC.
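For the LMM-based baseline, the ICC is the between-group share of total variance. A minimal one-way random-effects (ANOVA) estimator is sketched below on synthetic Gaussian data, not the study's culture results:

```python
import random

def icc_oneway(groups):
    """One-way random-effects ICC(1) from equal-sized groups of measurements.

    ICC = (MSB - MSW) / (MSB + (k - 1) * MSW), k observations per group.
    """
    n = len(groups)                       # number of groups
    k = len(groups[0])                    # observations per group
    grand = sum(sum(g) for g in groups) / (n * k)
    means = [sum(g) / k for g in groups]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Synthetic example: between-group sd 2, within-group sd 1,
# so the true ICC is 4 / (4 + 1) = 0.8
rng = random.Random(42)
group_means = [rng.gauss(0.0, 2.0) for _ in range(50)]
groups = [[rng.gauss(mu, 1.0) for _ in range(10)] for mu in group_means]
est = icc_oneway(groups)
```

The negative binomial ICC the abstract advocates replaces this Gaussian variance decomposition with one appropriate for overdispersed counts, avoiding the transformation step entirely.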
Guo, Rui; Tao, Lan; Yan, Liang; Chen, Lianfang; Wang, Haijun
2014-09-01
Starting from the corporate internal governance structure and the external institutional environment, this study uses the legitimacy perspective of institutional theory to analyze the main factors influencing corporate environmental protection inputs and proposes several hypotheses. Empirical models are established and applied to 2004-2009 data on listed biological and other companies in China to test the hypotheses. The findings are that, within the internal institutional environment, the nature of the controlling shareholder, the proportion held by the largest shareholder in the ownership structure, and the combination of the chairman and general manager roles in board efficiency, together with the intensity of the industry's environmental laws and regulations in the external institutional environment, have a significant impact on corporate environmental protection input behavior.
Akintola, Olayiwola Akin; Sangodoyin, Abimbola Yisau; Agunbiade, Foluso Oyedotun
2018-05-24
We present a modelling concept for evaluating the impacts of anthropogenic activities, suspected to be from gas flaring, on the quality of the atmosphere using domestic roof-harvested rainwater (DRHRW) as an indicator. We analysed seven metals (Cu, Cd, Pb, Zn, Fe, Ca, and Mg) and six water quality parameters (acidity, PO₄³⁻, SO₄²⁻, NO₃⁻, Cl⁻, and pH). These were used as input parameters at 12 sampling points in gas-flaring environments (Port Harcourt, Nigeria), using Ibadan as a reference. We formulated the results of these input parameters into membership-function fuzzy matrices based on four degrees of impact: extremely high, high, medium, and low, using regulatory limits as criteria. From the product of the membership function matrices and weight matrices we generated indices that classified the degree of anthropogenic impact on the sites: the investigated (gas-flaring) environment was classified as between medium and high impact, compared with the reference (residential) environment, which was classified as between low and medium impact. The major contaminants of concern found in the harvested rainwater were Pb and Cd. There is an urgent need to stop gas-flaring activities in the Port Harcourt area in particular, and the Niger Delta region of Nigeria in general, to minimise the health hazards currently faced by people living in the area. The fuzzy methodology also indicated that, due to the impact of anthropogenic activities in the area, the water cannot safely support potable uses and should not be consumed without purification, though it may be useful for other domestic purposes.
Knowledge system and method for simulating chemical controlled release device performance
Cowan, Christina E.; Van Voris, Peter; Streile, Gary P.; Cataldo, Dominic A.; Burton, Frederick G.
1991-01-01
A knowledge system for simulating the performance of a controlled release device is provided. The system includes an input device through which the user selectively inputs one or more data parameters. The data parameters comprise first parameters including device parameters, media parameters, active chemical parameters and device release rate; and second parameters including the minimum effective inhibition zone of the device and the effective lifetime of the device. The system also includes a judgemental knowledge base which includes logic for 1) determining at least one of the second parameters from the release rate and the first parameters and 2) determining at least one of the first parameters from the other of the first parameters and the second parameters. The system further includes a device for displaying the results of the determinations to the user.
Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.
1998-01-01
A method for generating a validated measurement of a process parameter at a point in time by using a plurality of individual sensor inputs from a scan of said sensors at said point in time. The sensor inputs from said scan are stored, and a first validation pass is initiated by computing an initial average of all stored sensor inputs. Each sensor input is deviation checked by comparing each input, including a preset tolerance, against the initial average input. If the first deviation check is unsatisfactory, the sensor which produced the unsatisfactory input is flagged as suspect. It is then determined whether at least two of the inputs have not been flagged as suspect and are therefore considered good inputs. If two or more inputs are good, a second validation pass is initiated by computing a second average of all the good sensor inputs, and deviation checking the good inputs by comparing each good input, including a preset tolerance, against the second average. If the second deviation check is satisfactory, the second average is displayed as the validated measurement and the suspect sensor is flagged as bad. A validation fault occurs if at least two inputs are not considered good, or if the second deviation check is not satisfactory. In the latter situation the inputs from all of the sensors are compared against the last validated measurement, and the value from the sensor input that deviates the least from the last valid measurement is displayed.
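The two-pass procedure above reads almost like pseudocode. A minimal sketch of the logic follows, with hypothetical sensor values and tolerance; it simplifies the patented method (e.g. it uses one tolerance for both passes and returns rather than displays values):

```python
def validate(inputs, tol, last_valid=None):
    """Two-pass sensor validation: returns (value, validated_flag)."""
    avg1 = sum(inputs) / len(inputs)
    # Pass 1: flag as suspect any input deviating from the initial average
    good = [x for x in inputs if abs(x - avg1) <= tol]
    if len(good) >= 2:
        # Pass 2: re-average the good inputs and deviation-check them again
        avg2 = sum(good) / len(good)
        if all(abs(x - avg2) <= tol for x in good):
            return avg2, True  # validated measurement
    # Validation fault: fall back to the input closest to the last valid value
    if last_valid is not None:
        return min(inputs, key=lambda x: abs(x - last_valid)), False
    return None, False

val, ok = validate([10.0, 10.1, 9.9, 13.0], tol=1.0)
```

Here the 13.0 reading is flagged as suspect on the first pass, and the remaining three inputs validate against their own average of 10.0.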
Baken, Stijn; Degryse, Fien; Verheyen, Liesbeth; Merckx, Roel; Smolders, Erik
2011-04-01
Dissolved organic matter (DOM) in surface waters affects the fate and environmental effects of trace metals. We measured variability in the Cd, Cu, Ni, and Zn affinity of 23 DOM samples isolated by reverse osmosis from freshwaters in natural, agricultural, and urban areas. Affinities at uniform pH and ionic composition were assayed at low, environmentally relevant free Cd, Cu, Ni, and Zn activities. The C-normalized metal binding of DOM varied 4-fold (Cu) or about 10-fold (Cd, Ni, Zn) among samples. The dissolved organic carbon concentration ranged only 9-fold in the waters, illustrating that DOM quality is an equally important parameter for metal complexation as DOM quantity. The UV-absorbance of DOM explained metal affinity only for waters receiving few urban inputs, indicating that in those waters, aromatic humic substances are the dominant metal chelators. Larger metal affinities were found for DOM from waters with urban inputs. Aminopolycarboxylate ligands (mainly EDTA) were detected at concentrations up to 0.14 μM and partly explained the larger metal affinity. Nickel concentrations in these surface waters are strongly related to EDTA concentrations (R2=0.96) and this is underpinned by speciation calculations. It is concluded that metal complexation in waters with anthropogenic discharges is larger than that estimated with models that only take into account binding on humic substances.
Statistics of optimal information flow in ensembles of regulatory motifs
NASA Astrophysics Data System (ADS)
Crisanti, Andrea; De Martino, Andrea; Fiorentino, Jonathan
2018-02-01
Genetic regulatory circuits universally cope with different sources of noise that limit their ability to coordinate input and output signals. In many cases, optimal regulatory performance can be thought to correspond to configurations of variables and parameters that maximize the mutual information between inputs and outputs. Since the mid-2000s, such optima have been well characterized in several biologically relevant cases. Here we use methods of statistical field theory to calculate the statistics of the maximal mutual information (the "capacity") achievable by tuning the input variable only in an ensemble of regulatory motifs, such that a single controller regulates N targets. Assuming (i) sufficiently large N , (ii) quenched random kinetic parameters, and (iii) small noise affecting the input-output channels, we can accurately reproduce numerical simulations both for the mean capacity and for the whole distribution. Our results provide insight into the inherent variability in effectiveness occurring in regulatory systems with heterogeneous kinetic parameters.
NASA Technical Reports Server (NTRS)
Batterson, James G. (Technical Monitor); Morelli, E. A.
1996-01-01
Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for closed-loop parameter identification purposes, specifically for longitudinal and lateral linear model parameter estimation at 5, 20, 30, 45, and 60 degrees angle of attack, using the Actuated Nose Strakes for Enhanced Rolling (ANSER) control law in Thrust Vectoring (TV) mode. Each maneuver is to be realized by applying square wave inputs to specific pilot station controls using the On-Board Excitation System (OBES). Maneuver descriptions and complete specifications of the time/amplitude points defining each input are included, along with plots of the input time histories.
NASA Astrophysics Data System (ADS)
Termini, Donatella
2013-04-01
Recent catastrophic events due to intense rainfall have mobilized large amounts of sediment, causing extensive damage over vast areas. These events have highlighted that debris-flow runout estimation is of crucial importance for delineating potentially hazardous areas and making reliable assessments of the level of risk of a territory. Especially in recent years, several studies have been conducted in order to define predictive models, but existing runout estimation methods need input parameters that can be difficult to estimate. Recent experimental research has also allowed assessment of the physics of debris flows, although most experimental studies analyze the basic kinematic conditions which determine the phenomenon's evolution. An experimental program has recently been conducted at the Hydraulics Laboratory of the Department of Civil, Environmental, Aerospatial and Materials Engineering (DICAM), University of Palermo (Italy). The experiments, carried out in a laboratory flume constructed for this purpose, were planned to evaluate the influence of different geometrical parameters (such as the slope and the geometrical characteristics of the confluences to the main channel) on the propagation and deposition of the debris flow. Thus, the aim of the present work is to contribute to defining input parameters for runout estimation by numerical modeling. The propagation phenomenon is analyzed for different concentrations of solid material. Particular attention is devoted to identifying the stopping distance of the debris flow and the parameters involved (volume, angle of deposition, type of material) in the empirical predictive equations available in the literature (Rickenmann, 1999; Bathurst et al., 1997). Bathurst J.C., Burton A., Ward T.J. 1997. Debris flow run-out and landslide sediment delivery model tests. Journal of Hydraulic Engineering, ASCE, 123(5), 419-429.
Rickenmann D. 1999. Empirical relationships for debris flows. Natural Hazards, 19, 47-77.
Indexing the Environmental Quality Performance Based on A Fuzzy Inference Approach
NASA Astrophysics Data System (ADS)
Iswari, Lizda
2018-03-01
Environmental performance is strongly connected with the quality of human life. In Indonesia, this performance is quantified through the Environmental Quality Index (EQI), which consists of three indicators: a river quality index, an air quality index, and the coverage of land cover. Currently, the data processing for this instrument is done by averaging and weighting each index to represent the EQI at the provincial level. However, we found EQI interpretations that may contain uncertainties and cover a range of circumstances that are possibly less appropriate to process under a common statistical approach. In this research, we aim to manage the indicators of the EQI with a more intuitive computation technique and to make some inferences related to environmental performance in 33 provinces of Indonesia. The research was conducted in the three stages of a Mamdani Fuzzy Inference System (MAFIS): fuzzification, inference, and defuzzification. The data input consists of 10 environmental parameters, and the output is an index of Environmental Quality Performance (EQP). The approach was applied to the environmental condition data set of 2015, with results quantified on a scale of 0 to 100: 10 provinces showed good performance with an EQP above 80, dominated by provinces in the eastern part of Indonesia; 22 provinces had an EQP between 50 and 80; and one province on Java Island had an EQP below 20. This research shows that environmental quality performance can be quantified without eliminating the nature of the data set, while simultaneously showing the environmental behavior along with its spatial pattern distribution.
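As a toy illustration of the three MAFIS stages, the sketch below fuzzifies two hypothetical sub-indices, fires two Mamdani rules with min/max inference, and defuzzifies by centroid. The membership functions, rule base, and inputs are invented for illustration; the actual study used 10 environmental parameters and its own rule base.

```python
import numpy as np

def up(x, a, b):    # ramp membership rising from 0 at a to 1 at b
    return np.clip((x - a) / (b - a), 0.0, 1.0)

def down(x, a, b):  # ramp membership falling from 1 at a to 0 at b
    return 1.0 - up(x, a, b)

u = np.linspace(0.0, 100.0, 1001)  # output universe: EQP score
out_low, out_high = down(u, 0.0, 50.0), up(u, 50.0, 100.0)

def eqp(air, river):
    """Toy two-input Mamdani inference for an EQP-like index."""
    # 1) Fuzzification of the (hypothetical) sub-indices, each on 0-100
    air_good, river_good = up(air, 50.0, 100.0), up(river, 50.0, 100.0)
    air_poor, river_poor = 1.0 - air_good, 1.0 - river_good
    # 2) Inference: min for AND, max to aggregate the clipped consequents
    agg = np.maximum(np.minimum(out_high, min(air_good, river_good)),
                     np.minimum(out_low, max(air_poor, river_poor)))
    # 3) Defuzzification by centroid of the aggregated output set
    return float((u * agg).sum() / agg.sum())

score = eqp(air=85.0, river=70.0)
```

The same three-stage pattern extends to any number of inputs and rules; only the membership functions and rule base change.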
Higher Plants in life support systems: design of a model and plant experimental compartment
NASA Astrophysics Data System (ADS)
Hezard, Pauline; Farges, Berangere; Sasidharan L, Swathy; Dussap, Claude-Gilles
The development of closed ecological life support systems (CELSS) requires full control and efficient engineering to fulfill the common objectives of water and oxygen regeneration, CO2 elimination and food production. Most of the proposed CELSS contain higher plants, for which a growth chamber and a control system are needed. Inside the compartment, the development of the higher plants must be understood and modeled in order to design and control the compartment as a function of operating variables. Plant behavior must be analyzed at different sub-process scales: (i) architecture and morphology describe the plant shape and allow calculation of the morphological parameters (leaf area, stem length, number of meristems, etc.) characteristic of life-cycle stages; (ii) physiology and metabolism of the different organs make it possible to assess plant composition depending on the plant input and output rates (oxygen, carbon dioxide, water and nutrients); (iii) finally, the physical processes are light interception, gas exchange, sap conduction and root uptake: they control the energy available from photosynthesis and the input and output rates. These three sub-processes are modeled as a system of equations using environmental and plant parameters such as light intensity, temperature, pressure, humidity, CO2 and oxygen partial pressures, nutrient solution composition, total leaf surface and leaf area index, chlorophyll content, stomatal conductance, water potential, organ biomass distribution and composition, etc. The most challenging issue is to develop a comprehensive and operative mathematical model that assembles these different sub-processes in a unique framework. In order to assess the parameters for testing a model, a polyvalent growth chamber is necessary. It should provide a controlled environment in order to test and understand the physiological response and determine the control strategy.
The final aim of this model is environmental control of plant behavior: this requires extended knowledge of plant responses to environmental variations, which calls for a large number of experiments that would be easier to perform in a high-throughput system.
Fallon, Nevada FORGE Thermal-Hydrological-Mechanical Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blankenship, Doug; Sonnenthal, Eric
Archive contains thermal-mechanical simulation input/output files. Included are files which fall into the following categories: ( 1 ) Spreadsheets with various input parameter calculations ( 2 ) Final Simulation Inputs ( 3 ) Native-State Thermal-Hydrological Model Input File Folders ( 4 ) Native-State Thermal-Hydrological-Mechanical Model Input Files ( 5 ) THM Model Stimulation Cases See 'File Descriptions.xlsx' resource below for additional information on individual files.
Estimating Unsaturated Zone N Fluxes and Travel Times to Groundwater at Watershed Scales
NASA Astrophysics Data System (ADS)
Liao, L.; Green, C. T.; Harter, T.; Nolan, B. T.; Juckem, P. F.; Shope, C. L.
2016-12-01
Nitrate concentrations in groundwater vary at spatial and temporal scales. Local variability depends on soil properties, unsaturated zone properties, hydrology, reactivity, and other factors. For example, travel time in the unsaturated zone can cause contaminant responses in aquifers to lag behind changes in N inputs at the land surface, and variable leaching fractions of applied N fertilizer to groundwater can elevate (or reduce) concentrations in groundwater. In this study, we apply the vertical flux model (VFM) (Liao et al., 2012) to address the importance of the travel time of N in the unsaturated zone and the fraction of it leached from the unsaturated zone to groundwater. The Fox-Wolf-Peshtigo basins, covering 34 of the 72 counties in Wisconsin, were selected as the study area. Simulated concentrations of NO3-, N2 from denitrification, O2, and environmental tracers of groundwater age were matched to observations by adjusting parameters for recharge rate, unsaturated zone travel time, fraction of N inputs leached to groundwater, O2 reduction rate, O2 threshold for denitrification, denitrification rate, and dispersivity. Correlations between calibrated parameters and GIS parameters (land use, drainage class, soil properties, etc.) were evaluated. Model results revealed a median recharge rate of 0.11 m/yr, which is comparable with three independent estimates of recharge rates in the study area. The unsaturated travel times ranged from 0.2 yr to 25 yr with a median of 6.8 yr. The correlation analysis revealed that relationships between VFM parameters and landscape characteristics (GIS parameters) were consistent with expected relationships. The fraction of N leached was lower in the vicinity of wetlands and greater in the vicinity of croplands. Faster unsaturated zone transport in forested areas was consistent with studies showing rapid vertical transport in forested soils.
Reaction rate coefficients correlated with chemical indicators such as Fe and P concentrations. Overall, the results demonstrate applicability of the VFM at a regional scale, as well as potential to generate N transport estimates continuously across regions based on statistical relationships between VFM model parameters and GIS parameters.
Troutman, Brent M.
1982-01-01
Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas illustrates the problems of model input errors.
EnviroNET: On-line information for LDEF
NASA Technical Reports Server (NTRS)
Lauriente, Michael
1993-01-01
EnviroNET is an on-line, free-form database intended to provide a centralized repository for a wide range of technical information on environmentally induced interactions of use to Space Shuttle customers and spacecraft designers. It provides a user-friendly, menu-driven format on networks that are connected globally and is available twenty-four hours a day - every day. The information, updated regularly, includes expository text, tabular numerical data, charts and graphs, and models. The system pools space data collected over the years by NASA, USAF, other government research facilities, industry, universities, and the European Space Agency. The models accept parameter input from the user, then calculate and display the derived values corresponding to that input. In addition to the archive, interactive graphics programs are also available on space debris, the neutral atmosphere, radiation, magnetic fields, and the ionosphere. A user-friendly, informative interface is standard for all the models and includes a pop-up help window with information on inputs, outputs, and caveats. The system will eventually simplify mission analysis with analytical tools and deliver solutions for computationally intense graphical applications to do 'What if...' scenarios. A proposed plan for developing a repository of information from the Long Duration Exposure Facility (LDEF) for a user group is presented.
NASA Astrophysics Data System (ADS)
Fong, Bernicy S.; Davies, Murray; Deschamps, Pierre
2018-01-01
Timing resolution (or timing jitter) and time walk are separate parameters associated with a detector's response time. Studies have mostly addressed the time resolution of various single-photon detectors. As the designer and manufacturer of the super low k-factor (SLiK) single-photon avalanche diode (SPAD), an ultra-low-noise (low k-factor) silicon avalanche photodiode used in many single-photon counting applications, we often receive inquiries from customers who want to better understand how this detector behaves under different operating conditions. Here, therefore, we focus on the study of these time-related parameters specifically for the SLiK SPAD, to provide the most direct information for users of this detector and help them use it more efficiently and effectively. We provide study data on how these parameters can be affected by temperature (both intrinsic to the detector chip and environmental input based on operating conditions), operating voltage, photon wavelength, as well as light spot size. How these parameters can be optimized, and the trade-offs that optimization imposes on the desired performance, will also be presented.
Reliability of system for precise cold forging
NASA Astrophysics Data System (ADS)
Krušič, Vid; Rodič, Tomaž
2017-07-01
The influence of the scatter of the principal input parameters of the forging system on the dimensional accuracy of the product and on tool life for a closed-die forging process is presented in this paper. The scatter of the essential input parameters for the closed-die upsetting process was adjusted to the maximal values that still enabled reliable production of a dimensionally accurate product at optimal tool life. An operating window was created within which the maximal scatter of the principal input parameters for the closed-die upsetting process still ensures the desired dimensional accuracy of the product and optimal tool life. Application of this adjustment of the process input parameters is shown on the example of an inner race of a homokinetic joint from mass production. High productivity in the manufacture of elements by cold massive extrusion is often achieved by multiple forming operations performed simultaneously on the same press. By redesigning the time sequences of the forming operations in the multistage forming of a starter barrel during the working stroke, the course of the resultant force is optimized.
Use-Exposure Relationships of Pesticides for Aquatic Risk Assessment
Luo, Yuzhou; Spurlock, Frank; Deng, Xin; Gill, Sheryl; Goh, Kean
2011-01-01
Field-scale environmental models have been widely used in aquatic exposure assessments of pesticides. Those models usually require a large set of input parameters and separate simulations for each pesticide in evaluation. In this study, a simple use-exposure relationship is developed based on regression analysis of stochastic simulation results generated from the Pesticide Root-Zone Model (PRZM). The developed mathematical relationship estimates edge-of-field peak concentrations of pesticides from aerobic soil metabolism half-life (AERO), organic carbon-normalized soil sorption coefficient (KOC), and application rate (RATE). In a case study of California crop scenarios, the relationships explained 90–95% of the variances in the peak concentrations of dissolved pesticides as predicted by PRZM simulations for a 30-year period. KOC was identified as the governing parameter in determining the relative magnitudes of pesticide exposures in a given crop scenario. The results of model application also indicated that the effects of chemical fate processes such as partitioning and degradation on pesticide exposure were similar among crop scenarios, while the cross-scenario variations were mainly associated with the landscape characteristics, such as organic carbon contents and curve numbers. With a minimum set of input data, the use-exposure relationships proposed in this study could be used in screening procedures for potential water quality impacts from the off-site movement of pesticides. PMID:21483772
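A use-exposure relationship of the kind described above is, at heart, a regression surface fitted to model output. The sketch below recovers a log-linear relationship ln C = b0 + b1·ln KOC + b2·ln AERO + b3·ln RATE by least squares from synthetic data; the coefficients, parameter ranges, and scatter are invented for illustration and are not the published PRZM-based regression.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
ln_koc = rng.uniform(np.log(10.0), np.log(1e5), n)   # sorption coefficient
ln_aero = rng.uniform(np.log(1.0), np.log(365.0), n) # aerobic half-life [d]
ln_rate = rng.uniform(np.log(0.1), np.log(10.0), n)  # application rate

# Synthetic "model output": invented true coefficients plus scatter
b_true = np.array([1.0, -0.6, 0.3, 1.0])
X = np.column_stack([np.ones(n), ln_koc, ln_aero, ln_rate])
ln_peak = X @ b_true + rng.normal(0.0, 0.2, n)

# Fit the use-exposure relationship and measure explained variance
b_hat, *_ = np.linalg.lstsq(X, ln_peak, rcond=None)
r2 = 1.0 - ((ln_peak - X @ b_hat) ** 2).sum() / ((ln_peak - ln_peak.mean()) ** 2).sum()
```

Once fitted, such a relationship replaces a full per-pesticide simulation with a single algebraic screening formula, which is the point of the abstract's approach.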
Influence of speckle image reconstruction on photometric precision for large solar telescopes
NASA Astrophysics Data System (ADS)
Peck, C. L.; Wöger, F.; Marino, J.
2017-11-01
Context. High-resolution observations from large solar telescopes require adaptive optics (AO) systems to overcome image degradation caused by Earth's turbulent atmosphere. AO corrections are, however, only partial. Achieving near-diffraction limited resolution over a large field of view typically requires post-facto image reconstruction techniques to reconstruct the source image. Aims: This study aims to examine the expected photometric precision of amplitude reconstructed solar images calibrated using models for the on-axis speckle transfer functions and input parameters derived from AO control data. We perform a sensitivity analysis of the photometric precision under variations in the model input parameters for high-resolution solar images consistent with four-meter class solar telescopes. Methods: Using simulations of both atmospheric turbulence and partial compensation by an AO system, we computed the speckle transfer function under variations in the input parameters. We then convolved high-resolution numerical simulations of the solar photosphere with the simulated atmospheric transfer function, and subsequently deconvolved them with the model speckle transfer function to obtain a reconstructed image. To compute the resulting photometric precision, we compared the intensity of the original image with the reconstructed image. Results: The analysis demonstrates that high photometric precision can be obtained for speckle amplitude reconstruction using speckle transfer function models combined with AO-derived input parameters. Additionally, it shows that the reconstruction is most sensitive to the input parameter that characterizes the atmospheric distortion, and sub-2% photometric precision is readily obtained when it is well estimated.
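The deconvolution step described above can be sketched generically: degrade a scene with a known transfer function, invert with a model of that transfer function, and measure the resulting photometric error. The sketch uses a Gaussian blur and a simple Wiener-style inverse as stand-ins for the paper's on-axis speckle transfer function model and its reconstruction pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32
img = rng.random((n, n))  # stand-in for a high-resolution photospheric scene

# Gaussian kernel as a stand-in for the atmospheric/speckle transfer function
x = np.arange(n) - n // 2
X, Y = np.meshgrid(x, x)
kern = np.exp(-(X**2 + Y**2) / (2 * 1.0**2))
kern /= kern.sum()
H = np.fft.fft2(np.fft.ifftshift(kern))  # model transfer function

# Degraded observation: convolution in the Fourier domain
obs = np.real(np.fft.ifft2(np.fft.fft2(img) * H))

# Deconvolve with the model transfer function (small regularizer avoids 0/0)
eps = 1e-15
F_rec = np.fft.fft2(obs) * np.conj(H) / (np.abs(H) ** 2 + eps)
rec = np.real(np.fft.ifft2(F_rec))
err = np.max(np.abs(rec - img))  # photometric error of the reconstruction
```

In practice the model transfer function never matches the true one exactly, and the paper's sensitivity analysis quantifies how errors in its input parameters propagate into exactly this kind of photometric error.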
Rosen, I G; Luczak, Susan E; Weiss, Jordan
2014-03-15
We develop a blind deconvolution scheme for input-output systems described by distributed parameter systems with boundary input and output. An abstract functional analytic theory based on results for the linear quadratic control of infinite dimensional systems with unbounded input and output operators is presented. The blind deconvolution problem is then reformulated as a series of constrained linear and nonlinear optimization problems involving infinite dimensional dynamical systems. A finite dimensional approximation and convergence theory is developed. The theory is applied to the problem of estimating blood or breath alcohol concentration (respectively, BAC or BrAC) from biosensor-measured transdermal alcohol concentration (TAC) in the field. A distributed parameter model with boundary input and output is proposed for the transdermal transport of ethanol from the blood through the skin to the sensor. The problem of estimating BAC or BrAC from the TAC data is formulated as a blind deconvolution problem. A scheme to identify distinct drinking episodes in TAC data based on a Hodrick-Prescott filter is discussed. Numerical results involving actual patient data are presented.
Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models
NASA Astrophysics Data System (ADS)
Ardani, S.; Kaihatu, J. M.
2012-12-01
Numerical models represent deterministic approaches used for the relevant physical processes in the nearshore. The complexity of the model physics and the uncertainty involved in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry and offshore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of the outputs is performed by random sampling from the input probability distribution functions and running the model as many times as required until convergent results are achieved. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of the model parameters relevant to the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than by using the prior information for the input data, meaning that the variation of the uncertain parameters will decrease and the probability of the observed data will improve as well.
Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques, MCMC
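The Monte Carlo step described above — sample the uncertain inputs from their distributions, run the deterministic model once per sample, and summarize the output spread — can be sketched generically. The stand-in "model" and all distributions below are invented for illustration; the study itself runs Delft3D.

```python
import numpy as np

rng = np.random.default_rng(7)

def model(h_off, depth, bc):
    """Toy stand-in for the deterministic nearshore model: nearshore wave
    height from offshore wave height, depth and a lateral boundary factor."""
    return 0.6 * h_off * np.tanh(depth / 5.0) * (1.0 + 0.1 * bc)

n = 10_000
h_off = rng.normal(2.0, 0.3, n)   # offshore wave height [m]
depth = rng.normal(8.0, 1.0, n)   # bathymetry [m]
bc = rng.normal(0.0, 0.5, n)      # lateral boundary condition factor

h_near = model(h_off, depth, bc)  # one model evaluation per input sample
p05, p50, p95 = np.percentile(h_near, [5, 50, 95])
```

The 5th-95th percentile band is one way of reporting output uncertainty; with a real model each evaluation is expensive, which is why narrowing the input distributions via Bayesian parameter estimation, as the abstract proposes, pays off.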
Informing soil models using pedotransfer functions: challenges and perspectives
NASA Astrophysics Data System (ADS)
Pachepsky, Yakov; Romano, Nunzio
2015-04-01
Pedotransfer functions (PTFs) are empirical relationships between parameters of soil models and more easily obtainable data on soil properties. PTFs have become an indispensable tool in modeling soil processes. As an alternative to direct measurements, they bridge the data we have and the data we need by using soil survey and monitoring data to enable modeling for real-world applications. Pedotransfer is extensively used in soil models addressing the most pressing environmental issues. The following is an attempt to provoke a discussion by listing current issues faced in PTF development. 1. As more intricate biogeochemical processes are being modeled, development of PTFs for the parameters of those processes becomes essential. 2. Since the equations to express PTF relationships are essentially unknown, there has been a trend to employ highly nonlinear equations, e.g. neural networks, which in theory are flexible enough to simulate any dependence. This, however, comes with the penalty of a large number of coefficients that are difficult to estimate reliably. A preliminary classification applied to PTF inputs, with PTF development for each of the resulting groups, may provide simple, transparent, and more reliable pedotransfer equations. 3. The multiplicity of models, i.e. the presence of several models producing the same output variables, is commonly found in soil modeling and is a typical feature of the PTF research field. However, PTF intercomparisons are lagging behind PTF development. This is aggravated by the fact that the coefficients of PTFs based on machine-learning methods are usually not reported. 4. The existence of PTFs is the result of some soil processes. Using models of those processes to generate PTFs, and more generally developing physics-based PTFs, remains to be explored. 5. 
Estimating the variability of soil model parameters becomes increasingly important as the newer modeling technologies, such as data assimilation, ensemble modeling, and model abstraction, become progressively more popular. Variability PTFs rely on the spatio-temporal dynamics of soil variables, and that opens new sources of PTF inputs stemming from technology advances such as monitoring networks, remote and proximal sensing, and omics. 6. Burgeoning PTF development has so far not closed several persisting regional knowledge gaps. Remarkably little effort has been put into PTF development for saline soils, calcareous and gypsiferous soils, peat soils, paddy soils, soils with well-expressed shrink-swell behavior, and soils affected by freeze-thaw cycles. 7. Soils from tropical regions are quite often treated as a pseudo-entity to which a single PTF can be applied. This assumption will become unnecessary as more regional data are accumulated and analyzed. 8. Other advances in regional PTFs will be possible due to the presence of large databases on region-specific useful PTF inputs such as moisture equivalent, laser diffractometry data, or soil specific surface. 9. Most flux models in soils, whether of water, solutes, gas, or heat, involve parameters that are scale-dependent. Including scale dependencies in PTFs will be critical to improve PTF usability. 10. Another scale-related matter is pedotransfer for coarse-scale soil modeling, for example, in weather or climate models. Soil hydraulic parameters in these models cannot be measured, and the efficiency of the pedotransfer can be evaluated only in terms of its utility. There is a pressing need to determine combinations of pedotransfer and upscaling procedures that can lead to the derivation of suitable coarse-scale soil model parameters. 11. 
The spatial coarse scale often assumes a coarse temporal support, and that may lead to including in PTFs other environmental variables such as topographic, weather, and management attributes. 12. Some PTF inputs are time- or space-dependent, and yet little is known about whether the spatial or temporal structure of PTF outputs is properly predicted from such inputs. 13. Further exploration is needed to use PTFs as a source of hypotheses on, and insights into, relationships between soil processes and soil composition, as well as between soil structure and soil functioning. PTFs are empirical relationships, and their accuracy outside the database used for their development is essentially unknown. Therefore, they should never be considered an ultimate source of parameters in soil modeling. Rather, they strive to provide a balance between accuracy and availability. The primary role of PTFs is to assist in modeling for screening and comparative purposes, establishing ranges and/or probability distributions of model parameters, and creating realistic synthetic soil datasets and scenarios. Developing and improving PTFs will remain the mainstream way of packaging data and knowledge for applications of soil modeling.
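As a deliberately minimal illustration of what a pedotransfer function is, the sketch below fits a linear PTF that predicts a hypothetical van Genuchten α parameter from sand/silt/clay fractions. All data and coefficients are synthetic inventions for illustration, not from any published PTF.

```python
import numpy as np

# Hypothetical training data: sand, silt, clay fractions (%) and a
# "measured" van Genuchten alpha parameter (1/cm) for a few soils.
texture = np.array([
    [65.0, 20.0, 15.0],
    [40.0, 40.0, 20.0],
    [20.0, 45.0, 35.0],
    [10.0, 35.0, 55.0],
    [55.0, 30.0, 15.0],
])
alpha = np.array([0.075, 0.045, 0.025, 0.012, 0.060])

# Fit a simple linear PTF: alpha ~ b0 + b1*sand + b2*silt + b3*clay.
X = np.hstack([np.ones((texture.shape[0], 1)), texture])
coef, *_ = np.linalg.lstsq(X, alpha, rcond=None)

def ptf_alpha(sand, silt, clay):
    """Predict alpha from texture fractions using the fitted PTF."""
    return float(coef @ np.array([1.0, sand, silt, clay]))

# Predict for an unseen loam; the estimate is only as good as the
# (tiny, synthetic) calibration set -- the point made in the text.
print(round(ptf_alpha(45.0, 35.0, 20.0), 4))
```

A real PTF would be trained on a large soil database and validated outside it; this sketch only shows the regression structure.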
Solar energy system economic evaluation: IBM System 4, Clinton, Mississippi
NASA Technical Reports Server (NTRS)
1980-01-01
An economic analysis of the solar energy system was developed for five sites typical of a wide range of environmental and economic conditions in the continental United States. The analysis was based on the technical and economic models in the F-chart design procedure, with inputs based on the characteristics of the installed system and local conditions. The results give the economic parameters of present worth of system cost over a 20-year time span: life-cycle savings, year of positive savings, and year of payback for the optimized solar energy system at each of the analysis sites. The sensitivity of the economic evaluation to uncertainties in constituent system and economic variables is also investigated.
Modeling of SAR signatures of shallow water ocean topography
NASA Technical Reports Server (NTRS)
Shuchman, R. A.; Kozma, A.; Kasischke, E. S.; Lyzenga, D. R.
1984-01-01
A hydrodynamic/electromagnetic model was developed to explain and quantify the relationship between the SEASAT synthetic aperture radar (SAR) observed signatures and the bottom topography of the ocean in the English Channel region of the North Sea. The model uses environmental data and radar system parameters as inputs and predicts SAR-observed backscatter changes over topographic changes in the ocean floor. The model results compare favorably with the actual SEASAT SAR observed backscatter values. The developed model is valid for only relatively shallow water areas (i.e., less than 50 meters in depth) and suggests that for bottom features to be visible on SAR imagery, a moderate to high velocity current and a moderate wind must be present.
Digital Simulation Of Precise Sensor Degradations Including Non-Linearities And Shift Variance
NASA Astrophysics Data System (ADS)
Kornfeld, Gertrude H.
1987-09-01
Realistic atmospheric and Forward Looking Infrared Radiometer (FLIR) degradations were digitally simulated. Inputs to the routine are environmental observables and the FLIR specifications. It was possible to achieve realism in the thermal domain within acceptable computer time and random access memory (RAM) requirements because a shift-variant recursive convolution algorithm that describes thermal properties well was devised, and because each picture element (pixel) carries radiative temperature, a materials parameter, and range and altitude information. The computer generation steps start with the image synthesis of an undegraded scene. Atmospheric and sensor degradation follow. The final result is a realistic representation of an image seen on the display of a specific FLIR.
Self-tuning control of attitude and momentum management for the Space Station
NASA Technical Reports Server (NTRS)
Shieh, L. S.; Sunkel, J. W.; Yuan, Z. Z.; Zhao, X. M.
1992-01-01
This paper presents a hybrid state-space self-tuning design methodology using dual-rate sampling for suboptimal digital adaptive control of attitude and momentum management for the Space Station. This new hybrid adaptive control scheme combines an on-line recursive estimation algorithm for indirectly identifying the parameters of a continuous-time system from the available fast-rate sampled data of the inputs and states and a controller synthesis algorithm for indirectly finding the slow-rate suboptimal digital controller from the designed optimal analog controller. The proposed method enables the development of digitally implementable control algorithms for the robust control of Space Station Freedom with unknown environmental disturbances and slowly time-varying dynamics.
NASA Astrophysics Data System (ADS)
Nossent, Jiri; Pereira, Fernando; Bauwens, Willy
2015-04-01
Precipitation is one of the key inputs for hydrological models. As long as the values of the hydrological model parameters are fixed, a variation of the rainfall input is expected to induce a change in the model output. Given the increased awareness of uncertainty in rainfall records, it becomes more important to understand the impact of this input-output dynamic. Yet modellers often still try to mimic the observed flow, whatever the deviation of the employed records from the actual rainfall might be, by recklessly adapting the model parameter values. But is it actually possible to vary the model parameter values in such a way that a certain (observed) model output can be generated based on inaccurate rainfall inputs? In other words, how important is the rainfall uncertainty for the model output relative to the model parameter importance? To address this question, we apply the Sobol' sensitivity analysis method to assess and compare the importance of the rainfall uncertainty and the model parameters for the output of the hydrological model. In order to treat the regular model parameters and the input uncertainty in the same way, and to allow a comparison of their influence, a possible approach is to represent the rainfall uncertainty by a parameter. To this end, we apply so-called rainfall multipliers to hydrologically independent storm events, as a probabilistic parameter representation of the possible rainfall variation. As available rainfall records are very often point measurements at a discrete time step (hourly, daily, monthly, …), they contain uncertainty due to a latent lack of spatial and temporal variability. The influence of this variability can also differ between hydrological models with different spatial and temporal scales. Therefore, we perform the sensitivity analyses on a semi-distributed model (SWAT) and a lumped model (NAM). 
The assessment and comparison of the importance of the rainfall uncertainty and the model parameters is achieved by considering different scenarios for the included parameters and the state of the models.
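The idea of treating rainfall uncertainty as a parameter in a Sobol' analysis can be sketched with a toy model. Everything below (the one-line runoff model, the uniform ranges for the rainfall multiplier and the runoff coefficient) is invented for illustration; SWAT and NAM are far more complex, and a real study would use a dedicated package rather than this bare pick-freeze estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20000

# Toy lumped rainfall-runoff model (purely illustrative, not SWAT/NAM):
# runoff = multiplier * observed_storm_rainfall * runoff_coefficient.
def model(m, k):
    observed_rainfall = 50.0  # mm, a fixed "recorded" storm depth
    return m * observed_rainfall * k

def sample(n):
    m = rng.uniform(0.7, 1.3, n)  # rainfall multiplier (input uncertainty)
    k = rng.uniform(0.2, 0.6, n)  # runoff coefficient (model parameter)
    return m, k

# Pick-freeze estimates of first-order Sobol' indices: correlate runs
# that share one factor while the other factor is resampled.
mA, kA = sample(N)
mB, kB = sample(N)
yA = model(mA, kA)
yB = model(mB, kB)
var_y = yA.var()
S_m = (np.mean(yA * model(mA, kB)) - yA.mean() * yB.mean()) / var_y
S_k = (np.mean(yA * model(mB, kA)) - yA.mean() * yB.mean()) / var_y
print(f"S_multiplier ~ {S_m:.2f}, S_runoff_coef ~ {S_k:.2f}")
```

With these (made-up) ranges the runoff coefficient dominates; the point is only that the multiplier enters the variance decomposition on the same footing as a regular parameter.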
Update on ɛK with lattice QCD inputs
NASA Astrophysics Data System (ADS)
Jang, Yong-Chull; Lee, Weonjong; Lee, Sunkyu; Leem, Jaehoon
2018-03-01
We report updated results for ɛK, the indirect CP violation parameter in neutral kaons, which is evaluated directly from the standard model with lattice QCD inputs. We use lattice QCD inputs to fix B̂K, |Vcb|, ξ0, ξ2, |Vus|, and mc(mc). Since Lattice 2016, the UTfit group has updated the Wolfenstein parameters in the angle-only-fit method, and the HFLAV group has also updated |Vcb|. Our results show that the evaluation of ɛK with exclusive |Vcb| (lattice QCD inputs) has a 4.0σ tension with the experimental value, while that with inclusive |Vcb| (heavy quark expansion based on the OPE and QCD sum rules) shows no tension.
NASA Technical Reports Server (NTRS)
Duong, N.; Winn, C. B.; Johnson, G. R.
1975-01-01
Two approaches to an identification problem in hydrology are presented, based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part in the inputs, and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time-invariant or time-dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and confirm the results from two previous studies; the first using numerical integration of the model equation along with a trial-and-error procedure, and the second using a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are imbedded in noise.
CLASSIFYING MEDICAL IMAGES USING MORPHOLOGICAL APPEARANCE MANIFOLDS.
Varol, Erdem; Gaonkar, Bilwaj; Davatzikos, Christos
2013-12-31
Input features for medical image classification algorithms are extracted from raw images using a series of preprocessing steps. One common preprocessing step in computational neuroanatomy and functional brain mapping is the nonlinear registration of raw images to a common template space. Typically, the registration methods used are parametric and their output varies greatly with changes in parameters. Most results reported previously perform registration using a fixed parameter setting and use the results as input to the subsequent classification step. The variation in registration results due to choice of parameters thus translates to variation of performance of the classifiers that depend on the registration step for input. Analogous issues have been investigated in the computer vision literature, where image appearance varies with pose and illumination, thereby making classification vulnerable to these confounding parameters. The proposed methodology addresses this issue by sampling image appearances as registration parameters vary, and shows that better classification accuracies can be obtained this way, compared to the conventional approach.
NASA Astrophysics Data System (ADS)
Kline, K. L.; Eaton, L. M.; Efroymson, R.; Davis, M. R.; Dunn, J.; Langholtz, M. H.
2016-12-01
The federal government, led by the U.S. Department of Energy (DOE), quantified potential U.S. biomass resources for expanded production of renewable energy and bioproducts in the 2016 Billion-Ton Report: Advancing Domestic Resources for a Thriving Bioeconomy (BT16) (DOE 2016). Volume 1 of the report provides analysis of projected supplies from 2015 to 2040. Volume 2 (forthcoming) evaluates changes in environmental indicators for water quality and quantity, carbon, air quality, and biodiversity associated with production scenarios in BT16 volume 1. This presentation will review land-use allocations under the projected biomass production scenarios and the changes in land management that are implied, including drivers of direct and indirect LUC. National and global concerns such as deforestation and displacement of food production are addressed. The choice of reference scenario, input parameters, and constraints (e.g., regarding land classes, availability, and productivity) drives LUC results in any model simulation; these choices are reviewed to put BT16 impacts into context. The principal LUC implied in the BT16 supply scenarios involves the transition of 25 to 47 million acres (net) from annual crops in the 2015 baseline to perennial cover by 2040, under the base case and the 3% yield growth case, respectively. We conclude that clear definitions of land parameters and effects are essential to assess LUC. A lack of consistency in parameters and outcomes of historical LUC analyses in the U.S. underscores the need for science-based approaches.
Landslide model performance in a high resolution small-scale landscape
NASA Astrophysics Data System (ADS)
De Sy, V.; Schoorl, J. M.; Keesstra, S. D.; Jones, K. E.; Claessens, L.
2013-05-01
The frequency and severity of shallow landslides in New Zealand threaten life and property, both on- and off-site. The physically based shallow landslide model LAPSUS-LS is tested for its performance in simulating shallow landslide locations induced by a high-intensity rain event in a small-scale landscape. Furthermore, the effect of high-resolution digital elevation models on the performance was tested. The performance of the model was optimised by calibrating different parameter values. A satisfactory result was achieved with a high-resolution (1 m) DEM. Landslides, however, were generally predicted lower on the slope than the mapped erosion scars. This discrepancy could be due to i) inaccuracies in the DEM or in other model input data such as soil strength properties; ii) relevant processes for this environmental context that are not included in the model; or iii) the limited validity of the infinite-length assumption in the infinite slope stability model embedded in LAPSUS-LS. The trade-off between a correct prediction of landslides versus stable cells becomes increasingly worse with coarser resolutions, and model performance decreases mainly due to altered slope characteristics. The optimal parameter combinations differ per resolution. In this environmental context the 1 m resolution topography resembles actual topography most closely, and landslide locations are better distinguished from stable areas than at coarser resolutions. More gain in model performance could be achieved by adding landslide process complexities and parameter heterogeneity of the catchment.
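The infinite slope stability criterion embedded in models of this kind can be sketched as a factor-of-safety computation: values below 1 indicate potential failure. The parameter values below are generic textbook-style illustrations, not calibrated to LAPSUS-LS or the New Zealand site.

```python
import math

# Infinite-slope factor of safety: FS = [c' + (gamma*z - gamma_w*m*z)
# * cos^2(beta) * tan(phi')] / [gamma*z * sin(beta) * cos(beta)].
# All default values are illustrative only.
def factor_of_safety(slope_deg, z=1.0, m=0.8, c=2000.0, phi_deg=30.0,
                     gamma=18000.0, gamma_w=9810.0):
    """z: soil depth (m); m: saturated fraction of z; c: cohesion (Pa);
    phi: friction angle (deg); gamma, gamma_w: unit weights (N/m^3)."""
    beta = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    effective_normal = (gamma * z - gamma_w * m * z) * math.cos(beta) ** 2
    return (c + effective_normal * math.tan(phi)) / driving

# Steeper slopes (and wetter soils) lower the factor of safety.
print(round(factor_of_safety(20.0), 2), round(factor_of_safety(40.0), 2))  # → 1.24 0.61
```

In a model like this the DEM supplies the slope angle cell by cell, which is why DEM resolution and soil-strength inputs drive the predicted landslide locations.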
DATA FOR ENVIRONMENTAL MODELING: AN OVERVIEW
The objective of the project described here, entitled Data for Environmental Modeling (D4EM), is the development of a comprehensive set of software tools that allow an environmental model developer to automatically populate model input files with environmental data available from...
Uncertainty analysis in geospatial merit matrix–based hydropower resource assessment
Pasha, M. Fayzul K.; Yeasmin, Dilruba; Saetern, Sen; ...
2016-03-30
Hydraulic head and mean annual streamflow, two main input parameters in hydropower resource assessment, are not measured at every point along the stream. Translation and interpolation are used to derive these parameters, resulting in uncertainties. This study estimates the uncertainties and their effects on the model output parameters: the total potential power and the number of potential locations (stream-reaches). These parameters are quantified through Monte Carlo simulation (MCS) linked with a geospatial merit matrix–based hydropower resource assessment (GMM-HRA) model. The methodology is applied to flat, mild, and steep terrains. Results show that the uncertainty associated with the hydraulic head is within 20% for mild and steep terrains, and the uncertainty associated with streamflow is around 16% for all three terrains. Output uncertainty increases as input uncertainty increases. However, output uncertainty is around 10% to 20% of the input uncertainty, demonstrating the robustness of the GMM-HRA model. The output parameters are more sensitive to hydraulic head in steep terrain than in flat and mild terrains, and more sensitive to mean annual streamflow in flat terrain.
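A generic Monte Carlo sketch of how such input uncertainties propagate into potential power (P = ρgQHη) is shown below. The nominal head, flow, and efficiency are hypothetical, the distributions are simple independent normals, and this is not the GMM-HRA model itself; note that naive propagation through the power equation does not reproduce the damping of output uncertainty that the aggregated GMM-HRA assessment reports.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Hypothetical stream-reach: hydraulic head H (m) and mean annual
# streamflow Q (m^3/s), with relative uncertainties of roughly the
# magnitudes reported (20% on head, 16% on flow).
H = rng.normal(12.0, 0.20 * 12.0, N)
Q = rng.normal(3.5, 0.16 * 3.5, N)

rho, g, eta = 1000.0, 9.81, 0.85  # water density, gravity, efficiency
P = rho * g * Q * H * eta / 1e6   # potential power, MW

cv_out = P.std() / P.mean()
print(f"mean potential power {P.mean():.2f} MW, output CV {cv_out:.3f}")
```

For independent inputs the output coefficient of variation is roughly sqrt(0.20² + 0.16²) ≈ 0.26, which the simulation confirms.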
Dynamic modal estimation using instrumental variables
NASA Technical Reports Server (NTRS)
Salzwedel, H.
1980-01-01
A method to determine the modes of dynamical systems is described. The inputs and outputs of a system are Fourier transformed and averaged to reduce the error level. An instrumental variable method that estimates modal parameters from multiple correlations between responses of single-input, multiple-output systems is applied to estimate aircraft, spacecraft, and offshore platform modal parameters.
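The Fourier-transform-and-average step can be sketched on synthetic single-mode data: averaging the spectra of repeated noisy records lowers the noise floor roughly as 1/M, sharpening the modal peak. The instrumental-variable estimator itself is not reproduced here, and the mode and noise levels below are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
fs, n, M = 100.0, 1024, 32  # sample rate (Hz), record length, records

t = np.arange(n) / fs
# Single lightly damped mode at 5 Hz (illustrative impulse response).
mode = np.exp(-0.5 * t) * np.sin(2 * np.pi * 5.0 * t)

# Average the power spectra of M noisy records; the noise contribution
# to the mean spectrum shrinks while the modal peak stays put.
spectrum = np.zeros(n // 2)
for _ in range(M):
    record = mode + 0.5 * rng.standard_normal(n)
    spectrum += np.abs(np.fft.rfft(record)[: n // 2]) ** 2
spectrum /= M

freqs = np.fft.rfftfreq(n, 1.0 / fs)[: n // 2]
peak_freq = freqs[np.argmax(spectrum)]
print(f"estimated modal frequency: {peak_freq:.2f} Hz")
```

A modal-identification pipeline would follow this averaging with parameter estimation (frequency, damping, mode shape) from the cross-correlations between outputs.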
Econometric analysis of fire suppression production functions for large wildland fires
Thomas P. Holmes; David E. Calkin
2013-01-01
In this paper, we use operational data collected for large wildland fires to estimate the parameters of economic production functions that relate the rate of fireline construction with the level of fire suppression inputs (handcrews, dozers, engines and helicopters). These parameter estimates are then used to evaluate whether the productivity of fire suppression inputs...
A mathematical model for predicting fire spread in wildland fuels
Richard C. Rothermel
1972-01-01
A mathematical fire model for predicting rate of spread and intensity, applicable to a wide range of wildland fuels and environments, is presented. Methods of incorporating mixtures of fuel sizes are introduced by weighting input parameters by surface area. The input parameters do not require prior knowledge of the burning characteristics of the fuel.
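The surface-area weighting of mixed fuel-size classes can be sketched as below, in the spirit of this approach; the loadings, surface-area-to-volume ratios, and moistures are illustrative numbers, not values from the model's published fuel tables.

```python
# Each fuel-size class has a loading w (kg/m^2), a surface-area-to-
# volume ratio sigma (1/m), and a property value (here, moisture
# fraction). All numbers are made up for illustration.
fuel_classes = [
    # (loading w, sigma, moisture)
    (0.3, 5700.0, 0.06),   # fine fuels
    (0.2, 360.0, 0.09),    # intermediate fuels
    (0.5, 98.0, 0.12),     # coarse fuels
]

# Surface area of each class is proportional to sigma * w; the weights
# are each class's share of the total surface area, so fine fuels,
# which drive fire behavior, dominate the characteristic value.
areas = [sigma * w for (w, sigma, _) in fuel_classes]
total = sum(areas)
weights = [a / total for a in areas]

# Characteristic (surface-area-weighted) moisture of the fuel bed.
char_moisture = sum(wt * m for wt, (_, _, m) in zip(weights, fuel_classes))
print(round(char_moisture, 4))  # → 0.0628
```

The weighted value sits close to the fine-fuel moisture even though coarse fuels carry most of the mass, which is the intended effect of area weighting.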
The application of remote sensing to the development and formulation of hydrologic planning models
NASA Technical Reports Server (NTRS)
Castruccio, P. A.; Loats, H. L., Jr.; Fowler, T. R.
1976-01-01
A hydrologic planning model is developed based on remotely sensed inputs. Data from LANDSAT 1 are used to supply the model's quantitative parameters and coefficients. The use of LANDSAT data as information input to all categories of hydrologic models requiring quantitative surface parameters for their effective functioning is also investigated.
Harbaugh, Arien W.
2011-01-01
The MFI2005 data-input (entry) program was developed for use with the U.S. Geological Survey modular three-dimensional finite-difference groundwater model, MODFLOW-2005. MFI2005 runs on personal computers and is designed to be easy to use; data are entered interactively through a series of display screens. MFI2005 supports parameter estimation with the UCODE_2005 program. Data for MODPATH, a particle-tracking program for use with MODFLOW-2005, also can be entered using MFI2005. MFI2005 can be used in conjunction with other data-input programs so that the different parts of a model dataset can be entered by using the most suitable program.
Su, Fei; Wang, Jiang; Deng, Bin; Wei, Xi-Le; Chen, Ying-Yuan; Liu, Chen; Li, Hui-Yan
2015-02-01
The objective here is to explore the use of adaptive input-output feedback linearization method to achieve an improved deep brain stimulation (DBS) algorithm for closed-loop control of Parkinson's state. The control law is based on a highly nonlinear computational model of Parkinson's disease (PD) with unknown parameters. The restoration of thalamic relay reliability is formulated as the desired outcome of the adaptive control methodology, and the DBS waveform is the control input. The control input is adjusted in real time according to estimates of unknown parameters as well as the feedback signal. Simulation results show that the proposed adaptive control algorithm succeeds in restoring the relay reliability of the thalamus, and at the same time achieves accurate estimation of unknown parameters. Our findings point to the potential value of adaptive control approach that could be used to regulate DBS waveform in more effective treatment of PD.
Theoretic aspects of the identification of the parameters in the optimal control model
NASA Technical Reports Server (NTRS)
Vanwijk, R. A.; Kok, J. J.
1977-01-01
The identification of the parameters of the optimal control model from input-output data of the human operator is considered. Accepting the basic structure of the model as a cascade of a full-order observer and a feedback law, and suppressing the inherent optimality of the human controller, the parameters to be identified are the feedback matrix, the observer gain matrix, and the intensity matrices of the observation noise and the motor noise. The identification of the parameters is a statistical problem, because the system and output are corrupted by noise, and therefore the solution must be based on the statistics (probability density function) of the input and output data of the human operator. However, based on the statistics of the input-output data of the human operator, no distinction can be made between the observation and the motor noise, which shows that the model suffers from overparameterization.
Kaklamanos, James; Baise, Laurie G.; Boore, David M.
2011-01-01
The ground-motion prediction equations (GMPEs) developed as part of the Next Generation Attenuation of Ground Motions (NGA-West) project in 2008 are becoming widely used in seismic hazard analyses. However, these new models are considerably more complicated than previous GMPEs, and they require several more input parameters. When employing the NGA models, users routinely face situations in which some of the required input parameters are unknown. In this paper, we present a framework for estimating the unknown source, path, and site parameters when implementing the NGA models in engineering practice, and we derive geometrically-based equations relating the three distance measures found in the NGA models. Our intent is for the content of this paper not only to make the NGA models more accessible, but also to help with the implementation of other present or future GMPEs.
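One of the simplest geometric relations among the NGA distance measures, for the special case of a vertical fault, relates the rupture distance Rrup to the Joyner-Boore distance Rjb and the depth to the top of rupture Ztor: Rrup = sqrt(Rjb² + Ztor²). Dipping faults require more involved expressions; the sketch below covers only this special case.

```python
import math

# Vertical-fault special case: the closest point on the rupture lies
# directly above/below the closest point on its surface projection, so
# Rrup, Rjb, and Ztor form a right triangle.
def rrup_vertical_fault(rjb_km, ztor_km):
    """Rupture distance (km) from Joyner-Boore distance and
    depth-to-top-of-rupture, assuming a vertical fault plane."""
    return math.hypot(rjb_km, ztor_km)

print(rrup_vertical_fault(3.0, 4.0))  # → 5.0
```

Relations like this let an analyst fill in a missing distance measure from the ones that are available before evaluating a GMPE.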
DOE-EPSCOR SPONSORED PROJECT FINAL REPORT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Jianting
Concern over the quality of environmental management and restoration has motivated model development for predicting water and solute transport in the vadose zone. Soil hydraulic properties are required inputs to subsurface models of water flow and contaminant transport in the vadose zone. Computer models are now routinely used in research and management to predict the movement of water and solutes into and through the vadose zone of soils. Such models can be used successfully only if reliable estimates of the soil hydraulic parameters are available. The hydraulic parameters considered in this project consist of the saturated hydraulic conductivity and four parameters of the water retention curves. Quantifying hydraulic parameters for heterogeneous soils is both difficult and time consuming. The overall objective of this project was to better quantify soil hydraulic parameters, which are critical in predicting water flows and contaminant transport in the vadose zone, through a comprehensive and quantitative study to predict heterogeneous soil hydraulic properties and the associated uncertainties. Systematic and quantitative consideration of the parametric heterogeneity and uncertainty can properly address and further reduce predictive uncertainty for contamination characterization and environmental restoration at DOE-managed sites. We conducted a comprehensive study to assess soil hydraulic parameter heterogeneity and uncertainty and addressed a number of important issues related to soil hydraulic property characterization. The main focus centered on new methods to characterize the anisotropy of unsaturated hydraulic properties typical of layered soil formations, an uncertainty updating method, and artificial neural network based pedo-transfer functions to predict hydraulic parameters from easily available data. 
The work also involved upscaling of hydraulic properties applicable to large-scale flow and contaminant transport modeling in the vadose zone and geostatistical characterization of hydraulic parameter heterogeneity. The project also examined the validity of some simple averaging schemes for unsaturated hydraulic properties widely used in previous studies. A new suite of pedo-transfer functions was developed to improve the predictability of hydraulic parameters. We also explored the concept of tension-dependent hydraulic conductivity anisotropy of unsaturated layered soils. This project strengthens collaboration between researchers at the Desert Research Institute in the EPSCoR state of Nevada and their colleagues at the Pacific Northwest National Laboratory. The results of numerical simulations of a field injection experiment at the Hanford site in this project could provide insights relevant to the DOE mission of appropriate contamination characterization and environmental remediation.
EPIC-Simulated and MODIS-Derived Leaf Area Index (LAI) ...
Leaf Area Index (LAI) is an important parameter in assessing vegetation structure for characterizing forest canopies over large areas at broad spatial scales using satellite remote sensing data. However, satellite-derived LAI products can be limited by obstructed atmospheric conditions yielding sub-optimal values, or complete non-returns. The United States Environmental Protection Agency’s Exposure Methods and Measurements and Computational Exposure Divisions are investigating the viability of supplemental modelled LAI inputs into satellite-derived data streams to support various regional and local scale air quality models for retrospective and future climate assessments. In this present study, one year (2002) of plot-level stand characteristics at four study sites located in Virginia and North Carolina is used to calibrate species-specific plant parameters in a semi-empirical biogeochemical model. The Environmental Policy Integrated Climate (EPIC) model was designed primarily for managed agricultural field crop ecosystems, but also includes managed woody species that span both xeric and mesic sites (e.g., mesquite, pine, oak, etc.). LAI was simulated using EPIC at a 4 km2 and 12 km2 grid coincident with the regional Community Multiscale Air Quality Model (CMAQ) grid. LAI comparisons were made between model-simulated and MODIS-derived LAI. Field/satellite-upscaled LAI was also compared to the corresponding MODIS LAI value. Preliminary results show field/satel
NASA Astrophysics Data System (ADS)
Ciriello, V.; Lauriola, I.; Bonvicini, S.; Cozzani, V.; Di Federico, V.; Tartakovsky, Daniel M.
2017-11-01
Ubiquitous hydrogeological uncertainty undermines the veracity of quantitative predictions of soil and groundwater contamination due to accidental hydrocarbon spills from onshore pipelines. Such predictions, therefore, must be accompanied by quantification of predictive uncertainty, especially when they are used for environmental risk assessment. We quantify the impact of parametric uncertainty on quantitative forecasting of temporal evolution of two key risk indices, volumes of unsaturated and saturated soil contaminated by a surface spill of light nonaqueous-phase liquids. This is accomplished by treating the relevant uncertain parameters as random variables and deploying two alternative probabilistic models to estimate their effect on predictive uncertainty. A physics-based model is solved with a stochastic collocation method and is supplemented by a global sensitivity analysis. A second model represents the quantities of interest as polynomials of random inputs and has a virtually negligible computational cost, which enables one to explore any number of risk-related contamination scenarios. For a typical oil-spill scenario, our method can be used to identify key flow and transport parameters affecting the risk indices, to elucidate texture-dependent behavior of different soils, and to evaluate, with a degree of confidence specified by the decision-maker, the extent of contamination and the correspondent remediation costs.
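The idea of representing quantities of interest as polynomials of random inputs can be sketched with an ordinary least-squares polynomial surrogate. The "expensive" model below is a stand-in function invented for illustration; a real application would use orthogonal polynomial chaos bases and the physics-based contamination solver, but the workflow (fit on few runs, then evaluate the cheap polynomial for many scenarios) is the same.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for an expensive physics-based model: a contaminated-volume-
# like response as a nonlinear function of two uncertain inputs
# (log-conductivity and a texture-like parameter). Purely hypothetical.
def expensive_model(logK, theta):
    return np.exp(0.8 * logK) * (1.0 + 2.0 * theta) ** 1.5

# Build a quadratic polynomial surrogate from a small training set.
n_train = 50
logK = rng.normal(0.0, 0.5, n_train)
theta = rng.uniform(0.1, 0.4, n_train)
y = expensive_model(logK, theta)

# Design matrix with monomials up to degree 2 in the random inputs.
def design(a, b):
    return np.column_stack([np.ones_like(a), a, b, a * b, a**2, b**2])

coef, *_ = np.linalg.lstsq(design(logK, theta), y, rcond=None)

# The surrogate is virtually free to evaluate, so any number of
# risk-related scenarios can be explored.
a = rng.normal(0.0, 0.5, 100_000)
b = rng.uniform(0.1, 0.4, 100_000)
approx = design(a, b) @ coef
print(f"surrogate mean response: {approx.mean():.2f}")
```

The 50 "expensive" runs pay for the fit once; the 100,000 surrogate evaluations then cost essentially nothing, which is what makes scenario exploration tractable.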
Dual side control for inductive power transfer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Hunter; Sealy, Kylee; Gilchrist, Aaron
An apparatus for dual side control includes a measurement module that measures a voltage and a current of an IPT system. The voltage includes an output voltage and/or an input voltage, and the current includes an output current and/or an input current. The output voltage and the output current are measured at an output of the IPT system, and the input voltage and the input current are measured at an input of the IPT system. The apparatus includes a max efficiency module that determines a maximum efficiency for the IPT system. The max efficiency module uses parameters of the IPT system to iterate to a maximum efficiency. The apparatus includes an adjustment module that adjusts one or more parameters in the IPT system consistent with the maximum efficiency calculated by the max efficiency module.
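A minimal sketch of the "iterate to maximum efficiency" idea, using a single hypothetical control parameter swept over a made-up efficiency curve; real inductive power transfer tuning involves coupled electrical parameters and measured, not analytic, efficiency.

```python
# Hypothetical concave efficiency curve for one control parameter
# (a "duty cycle"); the peak location and shape are invented.
def efficiency(duty):
    return 0.95 - 2.0 * (duty - 0.62) ** 2

# Iterate over candidate operating points and keep the best one,
# standing in for the max efficiency module's search.
best_duty, best_eff = max(
    ((d / 1000.0, efficiency(d / 1000.0)) for d in range(1001)),
    key=lambda pair: pair[1],
)
print(round(best_duty, 3), round(best_eff, 3))  # → 0.62 0.95
```

An adjustment module would then drive the real system's parameter toward `best_duty`, re-measuring efficiency as conditions change.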
Energy Productivity of the High Velocity Algae Raceway Integrated Design (ARID-HV)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Attalah, Said; Waller, Peter M.; Khawam, George
The original Algae Raceway Integrated Design (ARID) raceway was an effective method to increase algae culture temperature in open raceways. However, the energy input was high and flow mixing was poor. Thus, the High Velocity Algae Raceway Integrated Design (ARID-HV) raceway was developed to reduce energy input requirements and improve flow mixing in a serpentine flow path. A prototype ARID-HV system was installed in Tucson, Arizona. Based on algae growth simulation and hydraulic analysis, an optimal ARID-HV raceway was designed, and the electrical energy input requirement (kWh ha-1 d-1) was calculated. An algae growth model was used to compare the productivity of ARID-HV and conventional raceways. The model uses a pond surface energy balance to calculate water temperature as a function of environmental parameters. Algae growth and biomass loss are calculated based on rate constants during day and night, respectively. A 10-year simulation of DOE strain 1412 (Chlorella sorokiniana) showed that the ARID-HV raceway had significantly higher production than a conventional raceway for all months of the year in Tucson, Arizona. It should be noted that this difference is species and climate specific and is not observed in other climates and with other algae species. The algae growth model results and electrical energy input evaluation were used to compare the energy productivity (algae production rate/energy input) of the ARID-HV and conventional raceways for Chlorella sorokiniana in Tucson, Arizona. The energy productivity of the ARID-HV raceway was significantly greater than that of a conventional raceway for all months of the year.
NASA Technical Reports Server (NTRS)
Camp, D. W.; Frost, W.; Coons, F.; Evanich, P.; Sprinkle, C. H.
1984-01-01
The six workshops whose proceedings are presently reported considered the subject of meteorological and environmental information inputs to aviation, in order to satisfy workshop-sponsoring agencies' requirements for (1) greater knowledge of the interaction of the atmosphere with aircraft and airport operators, (2) a better definition and implementation of meteorological services to operators, and (3) the collection and interpretation of data useful in establishing operational criteria that relate the atmospheric science input to aviation community operations. Workshop topics included equipment and instrumentation, forecasts and information updates, training and simulation facilities, and severe weather, icing and wind shear.
NASA Astrophysics Data System (ADS)
Sahajpal, R.; Hurtt, G. C.; Fisk, J. P.; Izaurralde, R. C.; Zhang, X.
2012-12-01
While cellulosic biofuels are widely considered to be a low carbon energy source for the future, a comprehensive assessment of the environmental sustainability of existing and future biofuel systems is needed to assess their utility in meeting US energy and food needs without exacerbating environmental harm. To assess the carbon emission reduction potential of cellulosic biofuels, we need to identify lands that are initially not storing large quantities of carbon in soil and vegetation but are capable of producing abundant biomass with limited management inputs, and accurately model forest production rates and associated input requirements. Here we present modeled results for carbon emission reduction potential and cellulosic ethanol production of woody bioenergy crops replacing existing native prairie vegetation grown on marginal lands in the US Midwest. Marginal lands are selected based on soil properties describing use limitation, and are extracted from the SSURGO (Soil Survey Geographic) database. Yield estimates for existing native prairie vegetation on marginal lands modeled using the process-based field-scale model EPIC (Environmental Policy Integrated Climate) amount to ~ 6.7±2.0 Mg ha-1. To model woody bioenergy crops, the individual-based terrestrial ecosystem model ED (Ecosystem Demography) is initialized with the soil organic carbon stocks estimated at the end of the EPIC simulation. Four woody bioenergy crops: willow, southern pine, eucalyptus and poplar are parameterized in ED. Sensitivity analysis of model parameters and drivers is conducted to explore the range of carbon emission reduction possible with variation in woody bioenergy crop types, spatial and temporal resolution. We hypothesize that growing cellulosic crops on these marginal lands can provide significant water quality, biodiversity and GHG emissions mitigation benefits, without accruing additional carbon costs from the displacement of food and feed production.
Life cycle assessment of different sea cucumber ( Apostichopus japonicus Selenka) farming systems
NASA Astrophysics Data System (ADS)
Wang, Guodong; Dong, Shuanglin; Tian, Xiangli; Gao, Qinfeng; Wang, Fang; Xu, Kefeng
2015-12-01
Life cycle assessment was employed to evaluate the environmental impacts of three farming systems (indoor intensive, semi-intensive and extensive) for sea cucumber near Qingdao, China, an approach that effectively avoids the interference of inaccurate background parameters caused by differences in economic level and environment among regions. Six indicators were introduced in the current study: global warming potential (1.86E+04, 3.45E+03, 2.36E+02), eutrophication potential (6.65E+01, -1.24E+02, -1.65E+02), acidification potential (1.93E+02, 4.33E+01, 1.30E+00), photochemical oxidant formation potential (2.35E-01, 5.46E-02, 2.53E-03), human toxicity potential (2.47E+00, 6.08E-01, 4.91E+00) and energy use (3.36E+05, 1.27E+04, 1.48E+03). All environmental indicators in the indoor intensive farming system were much higher than those in the semi-intensive and extensive farming systems because of the dominant role of energy input, and energy input was also the leading contributor to most of the indicators in the semi-intensive farming system. In the extensive farming system, by contrast, infrastructure materials played a major role. A comprehensive comparison of the three farming systems showed that income per unit area of the indoor intensive farming system was much higher than that of the semi-intensive and extensive farming systems; however, the extensive farming system was the most sustainable. Measures to improve the environmental sustainability of each farming system are also proposed in the present study.
Input design for identification of aircraft stability and control derivatives
NASA Technical Reports Server (NTRS)
Gupta, N. K.; Hall, W. E., Jr.
1975-01-01
An approach for designing inputs to identify stability and control derivatives from flight test data is presented. This approach is based on finding inputs which provide the maximum possible accuracy of derivative estimates. Two techniques of input specification are implemented for this objective - a time domain technique and a frequency domain technique. The time domain technique gives the control input time history and can be used for any allowable duration of test maneuver, including those where data lengths can only be of short duration. The frequency domain technique specifies the input frequency spectrum, and is best applied for tests where extended data lengths, much longer than the time constants of the modes of interest, are possible. These techniques are used to design inputs to identify parameters in longitudinal and lateral linear models of conventional aircraft. The constraints of aircraft response limits, such as on structural loads, are realized indirectly through a total energy constraint on the input. Tests with simulated data and theoretical predictions show that the new approaches give input signals which can provide more accurate parameter estimates than can conventional inputs of the same total energy. Results obtained indicate that the approach has been brought to the point where it should be used on flight tests for further evaluation.
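The idea of ranking candidate inputs by the estimate accuracy they buy can be sketched with a scalar example. The following is a hypothetical illustration, not the report's algorithm: it computes the Fisher information for estimating the pole of a first-order system under two equal-energy sinusoidal inputs, showing that the frequency content of an input changes the achievable parameter accuracy at fixed total energy.

```python
import numpy as np

def fisher_info(u, a=-1.0, b=1.0, dt=0.01, sigma=0.1):
    """Fisher information for estimating the pole 'a' of dx/dt = a*x + b*u,
    observing y = x in white noise of standard deviation sigma (Euler scheme)."""
    x, s, info = 0.0, 0.0, 0.0      # state, sensitivity dx/da, accumulated information
    for uk in u:
        info += (s / sigma) ** 2
        s = s + dt * (x + a * s)    # sensitivity propagates alongside the state
        x = x + dt * (a * x + b * uk)
    return info * dt

t = np.arange(0.0, 10.0, 0.01)
u_high = np.sin(5.0 * t)            # well above the system bandwidth (1 rad/s)
u_low = np.sin(0.2 * t)             # inside the system bandwidth
u_low *= np.sqrt(np.sum(u_high**2) / np.sum(u_low**2))   # equalize total input energy

info_low, info_high = fisher_info(u_low), fisher_info(u_high)
```

With equal energy, the in-bandwidth input yields far more information about the pole (the Cramér-Rao bound on its variance is the inverse of this quantity), which is the sense in which a designed input beats a conventional one of the same total energy.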
77 FR 47337 - Project-Level Predecisional Administrative Review Process
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-08
... upholding environmental standards and encouraging early public input during planning processes. One of the... when they are categorically excluded from analysis under the National Environmental Policy Act (NEPA... applicable to projects and activities evaluated in an environmental analysis (EA) or environmental impact...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jannik, G. Tim; Hartman, Larry; Stagich, Brooke
Operations at the Savannah River Site (SRS) result in releases of small amounts of radioactive materials to the atmosphere and to the Savannah River. For regulatory compliance purposes, potential offsite radiological doses are estimated annually using computer models that follow U.S. Nuclear Regulatory Commission (NRC) regulatory guides. Within the regulatory guides, default values are provided for many of the dose model parameters, but the use of applicant site-specific values is encouraged. Detailed surveys of land-use and water-use parameters were conducted in 1991 and 2010. They are being updated in this report. These parameters include local characteristics of meat, milk and vegetable production; river recreational activities; and meat, milk and vegetable consumption rates, as well as other human usage parameters required in the SRS dosimetry models. In addition, the preferred elemental bioaccumulation factors and transfer factors (to be used in human health exposure calculations at SRS) are documented. The intent of this report is to establish a standardized source for these parameters that is up to date with existing data, and that is maintained via review of future-issued national references (to evaluate the need for changes as new information is released). These reviews will continue to be added to this document by revision.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jannik, T.; Stagich, B.
Operations at the Savannah River Site (SRS) result in releases of relatively small amounts of radioactive materials to the atmosphere and to the Savannah River. For regulatory compliance purposes, potential offsite radiological doses are estimated annually using computer models that follow U.S. Nuclear Regulatory Commission (NRC) regulatory guides. Within the regulatory guides, default values are provided for many of the dose model parameters, but the use of site-specific values is encouraged. Detailed surveys of land-use and water-use parameters were conducted in 1991, 2008, 2010, and 2016 and are being concurred with or updated in this report. These parameters include local characteristics of meat, milk, and vegetable production; river recreational activities; and meat, milk, and vegetable consumption rates, as well as other human usage parameters required in the SRS dosimetry models. In addition, the preferred elemental bioaccumulation factors and transfer factors (to be used in human health exposure calculations at SRS) are documented. The intent of this report is to establish a standardized source for these parameters that is up to date with existing data, and that is maintained via review of future-issued national references (to evaluate the need for changes as new information is released). These reviews will continue to be added to this document by revision.
Papanastasiou, Giorgos; Williams, Michelle C; Kershaw, Lucy E; Dweck, Marc R; Alam, Shirjel; Mirsadraee, Saeed; Connell, Martin; Gray, Calum; MacGillivray, Tom; Newby, David E; Semple, Scott Ik
2015-02-17
Mathematical modeling of cardiovascular magnetic resonance perfusion data allows absolute quantification of myocardial blood flow. Saturation of left ventricle signal during standard contrast administration can compromise the input function used when applying these models. This saturation effect is evident during application of standard Fermi models in single bolus perfusion data. Dual bolus injection protocols have been suggested to eliminate saturation but are much less practical in the clinical setting. The distributed parameter model can also be used for absolute quantification but has not been applied in patients with coronary artery disease. We assessed whether distributed parameter modeling might be less dependent on arterial input function saturation than Fermi modeling in healthy volunteers. We validated the accuracy of each model in detecting reduced myocardial blood flow in stenotic vessels versus gold-standard invasive methods. Eight healthy subjects were scanned using a dual bolus cardiac perfusion protocol at 3T. We performed both single and dual bolus analysis of these data using the distributed parameter and Fermi models. For the dual bolus analysis, a scaled pre-bolus arterial input function was used. In single bolus analysis, the arterial input function was extracted from the main bolus. We also performed analysis using both models of single bolus data obtained from five patients with coronary artery disease and findings were compared against independent invasive coronary angiography and fractional flow reserve. Statistical significance was defined as two-sided P value < 0.05. Fermi models overestimated myocardial blood flow in healthy volunteers due to arterial input function saturation in single bolus analysis compared to dual bolus analysis (P < 0.05). No difference was observed in these volunteers when applying distributed parameter-myocardial blood flow between single and dual bolus analysis. 
In patients, distributed parameter modeling was able to detect reduced myocardial blood flow at stress (<2.5 mL/min/mL of tissue) in all 12 stenotic vessels compared to only 9 for Fermi modeling. Comparison of single bolus versus dual bolus values suggests that distributed parameter modeling is less dependent on arterial input function saturation than Fermi modeling. Distributed parameter modeling showed excellent accuracy in detecting reduced myocardial blood flow in all stenotic vessels.
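The saturation effect described above can be reproduced in a toy calculation. This is a simplified sketch with assumed curve shapes (a gamma-variate arterial input function, a Fermi-shaped impulse response, and a least-squares scale in place of full nonlinear model fitting), not the study's actual quantification pipeline: clipping the arterial input function inflates the fitted flow.

```python
import numpy as np

dt = 0.5                        # s, sampling interval
t = np.arange(0, 60, dt)

# Synthetic arterial input function (gamma-variate bolus, arbitrary units)
aif_true = (t / 6.0) ** 3 * np.exp(-t / 2.0)

# Signal saturation: the measured left-ventricle signal clips at high concentration
aif_sat = np.minimum(aif_true, 0.6 * aif_true.max())

# Fermi-shaped tissue impulse response (hypothetical parameters); MBF = h(0)
mbf_true = 1.2                  # mL/min/mL, assumed ground truth
h = mbf_true / (1.0 + np.exp((t - 8.0) / 4.0))

tissue = np.convolve(aif_true, h)[:len(t)] * dt   # "measured" tissue curve

def fit_mbf(aif):
    """Least-squares amplitude of a fixed-shape Fermi response against the tissue curve."""
    h_shape = 1.0 / (1.0 + np.exp((t - 8.0) / 4.0))
    model = np.convolve(aif, h_shape)[:len(t)] * dt
    return np.dot(model, tissue) / np.dot(model, model)

mbf_with_true_aif = fit_mbf(aif_true)   # recovers the assumed 1.2
mbf_with_sat_aif = fit_mbf(aif_sat)     # saturation-driven overestimate
```

Because the clipped input function has a smaller area, any deconvolution against it must scale the impulse response up, which is the direction of bias the study reports for single bolus Fermi analysis.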
Variable input observer for state estimation of high-rate dynamics
NASA Astrophysics Data System (ADS)
Hong, Jonathan; Cao, Liang; Laflamme, Simon; Dodson, Jacob
2017-04-01
High-rate systems operating in the 10 μs to 10 ms timescale are likely to experience damaging effects due to rapid environmental changes (e.g., turbulence, ballistic impact). Some of these systems could benefit from real-time state estimation to enable their full potential. Examples of such systems include blast mitigation strategies, automotive airbag technologies, and hypersonic vehicles. Particular challenges in high-rate state estimation include: 1) complex time-varying nonlinearities of the system (e.g., noise, uncertainty, and disturbance); 2) rapid environmental changes; and 3) the requirement of a high convergence rate. Here, we propose using a Variable Input Observer (VIO) concept to vary the input space as the event unfolds. When systems experience high-rate dynamics, rapid changes in the system occur. To investigate the VIO's potential, a VIO-based neuro-observer is constructed and studied using experimental data collected from a laboratory impact test. Results demonstrate that the input space is unique to different impact conditions, and that adjusting the input space throughout the dynamic event produces better estimations than using a traditional fixed input space strategy.
Jakkamsetti, Vikram; Chang, Kevin Q.
2012-01-01
Environmental enrichment induces powerful changes in the adult cerebral cortex. Studies in primary sensory cortex have observed that environmental enrichment modulates neuronal response strength, selectivity, speed of response, and synchronization to rapid sensory input. Other reports suggest that nonprimary sensory fields are more plastic than primary sensory cortex. The consequences of environmental enrichment on information processing in nonprimary sensory cortex have yet to be studied. Here we examine physiological effects of enrichment in the posterior auditory field (PAF), a field distinguished from primary auditory cortex (A1) by wider receptive fields, slower response times, and a greater preference for slowly modulated sounds. Environmental enrichment induced a significant increase in spectral and temporal selectivity in PAF. PAF neurons exhibited narrower receptive fields and responded significantly faster and for a briefer period to sounds after enrichment. Enrichment increased time-locking to rapidly successive sensory input in PAF neurons. Compared with previous enrichment studies in A1, we observe a greater magnitude of reorganization in PAF after environmental enrichment. Along with other reports observing greater reorganization in nonprimary sensory cortex, our results in PAF suggest that nonprimary fields might have a greater capacity for reorganization compared with primary fields. PMID:22131375
Hydrological models as web services: Experiences from the Environmental Virtual Observatory project
NASA Astrophysics Data System (ADS)
Buytaert, W.; Vitolo, C.; Reaney, S. M.; Beven, K.
2012-12-01
Data availability in environmental sciences is expanding at a rapid pace. From the constant stream of high-resolution satellite images to the local efforts of citizen scientists, there is an increasing need to process the growing stream of heterogeneous data and turn it into useful information for decision-making. Environmental models, ranging from simple rainfall - runoff relations to complex climate models, can be very useful tools to process data, identify patterns, and help predict the potential impact of management scenarios. Recent technological innovations in networking, computing and standardization may bring a new generation of interactive models plugged into virtual environments closer to the end-user. They are the driver of major funding initiatives such as the UK's Virtual Observatory program, and the U.S. National Science Foundation's Earth Cube. In this study we explore how hydrological models, being an important subset of environmental models, have to be adapted in order to function within a broader environment of web services and user interactions. Historically, hydrological models have been developed for very different purposes. Typically they have a rigid model structure, requiring a very specific set of input data and parameters. As such, the process of implementing a model for a specific catchment requires careful collection and preparation of the input data, extensive calibration and subsequent validation. This procedure seems incompatible with a web environment, where data availability is highly variable, heterogeneous and constantly changing in time, and where the requirements of end-users may not necessarily align with the original intention of the model developer. We present prototypes of models that are web-enabled using the web standards of the Open Geospatial Consortium, and implemented in online decision-support systems.
We identify issues related to (1) optimal use of available data; (2) the need for flexible and adaptive structures; (3) quantification and communication of uncertainties. Lastly, we present some road maps to address these issues and discuss them in the broader context of web-based data processing and "big data" science.
Lv, Yueyong; Hu, Qinglei; Ma, Guangfu; Zhou, Jiakang
2011-10-01
This paper treats the problem of synchronized control of spacecraft formation flying (SFF) in the presence of input constraints and parameter uncertainties. More specifically, backstepping-based robust control is first developed for the total 6 DOF dynamic model of SFF with parameter uncertainties, in which the model consists of relative translation and attitude rotation. This controller is then redesigned to deal with the input constraint problem by incorporating a command filter such that the generated control is implementable even under physical or operating constraints on the control input. The convergence of the proposed control algorithms is proved by the Lyapunov stability theorem. Compared with conventional methods, illustrative simulations of spacecraft formation flying verify the effectiveness of the proposed approach in making the spacecraft track the desired attitude and position trajectories in a synchronized fashion even in the presence of uncertainties, external disturbances and control saturation constraints. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
Ming, Y; Peiwen, Q
2001-03-01
Understanding ultrasonic motor performance as a function of input parameters, such as voltage amplitude, driving frequency, and the preload on the rotor, is key to many applications and to ultrasonic motor control. This paper presents performance estimation of the piezoelectric rotary traveling wave ultrasonic motor as a function of input voltage amplitude, driving frequency and preload. The Love equation is used to derive the traveling wave amplitude on the stator surface. With a contact model of a distributed spring against a rigid body between the stator and rotor, a two-dimensional analytical model of the rotary traveling wave ultrasonic motor is constructed. The steady rotation speed and stall torque are then deduced. Using MATLAB and an iterative algorithm, we estimate the rotation speed and stall torque as functions of the input parameters. The corresponding experiments are completed with an optoelectronic tachometer and stand weight. Both estimation and experimental results reveal the pattern of performance variation as a function of the input parameters.
Modeling of surface dust concentrations using neural networks and kriging
NASA Astrophysics Data System (ADS)
Buevich, Alexander G.; Medvedev, Alexander N.; Sergeev, Alexander P.; Tarasov, Dmitry A.; Shichkin, Andrey V.; Sergeeva, Marina V.; Atanasova, T. B.
2016-12-01
Creating models that can accurately predict the distribution of pollutants from a limited set of input data is an important task in environmental studies. In this paper, two neural approaches (multilayer perceptron (MLP) and generalized regression neural network (GRNN)) and two geostatistical approaches (kriging and cokriging) are used for modeling and forecasting of dust concentrations in snow cover. The study area is under the influence of dust emissions from a copper quarry and several industrial companies. The two classes of approaches are compared using three indices of model accuracy: the mean absolute error (MAE), the root mean square error (RMSE) and the relative root mean square error (RRMSE). Models based on artificial neural networks (ANN) showed better accuracy. Across all indices, the most accurate model was the GRNN, which uses the coordinates of the sampling points and the distance to the probable emission source as input parameters. The results confirm that a trained ANN may be a more suitable tool for modeling dust concentrations in snow cover.
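The three accuracy indices used to compare the models are standard and can be computed directly; the observation and prediction values below are hypothetical, and the RRMSE normalization (here by the mean observed value) is one of several conventions in use.

```python
import numpy as np

def mae(obs, pred):
    """Mean absolute error."""
    return np.mean(np.abs(obs - pred))

def rmse(obs, pred):
    """Root mean square error."""
    return np.sqrt(np.mean((obs - pred) ** 2))

def rrmse(obs, pred):
    """Relative RMSE, normalized here by the mean observed value."""
    return rmse(obs, pred) / np.mean(obs)

obs = np.array([12.0, 8.5, 15.2, 9.8, 11.1])    # hypothetical dust concentrations
pred = np.array([11.4, 9.0, 14.1, 10.3, 10.6])  # hypothetical model predictions
```

RMSE is never smaller than MAE, and the gap between them grows with the share of large errors, which is why reporting all three indices gives a fuller picture of model accuracy than any single one.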
Desktop Application Program to Simulate Cargo-Air-Drop Tests
NASA Technical Reports Server (NTRS)
Cuthbert, Peter
2009-01-01
The DSS Application is a computer program comprising a Windows version of the UNIX-based Decelerator System Simulation (DSS) coupled with an Excel front end. The DSS is an executable code that simulates the dynamics of airdropped cargo from first motion in an aircraft through landing. The bare DSS is difficult to use; the front end makes it easy to use. All inputs to the DSS, control of execution of the DSS, and postprocessing and plotting of outputs are handled in the front end. The front end is graphics-intensive. The Excel software provides the graphical elements without need for additional programming. Categories of input parameters are divided into separate tabbed windows. Pop-up comments describe each parameter. An error-checking software component evaluates combinations of parameters and alerts the user if an error results. Case files can be created from inputs, making it possible to build cases from previous ones. Simulation output is plotted in 16 charts displayed on a separate worksheet, enabling plotting of multiple DSS cases with flight-test data. Variables assigned to each plot can be changed. Selected input parameters can be edited from the plot sheet for quick sensitivity studies.
Automated method for the systematic interpretation of resonance peaks in spectrum data
Damiano, B.; Wood, R.T.
1997-04-22
A method is described for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model. The neural network training set is developed by using the mathematical model to generate measurable phenomena of the system or process based upon model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e., measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process and, consequently, correspond to the physical condition of the process or system.
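The training-set idea, generating (parameter, spectral feature) pairs from a model and then learning the inverse mapping, can be sketched with a toy resonance model. The oscillator model and the polynomial stand-in for the neural network below are illustrative assumptions, not the patented implementation.

```python
import numpy as np

# Hypothetical "mathematical model": an oscillator whose resonance peak
# location depends on a condition parameter k (e.g., a stiffness).
def resonance_peak(k, damping=0.05):
    w = np.linspace(0.1, 5.0, 500)
    response = 1.0 / np.sqrt((k - w**2) ** 2 + (damping * w) ** 2)
    return w[np.argmax(response)]            # frequency of the resonance peak

# Build a training set by sweeping the model input parameter and
# recording the resulting measurable spectral feature.
ks = np.linspace(1.0, 16.0, 50)
peaks = np.array([resonance_peak(k) for k in ks])

# Stand-in for the neural network: fit the inverse map feature -> parameter,
# then "monitor" an unseen condition from its measured spectral feature.
coeffs = np.polyfit(peaks, ks, deg=2)
k_estimated = np.polyval(coeffs, resonance_peak(9.0))
```

The trained inverse map recovers the condition parameter of an unseen spectrum; in the patented method, the learned correlation runs from measured spectral features back to the model input parameters that describe the component's physical condition.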
Automated method for the systematic interpretation of resonance peaks in spectrum data
Damiano, Brian; Wood, Richard T.
1997-01-01
A method for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model. The neural network training set is developed by using the mathematical model to generate measurable phenomena of the system or process based upon model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e., measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process and, consequently, correspond to the physical condition of the process or system.
Meter circuit for tuning RF amplifiers
NASA Technical Reports Server (NTRS)
Longthorne, J. E.
1973-01-01
Circuit computes and indicates efficiency of RF amplifier as inputs and other parameters are varied. Voltage drop across internal resistance of ammeter is amplified by operational amplifier and applied to one multiplier input. Other input is obtained through two resistors from positive terminal of power supply.
Environmental performances of Sardinian dairy sheep production systems at different input levels.
Vagnoni, E; Franca, A; Breedveld, L; Porqueddu, C; Ferrara, R; Duce, P
2015-01-01
Although sheep milk production is a significant sector for the European Mediterranean countries, it shows serious competitiveness gaps. Minimizing the ecological impacts of dairy sheep farming systems could be a key factor for farmers to bridge these competitiveness gaps and to obtain public incentives. However, knowledge about the environmental performance of Mediterranean dairy sheep farms is scarce. The main objectives of this paper were (i) to compare the environmental impacts of sheep milk production from three dairy farms in Sardinia (Italy), characterized by different input levels, and (ii) to identify the hotspots for improving the environmental performance of each farm, using a Life Cycle Assessment (LCA) approach. The LCA was conducted using two different assessment methods: Carbon Footprint-IPCC and ReCiPe end-point. The analysis, conducted "from cradle to gate", was based on a functional unit of 1 kg of Fat and Protein Corrected Milk (FPCM). The observed trends in the environmental performance of the studied farming systems were similar for both evaluation methods. The GHG emissions showed a narrow range of variation (from 2.0 to 2.3 kg CO2-eq per kg of FPCM), with differences between farming systems not being significant. The ReCiPe end-point analysis showed a larger range of values, and the environmental performance of the low-input farm was significantly different from that of the medium- and high-input farms. In general, enteric methane emissions, field operations, electricity and the production of agricultural machinery were the most relevant processes in determining the overall environmental performance of the farms. Future research will be dedicated to (i) exploring and better defining the environmental implications of the land use impact category in Mediterranean sheep farming systems, and (ii) contributing to revising and improving the existing LCA dataset for Mediterranean farming systems.
Copyright © 2014 Elsevier B.V. All rights reserved.
Ring rolling process simulation for microstructure optimization
NASA Astrophysics Data System (ADS)
Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio
2017-10-01
Metal undergoes complicated microstructural evolution during Hot Ring Rolling (HRR), which determines the quality, mechanical properties and life of the formed ring. One of the principal microstructural properties that most influences the structural performance of forged components is the average grain size. In the present paper, a ring rolling process has been studied and optimized in order to obtain annular components for aerospace applications. In particular, the influence of the process input parameters (feed rate of the mandrel and angular velocity of the driver roll) on the microstructural and geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model of HRR has been developed in SFTC DEFORM V11, taking into account the microstructural development of the material used (the nickel superalloy Waspaloy). The Finite Element (FE) model has been used to formulate a proper optimization problem. The optimization procedure has been developed to find the combination of process parameters that minimizes the average grain size. The Response Surface Methodology (RSM) has been used to find the relationship between input and output parameters, using the exact values of the output parameters at the control points of a design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for each combination of the input parameters. An optimization procedure based on Genetic Algorithms has then been applied. Finally, the minimum value of the average grain size with respect to the input parameters has been found.
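The final step, minimizing a fitted response surface with a genetic algorithm, can be sketched as follows. The quadratic surrogate for average grain size and the process-parameter bounds are invented for illustration; only the overall structure (fitted surface, then selection, blend crossover and mutation) follows the approach named in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical RSM surrogate: average grain size (um) as a quadratic in
# mandrel feed rate x1 and driver-roll angular velocity x2 (assumed fit).
def grain_size(x1, x2):
    return (30.0 + 4.0 * (x1 - 0.6) ** 2 + 6.0 * (x2 - 1.4) ** 2
            - 2.0 * (x1 - 0.6) * (x2 - 1.4))

bounds = np.array([[0.2, 1.0], [0.8, 2.0]])   # assumed process windows

def genetic_minimize(f, bounds, pop=40, gens=60, mut=0.1):
    dim = len(bounds)
    x = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop, dim))
    for _ in range(gens):
        fit = np.array([f(*ind) for ind in x])
        elite = x[np.argsort(fit)[: pop // 2]]            # selection of best half
        parents = elite[rng.integers(0, len(elite), (pop, 2))]
        w = rng.random((pop, dim))
        x = w * parents[:, 0] + (1 - w) * parents[:, 1]   # blend crossover
        x += rng.normal(0.0, mut, x.shape)                # gaussian mutation
        x = np.clip(x, bounds[:, 0], bounds[:, 1])
    fit = np.array([f(*ind) for ind in x])
    return x[np.argmin(fit)], fit.min()

best_x, best_grain = genetic_minimize(grain_size, bounds)
```

Because each surrogate evaluation is cheap compared with an FE run, the GA can afford thousands of evaluations, which is the usual motivation for pairing RSM with a population-based optimizer.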
Navas, Juan Moreno; Telfer, Trevor C; Ross, Lindsay G
2011-08-01
Combining GIS with neuro-fuzzy modeling has the advantage that expert scientific knowledge in coastal aquaculture activities can be incorporated into a geospatial model to classify areas particularly vulnerable to pollutants. Data on the physical environment and its suitability for aquaculture in an Irish fjard, which is host to a number of different aquaculture activities, were derived from a three-dimensional hydrodynamic and GIS models. Subsequent incorporation into environmental vulnerability models, based on neuro-fuzzy techniques, highlighted localities particularly vulnerable to aquaculture development. The models produced an overall classification accuracy of 85.71%, with a Kappa coefficient of agreement of 81%, and were sensitive to different input parameters. A statistical comparison between vulnerability scores and nitrogen concentrations in sediment associated with salmon cages showed good correlation. Neuro-fuzzy techniques within GIS modeling classify vulnerability of coastal regions appropriately and have a role in policy decisions for aquaculture site selection. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Starkey, A. H.; Icerman, L.
1984-08-01
The environmental effects associated with the operation of a privately owned Rankine-cycle turbogenerator unit using low temperature geothermal resources in the form of free-flowing hot springs to produce electricity in a remote, rural area were studied. The following conclusions pertain to the operation of the turbogenerator system: (1) the heat exchanger could not provide sufficient freon vapor at the required pressures to provide adequate thermal input to the turbine; (2) conversion or redesign of the condenser and return pump to function adequately represents a problem of unknown difficulty; (3) all pressure and heat transfer tests indicated that a custom designed heat exchanger built on-site would provide adequate vapor at pressures high enough to power a 10-kWe or perhaps larger generator; and (4) automated control systems are needed for the hot and cold water supplies and the freon return pump.
Noise suppression methods for robust speech processing
NASA Astrophysics Data System (ADS)
Boll, S. F.; Ravindra, H.; Randall, G.; Armantrout, R.; Power, R.
1980-05-01
Robust speech processing in practical operating environments requires effective environmental and processor noise suppression. This report describes the technical findings and accomplishments during this reporting period for the research program funded to develop real-time, compressed speech analysis-synthesis algorithms whose performance is invariant under signal contamination. Fulfillment of this requirement is necessary to ensure reliable, secure compressed speech transmission within realistic military command and control environments. Overall contributions resulting from this research program include an understanding of how environmental noise degrades narrowband coded speech, development of appropriate real-time noise suppression algorithms, and development of speech parameter identification methods that treat signal contamination as a fundamental element of the estimation process. This report describes the current research and results in the areas of dual-input adaptive noise cancellation using short-time Fourier transform algorithms, articulation rate change techniques, and an experiment which demonstrated that the spectral subtraction noise suppression algorithm can improve the intelligibility of 2400 bps, LPC-10 coded helicopter speech by 10.6 points.
NASA Astrophysics Data System (ADS)
Kuik, Friderike; Lauer, Axel; Churkina, Galina; Denier van der Gon, Hugo A. C.; Fenner, Daniel; Mar, Kathleen A.; Butler, Tim M.
2016-12-01
Air pollution is the number one environmental cause of premature deaths in Europe. Despite extensive regulations, air pollution remains a challenge, especially in urban areas. For studying summertime air quality in the Berlin-Brandenburg region of Germany, the Weather Research and Forecasting Model with Chemistry (WRF-Chem) is set up and evaluated against meteorological and air quality observations from monitoring stations as well as from a field campaign conducted in 2014. The objective is to assess which resolution and level of detail in the input data is needed for simulating urban background air pollutant concentrations and their spatial distribution in the Berlin-Brandenburg area. The model setup includes three nested domains with horizontal resolutions of 15, 3 and 1 km and anthropogenic emissions from the TNO-MACC III inventory. We use RADM2 chemistry and the MADE/SORGAM aerosol scheme. Three sensitivity simulations are conducted updating input parameters to the single-layer urban canopy model based on structural data for Berlin, specifying land use classes on a sub-grid scale (mosaic option) and downscaling the original emissions to a resolution of ca. 1 km × 1 km for Berlin based on proxy data including traffic density and population density. The results show that the model simulates meteorology well, though urban 2 m temperature and urban wind speeds are biased high and nighttime mixing layer height is biased low in the base run with the settings described above. We show that the simulation of urban meteorology can be improved when specifying the input parameters to the urban model, and to a lesser extent when using the mosaic option. On average, ozone is simulated reasonably well, but maximum daily 8 h mean concentrations are underestimated, which is consistent with the results from previous modelling studies using the RADM2 chemical mechanism. Particulate matter is underestimated, which is partly due to an underestimation of secondary organic aerosols. 
NOx (NO + NO2) concentrations are simulated reasonably well on average, but nighttime concentrations are overestimated due to the model's underestimation of the mixing layer height, and urban daytime concentrations are underestimated. The daytime underestimation is improved when using downscaled, and thus locally higher emissions, suggesting that part of this bias is due to deficiencies in the emission input data and their resolution. The results further demonstrate that a horizontal resolution of 3 km improves the results and spatial representativeness of the model compared to a horizontal resolution of 15 km. With the input data (land use classes, emissions) at the level of detail of the base run of this study, we find that a horizontal resolution of 1 km does not improve the results compared to a resolution of 3 km. However, our results suggest that a 1 km horizontal model resolution could enable a detailed simulation of local pollution patterns in the Berlin-Brandenburg region if the urban land use classes, together with the respective input parameters to the urban canopy model, are specified with a higher level of detail and if urban emissions of higher spatial resolution are used.
VizieR Online Data Catalog: Planetary atmosphere radiative transport code (Garcia Munoz+ 2015)
NASA Astrophysics Data System (ADS)
Garcia Munoz, A.; Mills, F. P.
2014-08-01
Files are: readme.txt; input files INPUThazeL.txt, INPUTL13.txt, and INPUT_L60.txt, which contain explanations of the input parameters (copy INPUT_XXXX.txt into INPUT.dat to execute some of the examples described in the reference); files with scattering matrix properties phFhazeL.txt, phFL13.txt, and phF_L60.txt; and a script for compilation in GFortran (myscript). (10 data files.)
Robust Blind Learning Algorithm for Nonlinear Equalization Using Input Decision Information.
Xu, Lu; Huang, Defeng David; Guo, Yingjie Jay
2015-12-01
In this paper, we propose a new blind learning algorithm, namely, the Benveniste-Goursat input-output decision (BG-IOD), to enhance the convergence performance of neural-network-based equalizers for nonlinear channel equalization. In contrast to conventional blind learning algorithms, where only the output of the equalizer is employed for updating system parameters, the BG-IOD exploits a new type of extra information, the input decision information obtained from the input of the equalizer, to mitigate the influence of the nonlinear equalizer structure on parameter learning, thereby leading to improved convergence performance. We prove that, with the input decision information, a desirable convergence property can be achieved: the output symbol error rate (SER) is always less than the input SER, provided the input SER is below a threshold. The BG soft-switching technique is then employed to combine the merits of both input and output decision information, where the former is used to guarantee SER convergence and the latter to improve SER performance. Simulation results show that the proposed algorithm outperforms conventional blind learning algorithms, such as the stochastic quadratic distance and dual-mode constant modulus algorithms, in terms of both convergence and SER performance for nonlinear equalization.
Coelho, André; de Brito, Jorge
2013-01-01
Part I of this study deals with the primary energy consumption and CO2eq emissions of a 350 tonnes/h construction and demolition waste (CDW) recycling facility, taking into account incorporated, operation and transportation impacts. It concludes that the generated impacts are mostly concentrated in operation and transportation, and that the impacts prevented through material recycling can be up to one order of magnitude greater than those generated. However, the conditions considered for the plant's operation and related transportation system may, and very likely will, vary in the near future, which will affect its environmental performance. This performance is particularly affected by the plant's installed capacity, transportation fuel and input CDW mass. In spite of the variations in overall primary energy and CO2eq balances, the prevented impacts are always higher than the generated impacts, at least by a factor of three and perhaps as high as 16 times under particular conditions. The analysis indicates environmental performance for variations in single parameters, except for the plant's capacity, which was considered to vary simultaneously with all the others. Extreme best and worst scenarios were also generated to fit the results within extreme limits. Copyright © 2012 Elsevier Ltd. All rights reserved.
COSP for Windows: Strategies for Rapid Analyses of Cyclic Oxidation Behavior
NASA Technical Reports Server (NTRS)
Smialek, James L.; Auping, Judith V.
2002-01-01
COSP is a publicly available computer program that models the cyclic oxidation weight gain and spallation process. Inputs to the model include the selection of an oxidation growth law and a spalling geometry, plus oxide phase, growth rate, spall constant, and cycle duration parameters. Output includes weight change, the amounts of retained and spalled oxide, the total oxygen and metal consumed, and the terminal rates of weight loss and metal consumption. The present version is Windows based and can accordingly be operated conveniently while other applications remain open for importing experimental weight change data, storing model output data, or plotting model curves. Point-and-click operating features include multiple drop-down menus for input parameters, data importing, and quick, on-screen plots showing one selection of the six output parameters for up to 10 models. A run summary text lists various characteristic parameters that are helpful in describing cyclic behavior, such as the maximum weight change, the number of cycles to reach the maximum weight gain or zero weight change, the ratio of these, and the final rate of weight loss. The program includes save and print options as well as a help file. Families of model curves readily show the sensitivity to various input parameters. The cyclic behaviors of nickel aluminide (NiAl) and a complex superalloy are shown to be properly fitted by model curves. However, caution is always advised regarding the uniqueness claimed for any specific set of input parameters.
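As a hedged illustration of the kind of model COSP implements, the sketch below combines parabolic oxide growth with a fixed fractional spall on each cooldown; the rate constants and oxygen mass fraction are hypothetical placeholders, not COSP's actual equations or defaults:

```python
def cyclic_oxidation(n_cycles, kp=0.01, q=0.02, f_oxygen=0.47, dt=1.0):
    """Toy cyclic-oxidation model in the spirit of COSP (hypothetical
    constants, not NASA's actual formulation). kp: parabolic rate
    constant, q: fraction of retained oxide spalled per cooldown,
    f_oxygen: oxygen mass fraction of the oxide, dt: hot time per cycle.
    Returns the specimen weight-change history (per unit area)."""
    retained, spalled_total, history = 0.0, 0.0, []
    for _ in range(n_cycles):
        retained = (retained ** 2 + kp * dt) ** 0.5  # parabolic growth
        spall = q * retained                          # spallation on cooling
        retained -= spall
        spalled_total += spall
        # Net specimen weight change: oxygen gained in retained oxide
        # minus metal carried away in all spalled oxide
        history.append(f_oxygen * retained - (1 - f_oxygen) * spalled_total)
    return history

w = cyclic_oxidation(500)
print(max(w), w[-1])  # weight peaks, then declines as spallation dominates
```

This reproduces the characteristic COSP-style behavior of an early weight gain followed by terminal weight loss once spallation outpaces oxide growth.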
Development of advanced techniques for rotorcraft state estimation and parameter identification
NASA Technical Reports Server (NTRS)
Hall, W. E., Jr.; Bohn, J. G.; Vincent, J. H.
1980-01-01
An integrated methodology for rotorcraft system identification consists of rotorcraft mathematical modeling, three distinct data processing steps, and a technique for designing inputs to improve the identifiability of the data. These elements are as follows: (1) a Kalman filter smoother algorithm which estimates states and sensor errors from error-corrupted data; gust time histories and statistics may also be estimated; (2) a model structure estimation algorithm for isolating a model which adequately explains the data; (3) a maximum likelihood algorithm for estimating the parameters and the variances of these estimates; and (4) an input design algorithm, based on a maximum likelihood approach, which provides inputs to improve the accuracy of parameter estimates. Each step is discussed, with examples applied to both flight and simulated data.
Emissions-critical charge cooling using an organic rankine cycle
Ernst, Timothy C.; Nelson, Christopher R.
2014-07-15
The disclosure provides a system including a Rankine power cycle cooling subsystem providing emissions-critical charge cooling of an input charge flow. The system includes a boiler fluidly coupled to the input charge flow, an energy conversion device fluidly coupled to the boiler, a condenser fluidly coupled to the energy conversion device, a pump fluidly coupled to the condenser and the boiler, an adjuster that adjusts at least one parameter of the Rankine power cycle subsystem to change a temperature of the input charge exiting the boiler, and a sensor adapted to sense a temperature characteristic of the vaporized input charge. The system includes a controller that can determine a target temperature of the input charge sufficient to meet or exceed predetermined target emissions and cause the adjuster to adjust at least one parameter of the Rankine power cycle to achieve the predetermined target emissions.
Modeling the winter-to-summer transition of prokaryotic and viral abundance in the Arctic Ocean.
Winter, Christian; Payet, Jérôme P; Suttle, Curtis A
2012-01-01
One of the challenges in oceanography is to understand the influence of environmental factors on the abundances of prokaryotes and viruses. Generally, conventional statistical methods resolve trends well, but more complex relationships are difficult to explore. In such cases, Artificial Neural Networks (ANNs) offer an alternative way for data analysis. Here, we developed ANN-based models of prokaryotic and viral abundances in the Arctic Ocean. The models were used to identify the best predictors for prokaryotic and viral abundances including cytometrically-distinguishable populations of prokaryotes (high and low nucleic acid cells) and viruses (high- and low-fluorescent viruses) among salinity, temperature, depth, day length, and the concentration of Chlorophyll-a. The best performing ANNs to model the abundances of high and low nucleic acid cells used temperature and Chl-a as input parameters, while the abundances of high- and low-fluorescent viruses used depth, Chl-a, and day length as input parameters. Decreasing viral abundance with increasing depth and decreasing system productivity was captured well by the ANNs. Despite identifying the same predictors for the two populations of prokaryotes and viruses, respectively, the structure of the best performing ANNs differed between high and low nucleic acid cells and between high- and low-fluorescent viruses. Also, the two prokaryotic and viral groups responded differently to changes in the predictor parameters; hence, the cytometric distinction between these populations is ecologically relevant. The models imply that temperature is the main factor explaining most of the variation in the abundances of high nucleic acid cells and total prokaryotes and that the mechanisms governing the reaction to changes in the environment are distinctly different among the prokaryotic and viral populations.
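As a minimal sketch of the ANN approach described above, the example below fits a one-hidden-layer network by plain gradient descent; the data are synthetic stand-ins for the predictor/abundance relationships, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in data: "abundance" as a nonlinear function of two
# predictors (e.g. temperature and Chl-a); real models used field data.
X = rng.uniform(-1, 1, size=(200, 2))
y = np.tanh(2 * X[:, 0]) + 0.5 * X[:, 1] ** 2

# One-hidden-layer network (8 tanh units) trained by gradient descent
W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, 8), 0.0

lr, losses = 0.05, []
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # network output
    err = pred - y
    losses.append(np.mean(err ** 2))
    # Backpropagation through the two layers
    gW2 = h.T @ err / len(X)
    gb2 = err.mean()
    gh = np.outer(err, W2) * (1 - h ** 2)
    gW1 = X.T @ gh / len(X)
    gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(losses[0], "->", losses[-1])    # training error drops
```

The cited study additionally compared alternative predictor subsets and network structures per population, which is the part that carries the ecological conclusions.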
NASA Astrophysics Data System (ADS)
Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Debusschere, B.; Najm, H. N.; Thornton, P. E.
2016-12-01
Surrogate construction has become a routine procedure when facing computationally intensive studies requiring multiple evaluations of complex models. In particular, surrogate models, otherwise called emulators or response surfaces, replace complex models in uncertainty quantification (UQ) studies, including uncertainty propagation (forward UQ) and parameter estimation (inverse UQ). Further, surrogates based on Polynomial Chaos (PC) expansions are especially convenient for forward UQ and global sensitivity analysis, also known as variance-based decomposition. However, PC surrogate construction suffers strongly from the curse of dimensionality: with a large number of input parameters, the number of model simulations required for accurate surrogate construction is prohibitively large. Relatedly, non-adaptive PC expansions typically include an infeasibly large number of basis terms, far exceeding the number of available model evaluations. We develop the Weighted Iterative Bayesian Compressive Sensing (WIBCS) algorithm for adaptive basis growth and PC surrogate construction, leading to a sparse, high-dimensional PC surrogate built from very few model evaluations. The surrogate is then readily employed for global sensitivity analysis, leading to further dimensionality reduction. Besides numerical tests, we demonstrate the construction on the Accelerated Climate Model for Energy (ACME) Land Model for several output QoIs at nearly 100 FLUXNET sites covering multiple plant functional types and climates, varying 65 input parameters over broad ranges of possible values. This work is supported by the U.S. Department of Energy, Office of Science, Biological and Environmental Research, Accelerated Climate Modeling for Energy (ACME) project. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S.
Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
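A minimal sketch of compressive-sensing recovery of sparse polynomial chaos coefficients, using a 2-D Legendre basis and plain iterative soft thresholding (ISTA) rather than the WIBCS algorithm itself; the basis size, sample count, and "true" sparse coefficients below are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def legendre_1d(n, x):
    # Evaluate the degree-n Legendre polynomial P_n(x)
    return np.polynomial.legendre.legval(x, [0] * n + [1])

# 2-D Legendre basis with total degree <= 4: 15 terms, only 12 samples
multi_indices = [(i, j) for i in range(5) for j in range(5) if i + j <= 4]

def design_matrix(X):
    return np.column_stack([legendre_1d(i, X[:, 0]) * legendre_1d(j, X[:, 1])
                            for i, j in multi_indices])

# Hypothetical sparse "truth": y = 1*P0 + 2*P1(x1) + 0.5*P2(x2)
X = rng.uniform(-1, 1, size=(12, 2))
y = 1.0 + 2.0 * X[:, 0] + 0.5 * (1.5 * X[:, 1] ** 2 - 0.5)

A = design_matrix(X)                    # 12 x 15: underdetermined system
step = 1.0 / np.linalg.norm(A, 2) ** 2  # ISTA step from the spectral norm
c = np.zeros(A.shape[1])
for _ in range(20000):                  # iterative soft thresholding
    c = c + step * (A.T @ (y - A @ c))  # gradient step on the residual
    c = np.sign(c) * np.maximum(np.abs(c) - 0.001 * step, 0.0)  # sparsify

top = np.argsort(-np.abs(c))[:3]        # the three true terms should dominate
print([(multi_indices[k], round(c[k], 2)) for k in top])
```

The L1-promoting threshold step is what lets the sparse coefficient vector be recovered from fewer samples than basis terms, which is the core idea behind compressive-sensing PC construction.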
Towards systematic evaluation of crop model outputs for global land-use models
NASA Astrophysics Data System (ADS)
Leclere, David; Azevedo, Ligia B.; Skalský, Rastislav; Balkovič, Juraj; Havlík, Petr
2016-04-01
Land provides vital socioeconomic resources to society, though at the cost of substantial environmental degradation. Global integrated models combining high-resolution global gridded crop models (GGCMs) and global economic models (GEMs) are increasingly being used to inform sustainable solutions for agricultural land use. However, little effort has yet been made to evaluate and compare the accuracy of GGCM outputs. In addition, GGCM datasets require a large number of parameters whose values and variability across space are weakly constrained, and increasing the accuracy of such datasets has a very high computing cost. Innovative evaluation methods are required both to lend credibility to the global integrated models and to allow efficient parameter specification of GGCMs. We propose an evaluation strategy for GGCM datasets from the perspective of use in GEMs, illustrated with preliminary results from a novel dataset (the Hypercube) generated by the EPIC GGCM and used in the GLOBIOM land-use GEM to inform on present-day crop yield, water and nutrient input needs for 16 crops x 15 management intensities, at a spatial resolution of 5 arc-minutes. We adopt the following principle: evaluation should provide a transparent diagnosis of model adequacy for its intended use. We briefly describe how the Hypercube data are generated and how they articulate with GLOBIOM in order to transparently identify the performances to be evaluated, as well as the main assumptions and data processing involved. Expected performances include adequately representing the sub-national heterogeneity in crop yield and input needs: i) in space, ii) across crop species, and iii) across management intensities. We will present and discuss measures of these expected performances and weigh the relative contributions of crop model, input data and data processing steps. We will also compare obtained yield gaps and main yield-limiting factors against the M3 dataset.
Next steps include iterative improvement of parameter assumptions and evaluation of implications of GGCM performances for intended use in the IIASA EPIC-GLOBIOM model cluster. Our approach helps targeting future efforts at improving GGCM accuracy and would achieve highest efficiency if combined with traditional field-scale evaluation and sensitivity analysis.
Master control data handling program uses automatic data input
NASA Technical Reports Server (NTRS)
Alliston, W.; Daniel, J.
1967-01-01
General purpose digital computer program is applicable for use with analysis programs that require basic data and calculated parameters as input. It is designed to automate input data preparation for flight control computer programs, but it is general enough to permit application in other areas.
Significant wave heights from Sentinel-1 SAR: Validation and applications
NASA Astrophysics Data System (ADS)
Stopa, J. E.; Mouche, A.
2017-03-01
Two empirical algorithms are developed for wave mode images measured by the synthetic aperture radar aboard Sentinel-1 A. The first method, called CWAVE_S1A, is an extension of previous efforts developed for ERS2, and the second method, called Fnn, uses the azimuth cutoff among other parameters to estimate significant wave heights (Hs) and average wave periods without using a modulation transfer function. Neural networks are trained using colocated data generated from WAVEWATCH III and independently verified with data from altimeters and in situ buoys. We use neural networks to capture the nonlinear relationships between the input SAR image parameters and output geophysical wave parameters. Both methods perform well, with Hs root mean square errors within 0.5 m for CWAVE_S1A and 0.6 m for Fnn. The developed neural networks extend the SAR's ability to retrieve useful wave information under a large range of environmental conditions, including extratropical and tropical cyclones, in which Hs estimation is traditionally challenging.
NASA Technical Reports Server (NTRS)
Myers, Jerry G.; Young, M.; Goodenow, Debra A.; Keenan, A.; Walton, M.; Boley, L.
2015-01-01
Model and simulation (MS) credibility is defined as the quality to elicit belief or trust in MS results. NASA-STD-7009 [1] delineates eight components (Verification, Validation, Input Pedigree, Results Uncertainty, Results Robustness, Use History, MS Management, People Qualifications) that address quantifying model credibility, and provides guidance to model developers, analysts, and end users for assessing MS credibility. Of the eight characteristics, input pedigree, or the quality of the data used to develop model input parameters, governing functions, or initial conditions, can vary significantly. These data quality differences have varying consequences across the range of MS applications. NASA-STD-7009 requires that the lowest input data quality be used to represent the entire set of input data when scoring the input pedigree credibility of the model. This requirement provides a conservative assessment of model inputs and maximizes the communication of the potential risk of using model outputs. Unfortunately, in practice, this may result in overly pessimistic communication of the MS output, undermining the credibility of simulation predictions to decision makers. This presentation proposes an alternative assessment mechanism, utilizing results parameter robustness, also known as model input sensitivity, to improve the credibility scoring process for specific simulations.
NASA Astrophysics Data System (ADS)
Chowdhury, S.; Sharma, A.
2005-12-01
Hydrological model inputs are often derived from measurements at point locations taken at discrete time steps. The nature of uncertainty associated with such inputs is thus a function of the quality and number of measurements available in time. A change in these characteristics (such as a change in the number of rain-gauge inputs used to derive spatially averaged rainfall) results in inhomogeneity in the associated distributional profile. Ignoring such uncertainty can lead to models that aim to simulate based on the observed input variable instead of the true measurement, resulting in a biased representation of the underlying system dynamics as well as an increase in both bias and the predictive uncertainty in simulations. This is especially true of cases where the nature of uncertainty likely in the future is significantly different to that in the past. Possible examples include situations where the accuracy of the catchment averaged rainfall has increased substantially due to an increase in the rain-gauge density, or accuracy of climatic observations (such as sea surface temperatures) increased due to the use of more accurate remote sensing technologies. We introduce here a method to ascertain the true value of parameters in the presence of additive uncertainty in model inputs. This method, known as SIMulation EXtrapolation (SIMEX, [Cook, 1994]) operates on the basis of an empirical relationship between parameters and the level of additive input noise (or uncertainty). The method starts with generating a series of alternate realisations of model inputs by artificially adding white noise in increasing multiples of the known error variance. The alternate realisations lead to alternate sets of parameters that are increasingly biased with respect to the truth due to the increased variability in the inputs. 
Once several such realisations have been drawn, one is able to formulate an empirical relationship between the parameter values and the level of additive noise present. SIMEX is based on the theory that the trend in alternate parameters can be extrapolated back to the notional error-free zone. We illustrate the utility of SIMEX in a synthetic rainfall-runoff modelling scenario and in an application studying the dependence of uncertain distributed sea surface temperature anomalies on an indicator of the El Nino Southern Oscillation, the Southern Oscillation Index (SOI). The errors in rainfall data and their effect are explored using the Sacramento rainfall-runoff model. The rainfall uncertainty is assumed to be multiplicative and temporally invariant. The model used to relate the sea surface temperature anomalies (SSTA) to the SOI is assumed to be of a linear form. The nature of uncertainty in the SSTA is additive and varies with time. The SIMEX framework allows assessment of the relationship between the error-free inputs and the response. Cook, J.R., Stefanski, L. A., Simulation-Extrapolation Estimation in Parametric Measurement Error Models, Journal of the American Statistical Association, 89 (428), 1314-1328, 1994.
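The SIMEX procedure described above can be sketched for a simple linear model with known additive input-error variance; the data are synthetic, and the quadratic extrapolant is one common choice of extrapolation function:

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta_true = 5000, 2.0
sigma_u = 0.5                               # known measurement-error std
x_true = rng.normal(0, 1, n)
x_obs = x_true + rng.normal(0, sigma_u, n)  # error-contaminated input
y = beta_true * x_true + rng.normal(0, 0.1, n)

def slope(x, y):
    # Ordinary least-squares slope; attenuated when x carries noise
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

# SIMEX step 1: re-estimate with extra noise at multiples lam of the
# known error variance (lam = 0 is the naive, biased estimate)
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
betas = [np.mean([slope(x_obs + rng.normal(0, np.sqrt(lam) * sigma_u, n), y)
                  for _ in range(50)])
         for lam in lams]

# SIMEX step 2: extrapolate the trend back to lam = -1, the notional
# error-free point
coef = np.polyfit(lams, betas, 2)
beta_simex = np.polyval(coef, -1.0)
print(round(slope(x_obs, y), 2), round(beta_simex, 2))
```

The naive slope is attenuated toward zero by the input noise; the extrapolated SIMEX estimate recovers most of the bias, at the cost of extrapolation error from the assumed quadratic form.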
Standardized input for Hanford environmental impact statements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Napier, B.A.
1981-05-01
Models and computer programs for simulating the environmental behavior of radionuclides in the environment and the resulting radiation dose to humans have been developed over the years by the Environmental Analysis Section staff, Ecological Sciences Department at the Pacific Northwest Laboratory (PNL). Methodologies have evolved for calculating radiation doses from many exposure pathways for any type of release mechanism. Depending on the situation or process being simulated, different sets of computer programs, assumptions, and modeling techniques must be used. This report is a compilation of recommended computer programs and necessary input information for use in calculating doses to members of the general public for environmental impact statements prepared for DOE activities to be conducted on or near the Hanford Reservation.
Program for User-Friendly Management of Input and Output Data Sets
NASA Technical Reports Server (NTRS)
Klimeck, Gerhard
2003-01-01
A computer program manages large, hierarchical sets of input and output (I/O) parameters (typically, sequences of alphanumeric data) involved in computational simulations in a variety of technological disciplines. This program represents sets of parameters as structures coded in object-oriented but otherwise standard American National Standards Institute C language. Each structure contains a group of I/O parameters that make sense as a unit in the simulation program with which this program is used. The addition of options and/or elements to sets of parameters amounts to the addition of new elements to data structures. By association of child data generated in response to a particular user input, a hierarchical ordering of input parameters can be achieved. Associated with child data structures are the creation and description mechanisms within the parent data structures. Child data structures can spawn further child data structures. In this program, the creation and representation of a sequence of data structures is effected by one line of code that looks for children of a sequence of structures until there are no more children to be found. A linked list of structures is created dynamically and is completely represented in the data structures themselves. Such hierarchical data presentation can guide users through otherwise complex setup procedures and it can be integrated within a variety of graphical representations.
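An illustrative analogue of this hierarchical parameter representation, written in Python rather than the program's ANSI C (the structure and names below are hypothetical, not the program's actual code):

```python
from dataclasses import dataclass, field

@dataclass
class ParamGroup:
    """A group of I/O parameters plus child groups spawned in response
    to particular input choices, mirroring the linked parent/child data
    structures described above."""
    name: str
    params: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

    def spawn(self, name, **params):
        # Create a child group; children can spawn further children
        child = ParamGroup(name, params)
        self.children.append(child)
        return child

    def walk(self):
        # Depth-first traversal: visit a group, then keep looking for
        # children until there are none left
        yield self
        for child in self.children:
            yield from child.walk()

sim = ParamGroup("simulation", {"solver": "implicit"})
mesh = sim.spawn("mesh", nx=100, ny=100)
mesh.spawn("refinement", levels=3)
print([g.name for g in sim.walk()])
```

The single recursive traversal corresponds to the "one line of code that looks for children until there are no more" mechanism the abstract describes.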
Computing the structural influence matrix for biological systems.
Giordano, Giulia; Cuba Samaniego, Christian; Franco, Elisa; Blanchini, Franco
2016-06-01
We consider the problem of identifying structural influences of external inputs on steady-state outputs in a biological network model. We speak of a structural influence if, upon a perturbation due to a constant input, the ensuing variation of the steady-state output value has the same sign as the input (positive influence), the opposite sign (negative influence), or is zero (perfect adaptation), for any feasible choice of the model parameters. All these signs and zeros can constitute a structural influence matrix, whose (i, j) entry indicates the sign of steady-state influence of the jth system variable on the ith variable (the output caused by an external persistent input applied to the jth variable). Each entry is structurally determinate if the sign does not depend on the choice of the parameters, but is indeterminate otherwise. In principle, determining the influence matrix requires exhaustive testing of the system steady-state behaviour in the widest range of parameter values. Here we show that, in a broad class of biological networks, the influence matrix can be evaluated with an algorithm that tests the system steady-state behaviour only at a finite number of points. This algorithm also allows us to assess the structural effect of any perturbation, such as variations of relevant parameters. Our method is applied to nontrivial models of biochemical reaction networks and population dynamics drawn from the literature, providing a parameter-free insight into the system dynamics.
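As a numerical illustration of the influence-matrix idea, one can probe the sign of the steady-state response across random parameter draws for a toy two-species negative-feedback network (a hypothetical model, not one from the paper); a sign pattern that never changes over the draws is a candidate structural influence:

```python
import numpy as np

rng = np.random.default_rng(2)

def steady_state_influence(trials=1000):
    """Sign of the steady-state response of each variable to a constant
    unit input on x1, for the toy linear network
        dx1/dt = -a*x1 - b*x2 + u,   dx2/dt = c*x1 - d*x2,
    sampled over random positive parameters a, b, c, d. Returns the set
    of observed sign patterns (one pattern -> structurally determinate)."""
    signs = set()
    for _ in range(trials):
        a, b, c, d = rng.uniform(0.1, 10.0, 4)
        A = np.array([[-a, -b], [c, -d]])
        # Steady state of A x + e1 * u = 0 with u = 1: solve A x = -e1
        x_star = np.linalg.solve(A, [-1.0, 0.0])
        signs.add(tuple(np.sign(x_star)))
    return signs

print(steady_state_influence())
```

For this particular network the response signs can also be verified analytically (both are d/(ad+bc) and c/(ad+bc), hence positive), so random sampling agrees with the structural result; the paper's contribution is an algorithm that certifies such signs with finitely many tests instead of exhaustive sampling.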
VISIR-I: small vessels, least-time nautical routes using wave forecasts
NASA Astrophysics Data System (ADS)
Mannarini, G.; Pinardi, N.; Coppini, G.; Oddo, P.; Iafrati, A.
2015-09-01
A new numerical model for the on-demand computation of optimal ship routes based on sea-state forecasts has been developed. The model, named VISIR (discoVerIng Safe and effIcient Routes), is designed to support decision-makers when planning a marine voyage. The first version of the system, VISIR-I, considers medium and small motor vessels with lengths of up to a few tens of meters and a displacement hull. The model is made up of three components: the route optimization algorithm, the mechanical model of the ship, and the environmental fields. The optimization algorithm is based on a graph-search method with time-dependent edge weights. The algorithm is also able to compute a voluntary ship speed reduction. The ship model accounts for calm water and added wave resistance by making use of just the principal particulars of the vessel as input parameters. The system also checks the optimal route for parametric roll, pure loss of stability, and surfriding/broaching-to hazard conditions. Significant wave height, wave spectrum peak period, and wave direction forecast fields are employed as inputs. Examples of VISIR-I routes in the Mediterranean Sea are provided. The optimal route may be longer in terms of miles sailed and yet faster and safer than the geodetic route between the same departure and arrival locations. Route diversions result from the safety constraints and the fact that the algorithm takes into account the full temporal evolution and spatial variability of the environmental fields.
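VISIR's optimizer is a graph search with time-dependent edge weights. A minimal earliest-arrival sketch under a FIFO assumption (not VISIR's actual implementation; the toy graph and travel-time functions are hypothetical) is:

```python
import heapq

def time_dependent_dijkstra(graph, source, target, t0=0.0):
    """Earliest-arrival search on a graph whose edge traversal times
    depend on departure time. Assumes FIFO edges: departing later never
    means arriving earlier, so label-setting search remains valid.
    graph[u] = [(v, travel_time_fn), ...], where travel_time_fn(t) is
    the crossing time when departing node u at time t."""
    best = {source: t0}
    heap = [(t0, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if u == target:
            return t
        if t > best.get(u, float("inf")):
            continue  # stale queue entry
        for v, tt in graph[u]:
            arrival = t + tt(t)
            if arrival < best.get(v, float("inf")):
                best[v] = arrival
                heapq.heappush(heap, (arrival, v))
    return float("inf")

# Toy example: the direct A->C edge slows down over time (say, rising
# sea state), so the longer A->B->C route arrives earlier.
graph = {
    "A": [("C", lambda t: 5.0 + 2.0 * t), ("B", lambda t: 1.0)],
    "B": [("C", lambda t: 2.0)],
    "C": [],
}
print(time_dependent_dijkstra(graph, "A", "C"))
```

This captures why an optimal route can be longer in miles yet faster: edge costs evolve with the forecast, so the cheapest path at departure time need not be the geodetic one.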
Specialized Environmental Chamber Test Complex: User Test Planning Guide
NASA Technical Reports Server (NTRS)
Montz, Michael E.
2011-01-01
Test process, milestones and inputs are unknowns to first-time users of the Specialized Environmental Test Complex. The User Test Planning Guide aids in establishing expectations for both NASA and non-NASA facility customers. The potential audience for this guide includes both internal and commercial spaceflight hardware/software developers. It is intended to assist their test engineering personnel in test planning and execution. Material covered includes a roadmap of the test process, roles and responsibilities of facility and user, major milestones, facility capabilities, and inputs required by the facility. Samples of deliverables, test article interfaces, and inputs necessary to define test scope, cost, and schedule are included as an appendix to the guide.
Fischer, M.; Kelley, A. M.; Ward, E. J.; ...
2017-02-03
Most research on bioenergy short rotation woody crops (SRWC) has been dedicated to the genera Populus and Salix. These species generally require relatively high-input culture, including intensive weed competition control, which increases costs and environmental externalities. Widespread native early successional species, characterized by high productivity and good coppicing ability, may be better adapted to local environmental stresses and therefore could offer alternative low-input bioenergy production systems. In order to test this concept, we established a three-year experiment comparing a widely-used hybrid poplar (Populus nigra × P. maximowiczii, clone ‘NM6’) to two native species, American sycamore (Platanus occidentalis L.) and tuliptree (Liriodendron tulipifera L.), grown under contrasting weed and pest control at a coastal plain site in eastern North Carolina, USA. Mean cumulative aboveground wood production was significantly greater in sycamore, with yields of 46.6 Mg ha-1 under high-input and 32.7 Mg ha-1 under low-input culture, the latter rivaling the high-input NM6 yield of 32.9 Mg ha-1. NM6 under low-input management provided a noncompetitive yield of 6.2 Mg ha-1. We also found that sycamore showed superiority in survival, biomass increment, weed resistance, treatment convergence, and within-stand uniformity; all are important characteristics for a bioenergy feedstock crop species, leading to reliable establishment and efficient biomass production. Poor performance in all traits was found for tuliptree, with a maximum yield of 1.2 Mg ha-1, suggesting this native species is a poor choice for SRWC. We conclude that careful species selection beyond the conventionally used genera may enhance reliability and decrease negative environmental impacts of the bioenergy biomass production sector.
NASA Astrophysics Data System (ADS)
Srinivas, Kadivendi; Vundavilli, Pandu R.; Manzoor Hussain, M.; Saiteja, M.
2016-09-01
Welding input parameters such as current, gas flow rate and torch angle play a significant role in determining the mechanical properties of a weld joint. Traditionally, the weld input parameters must be determined anew for every welded product to obtain a quality joint, which is time consuming. In the present work, the effect of plasma arc welding parameters on mild steel was studied using a neural network approach. To obtain a response equation governing the input-output relationships, conventional regression analysis was also performed. The experimental data were constructed based on a Taguchi design, and the training data required for the neural networks were randomly generated by varying the input variables within their respective ranges. The responses were calculated for each combination of input variables using the response equations obtained through the conventional regression analysis. The performance of a Levenberg-Marquardt back-propagation neural network and a radial basis neural network (RBNN) was compared on various randomly generated test cases distinct from the training cases. For these test cases, the RBNN gave better results than the feed-forward back-propagation network, and its performance advantage increased as the data points moved away from the initial input values.
NASA Astrophysics Data System (ADS)
vellaichamy, Lakshmanan; Paulraj, Sathiya
2018-02-01
Dissimilar joints of Incoloy 800HT and P91 steel were produced by the gas tungsten arc welding (GTAW) process using 9CrMoV-N filler material. This material combination is used in nuclear power plant and aerospace applications because Incoloy 800HT possesses good corrosion and oxidation resistance while P91 offers high-temperature strength and creep resistance. This work discusses multi-objective optimization of the process using grey relational analysis (GRA). The experiment was conducted according to an L9 orthogonal array, with welding current, voltage and welding speed as input parameters and tensile strength, hardness and toughness as output responses. GRA was used to optimize the input parameters against the multiple output variables. The optimal parameter combination was determined as A2B1C1, corresponding to a welding current of 120 A, a voltage of 16 V and a welding speed of 0.94 mm/s. The mechanical properties of the welds with the best and worst grey relational grades were validated against their metallurgical characteristics.
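The GRA ranking procedure can be made concrete. The L9 response values below are hypothetical placeholders, not the measured data; the steps (larger-the-better normalisation, grey relational coefficients with distinguishing coefficient zeta = 0.5, and the mean grade per run) follow the standard GRA recipe.

```python
import numpy as np

# Hypothetical L9 responses (tensile strength MPa, hardness HV, toughness J).
responses = np.array([
    [540, 210, 62], [565, 220, 58], [550, 205, 65],
    [580, 225, 60], [572, 215, 70], [555, 230, 55],
    [590, 218, 66], [560, 212, 61], [575, 222, 63],
], dtype=float)

# Step 1: larger-the-better normalisation of each response column to [0, 1].
norm = (responses - responses.min(axis=0)) / (responses.max(axis=0) - responses.min(axis=0))

# Step 2: deviation from the ideal (1.0) and grey relational coefficients (zeta = 0.5).
delta = 1.0 - norm
zeta = 0.5
coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# Step 3: grey relational grade = mean coefficient per run; the best run maximises it.
grade = coeff.mean(axis=1)
best_run = int(np.argmax(grade)) + 1
print(best_run)  # 7: run 7 has the highest grade for these illustrative numbers
```

The run with the top grade identifies the optimal parameter combination; with real measurements this is the step that yields A2B1C1 above.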
Calibration of discrete element model parameters: soybeans
NASA Astrophysics Data System (ADS)
Ghodki, Bhupendra M.; Patel, Manish; Namdeo, Rohit; Carpenter, Gopal
2018-05-01
Discrete element method (DEM) simulations are broadly used to get an insight of flow characteristics of granular materials in complex particulate systems. DEM input parameters for a model are the critical prerequisite for an efficient simulation. Thus, the present investigation aims to determine DEM input parameters for Hertz-Mindlin model using soybeans as a granular material. To achieve this aim, widely acceptable calibration approach was used having standard box-type apparatus. Further, qualitative and quantitative findings such as particle profile, height of kernels retaining the acrylic wall, and angle of repose of experiments and numerical simulations were compared to get the parameters. The calibrated set of DEM input parameters includes the following (a) material properties: particle geometric mean diameter (6.24 mm); spherical shape; particle density (1220 kg m^{-3} ), and (b) interaction parameters such as particle-particle: coefficient of restitution (0.17); coefficient of static friction (0.26); coefficient of rolling friction (0.08), and particle-wall: coefficient of restitution (0.35); coefficient of static friction (0.30); coefficient of rolling friction (0.08). The results may adequately be used to simulate particle scale mechanics (grain commingling, flow/motion, forces, etc) of soybeans in post-harvest machinery and devices.
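The calibration approach above amounts to searching the interaction-parameter space until the simulated bulk behaviour (e.g., the angle of repose) matches the measurement. In the sketch below the DEM run is replaced by a toy monotone response surface, and the "measured" angle is chosen so that the search recovers friction coefficients near the calibrated values reported above; all numbers beyond those reported values are illustrative assumptions, not real physics.

```python
import numpy as np

measured_angle = 25.58  # deg, hypothetical target angle of repose

def simulated_angle(static_friction, rolling_friction):
    """Stand-in for a DEM run: a toy monotone response surface, NOT real physics.
    In practice this call would launch the box-test simulation and measure the heap."""
    return 14.0 + 31.0 * static_friction + 44.0 * rolling_friction

# Grid search over the two particle-particle friction coefficients.
candidates = [(mu_s, mu_r)
              for mu_s in np.arange(0.10, 0.41, 0.02)
              for mu_r in np.arange(0.02, 0.13, 0.02)]
best = min(candidates,
           key=lambda c: abs(simulated_angle(*c) - measured_angle))
print(best)  # recovers mu_s ~ 0.26, mu_r ~ 0.08 for this toy surface
```

A real calibration would nest this search around actual simulations and also compare the qualitative observations (particle profile, wall retention height) mentioned above.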
Alpha1 LASSO data bundles Lamont, OK
Gustafson, William Jr; Vogelmann, Andrew; Endo, Satoshi; Toto, Tami; Xiao, Heng; Li, Zhijin; Cheng, Xiaoping; Krishna, Bhargavi (ORCID:000000018828528X)
2016-08-03
A data bundle is a unified package consisting of LASSO LES input and output, observations, evaluation diagnostics, and model skill scores. LES input includes model configuration information and forcing data. LES output includes profile statistics and full domain fields of cloud and environmental variables. Model evaluation data consists of LES output and ARM observations co-registered on the same grid and sampling frequency. Model performance is quantified by skill scores and diagnostics in terms of cloud and environmental variables.
AIRCRAFT REACTOR CONTROL SYSTEM APPLICABLE TO TURBOJET AND TURBOPROP POWER PLANTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorker, G.E.
1955-07-19
Control systems proposed for direct cycle nuclear powered aircraft commonly involve control of engine speed, nuclear energy input, and chemical energy input. A system in which these parameters are controlled by controlling the total energy input, the ratio of nuclear and chemical energy input, and the engine speed is proposed. The system is equally applicable to turbojet or turboprop applications. (auth)
NASA Technical Reports Server (NTRS)
Briggs, Maxwell; Schifer, Nicholas
2011-01-01
Test hardware was used to validate net heat prediction models. Problem: net heat input cannot be measured directly during operation, yet it is a key parameter in predicting convertor efficiency: Efficiency = Electrical Power Output (measured) / Net Heat Input (calculated). Efficiency is used to compare convertor designs and to trade technology advantages in mission planning.
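The efficiency relation quoted above is simple enough to state as code; the numbers in the example are illustrative only, not measurements from the described hardware.

```python
def convertor_efficiency(electrical_power_w, net_heat_input_w):
    """Efficiency = measured electrical output / calculated net heat input."""
    return electrical_power_w / net_heat_input_w

# Illustrative numbers only: 88 W measured output, 250 W calculated net heat input.
print(f"{convertor_efficiency(88.0, 250.0):.1%}")  # 35.2%
```

Because the denominator is calculated rather than measured, any bias in the net heat model propagates directly into the efficiency figure, which is why the validation hardware above matters.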
NASA Astrophysics Data System (ADS)
Raimonet, Mélanie; Guillou, Gaël; Mornet, Françoise; Richard, Pierre
2013-03-01
Although nitrogen stable isotope ratio (δ15N) in macroalgae is widely used as a bioindicator of anthropogenic nitrogen inputs to the coastal zone, recent studies suggest the possible role of macroalgae metabolism in δ15N variability. Simultaneous determinations of δ15N of dissolved inorganic nitrogen (DIN) along the land-sea continuum, inter-species variability of δ15N and its sensitivity to environmental factors are necessary to confirm the efficiency of macroalgae δ15N in monitoring nitrogen origin in mixed-use watersheds. In this study, δ15N of annual and perennial macroalgae (Ulva sp., Enteromorpha sp., Fucus vesiculosus and Fucus serratus) are compared to δ15N-DIN along the Charente Estuary, after characterizing δ15N of the three main DIN sources (i.e. cultivated area, pasture, sewage treatment plant outlet). During late winter and spring, when human activities produce high DIN inputs, DIN sources exhibit distinct δ15N signals in nitrate (NO3-) and ammonium (NH4+): cultivated area (+6.5 ± 0.6‰ and +9.0 ± 11.0‰), pasture (+9.2 ± 1.8‰ and +12.4‰) and sewage treatment plant discharge (+16.9 ± 8.7‰ and +25.4 ± 5.9‰). While sources show distinct δ15N values in this multiple-source catchment, the overall mixture of NO3- sources - generally >95% of DIN - leads to low variations of δ15N-NO3- at the mouth of the estuary (+7.7 to +8.4‰). Even if estuarine δ15N-NO3- values are not significantly different from pristine continental and oceanic sites (+7.3‰ and +7.4‰), macroalgae δ15N values are generally higher at the mouth of the estuary. This highlights high anthropogenic DIN inputs in the estuary, and an enhanced contribution of 15N-depleted NH4+ in oceanic waters.
Although seasonal variations in δ15N-NO3- are low, the same temporal trends in macroalgae δ15N values at estuarine and oceanic sites, and inter-species differences in δ15N values, suggest that macroalgae δ15N values might be modified by the metabolic response of macroalgae to environmental parameters (e.g., temperature, light, DIN concentrations). Differences between annual and perennial macroalgae indicate both a higher integration time of perennial compared to annual macroalgae and the possible role of passive versus active uptake mechanisms. Further studies are required to characterize the sensitivity of macroalgae fractionation to variable environmental conditions and uptake mechanisms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eckert-Gallup, Aubrey C.; Sallaberry, Cédric J.; Dallman, Ann R.; ...
2016-01-06
Environmental contours describing extreme sea states are generated as the input for numerical or physical model simulations as a part of the standard current practice for designing marine structures to survive extreme sea states. These environmental contours are characterized by combinations of significant wave height (Hs) and either energy period (Te) or peak period (Tp) values calculated for a given recurrence interval using a set of data based on hindcast simulations or buoy observations over a sufficient period of record. The use of the inverse first-order reliability method (I-FORM) is a standard design practice for generating environmental contours. This paper develops enhanced methodologies for data analysis prior to the application of the I-FORM, including the use of principal component analysis (PCA) to create an uncorrelated representation of the variables under consideration as well as new distribution and parameter fitting techniques. As a result, these modifications better represent the measured data and, therefore, should contribute to the development of more realistic representations of environmental contours of extreme sea states for determining design loads for marine structures.
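The PCA step described above can be sketched with synthetic data: rotating correlated (Hs, Te) samples onto their principal axes yields uncorrelated coordinates suitable for independent distribution fitting. The data-generating model below is an arbitrary stand-in for hindcast data, not a real sea-state climatology.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic, correlated sea-state data standing in for hindcast (Hs, Te) pairs.
hs = rng.gamma(shape=2.0, scale=1.2, size=5000)          # significant wave height, m
te = 4.0 + 1.5 * np.sqrt(hs) + rng.normal(0, 0.3, 5000)  # energy period, s
X = np.column_stack([hs, te])

# PCA via eigendecomposition of the sample covariance matrix.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
components = Xc @ eigvecs    # rotated, uncorrelated coordinates

corr_before = np.corrcoef(X, rowvar=False)[0, 1]
corr_after = np.corrcoef(components, rowvar=False)[0, 1]
print(f"correlation before: {corr_before:.2f}, after: {corr_after:.2e}")
```

Marginal distributions would then be fitted to each component and the I-FORM applied in the rotated space, before mapping the contour back to (Hs, Te).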
BUMPER: the Bayesian User-friendly Model for Palaeo-Environmental Reconstruction
NASA Astrophysics Data System (ADS)
Holden, Phil; Birks, John; Brooks, Steve; Bush, Mark; Hwang, Grace; Matthews-Bird, Frazer; Valencia, Bryan; van Woesik, Robert
2017-04-01
We describe the Bayesian User-friendly Model for Palaeo-Environmental Reconstruction (BUMPER), a Bayesian transfer function for inferring past climate and other environmental variables from microfossil assemblages. The principal motivation for a Bayesian approach is that the palaeoenvironment is treated probabilistically, and can be updated as additional data become available. Bayesian approaches therefore provide a reconstruction-specific quantification of the uncertainty in the data and in the model parameters. BUMPER is fully self-calibrating, straightforward to apply, and computationally fast, requiring 2 seconds to build a 100-taxon model from a 100-site training-set on a standard personal computer. We apply the model's probabilistic framework to generate thousands of artificial training-sets under ideal assumptions. We then use these to demonstrate both the general applicability of the model and the sensitivity of reconstructions to the characteristics of the training-set, considering assemblage richness, taxon tolerances, and the number of training sites. We demonstrate general applicability to real data, considering three different organism types (chironomids, diatoms, pollen) and different reconstructed variables. In all of these applications an identically configured model is used, the only change being the input files that provide the training-set environment and taxon-count data.
Discrete event simulation for exploring strategies: an urban water management case.
Huang, Dong-Bin; Scholz, Roland W; Gujer, Willi; Chitwood, Derek E; Loukopoulos, Peter; Schertenleib, Roland; Siegrist, Hansruedi
2007-02-01
This paper presents a model structure aimed at offering an overview of the various elements of a strategy and exploring their multidimensional effects through time in an efficient way. It treats a strategy as a set of discrete events planned to achieve a certain strategic goal and develops a new form of causal networks as an interfacing component between decision makers and environment models, e.g., life cycle inventory and material flow models. The causal network receives a strategic plan as input in a discrete manner and then outputs the updated parameter sets to the subsequent environmental models. Accordingly, the potential dynamic evolution of environmental systems caused by various strategies can be stepwise simulated. It enables a way to incorporate discontinuous change in models for environmental strategy analysis, and enhances the interpretability and extendibility of a complex model by its cellular constructs. It is exemplified using an urban water management case in Kunming, a major city in Southwest China. By utilizing the presented method, the case study modeled the cross-scale interdependencies of the urban drainage system and regional water balance systems, and evaluated the effectiveness of various strategies for improving the situation of Dianchi Lake.
Energy and emergy analysis of mixed crop-livestock farming
NASA Astrophysics Data System (ADS)
Kuczuk, Anna; Pospolita, Janusz; Wacław, Stefan
2017-10-01
This paper presents substance and energy balances of mixed crop-livestock farming over the period 2012-2015. The presentation covers the crops and their structure, the use of plants with a beneficial effect on soil, and the stocking density per 1 ha of agricultural land. The cumulative energy intensity of agricultural animal and plant production was determined, coupled with a discussion of the energy input required to produce a grain unit from plant and animal production. These data were compared with literature examples derived from intensive and organic production systems. The environmental impact of the farm was assessed by emergy analysis: emergy fluxes were determined for renewable and non-renewable sources, and several performance indicators were established, namely the Emergy Yield Ratio (EYR), the Environmental Loading Ratio (ELR) and the ratio of emergy from renewable sources. Their values were compared with the parameters characterizing other agricultural production patterns, and conclusions were drawn, in particular concerning the environmental sustainability of the production systems in the analyzed farm.
Acoustic environmental accuracy requirements for response determination
NASA Technical Reports Server (NTRS)
Pettitt, M. R.
1983-01-01
A general purpose computer program was developed for the prediction of vehicle interior noise. This program, named VIN, has both modal and statistical energy analysis capabilities for structural/acoustic interaction analysis. The analytic models and their computer implementation were verified through simple test cases with well-defined experimental results. The model was also applied in a space shuttle payload bay launch acoustics prediction study. The computer program processes large and small problems with equal efficiency because all arrays are dynamically sized by program input variables at run time. A data base is built and easily accessed for design studies. The data base significantly reduces the computational costs of such studies by allowing the reuse of the still-valid calculated parameters of previous iterations.
Fiedler, Thomas M; Ladd, Mark E; Bitz, Andreas K
2017-01-01
The purpose of this work was to perform an RF safety evaluation for a bilateral four-channel transmit/receive breast coil and to determine the maximum permissible input power for which RF exposure of the subject stays within recommended limits. The safety evaluation was done based on SAR as well as on temperature simulations. In comparison to SAR, temperature is more directly correlated with tissue damage, which allows a more precise safety assessment. The temperature simulations were performed by applying three different blood perfusion models as well as two different ambient temperatures. The goal was to evaluate whether the SAR and temperature distributions correlate inside the human body and whether SAR or temperature is more conservative with respect to the limits specified by the IEC. A simulation model was constructed including coil housing and MR environment. Lumped elements and feed networks were modeled by a network co-simulation. The model was validated by comparison of S-parameters and B1+ maps obtained in an anatomical phantom. Three numerical body models were generated based on 3 Tesla MRI images to conform to the coil housing. SAR calculations were performed and the maximum permissible input power was calculated based on IEC guidelines. Temperature simulations were performed based on the Pennes bioheat equation with the power absorption from the RF simulations as heat source. The blood perfusion was modeled as constant to reflect impaired patients as well as with a linear and exponential temperature-dependent increase to reflect two possible models for healthy subjects. Two ambient temperatures were considered to account for cooling effects from the environment. The simulation model was validated with a mean deviation of 3% between measurement and simulation results. The highest 10 g-averaged SAR was found in lung and muscle tissue on the right side of the upper torso. The maximum permissible input power was calculated to be 17 W.
The temperature simulations showed that the temperature maximums do not correlate well with the position of the SAR maximums in all considered cases. The body models with an exponential blood perfusion increase did not exceed the temperature limit when an RF power according to the SAR limit was applied; in this case, a higher input power level by up to 73% would be allowed. The models with a constant or linear perfusion exceeded the limit for the local temperature when the local SAR limit was adhered to and would require a decrease in the input power level by up to 62%. The maximum permissible input power was determined based on SAR simulations with three newly generated body models and compared with results from temperature simulations. While SAR calculations are state-of-the-art and well defined, as they are based on more or less well-known material parameters, temperature simulations depend strongly on additional material, environmental and physiological parameters. The simulations demonstrated that more consideration needs to be made by the MR community in defining the parameters for temperature simulations in order to apply temperature limits instead of SAR limits in the context of MR RF safety evaluations. © 2016 American Association of Physicists in Medicine.
Effect of Heat Input on Geometry of Austenitic Stainless Steel Weld Bead on Low Carbon Steel
NASA Astrophysics Data System (ADS)
Saha, Manas Kumar; Hazra, Ritesh; Mondal, Ajit; Das, Santanu
2018-05-01
Among different weld cladding processes, gas metal arc welding (GMAW) cladding becomes a cost effective, user friendly, versatile method for protecting the surface of relatively lower grade structural steels from corrosion and/or erosion wear by depositing high grade stainless steels onto them. The quality of cladding largely depends upon the bead geometry of the weldment deposited. Weld bead geometry parameters, like bead width, reinforcement height, depth of penetration, and ratios like reinforcement form factor (RFF) and penetration shape factor (PSF), determine the quality of the weld bead geometry. Various process parameters of gas metal arc welding like heat input, current, voltage, arc travel speed, mode of metal transfer, etc. influence formation of bead geometry. In the current experimental investigation, austenitic stainless steel (316) weld beads are formed on low alloy structural steel (E350) by GMAW using 100% CO2 as the shielding gas. Different combinations of current, voltage and arc travel speed are chosen so that heat input increases from 0.35 to 0.75 kJ/mm. Nine weld beads are deposited, each replicated twice. The observations show that weld bead width increases linearly with increase in heat input, whereas reinforcement height and depth of penetration do not increase with increase in heat input. Regression analysis is done to establish the relationship between heat input and different geometrical parameters of the weld bead. The regression models developed agree well with the experimental data. Within the domain of the present experiment, it is observed that at higher heat input the weld bead gets wider with little change in penetration and reinforcement; therefore, higher heat input may be recommended for austenitic stainless steel cladding on low alloy steel.
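The reported linear dependence of bead width on heat input can be illustrated with a least-squares fit. The (heat input, bead width) pairs below are hypothetical values spanning the stated 0.35-0.75 kJ/mm range, not the experimental measurements.

```python
import numpy as np

# Hypothetical (heat input kJ/mm, bead width mm) pairs illustrating a linear trend.
heat_input = np.array([0.35, 0.40, 0.45, 0.50, 0.55, 0.60, 0.65, 0.70, 0.75])
bead_width = np.array([6.1, 6.8, 7.3, 8.0, 8.6, 9.1, 9.9, 10.4, 11.0])

# Least-squares regression: width = a * heat_input + b.
a, b = np.polyfit(heat_input, bead_width, deg=1)
pred = a * heat_input + b
r2 = 1 - ((bead_width - pred) ** 2).sum() / ((bead_width - bead_width.mean()) ** 2).sum()
print(f"width = {a:.2f} * HI + {b:.2f}  (R^2 = {r2:.3f})")
```

The same fit would be repeated for reinforcement height and penetration, where the abstract reports no comparable linear increase (so a much lower R-squared would be expected).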
NASA Astrophysics Data System (ADS)
Engeland, Kolbjørn; Steinsland, Ingelin; Johansen, Stian Solvang; Petersen-Øverleir, Asgeir; Kolberg, Sjur
2016-05-01
In this study, we explore the effect of uncertainty and poor observation quality on hydrological model calibration and predictions. The Osali catchment in Western Norway was selected as case study and an elevation distributed HBV-model was used. We systematically evaluated the effect of accounting for uncertainty in parameters, precipitation input, temperature input and streamflow observations. For precipitation and temperature we accounted for the interpolation uncertainty, and for streamflow we accounted for rating curve uncertainty. Further, the effects of poorer quality of precipitation input and streamflow observations were explored. Less information about precipitation was obtained by excluding the nearest precipitation station from the analysis, while reduced information about the streamflow was obtained by omitting the highest and lowest streamflow observations when estimating the rating curve. The results showed that including uncertainty in the precipitation and temperature inputs has a negligible effect on the posterior distribution of parameters and for the Nash-Sutcliffe (NS) efficiency for the predicted flows, while the reliability and the continuous rank probability score (CRPS) improves. Less information in precipitation input resulted in a shift in the water balance parameter Pcorr, a model producing smoother streamflow predictions, giving poorer NS and CRPS, but higher reliability. The effect of calibrating the hydrological model using streamflow observations based on different rating curves is mainly seen as variability in the water balance parameter Pcorr. When evaluating predictions, the best evaluation scores were not achieved for the rating curve used for calibration, but for rating curves giving smoother streamflow observations. Less information in streamflow influenced the water balance parameter Pcorr, and increased the spread in evaluation scores by giving both better and worse scores.
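The Nash-Sutcliffe (NS) efficiency used above as an evaluation score has a compact definition worth stating; the streamflow numbers in the example are illustrative only.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1 is a perfect fit,
    0 means the model is no better than predicting the mean observed flow."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - ((observed - simulated) ** 2).sum() / ((observed - observed.mean()) ** 2).sum()

# Illustrative daily streamflows (m^3/s); a slightly smoothed simulation scores below 1.
obs = [12.0, 15.0, 30.0, 22.0, 14.0, 11.0]
sim = [13.0, 14.0, 26.0, 23.0, 15.0, 12.0]
print(round(nash_sutcliffe(obs, sim), 3))  # 0.921
```

Because the denominator is the variance of the observations, NS rewards matching high flows, which is one reason smoother predictions can score worse on NS while improving reliability, as reported above.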
NASA Astrophysics Data System (ADS)
Prescott, Aaron M.; Abel, Steven M.
2016-12-01
The rational design of network behavior is a central goal of synthetic biology. Here, we combine in silico evolution with nonlinear dimensionality reduction to redesign the responses of fixed-topology signaling networks and to characterize sets of kinetic parameters that underlie various input-output relations. We first consider the earliest part of the T cell receptor (TCR) signaling network and demonstrate that it can produce a variety of input-output relations (quantified as the level of TCR phosphorylation as a function of the characteristic TCR binding time). We utilize an evolutionary algorithm (EA) to identify sets of kinetic parameters that give rise to: (i) sigmoidal responses with the activation threshold varied over 6 orders of magnitude, (ii) a graded response, and (iii) an inverted response in which short TCR binding times lead to activation. We also consider a network with both positive and negative feedback and use the EA to evolve oscillatory responses with different periods in response to a change in input. For each targeted input-output relation, we conduct many independent runs of the EA and use nonlinear dimensionality reduction to embed the resulting data for each network in two dimensions. We then partition the results into groups and characterize constraints placed on the parameters by the different targeted response curves. Our approach provides a way (i) to guide the design of kinetic parameters of fixed-topology networks to generate novel input-output relations and (ii) to constrain ranges of biological parameters using experimental data. In the cases considered, the network topologies exhibit significant flexibility in generating alternative responses, with distinct patterns of kinetic rates emerging for different targeted responses.
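The in silico evolution described above can be sketched with a toy problem: a (mu + lambda) evolution strategy tuning two "kinetic" parameters of a sigmoidal input-output map toward a targeted response curve. The response function, parameter bounds, and EA settings are illustrative assumptions, not the paper's TCR network model.

```python
import numpy as np

rng = np.random.default_rng(3)

def response(params, x):
    """Toy input-output map: sigmoidal output vs. input x, shaped by (threshold, steepness)."""
    thr, steep = params
    return 1.0 / (1.0 + np.exp(-steep * (x - thr)))

x_grid = np.linspace(0, 10, 50)
target = response((6.0, 2.5), x_grid)      # targeted input-output relation

def fitness(params):
    # Negative mean squared deviation from the target curve (higher is better).
    return -np.mean((response(params, x_grid) - target) ** 2)

# (mu + lambda) evolution strategy over the two parameters.
pop = [rng.uniform([0.0, 0.1], [10.0, 5.0]) for _ in range(20)]
for _ in range(60):
    children = [p + rng.normal(0, 0.2, size=2) for p in pop]
    pop = sorted(pop + children, key=fitness, reverse=True)[:20]

best = pop[0]
print(best, fitness(best))  # best approaches thr ~ 6.0, steep ~ 2.5
```

Repeating many independent runs and embedding the surviving parameter sets in two dimensions, as the abstract describes, would then reveal the constraints each targeted response places on the kinetic parameters.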
Berre, David; Blancard, Stéphane; Boussemart, Jean-Philippe; Leleu, Hervé; Tillard, Emmanuel
2014-12-15
This study focused on the trade-off between milk production and its environmental impact on greenhouse gas (GHG) emissions and nitrogen surplus in a high input tropical system. We first identified the objectives of the three main stakeholders in the dairy sector (farmers, a milk cooperative and environmentalists). The main aim of the farmers and cooperative's scenarios was to increase milk production without additional environmental deterioration but with the possibility of increasing the inputs for the cooperative. The environmentalist's objective was to reduce environmental deterioration. Second, we designed a sustainable intensification scenario combining maximization of milk production and minimization of environmental impacts. Third, the objectives for reducing the eco-inefficiency of dairy systems in Reunion Island were incorporated in a framework for activity analysis, which was used to model a technological approach with desirable and undesirable outputs. Of the four scenarios, the sustainable intensification scenario produced the best results, with a potential decrease of 238 g CO2-e per liter of milk (i.e. a reduction of 13.93% compared to the current level) and a potential 7.72 L increase in milk produced for each kg of nitrogen surplus (i.e. an increase of 16.45% compared to the current level). These results were based on the best practices observed in Reunion Island and optimized manure management, crop-livestock interactions, and production processes. Our results also showed that frontier efficiency analysis can shed new light on the challenge of developing sustainable intensification in high input tropical dairy systems. Copyright © 2014 Elsevier Ltd. All rights reserved.
Identification of modal parameters including unmeasured forces and transient effects
NASA Astrophysics Data System (ADS)
Cauberghe, B.; Guillaume, P.; Verboven, P.; Parloo, E.
2003-08-01
In this paper, a frequency-domain method to estimate modal parameters from short data records with known (measured) input forces and unknown input forces is presented. The method can be used for an experimental modal analysis, an operational modal analysis (output-only data), and the combination of both. Traditional experimental and operational modal analyses in the frequency domain start from frequency response functions and spectral density functions, respectively. To estimate these functions accurately, sufficient data have to be available. The technique developed in this paper estimates the modal parameters directly from the Fourier spectra of the outputs and the known inputs. Instead of applying Hanning windows to these short data records, the transient effects are estimated simultaneously with the modal parameters. The method is illustrated, tested, and validated by Monte Carlo simulations and experiments. The presented method for processing short data sequences leads to unbiased estimates with a small variance in comparison to the more traditional approaches.
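The core idea of fitting modal parameters directly to Fourier spectra of a short record, with the transient absorbed into a numerator polynomial rather than suppressed by a window, can be sketched for a single mode. This is a toy Levy-linearised least-squares fit on a hypothetical 10 Hz, 2% damped free decay, not the authors' estimator.

```python
import numpy as np

# Short free-decay record of a 1-DOF system (hypothetical 10 Hz mode):
# its Fourier spectrum is a modal pole plus a first-order numerator that
# plays the role of the transient (initial-condition) term.
fs, n = 200.0, 600                           # only a 3 s record
wn, zeta = 2 * np.pi * 10, 0.02              # "true" modal parameters
t = np.arange(n) / fs
wd = wn * np.sqrt(1 - zeta**2)
y = np.exp(-zeta * wn * t) * np.sin(wd * t)

Y = np.fft.rfft(y) / fs                      # approximates the continuous FT
w = 2 * np.pi * np.fft.rfftfreq(n, 1 / fs)
band = (w > 2 * np.pi * 5) & (w < 2 * np.pi * 15)   # fit around the peak
Yb, wb = Y[band], w[band]

# Levy-linearised least squares:
#   Y * (wn^2 - w^2 + 2j*zeta*wn*w) = a + jw*b
#   -> [Y, jwY, -1, -jw] @ [wn^2, 2*zeta*wn, a, b]^T = w^2 * Y
A = np.column_stack([Yb, 1j * wb * Yb, -np.ones_like(Yb), -1j * wb])
rhs = wb**2 * Yb
p, *_ = np.linalg.lstsq(np.vstack([A.real, A.imag]),
                        np.concatenate([rhs.real, rhs.imag]), rcond=None)
wn_est = np.sqrt(p[0])
zeta_est = p[1] / (2 * wn_est)
```

Because the transient is carried by the free numerator `a + jw*b`, no window is needed and the short record does not bias the damping the way leakage through a Hanning window would.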
Modern control concepts in hydrology
NASA Technical Reports Server (NTRS)
Duong, N.; Johnson, G. R.; Winn, C. B.
1974-01-01
Two approaches to an identification problem in hydrology are presented based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part in the inputs, and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time-invariant or time-dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and conform with results from two previous studies: the first used numerical integration of the model equation along with a trial-and-error procedure, and the second a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are embedded in noise.
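The second approach (augmenting the state with the unknown parameter and applying a nonlinear estimator) can be sketched with a toy linear-reservoir rainfall-runoff model rather than the Prasad model; the reservoir constant, noise levels, and record length below are all invented.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 1.0, 300
k_true = 0.15
P = np.clip(rng.normal(2.0, 1.5, n), 0, None)     # synthetic rainfall series

# Simulate "true" linear-reservoir runoff Q = k*S with noisy observations
S = 0.0
Q_obs = np.empty(n)
for i in range(n):
    S += dt * (P[i] - k_true * S)
    Q_obs[i] = k_true * S + rng.normal(0, 0.05)

# Extended Kalman filter with augmented state x = [storage S, parameter k]
x = np.array([0.0, 0.05])                          # poor initial guess of k
Pcov = np.diag([1.0, 0.1])
Qn = np.diag([1e-4, 1e-6])                         # process noise (k drifts slowly)
Rn = 0.05**2
for i in range(n):
    # Predict: S <- S + dt*(P - k*S), k unchanged
    Sp = x[0] + dt * (P[i] - x[1] * x[0])
    F = np.array([[1 - dt * x[1], -dt * x[0]], [0, 1]])
    x = np.array([Sp, x[1]])
    Pcov = F @ Pcov @ F.T + Qn
    # Update with observation Q = k*S
    H = np.array([x[1], x[0]])
    Kg = Pcov @ H / (H @ Pcov @ H + Rn)
    x = x + Kg * (Q_obs[i] - x[1] * x[0])
    Pcov = Pcov - np.outer(Kg, H) @ Pcov
k_est = x[1]
```

Because the filter is sequential, the same loop handles time-dependent parameters: the process noise on `k` sets how quickly the estimate can track a drift.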
Eberts, S.M.; Böhlke, J.K.; Kauffman, L.J.; Jurgens, B.C.
2012-01-01
Environmental age tracers have been used in various ways to help assess vulnerability of drinking-water production wells to contamination. The most appropriate approach will depend on the information that is available and that which is desired. To understand how the well will respond to changing nonpoint-source contaminant inputs at the water table, some representation of the distribution of groundwater ages in the well is needed. Such information for production wells is sparse and difficult to obtain, especially in areas lacking detailed field studies. In this study, age distributions derived from detailed groundwater-flow models with advective particle tracking were compared with those generated from lumped-parameter models to examine conditions in which estimates from simpler, less resource-intensive lumped-parameter models could be used in place of estimates from particle-tracking models. In each of four contrasting hydrogeologic settings in the USA, particle-tracking and lumped-parameter models yielded roughly similar age distributions and largely indistinguishable contaminant trends when based on similar conceptual models and calibrated to similar tracer data. Although model calibrations and predictions were variably affected by tracer limitations and conceptual ambiguities, results illustrated the importance of full age distributions, rather than apparent tracer ages or model mean ages, for trend analysis and forecasting.
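A minimal sketch of the lumped-parameter idea: convolve an input history at the water table with an assumed groundwater age distribution to get the concentration trend at the well. The exponential (well-mixed) model, the 20-year mean age, and the ramp-then-ban input history below are all hypothetical, not values from the study.

```python
import numpy as np

dt = 0.5
tau = np.arange(0, 200, dt)                   # age axis (years)
tau_m = 20.0                                  # hypothetical mean age
g = np.exp(-tau / tau_m) / tau_m              # exponential (well-mixed) model

years = np.arange(1940, 2021, dt)
# Invented nonpoint-source input: linear rise 1940-1990, then banned
c_in = np.interp(years, [1940, 1990, 1991, 2020], [0, 100, 0, 0])

def c_well(t):
    # Convolve the input history with the age distribution of the well
    past = t - tau
    return np.sum(np.interp(past, years, c_in, left=0.0) * g) * dt

cw = np.array([c_well(t) for t in years])
lag = years[np.argmax(cw)] - years[np.argmax(c_in)]
```

The long exponential tail is why the full age distribution, rather than a single apparent age, matters for forecasting: decades after the input ceases, the well still delivers a substantial fraction of the peak concentration.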
Kumblad, L; Kautsky, U; Naeslund, B
2006-01-01
In safety assessments of nuclear facilities, a wide range of radioactive isotopes and their potential hazard to a large assortment of organisms and ecosystem types over long time scales need to be considered. Models used for these purposes have typically employed approaches based on generic reference organisms, stylised environments, and transfer functions for biological uptake based exclusively on bioconcentration factors (BCFs). These models are non-mechanistic and involve no representation of uptake and transport processes in the environment, which is a severe limitation when assessing real ecosystems. In this paper, ecosystem models are suggested as a method to include site-specific data and to facilitate the modelling of dynamic systems. An aquatic ecosystem model for the environmental transport of radionuclides is presented and discussed. With this model, driven and constrained by site-specific carbon dynamics and three radionuclide-specific mechanisms: (i) radionuclide uptake by plants, (ii) excretion by animals, and (iii) adsorption to organic surfaces, it was possible to estimate the radionuclide concentrations in all components of the modelled ecosystem with only two radionuclide-specific input parameters (BCF for plants and Kd). The importance of radionuclide-specific mechanisms for the exposure of organisms was examined, and probabilistic and sensitivity analyses were performed to assess the uncertainties related to ecosystem input parameters. Verification of the model suggests that it produces results comparable to empirically derived data for more than 20 different radionuclides.
Azorin-Lopez, Jorge; Fuster-Guillo, Andres; Saval-Calvo, Marcelo; Mora-Mora, Higinio; Garcia-Chamizo, Juan Manuel
2017-01-01
The use of visual information is a very well known input from different kinds of sensors. However, most perception problems are modeled and tackled individually. It is necessary to provide a general imaging model that allows us to parametrize different input systems as well as their problems and possible solutions. In this paper, we present an active vision model that considers the imaging system as a whole (including the camera, the lighting system, and the object to be perceived) in order to propose solutions to the perception problems of automated visual systems. As a concrete case study, we instantiate the model in a real application and still challenging problem: automated visual inspection. It is one of the most used quality control systems to detect defects on manufactured objects. However, it presents problems for specular products. We model these perception problems taking into account environmental conditions and camera parameters that allow a system to properly perceive the specific object characteristics to determine defects on surfaces. The validation of the model has been carried out using simulations, providing an efficient way to perform a large set of tests (different environmental conditions and camera parameters) as a step prior to experimentation in real manufacturing environments, which are more complex in terms of instrumentation and more expensive. Results prove the success of the model in adjusting scale, viewpoint, and lighting conditions to detect structural and color defects on specular surfaces. PMID:28640211
Jo, ByungWan; Khan, Rana Muhammad Asad
2018-01-01
The implementation of wireless sensor networks (WSNs) for monitoring the complex, dynamic, and harsh environment of underground coal mines (UCMs) is sought around the world to enhance safety. However, previously developed smart systems are limited to monitoring or, in a few cases, can report events. Therefore, this study introduces a reliable, efficient, and cost-effective internet of things (IoT) system for air quality monitoring with newly added features of assessment and pollutant prediction. This system is comprised of sensor modules, communication protocols, and a base station, running Azure Machine Learning (AML) Studio over it. Arduino-based sensor modules with eight different parameters were installed at separate locations of an operational UCM. Based on the sensed data, the proposed system assesses mine air quality in terms of the mine environment index (MEI). Principal component analysis (PCA) identified CH4, CO, SO2, and H2S as the most influencing gases significantly affecting mine air quality. The results of PCA were fed into the ANN model in AML studio, which enabled the prediction of MEI. An optimum number of neurons were determined for both actual input and PCA-based input parameters. The results showed a better performance of the PCA-based ANN for MEI prediction, with R2 and RMSE values of 0.6654 and 0.2104, respectively. Therefore, the proposed Arduino and AML-based system enhances mine environmental safety by quickly assessing and predicting mine air quality. PMID:29561777
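The PCA step that singles out a few gases as dominant, and the use of PCA scores as model inputs, can be sketched with synthetic data. Everything below is fabricated for illustration: four correlated "gas" channels drive a latent air-quality factor, and a linear least-squares fit stands in for the AML-based ANN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the eight mine-air parameters: four gases load on
# a latent "air quality" factor, the other four channels are noise.
n = 500
latent = rng.normal(size=n)
gases = latent[:, None] * np.array([0.9, 0.8, 0.7, 0.6]) + 0.1 * rng.normal(size=(n, 4))
other = rng.normal(size=(n, 4))
X = np.hstack([gases, other])
mei = 1 / (1 + np.exp(-latent)) + 0.05 * rng.normal(size=n)   # index in (0, 1)

# PCA via SVD on standardised inputs
Xs = (X - X.mean(0)) / X.std(0)
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
explained = s**2 / np.sum(s**2)     # variance explained per component
loadings = Vt[0]                    # PC1 loadings: largest for the gases
scores = Xs @ Vt[:2].T              # first two components as model inputs

# Least-squares fit of the index on the PCA scores (linear stand-in for the ANN)
A = np.column_stack([scores, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, mei, rcond=None)
r2 = 1 - np.sum((A @ coef - mei) ** 2) / np.sum((mei - mei.mean()) ** 2)
```

Feeding components instead of raw channels is the design choice reported in the abstract: the low-variance noise directions are discarded before the predictive model sees the data.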
Ecological Risk Assessment and Implications for Environmental Management
Environmental risk assessment was developed to provide scientific input to environmental decision making. Managers need to know the conditions in the environment, the causes of those conditions, the risks and benefits or alternative actions, and the actual outcomes of the impleme...
Transport spatial model for the definition of green routes for city logistics centers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pamučar, Dragan, E-mail: dpamucar@gmail.com; Gigović, Ljubomir, E-mail: gigoviclj@gmail.com; Ćirović, Goran, E-mail: cirovic@sezampro.rs
This paper presents a transport spatial decision support model (TSDSM) for carrying out the optimization of green routes for city logistics centers. The TSDSM is based on the integration of the multi-criteria method of Weighted Linear Combination (WLC) and a modified Dijkstra algorithm within a geographic information system (GIS), which is used for processing spatial data. The proposed model makes it possible to plan routes for green vehicles and maximize the positive effects on the environment, which can be seen in the reduction of harmful gas emissions and an increase in air quality in highly populated areas. The scheduling of delivery vehicles is posed as an optimization problem over parameters of the environment, health, use of space, and logistics operating costs. Each of these input parameters was thoroughly examined and broken down in the GIS into criteria that further describe it. The model takes into account the fact that logistics operators have a limited number of environmentally friendly (green) vehicles available. The TSDSM was tested on a network of roads with 127 links for the delivery of goods from the city logistics center to the user. The model supports any number of available environmentally friendly or environmentally unfriendly vehicles, consistent with the size of the network and the transportation requirements. - Highlights: • Model for routing light delivery vehicles in urban areas. • Optimization of green routes for city logistics centers. • The proposed model maximizes the positive effects on the environment. • The model was tested on a real network.
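The WLC-plus-Dijkstra combination can be sketched directly: each link carries normalised criterion scores, a weighted linear combination collapses them into one edge cost, and Dijkstra's algorithm finds the minimum-cost route. The tiny network, criterion names, and weights below are hypothetical, not the paper's 127-link network.

```python
import heapq

def wlc_cost(attrs, weights):
    # Weighted Linear Combination of normalised criteria
    # (emissions, population exposure, operating cost) into one edge cost.
    return sum(weights[k] * attrs[k] for k in weights)

def dijkstra(graph, src, dst, weights):
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, attrs in graph.get(u, {}).items():
            nd = d + wlc_cost(attrs, weights)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

# Toy network: all criteria pre-normalised to [0, 1] (values hypothetical)
graph = {
    "LC": {"A": {"emis": 0.2, "pop": 0.9, "cost": 0.3},
           "B": {"emis": 0.4, "pop": 0.1, "cost": 0.4}},
    "A":  {"D": {"emis": 0.3, "pop": 0.8, "cost": 0.2}},
    "B":  {"D": {"emis": 0.3, "pop": 0.2, "cost": 0.5}},
}
weights = {"emis": 0.4, "pop": 0.4, "cost": 0.2}
path, cost = dijkstra(graph, "LC", "D", weights)
```

With these weights the route avoids the densely populated links through "A" even though they are cheaper on other criteria; changing the weights changes the "greenness" of the chosen route.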
Assessing the sustainability of egg production systems in The Netherlands.
van Asselt, E D; van Bussel, L G J; van Horne, P; van der Voet, H; van der Heijden, G W A M; van der Fels-Klerx, H J
2015-08-01
Housing systems for laying hens have changed over the years due to increased public concern regarding animal welfare. In terms of sustainability, animal welfare is just one aspect that needs to be considered. Social aspects as well as environmental and economic factors need to be included as well. In this study, we assessed the sustainability of enriched cage, barn, free-range, and organic egg production systems following a predefined protocol. Indicators were selected within the social, environmental, and economic dimensions, after which parameter values and sustainability limits were set for the core indicators in order to quantify sustainability. Uncertainty in the parameter values as well as assigned weights and compensabilities of the indicators influenced the outcome of the sustainability assessment. Using equal weights for the indicators showed that, for the Dutch situation, enriched cage egg production was most sustainable, having the highest score on the environmental dimension, whereas free-range egg production gave the highest score in the social dimension (covering food safety, animal welfare, and human welfare). In the economic dimension both enriched cage egg and organic egg production had the highest sustainability score. When weights were attributed according to stakeholder outputs, individual differences were seen, but the overall scores were comparable to the sustainability scores based on equal weights. The provided method enabled a quantification of sustainability using input from stakeholders to include societal preferences in the overall assessment. Allowing for different weights and compensabilities helps policymakers in communicating with stakeholders involved and provides a weighted decision regarding future housing systems for laying hens. © 2015 Poultry Science Association Inc.
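The weight-sensitivity point in the abstract can be illustrated with a minimal weighted aggregation. The indicator scores and weight sets below are invented stand-ins (already normalised to sustainability limits), not the study's data; they merely show how the ranking flips when stakeholder weights shift.

```python
# Hypothetical normalised dimension scores per housing system (0 = worst
# sustainability limit, 1 = best); numbers are illustrative only.
systems = {
    "enriched cage": {"welfare": 0.5, "environment": 0.9, "economy": 0.8},
    "free range":    {"welfare": 0.9, "environment": 0.6, "economy": 0.6},
    "organic":       {"welfare": 0.8, "environment": 0.5, "economy": 0.8},
}

def score(indicators, weights):
    # Fully compensatory weighted-sum aggregation
    total = sum(weights.values())
    return sum(weights[k] * indicators[k] for k in weights) / total

equal = {"welfare": 1, "environment": 1, "economy": 1}
welfare_first = {"welfare": 3, "environment": 1, "economy": 1}

best_equal = max(systems, key=lambda s: score(systems[s], equal))
best_welfare = max(systems, key=lambda s: score(systems[s], welfare_first))
```

With equal weights the environmentally strong system wins; tripling the welfare weight hands the ranking to free range, mirroring the kind of stakeholder-dependent outcome the protocol is designed to expose. Limiting compensability (e.g. capping how much a high economic score can offset low welfare) would require a non-compensatory aggregation instead of this weighted sum.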
NASA Astrophysics Data System (ADS)
Wentworth, Mami Tonoe
Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models, and measurements, and to propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. 
We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through the direct numerical evaluation of the Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes formula. We also perform similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs. We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models. 
To accommodate the nonlinear input to output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of those techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affect the model response, and to reduce the dimension of the input space. The major difference between active subspace methods and parameter selection techniques is that parameter selection identifies influential parameters whereas subspace selection identifies a linear combination of parameters that impacts the model responses significantly. We employ active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.
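The active subspace idea described above can be sketched on a toy function: eigendecompose the gradient outer-product matrix C = E[∇f ∇fᵀ] sampled over the input distribution; the dominant eigenvector is the linear combination of inputs that drives the response. The four-parameter function below is invented (not the HIV model), with one near-noninfluential input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: response depends on inputs mainly through a @ x
a = np.array([1.0, 0.5, 0.0, 0.0])            # influential direction (assumed)

def f(x):
    return np.sin(a @ x) + 0.01 * x[3]        # x[3] nearly noninfluential

def grad(x, h=1e-6):
    # Central finite-difference gradient
    g = np.empty(4)
    for i in range(4):
        e = np.zeros(4)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# Monte Carlo estimate of C = E[grad f  grad f^T] over uniform inputs
X = rng.uniform(-1, 1, size=(500, 4))
G = np.array([grad(x) for x in X])
C = G.T @ G / len(X)
eigval, eigvec = np.linalg.eigh(C)
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]   # descending order
ratio = eigval[0] / eigval.sum()                 # energy in the first direction
direction = eigvec[:, 0]
```

Unlike parameter subset selection, which would flag x0 and x1 individually, the active subspace recovers the single combined direction `a` along which the response varies.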
NASA Astrophysics Data System (ADS)
Yadav, Vinod; Singh, Arbind Kumar; Dixit, Uday Shanker
2017-08-01
Flat rolling is one of the most widely used metal forming processes. For proper control and optimization of the process, modelling is essential, and modelling requires input data about material properties and friction. In batch production rolling with newer materials, it may be difficult to determine these input parameters offline. In view of this, the present work experimentally verifies a methodology to determine the parameters online from measurements of exit temperature and slip. It is observed that the inverse prediction of input parameters can be done with reasonable accuracy. Experiments also showed that there is a correlation between micro-hardness and flow stress of the material; however, the correlation between surface roughness and reduction is not as obvious.
The quality estimation of exterior wall’s and window filling’s construction design
NASA Astrophysics Data System (ADS)
Saltykov, Ivan; Bovsunovskaya, Maria
2017-10-01
The article introduces the term "artificial envelope" in residential construction. The authors propose a comprehensive multifactorial approach to estimating the design quality of external enclosing structures based on the impact of several parameters: functional, operational, cost, and environmental. A design quality index Qк is introduced as a composite characteristic of these parameters, and its mathematical dependence on them serves as the target function for design quality estimation. As an example, the article presents the search for an optimal wall and window design for small, medium, and large residential rooms in economy-class buildings. Graphs of the individual parameters of the target function are given for the three room sizes. The example leads to a choice of window opening dimensions for which the wall and window constructions properly satisfy the combined requirements. The authors compare the window area recommended by building standards with the area obtained by optimizing the design quality index. The multifactorial approach to finding optimal designs described in the article can be applied to other construction elements of residential buildings, taking into account the climatic, social, and economic features of the construction area.
NASA Astrophysics Data System (ADS)
Daneji, A.; Ali, M.; Pervaiz, S.
2018-04-01
Friction stir welding (FSW) is a form of solid-state welding process for joining metals, alloys, and selective composites. Over the years, FSW development has provided an improved way of producing welded joints, and consequently it has been accepted in numerous industries such as aerospace, automotive, rail, and marine. In FSW, the base metal properties control the material's plastic flow under the influence of a rotating tool, whereas the process and tool parameters play a vital role in the quality of the weld. In the current investigation, an array of square butt joints of 6061 aluminum alloy was welded under varying FSW process and tool geometry parameters, after which the resulting welds were evaluated for the corresponding mechanical properties and welding defects. The study incorporates process and tool parameters such as welding speed, pin height, and pin thread pitch as input parameters, while the weld quality related defects and mechanical properties are treated as output parameters. The experimentation paves the way to investigate the correlation between the inputs and the outputs, which was used as a tool to predict the optimized FSW process and tool parameters for a desired weld output of the base metals under investigation. The study also reflects on the effect of these parameters on welding defects such as wormholes.
40 CFR 75.82 - Monitoring of Hg mass emissions and heat input at common and multiple stacks.
Code of Federal Regulations, 2010 CFR
2010-07-01
... heat input at common and multiple stacks. 75.82 Section 75.82 Protection of Environment ENVIRONMENTAL... Provisions § 75.82 Monitoring of Hg mass emissions and heat input at common and multiple stacks. (a) Unit... systems and perform the Hg emission testing described under § 75.81(b). If reporting of the unit heat...
The effect of welding parameters on high-strength SMAW all-weld-metal. Part 1: AWS E11018-M
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vercesi, J.; Surian, E.
Three AWS A5.5-81 all-weld-metal test assemblies were welded with an E11018-M electrode from a standard production batch, varying the welding parameters so as to obtain three energy inputs: high heat input and high interpass temperature (hot), medium heat input and medium interpass temperature (medium), and low heat input and low interpass temperature (cold). Mechanical properties and metallographic studies were performed in the as-welded condition, and it was found that only the tensile properties obtained with the test specimen made with the intermediate energy input satisfied the AWS E11018-M requirements. With the cold specimen, the maximum yield strength was exceeded, and with the hot one, neither the minimum yield nor the minimum tensile strength was achieved. The elongation and the impact properties were high enough to fulfill the minimum requirements, but the best Charpy V-notch values were obtained with the intermediate energy input. Metallographic studies showed that as the energy input increased, the percentage of the columnar zones decreased, the grain size became larger, and in the as-welded zone there was a small increment of both acicular ferrite and ferrite with second phase, with a consequent decrease of primary ferrite. These results showed that this type of alloy is very sensitive to the welding parameters and that very precise instructions must be given to secure the desired tensile properties in the all-weld-metal test specimens and under actual working conditions.
NASA Technical Reports Server (NTRS)
Cross, P. L.
1994-01-01
Birefringent filters are often used as line-narrowing components in solid state lasers. The Birefringent Filter Model program generates a stand-alone model of a birefringent filter for use in designing and analyzing such a filter. It was originally developed to aid in the design of solid state lasers to be used on aircraft or spacecraft to perform remote sensing of the atmosphere. The model is general enough to allow the user to address problems such as temperature stability requirements, manufacturing tolerances, and alignment tolerances. The input parameters for the program are divided into 7 groups: 1) general parameters which refer to all elements of the filter; 2) wavelength related parameters; 3) filter, coating and orientation parameters; 4) input ray parameters; 5) output device specifications; 6) component related parameters; and 7) transmission profile parameters. The program can analyze a birefringent filter with up to 12 different components, and can calculate the transmission and summary parameters for multiple passes as well as a single pass through the filter. The Jones matrix, which is calculated from the input parameters of Groups 1 through 4, is used to calculate the transmission. Output files containing the calculated transmission or the calculated Jones matrix as a function of wavelength can be created. These output files can then be used as inputs for user-written programs, for example to plot the transmission or to calculate the eigen-transmittances and the corresponding eigen-polarizations of the Jones matrix. The Birefringent Filter Model is written in Microsoft FORTRAN 2.0. The program format is interactive. It was developed on an IBM PC XT equipped with an 8087 math coprocessor, and has a central memory requirement of approximately 154K. 
Since Microsoft FORTRAN 2.0 does not support complex arithmetic, matrix routines for addition, subtraction, and multiplication of complex, double precision variables are included. The Birefringent Filter Model was written in 1987.
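The Jones-matrix calculation at the heart of such a program can be sketched for the simplest case, a single Lyot-type stage (retarder at 45° between parallel polarizers). The plate thickness and birefringence below are hypothetical, not values from the program's documentation.

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def waveplate(delta):
    # Jones matrix of a retarder with phase retardation delta, fast axis at 0
    return np.array([[np.exp(-1j * delta / 2), 0],
                     [0, np.exp(1j * delta / 2)]])

def lyot_stage(wavelength, d, dn, axis=np.pi / 4):
    # Polarizer at 0 deg, retarder at 45 deg, polarizer at 0 deg.
    # delta = 2*pi * (birefringence * thickness) / wavelength
    delta = 2 * np.pi * dn * d / wavelength
    pol = np.array([[1, 0], [0, 0]])
    J = pol @ rot(axis) @ waveplate(delta) @ rot(-axis) @ pol
    e_out = J @ np.array([1.0, 0.0])
    return np.abs(e_out[0]) ** 2          # transmitted intensity

wl = np.linspace(500e-9, 600e-9, 1001)    # wavelength scan
T = np.array([lyot_stage(w, d=5e-3, dn=0.0092) for w in wl])
```

Matrix products of this form, one factor per filter component, are exactly what the program chains for up to 12 components; the single stage reduces analytically to cos²(δ/2), a useful check.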
NASA Astrophysics Data System (ADS)
Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.
2018-05-01
Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. 
The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
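The optimal-estimator decomposition can be sketched for a single input parameter, where the histogram technique is still accurate: the best possible model given input x is the conditional mean E[y|x], and the irreducible error is the variance left around it. The toy target below (a sine plus unobserved noise with variance 0.01) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target y depends on the model input x plus a part x cannot see:
# the irreducible error of ANY model y = f(x) is E[(y - E[y|x])^2].
n = 200_000
x = rng.uniform(0, 1, n)
hidden = rng.normal(0, 0.1, n)
y = np.sin(2 * np.pi * x) + hidden        # true irreducible error = 0.01

# Optimal estimator via histogram binning: conditional mean of y per x-bin
bins = np.linspace(0, 1, 51)
idx = np.clip(np.digitize(x, bins) - 1, 0, 49)
counts = np.bincount(idx, minlength=50)
cond_mean = np.bincount(idx, weights=y, minlength=50) / counts
irreducible = np.mean((y - cond_mean[idx]) ** 2)
```

Even here the estimate sits slightly above 0.01 because of within-bin variation of the true curve; with several input parameters this spurious histogram contribution grows rapidly, which is the paper's argument for neural networks or regression splines instead.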
Analysis of Artificial Neural Network in Erosion Modeling: A Case Study of Serang Watershed
NASA Astrophysics Data System (ADS)
Arif, N.; Danoedoro, P.; Hartono
2017-12-01
Erosion modeling is an important measuring tool for both land users and decision makers to evaluate land cultivation, and thus it is necessary to have a model that represents the actual reality. Erosion models are complex because of uncertain data with different sources and processing procedures. Artificial neural networks can be relied on for complex and non-linear data processing such as erosion data. The main difficulty in artificial neural network training is the determination of the value of each network input parameter, i.e. the number of hidden layers, the learning rate, the momentum, and the RMS error. This study tested the capability of artificial neural networks to predict erosion risk with several input parameters through multiple simulations to obtain good classification results. The model was implemented in the Serang Watershed, Kulonprogo, Yogyakarta, which is one of the critical potential watersheds in Indonesia. The simulation results showed that the number of iterations had a significant effect on the accuracy compared to the other parameters. A small number of iterations can produce good accuracy if the combination of the other parameters is right. In this case, one hidden layer was sufficient to produce good accuracy. The highest training accuracy achieved in this study was 99.32%, which occurred in the ANN 14 simulation with a combination of network input parameters of 1 HL; LR 0.01; M 0.5; RMS 0.0001, and 15,000 iterations. The ANN training accuracy was not influenced by the number of input channels, namely the input dataset (erosion factors), or the data dimensions; rather, it was determined by changes in the network parameters.
Chasin, Marshall; Russo, Frank A
2004-01-01
Historically, the primary concern in hearing aid design and fitting has been optimization for speech inputs. Increasingly, however, other types of inputs are being investigated, and this is certainly the case for music. Whether the hearing aid wearer is a musician or merely someone who likes to listen to music, the electronic and electro-acoustic parameters described can be optimized for music as well as for speech. That is, a hearing aid optimally set for music can be optimally set for speech, even though the converse is not necessarily true. Similarities and differences between speech and music as inputs to a hearing aid are described. Many of these lead to the specification of a set of optimal electro-acoustic characteristics. Parameters such as the peak input-limiting level, compression settings (both compression ratio and knee-points), and the number of channels can all deleteriously affect music perception through hearing aids. For other parameters, such as noise reduction and feedback control mechanisms, it is not clear how they should be set. Regardless of the existence of a "music program," unless the various electro-acoustic parameters are available in a hearing aid, music fidelity will almost always be less than optimal. There are many unanswered questions and hypotheses in this area. Future research by engineers, researchers, clinicians, and musicians will aid in the clarification of these questions and their ultimate solutions.
Systemic solutions for multi-benefit water and environmental management.
Everard, Mark; McInnes, Robert
2013-09-01
The environmental and financial costs of inputs to, and unintended consequences arising from narrow consideration of outputs from, water and environmental management technologies highlight the need for low-input solutions that optimise outcomes across multiple ecosystem services. Case studies examining the inputs and outputs associated with several ecosystem-based water and environmental management technologies reveal a range from those that differ little from conventional electro-mechanical engineering techniques through methods, such as integrated constructed wetlands (ICWs), designed explicitly as low-input systems optimising ecosystem service outcomes. All techniques present opportunities for further optimisation of outputs, and hence for greater cumulative public value. We define 'systemic solutions' as "…low-input technologies using natural processes to optimise benefits across the spectrum of ecosystem services and their beneficiaries". They contribute to sustainable development by averting unintended negative impacts and optimising benefits to all ecosystem service beneficiaries, increasing net economic value. Legacy legislation addressing issues in a fragmented way, associated 'ring-fenced' budgets and established management assumptions represent obstacles to implementing 'systemic solutions'. However, flexible implementation of legacy regulations recognising their primary purpose, rather than slavish adherence to detailed sub-clauses, may achieve greater overall public benefit through optimisation of outcomes across ecosystem services. Systemic solutions are not a panacea if applied merely as 'downstream' fixes, but are part of, and a means to accelerate, broader culture change towards more sustainable practice. 
This necessarily entails connecting a wider network of interests in the formulation and design of mutually-beneficial systemic solutions, including for example spatial planners, engineers, regulators, managers, farming and other businesses, and researchers working on ways to quantify and optimise delivery of ecosystem services. Copyright © 2013 Elsevier B.V. All rights reserved.
Determination of nitrogen balance in agroecosystems
USDA-ARS?s Scientific Manuscript database
Nitrogen balance in agroecosystems provides a quantitative framework of N inputs and outputs and retention in the soil that examine sustainability of agricultural productivity and soil and environmental quality. Nitrogen inputs include N additions from manures and fertilizers, atmospheric deposition...
Zeng, Xiaozheng; McGough, Robert J.
2009-01-01
The angular spectrum approach is evaluated for the simulation of focused ultrasound fields produced by large thermal therapy arrays. For an input pressure or normal particle velocity distribution in a plane, the angular spectrum approach rapidly computes the output pressure field in a three-dimensional volume. To determine the optimal combination of simulation parameters for angular spectrum calculations, the effect of the size, location, and the numerical accuracy of the input plane on the computed output pressure is evaluated. Simulation results demonstrate that angular spectrum calculations performed with an input pressure plane are more accurate than calculations with an input velocity plane. Results also indicate that when the input pressure plane is slightly larger than the array aperture and is located approximately one wavelength from the array, angular spectrum simulations have very small numerical errors for two-dimensional planar arrays. Furthermore, the root mean squared error from angular spectrum simulations asymptotically approaches a nonzero lower limit as the error in the input plane decreases. Overall, the angular spectrum approach is an accurate and robust method for thermal therapy simulations of large ultrasound phased arrays when the input pressure plane is computed with the fast nearfield method and an optimal combination of input parameters. PMID:19425640
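The core of the angular spectrum approach — decomposing the input plane into plane waves with a 2D FFT, advancing each component by its axial phase, and inverting — can be sketched as below. This is a minimal monochromatic sketch with invented grid and wavelength values, not the paper's therapy-array code.

```python
import numpy as np

def angular_spectrum_propagate(p0, dx, wavelength, z):
    """Propagate a sampled 2D pressure plane p0 (grid spacing dx) a distance z.
    Each plane-wave component is advanced by exp(i*kz*z); components with
    kx^2 + ky^2 > k^2 are evanescent and decay exponentially."""
    k = 2.0 * np.pi / wavelength
    fx = np.fft.fftfreq(p0.shape[0], d=dx)
    fy = np.fft.fftfreq(p0.shape[1], d=dx)
    kx, ky = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fy, indexing="ij")
    kz = np.sqrt((k ** 2 - kx ** 2 - ky ** 2).astype(complex))  # imaginary -> decay
    A = np.fft.fft2(p0)                      # angular spectrum of the input plane
    return np.fft.ifft2(A * np.exp(1j * kz * z))

# Gaussian pressure spot on a 64x64 grid, half-wavelength sampling (illustrative)
n, wavelength = 64, 1.5e-3                   # roughly 1 MHz in water
dx = wavelength / 2
xg = (np.arange(n) - n / 2) * dx
Xg, Yg = np.meshgrid(xg, xg, indexing="ij")
p0 = np.exp(-(Xg ** 2 + Yg ** 2) / (4 * dx) ** 2)
pz = angular_spectrum_propagate(p0, dx, wavelength, 10 * wavelength)
```

Evaluating this over a stack of z values gives the three-dimensional output volume the abstract refers to.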
USEEIO: a New and Transparent United States ...
National-scope environmental life cycle models of goods and services may be used for many purposes, not limited to quantifying impacts of production and consumption of nations, assessing organization-wide impacts, identifying purchasing hot spots, analyzing environmental impacts of policies, and performing streamlined life cycle assessment. USEEIO is a new environmentally extended input-output model of the United States fit for such purposes and other sustainable materials management applications. USEEIO melds data on economic transactions between 389 industry sectors with environmental data for these sectors covering land, water, energy and mineral usage and emissions of greenhouse gases, criteria air pollutants, nutrients and toxics, to build a life cycle model of 385 US goods and services. In comparison with existing US input-output models, USEEIO is more current with most data representing year 2013, more extensive in its coverage of resources and emissions, more deliberate and detailed in its interpretation and combination of data sources, and includes formal data quality evaluation and description. USEEIO was assembled with a new Python module called the IO Model Builder capable of assembling and calculating results of user-defined input-output models and exporting the models into LCA software. The model and data quality evaluation capabilities are demonstrated with an analysis of the environmental performance of an average hospital in the US. All USEEIO f
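The arithmetic at the heart of any environmentally extended input-output model is the Leontief inverse: total output x needed to satisfy final demand y is x = (I - A)^-1 y, and emission intensities scale that output into impacts. A toy 3-sector sketch (all coefficients invented for illustration, not USEEIO data):

```python
import numpy as np

# Toy 3-sector environmentally extended input-output (EEIO) model.
A = np.array([[0.1, 0.2, 0.0],     # inter-industry requirements: output of
              [0.3, 0.1, 0.2],     # sector i needed per dollar of sector j
              [0.0, 0.1, 0.1]])
b = np.array([0.5, 1.2, 0.3])      # emissions (kg CO2e) per dollar of output
y = np.array([100.0, 50.0, 25.0])  # final demand (dollars) per sector

L = np.linalg.inv(np.eye(3) - A)   # Leontief inverse: total requirements matrix
x = L @ y                          # total output needed to satisfy final demand
impacts = b * x                    # emissions attributable to each sector
```

Because L = I + A + A^2 + ... captures the full supply chain, total impacts b @ x always exceed the direct impacts b @ y of final production alone.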
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil
Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) in which random input parameters represent uncertainty in wind and solar energy. Existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the probability density function (PDF) method to derive a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
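The time-correlated (non-white) power input is what distinguishes this setting from standard white-noise SODE treatments. A Monte Carlo sketch of a toy generator driven by an Ornstein-Uhlenbeck power fluctuation — the kind of simulation the PDF solution would be validated against — looks like this (all coefficients are illustrative assumptions, not the paper's model):

```python
import numpy as np

def simulate_generator(n_paths=2000, T=10.0, dt=0.01,
                       theta=1.0, sigma=0.3, D=0.5, seed=1):
    """Euler-Maruyama Monte Carlo of a toy generator:
        d(omega) = (P - D*omega) dt,      dP = -theta*P dt + sigma dW,
    where P is an Ornstein-Uhlenbeck (time-correlated) power fluctuation
    with correlation time 1/theta, driving the frequency deviation omega."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    omega = np.zeros(n_paths)
    P = np.zeros(n_paths)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        omega += (P - D * omega) * dt      # generator responds to power input
        P += -theta * P * dt + sigma * dW  # correlated (colored) noise, not white
    return omega, P

omega, P = simulate_generator()
```

The ensemble of (omega, P) samples approximates the joint density that the PDF method computes deterministically.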
Explicit least squares system parameter identification for exact differential input/output models
NASA Technical Reports Server (NTRS)
Pearson, A. E.
1993-01-01
The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.
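The flavor of equation-error least squares can be illustrated on a first-order input/output model dy/dt + a*y = b*u. Note this sketch uses simple finite differences of simulated data rather than the paper's exact integration (which is precisely what avoids differentiating noisy data and handling initial conditions), so it is only a conceptual stand-in; the system and its parameters are invented.

```python
import numpy as np

# generate data from dy/dt = -a*y + b*u with known ground truth
a_true, b_true = 2.0, 3.0
dt = 0.001
t = np.arange(0.0, 5.0, dt)
u = np.sin(t)                               # known input signal
y = np.zeros_like(t)
for k in range(len(t) - 1):                 # Euler integration of the true system
    y[k + 1] = y[k] + dt * (-a_true * y[k] + b_true * u[k])

# equation error is linear in (a, b):  dy/dt = -a*y + b*u
# stack it as [-y  u] @ [a, b]^T = dy/dt and solve in one least-squares call
dydt = np.gradient(y, dt)
A = np.column_stack([-y, u])
a_est, b_est = np.linalg.lstsq(A, dydt, rcond=None)[0]
```

Because the model is linear in its parameters, no iterative search or initial guess is needed — the explicit solution is one linear solve.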
Femtosecond soliton source with fast and broad spectral tunability.
Masip, Martin E; Rieznik, A A; König, Pablo G; Grosz, Diego F; Bragas, Andrea V; Martinez, Oscar E
2009-03-15
We present a complete set of measurements and numerical simulations of a femtosecond soliton source with fast and broad spectral tunability and nearly constant pulse width and average power. Solitons generated in a photonic crystal fiber, at the low-power coupling regime, can be tuned in a broad range of wavelengths, from 850 to 1200 nm using the input power as the control parameter. These solitons keep almost constant time duration (approximately 40 fs) and spectral widths (approximately 20 nm) over the entire measured spectra regardless of input power. Our numerical simulations agree well with measurements and predict a wide working wavelength range and robustness to input parameters.
Uncertainty analyses of CO2 plume expansion subsequent to wellbore CO2 leakage into aquifers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Zhangshuan; Bacon, Diana H.; Engel, David W.
2014-08-01
In this study, we apply an uncertainty quantification (UQ) framework to CO2 sequestration problems. In one scenario, we look at the risk of wellbore leakage of CO2 into a shallow unconfined aquifer in an urban area; in another scenario, we study the effects of reservoir heterogeneity on CO2 migration. We combine various sampling approaches (quasi-Monte Carlo, probabilistic collocation, and adaptive sampling) in order to reduce the number of forward calculations while trying to fully explore the input parameter space and quantify the input uncertainty. The CO2 migration is simulated using the PNNL-developed simulator STOMP-CO2e (the water-salt-CO2 module). For computationally demanding simulations with 3D heterogeneity fields, we combined the framework with a scalable version module, eSTOMP, as the forward modeling simulator. We built response curves and response surfaces of model outputs with respect to input parameters, to look at the individual and combined effects, and identify and rank the significance of the input parameters.
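One standard way to "fully explore the input parameter space" with few forward runs is Latin hypercube sampling, a stratified cousin of quasi-Monte Carlo. A minimal numpy sketch (not PNNL's actual UQ framework; the parameter ranges are invented placeholders):

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=0):
    """One sample per stratum in every dimension: good space-filling
    coverage with far fewer runs than plain Monte Carlo.
    bounds: list of (low, high) per input parameter."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    strata = np.tile(np.arange(n_samples), (d, 1))          # d x n stratum ids
    strata = rng.permuted(strata, axis=1).T                 # shuffle per dimension
    u = (strata + rng.random((n_samples, d))) / n_samples   # jitter inside stratum
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    return lo + u * (hi - lo)

# e.g. log10-permeability and porosity ranges (purely illustrative values)
samples = latin_hypercube(100, [(-14.0, -11.0), (0.05, 0.35)])
```

Each row of `samples` would parameterize one forward simulation; the resulting outputs form the response surfaces described above.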
Jafari, Ramin; Chhabra, Shalini; Prince, Martin R; Wang, Yi; Spincemaille, Pascal
2018-04-01
To propose an efficient algorithm to perform dual input compartment modeling for generating perfusion maps in the liver. We implemented whole field-of-view linear least squares (LLS) to fit a delay-compensated dual-input single-compartment model to very high temporal resolution (four frames per second) contrast-enhanced 3D liver data, to calculate kinetic parameter maps. Using simulated data and experimental data in healthy subjects and patients, whole-field LLS was compared with the conventional voxel-wise nonlinear least-squares (NLLS) approach in terms of accuracy, performance, and computation time. Simulations showed good agreement between LLS and NLLS for a range of kinetic parameters. The whole-field LLS method allowed generating liver perfusion maps approximately 160-fold faster than voxel-wise NLLS, while obtaining similar perfusion parameters. Delay-compensated dual-input liver perfusion analysis using whole-field LLS allows generating perfusion maps with a considerable speedup compared with conventional voxel-wise NLLS fitting. Magn Reson Med 79:2415-2421, 2018. © 2017 International Society for Magnetic Resonance in Medicine. © 2017 International Society for Magnetic Resonance in Medicine.
FAST: Fitting and Assessment of Synthetic Templates
NASA Astrophysics Data System (ADS)
Kriek, Mariska; van Dokkum, Pieter G.; Labbé, Ivo; Franx, Marijn; Illingworth, Garth D.; Marchesini, Danilo; Quadri, Ryan F.; Aird, James; Coil, Alison L.; Georgakakis, Antonis
2018-03-01
FAST (Fitting and Assessment of Synthetic Templates) fits stellar population synthesis templates to broadband photometry and/or spectra. FAST is compatible with the photometric redshift code EAzY (ascl:1010.052) when fitting broadband photometry; it uses the photometric redshifts derived by EAzY, and the input files (for example, photometric catalog and master filter file) are the same. FAST fits spectra in combination with broadband photometric data points or simultaneously fits two components, allowing for an AGN contribution in addition to the host galaxy light. Depending on the input parameters, FAST outputs the best-fit redshift, age, dust content, star formation timescale, metallicity, stellar mass, star formation rate (SFR), and their confidence intervals. Though some of FAST's functions overlap with those of HYPERZ (ascl:1108.010), it differs by fitting fluxes instead of magnitudes, allows the user to completely define the grid of input stellar population parameters and easily input photometric redshifts and their confidence intervals, and calculates calibrated confidence intervals for all parameters. Note that FAST is not a photometric redshift code, though it can be used as one.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jung, Yoojin; Doughty, Christine
Input and output files used for fault characterization through numerical simulation using iTOUGH2. The synthetic data for the push period are generated by running a forward simulation (input parameters are provided in iTOUGH2 Brady GF6 Input Parameters.txt [InvExt6i.txt]). In general, the permeability of the fault gouge, damage zone, and matrix are assumed to be unknown. The input and output files are for the inversion scenario where only pressure transients are available at the monitoring well located 200 m above the injection well and only the fault gouge permeability is estimated. The input files are named InvExt6i, INPUT.tpl, FOFT.ins, CO2TAB, and the output files are InvExt6i.out, pest.fof, and pest.sav (names below are display names). The table graphic in the data files below summarizes the inversion results, and indicates the fault gouge permeability can be estimated even if imperfect guesses are used for matrix and damage zone permeabilities, and permeability anisotropy is not taken into account.
USEEIO: a New and Transparent United States Environmentally Extended Input-Output Model
National-scope environmental life cycle models of goods and services may be used for many purposes, not limited to quantifying impacts of production and consumption of nations, assessing organization-wide impacts, identifying purchasing hot spots, analyzing environmental impacts ...
Leurer, Klaus C; Brown, Colin
2008-04-01
This paper presents a model of acoustic wave propagation in unconsolidated marine sediment, including compaction, using a concept of a simplified sediment structure, modeled as a binary grain-size sphere pack. Compressional- and shear-wave velocities and attenuation follow from a combination of Biot's model, used as the general framework, and two viscoelastic extensions resulting in complex grain and frame moduli, respectively. An effective-grain model accounts for the viscoelasticity arising from local fluid flow in expandable clay minerals in clay-bearing sediments. A viscoelastic-contact model describes local fluid flow at the grain contacts. Porosity, density, and the structural Biot parameters (permeability, pore size, structure factor) as a function of pressure follow from the binary model, so that the remaining input parameters to the acoustic model consist solely of the mass fractions and the known mechanical properties of each constituent (e.g., carbonates, sand, clay, and expandable clay) of the sediment, effective pressure, or depth, and the environmental parameters (water depth, salinity, temperature). Velocity and attenuation as a function of pressure from the model are in good agreement with data on coarse- and fine-grained unconsolidated marine sediments.
Alsop, Eric B; Boyd, Eric S; Raymond, Jason
2014-05-28
The metabolic strategies employed by microbes inhabiting natural systems are, in large part, dictated by the physical and geochemical properties of the environment. This study sheds light on the complex relationship between biology and environmental geochemistry using forty-three metagenomes collected from geochemically diverse and globally distributed natural systems. It is widely hypothesized that many uncommonly measured geochemical parameters affect community dynamics, and this study leverages the development and application of multidimensional biogeochemical metrics to study correlations between geochemistry and microbial ecology. Analysis techniques such as a Markov cluster-based measure of the evolutionary distance between whole communities and a principal component analysis (PCA) of the geochemical gradients between environments allow correlations between microbial community dynamics and environmental geochemistry to be determined, and provide insight into which geochemical parameters most strongly influence microbial biodiversity. By progressively building from samples taken along well-defined geochemical gradients to samples widely dispersed in geochemical space, this study reveals strong links between the extent of taxonomic and functional diversification of resident communities and environmental geochemistry, and reveals temperature and pH as the primary factors that have shaped the evolution of these communities. Moreover, the inclusion of extensive geochemical data into the analyses reveals new links between geochemical parameters (e.g. oxygen and trace element availability) and the distribution and taxonomic diversification of communities at the functional level. Further, an overall geochemical gradient (from multivariate analyses) between natural systems provides one of the most complete predictions of microbial taxonomic and functional composition.
Clustering based on the frequency in which orthologous proteins occur among metagenomes facilitated accurate prediction of the ordering of community functional composition along geochemical gradients, despite a lack of geochemical input. The consistency in the results obtained from the application of Markov clustering and multivariate methods to distinct natural systems underscore their utility in predicting the functional potential of microbial communities within a natural system based on system geochemistry alone, allowing geochemical measurements to be used to predict purely biological metrics such as microbial community composition and metabolism.
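The multivariate side of this analysis — the PCA of geochemical gradients — reduces to an SVD of the standardized sample-by-parameter matrix. A minimal sketch with invented "geochemistry" (temperature, pH, and a sulfide variable that tracks temperature), not the study's data:

```python
import numpy as np

def pca(X, n_components=2):
    """PCA via SVD of the standardized data matrix (samples x parameters)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]   # sample coordinates
    loadings = Vt[:n_components]                      # parameter weights per axis
    explained = (s ** 2 / np.sum(s ** 2))[:n_components]
    return scores, loadings, explained

# invented samples: temperature and pH independent, sulfide correlated with T
rng = np.random.default_rng(0)
temp = rng.normal(60.0, 20.0, 200)
pH = rng.normal(6.0, 1.5, 200)
sulfide = 0.05 * temp + rng.normal(0.0, 0.05, 200)
X = np.column_stack([temp, pH, sulfide])
scores, loadings, explained = pca(X)
```

Because two of the three invented parameters co-vary, the first axis captures roughly two-thirds of the variance — the same logic by which the study's PCA exposes dominant geochemical gradients such as temperature and pH.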
Crops Models for Varying Environmental Conditions
NASA Technical Reports Server (NTRS)
Jones, Harry; Cavazzoni, James; Keas, Paul
2001-01-01
New variable environment Modified Energy Cascade (MEC) crop models were developed for all the Advanced Life Support (ALS) candidate crops and implemented in SIMULINK. The MEC models are based on the Volk, Bugbee, and Wheeler Energy Cascade (EC) model and are derived from more recent Top-Level Energy Cascade (TLEC) models. The MEC models simulate crop plant responses to day-to-day changes in photosynthetic photon flux, photoperiod, carbon dioxide level, temperature, and relative humidity. The original EC model allows changes in light energy but uses a less accurate linear approximation. The simulation outputs of the new MEC models for constant nominal environmental conditions are very similar to those of earlier EC models that use parameters produced by the TLEC models. There are a few differences. The new MEC models allow setting the time for seed emergence, have realistic exponential canopy growth, and have corrected harvest dates for potato and tomato. The new MEC models indicate that the maximum edible biomass per meter squared per day is produced at the maximum allowed carbon dioxide level, the nominal temperatures, and the maximum light input. Reducing the carbon dioxide level from the maximum to the minimum allowed in the model reduces crop production significantly. Increasing temperature decreases production more than it decreases the time to harvest, so productivity in edible biomass per meter squared per day is greater at nominal than at maximum temperatures. The productivity in edible biomass per meter squared per day is greatest at the maximum light energy input allowed in the model, but the edible biomass produced per light energy input unit is lower than at nominal light levels. Reducing light levels increases light and power use efficiency. The MEC models suggest we can adjust the light energy day-to-day to accommodate power shortages or use excess power while monitoring and controlling edible biomass production.
Ecophysiology of Crassulacean Acid Metabolism (CAM)
LÜTTGE, ULRICH
2004-01-01
• Background and Scope Crassulacean Acid Metabolism (CAM) as an ecophysiological modification of photosynthetic carbon acquisition has been reviewed extensively before. Cell biology, enzymology and the flow of carbon along various pathways and through various cellular compartments have been well documented and discussed. The present attempt at reviewing CAM once again tries to use a different approach, considering a wide range of inputs, receivers and outputs. • Input Input is given by a network of environmental parameters. Six major ones, CO2, H2O, light, temperature, nutrients and salinity, are considered in detail, which allows discussion of the effects of these factors, and combinations thereof, at the individual plant level (‘physiological aut‐ecology’). • Receivers Receivers of the environmental cues are the plant types: genotypes and phenotypes, the latter including morphotypes and physiotypes. CAM genotypes largely remain ‘black boxes’, and research endeavours of genomics, producing mutants and following molecular phylogeny, are just beginning. There is no special development of CAM morphotypes except for a strong tendency for leaf or stem succulence with large cells with big vacuoles and often, but not always, special water storage tissues. Various CAM physiotypes with differing degrees of CAM expression are well characterized. • Output Output is the shaping of habitats, ecosystems and communities by CAM. A number of systems are briefly surveyed, namely aquatic systems, deserts, salinas, savannas, restingas, various types of forests, inselbergs and paramós. • Conclusions While quantitative census data for CAM diversity and biomass are largely missing, intuition suggests that the larger CAM domains are those systems which are governed by a network of interacting stress factors requiring versatile responses and not systems where a single stress factor strongly prevails.
CAM is noted to be a strategy for variable, flexible and plastic niche occupation rather than lush productivity. ‘Physiological syn‐ecology’ reveals that phenotypic plasticity constitutes the ecophysiological advantage of CAM. PMID:15150072
Evaluation of trade influence on economic growth rate by computational intelligence approach
NASA Astrophysics Data System (ADS)
Sokolov-Mladenović, Svetlana; Milovančević, Milos; Mladenović, Igor
2017-01-01
This study analyzed the influence of trade parameters on economic growth forecasting accuracy. A computational intelligence method was used for the analysis, since such methods can handle highly nonlinear data. It is known that economic growth can be modeled based on different trade parameters. In this study, five input parameters were considered: trade in services, exports of goods and services, imports of goods and services, trade, and merchandise trade. All of these parameters were calculated as percentages of gross domestic product (GDP). The main goal was to determine which parameters have the greatest impact on the economic growth percentage. GDP was used as the economic growth indicator. Results show that imports of goods and services have the highest influence on economic growth forecasting accuracy.
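The ranking question — which of the five trade inputs most affects forecasting accuracy — can be posed as permutation importance: shuffle one input column and measure how much the prediction error rises. A sketch with synthetic data; both the data and the plain linear model are illustrative stand-ins for the study's computational intelligence method.

```python
import numpy as np

def permutation_importance(predict, X, y, seed=0):
    """Rise in MSE when each input column is independently shuffled.
    A large rise means the forecast depends heavily on that input."""
    rng = np.random.default_rng(seed)
    base = np.mean((predict(X) - y) ** 2)
    rise = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        rise.append(np.mean((predict(Xp) - y) ** 2) - base)
    return np.array(rise)

# five synthetic trade inputs; growth driven mostly by column 2 ('imports')
rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, (300, 5))
y = 0.2 * X[:, 0] + 1.5 * X[:, 2] + rng.normal(0.0, 0.1, 300)

# simple linear model fitted by least squares (with intercept)
w, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(300)]), y, rcond=None)
predict = lambda Z: np.column_stack([Z, np.ones(len(Z))]) @ w
importance = permutation_importance(predict, X, y)
```

In this synthetic setup the importance scores correctly single out the "imports" column, mirroring the study's finding for real trade data.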
NASA Astrophysics Data System (ADS)
Krenn, Julia; Zangerl, Christian; Mergili, Martin
2017-04-01
r.randomwalk is a GIS-based, multi-functional, conceptual open source model application for forward and backward analyses of the propagation of mass flows. It relies on a set of empirically derived, uncertain input parameters. In contrast to many other tools, r.randomwalk accepts input parameter ranges (or, in the case of two or more parameters, spaces) in order to directly account for these uncertainties. Parameter spaces make it possible to move away from discrete input values, which in most cases are likely to be off target. r.randomwalk automatically performs multiple calculations with various parameter combinations in a given parameter space, resulting in the impact indicator index (III), which denotes the fraction of parameter value combinations predicting an impact on a given pixel. Still, there is a need to constrain the parameter space used for a certain process type or magnitude prior to performing forward calculations. This can be done by optimizing the parameter space so as to bring the model results in line with well-documented past events. As most existing parameter optimization algorithms are designed for discrete values rather than for ranges or spaces, a new technique is needed. The present study aims at developing such a technique and at applying it to derive guiding parameter spaces for the forward calculation of rock avalanches through back-calculation of multiple events. In order to automate the work flow we have designed r.ranger, an optimization and sensitivity analysis tool for parameter spaces which can be directly coupled to r.randomwalk. With r.ranger we apply a nested approach where the total value range of each parameter is divided into various levels of subranges. All possible combinations of subranges of all parameters are tested for the performance of the associated pattern of III. Performance indicators are the area under the ROC curve (AUROC) and the factor of conservativeness (FoC).
This strategy is best demonstrated for two input parameters, but can be extended arbitrarily. We use a set of small rock avalanches from western Austria, and some larger ones from Canada and New Zealand, to optimize the basal friction coefficient and the mass-to-drag ratio of the two-parameter friction model implemented with r.randomwalk. Thereby we repeat the optimization procedure with conservative and non-conservative assumptions of a set of complementary parameters and with different raster cell sizes. Our preliminary results indicate that the model performance in terms of AUROC achieved with broad parameter spaces is hardly surpassed by the performance achieved with narrow parameter spaces. However, broad spaces may result in very conservative or very non-conservative predictions. Therefore, guiding parameter spaces have to be (i) broad enough to avoid the risk of being off target; and (ii) narrow enough to ensure a reasonable level of conservativeness of the results. The next steps will consist in (i) extending the study to other types of mass flow processes in order to support forward calculations using r.randomwalk; and (ii) in applying the same strategy to the more complex, dynamic model r.avaflow.
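The impact indicator index itself is simple to state in code: run the model once per parameter combination and count, per pixel, the fraction of runs predicting an impact. A one-dimensional toy with a single friction parameter and a trivial runout rule (both invented, far simpler than r.randomwalk's random-walk routing):

```python
import numpy as np

def impact_indicator_index(runout, param_grid, pixel_dist):
    """III per pixel: fraction of parameter combinations whose simulated
    runout reaches at least that pixel's distance from the release point."""
    hits = np.zeros_like(pixel_dist, dtype=float)
    for p in param_grid:
        hits += (pixel_dist <= runout(p))   # pixels within runout are impacted
    return hits / len(param_grid)

# toy runout rule: travel distance = drop height / friction coefficient
drop_height = 300.0                          # metres, invented
mu_space = np.linspace(0.15, 0.60, 10)       # friction parameter space
pixel_dist = np.linspace(0.0, 2500.0, 26)    # pixel distances along the path
iii = impact_indicator_index(lambda mu: drop_height / mu, mu_space, pixel_dist)
```

Pixels near the release point are hit under every parameter combination (III = 1), distant pixels under none (III = 0); narrowing or widening `mu_space` directly changes how conservative the III map is, which is the trade-off the optimization above addresses.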
Sensitivity of drainage efficiency of cranberry fields to edaphic conditions
NASA Astrophysics Data System (ADS)
Periard, Yann; José Gumiere, Silvio; Rousseau, Alain N.; Caron, Jean; Hallema, Dennis W.
2014-05-01
Water management on a cranberry farm requires intelligent irrigation and drainage strategies to sustain strong productivity and minimize environmental impact. For example, to avoid propagation of disease and meet evapotranspiration demand, it is imperative to maintain optimal moisture conditions in the root zone, which depends on an efficient drainage system. However, several drainage problems have been identified in cranberry fields. Most of these drainage problems are due to the presence of a restrictive layer in the soil profile (Gumiere et al., 2014). The objective of this work is to evaluate the effects of a restrictive layer on drainage efficiency by means of a multi-local sensitivity analysis. We tested the sensitivity of drainage efficiency to different input parameter sets of soil hydraulic properties, geometrical parameters and climatic conditions. Soil water flux dynamics for every input parameter set were simulated with the finite element model Hydrus 1D (Šimůnek et al., 2008). Multi-local sensitivity was calculated with the Gâteaux directional derivatives following the procedure described by Cheviron et al. (2010). Results indicate that drainage efficiency is more sensitive to soil hydraulic properties than to geometrical parameters and climatic conditions. Among the geometrical parameters, drainage efficiency is more sensitive to the depth of the restrictive layer than to its thickness, and it was very insensitive to the climatic conditions. Understanding the sensitivity of drainage efficiency to soil hydraulic properties, geometrical parameters and climatic conditions is essential for diagnosing drainage problems. However, it becomes important to identify the mechanisms involved in the genesis of anthropogenic cranberry soils in order to identify conditions that may lead to the formation of a restrictive layer. References: Cheviron, B., S.J. Gumiere, Y. Le Bissonnais, R. Moussa and D. Raclot. 2010. Sensitivity analysis of distributed erosion models: Framework.
Water Resources Research 46: W08508. doi:10.1029/2009WR007950. Gumiere, S.J., J. Lafond, D. W. Hallema, Y. Périard, J. Caron et J. Gallichand. 2014. Mapping soil hydraulic conductivity and matric potential for water management of cranberry: Characterization and spatial interpolation methods. Biosystems Engineering.
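The Gâteaux directional derivative used for the multi-local sensitivity analysis has a direct finite-difference analogue: perturb the parameter vector along a unit direction and difference the model output. A sketch with a stand-in model (the quadratic "model" and both parameters are purely illustrative, not a Hydrus 1D run):

```python
import numpy as np

def gateaux_derivative(model, p, direction, eps=1e-6):
    """Central-difference estimate of the Gateaux directional derivative of a
    scalar model output at parameter vector p, along a unit direction."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    p = np.asarray(p, dtype=float)
    return (model(p + eps * d) - model(p - eps * d)) / (2.0 * eps)

# stand-in 'drainage model': output depends quadratically on hydraulic
# conductivity (p[0]) and linearly on restrictive-layer depth (p[1])
model = lambda p: p[0] ** 2 + 3.0 * p[1]
sens_K = gateaux_derivative(model, [2.0, 1.0], [1.0, 0.0])
sens_depth = gateaux_derivative(model, [2.0, 1.0], [0.0, 1.0])
```

Evaluating such derivatives at many points of the input space ("multi-local") is what lets the analysis rank hydraulic, geometrical and climatic inputs against each other.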
Biogenic Emission Inventories: Scaling Local Biogenic Measurements to Regions
NASA Astrophysics Data System (ADS)
Lamb, B.; Pressley, S.; Westberg, H.; Guenther, A.
2002-12-01
Biogenic hydrocarbons, such as isoprene, are important trace gas species that are naturally emitted by vegetation and that affect the oxidative capacity of the atmosphere. Biogenic emissions are regulated by many environmental variables; the most important are thought to be temperature and light. Long-term isoprene flux measurements are useful for verifying existing canopy models and exploring other correlations between isoprene fluxes and environmental parameters. Biogenic emission models, such as BEIS (Biogenic Emission Inventory System), rely on above-canopy environmental parameters and below-canopy scaling factors to estimate canopy-scale biogenic hydrocarbon fluxes. Other, more complex models couple micrometeorological and physiological modules to represent the feedback mechanisms present in a canopy environment. These types of models can predict biogenic emissions well; however, the required input is extensive, and for regional applications they can be cumbersome. This paper presents analyses based on long-term isoprene flux measurements that have been collected since 1999 at the AmeriFlux site located at the University of Michigan Biological Station (UMBS) as part of the Program for Research on Oxidants: PHotochemistry, Emissions, and Transport (PROPHET). The goal of this research was to explore a potential relationship between the surface energy budget (primarily sensible heat flux) and isoprene emissions. Our hypothesis is that the surface energy flux is a better model parameter for isoprene emissions at the canopy scale than temperature and light levels, and that the link to the surface energy budget will provide a significant improvement in isoprene emission models. Preliminary results indicate a significant correlation between daily isoprene emissions and sensible heat fluxes for a predominantly aspen/oak stand located in northern Michigan.
Since surface energy budgets are an integral part of mesoscale meteorological models, this could potentially be a useful tool for including biogenic emissions into regional atmospheric models. Comparison of measured isoprene fluxes with current BEIS estimates will also be shown as an example of where emission inventories currently stand.
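BEIS-type inventories of the kind discussed above are commonly built on the Guenther et al. (1993) light and temperature activity factors. A minimal sketch of that canopy emission algorithm, using the widely cited default coefficients (the constants and function forms are assumptions drawn from that literature, not from this abstract):

```python
import math

# Guenther et al. (1993)-style activity factors commonly used in BEIS-type
# inventories; the coefficient values below are the widely cited defaults
# and are assumptions here, not taken from the abstract.
ALPHA, C_L1 = 0.0027, 1.066          # light-response constants
C_T1, C_T2 = 95000.0, 230000.0       # J/mol
T_S, T_M = 303.0, 314.0              # K (standard and optimum temperatures)
R = 8.314                            # J/(mol K)

def light_factor(par):
    """PAR in umol m-2 s-1 -> dimensionless light activity factor."""
    return ALPHA * C_L1 * par / math.sqrt(1.0 + ALPHA**2 * par**2)

def temperature_factor(t):
    """Leaf temperature in K -> dimensionless temperature activity factor."""
    num = math.exp(C_T1 * (t - T_S) / (R * T_S * t))
    den = 1.0 + math.exp(C_T2 * (t - T_M) / (R * T_S * t))
    return num / den

def isoprene_flux(basal_rate, par, t):
    """Canopy-scale flux = basal emission rate * gamma_L * gamma_T."""
    return basal_rate * light_factor(par) * temperature_factor(t)
```

The paper's hypothesis amounts to replacing the (PAR, temperature) pair driving `isoprene_flux` with a sensible-heat-flux predictor.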
SLS-SPEC-159 Cross-Program Design Specification for Natural Environments (DSNE) Revision E
NASA Technical Reports Server (NTRS)
Roberts, Barry C.
2017-01-01
The DSNE completes environment-related specifications for architecture, system-level, and lower-tier documents by specifying the ranges of environmental conditions that must be accounted for by NASA ESD Programs. To assure clarity and consistency, and to prevent requirements documents from becoming cluttered with extensive amounts of technical material, natural environment specifications have been compiled into this document. The intent is to keep a unified specification for natural environments that each Program calls out for appropriate application. This document defines the natural environments parameter limits (maximum and minimum values, energy spectra, or precise model inputs, assumptions, model options, etc.), for all ESD Programs. These environments are developed by the NASA Marshall Space Flight Center (MSFC) Natural Environments Branch (MSFC organization code: EV44). Many of the parameter limits are based on experience with previous programs, such as the Space Shuttle Program. The parameter limits contain no margin and are meant to be evaluated individually to ensure they are reasonable (i.e., do not apply unrealistic extreme-on-extreme conditions). The natural environments specifications in this document should be accounted for by robust design of the flight vehicle and support systems. However, it is understood that in some cases the Programs will find it more effective to account for portions of the environment ranges by operational mitigation or acceptance of risk in accordance with an appropriate program risk management plan and/or hazard analysis process. The DSNE is not intended as a definition of operational models or operational constraints, nor is it adequate, alone, for ground facilities which may have additional requirements (for example, building codes and local environmental constraints). "Natural environments," as the term is used here, refers to the environments that are not the result of intended human activity or intervention. 
It consists of a variety of external environmental factors (most of natural origin and a few of human origin) which impose restrictions or otherwise impact the development or operation of flight vehicles and destination surface systems.
Li, Tingting; Cheng, Zhengguo; Zhang, Le
2017-01-01
Since they can provide a natural and flexible description of the nonlinear dynamic behavior of complex systems, agent-based models (ABMs) have been commonly used for immune system simulation. However, it is crucial for an ABM to obtain an appropriate estimate of the key model parameters by incorporating experimental data. In this paper, a systematic procedure for immune system simulation, integrating an ABM and a regression method under the framework of history matching, is developed. A novel parameter estimation method that incorporates experimental data for the ABM simulator is proposed. First, we employ the ABM as a simulator of the immune system. Then, a dimension-reduced generalized additive model (GAM) is trained as a statistical regression model on the input and output data of the ABM and plays the role of an emulator during history matching. Next, we reduce the input parameter space by introducing an implausibility measure to discard implausible input values. Finally, the model parameters are estimated with the particle swarm optimization (PSO) algorithm by fitting the experimental data among the non-implausible input values. A real Influenza A Virus (IAV) data set is employed to demonstrate the performance of the proposed method, and the results show that it not only has good fitting and predictive accuracy but also favorable computational efficiency. PMID:29194393
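The implausibility-screening step described above can be sketched as follows; the quadratic emulator and all numeric values are stand-ins for the trained GAM, assumed purely for illustration:

```python
import numpy as np

# Implausibility screening as used in history matching: inputs whose emulator
# prediction lies too far from the observation (relative to the combined
# variances) are discarded. The emulator here is a stand-in quadratic; in the
# paper a GAM trained on ABM runs plays this role.
def implausibility(x, observed, emulator_mean, emulator_var, obs_var):
    pred = emulator_mean(x)
    return np.abs(observed - pred) / np.sqrt(emulator_var(x) + obs_var)

rng = np.random.default_rng(0)
candidates = rng.uniform(0.0, 2.0, size=1000)      # 1-D parameter space
observed = 1.0
em_mean = lambda x: x**2                           # stand-in emulator mean
em_var = lambda x: 0.01 * np.ones_like(x)          # stand-in emulator variance
obs_var = 0.01

imp = implausibility(candidates, observed, em_mean, em_var, obs_var)
non_implausible = candidates[imp < 3.0]            # the conventional cutoff
# surviving inputs cluster around x = 1, the value matching the observation
```

PSO would then search only within `non_implausible` for the best fit to the experimental data.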
Calculating the sensitivity of wind turbine loads to wind inputs using response surfaces
NASA Astrophysics Data System (ADS)
Rinker, Jennifer M.
2016-09-01
This paper presents a methodology to calculate wind turbine load sensitivities to turbulence parameters through the use of response surfaces. A response surface is a high-dimensional polynomial surface that can be calibrated to any set of input/output data and then used to generate synthetic data at a low computational cost. Sobol sensitivity indices (SIs) can then be calculated with relative ease using the calibrated response surface. The proposed methodology is demonstrated by calculating the total sensitivity of the maximum blade root bending moment of the WindPACT 5 MW reference model to four turbulence input parameters: a reference mean wind speed, a reference turbulence intensity, the Kaimal length scale, and a novel parameter reflecting the nonstationarity present in the inflow turbulence. The input/output data used to calibrate the response surface were generated for a previous project. The fit of the calibrated response surface is evaluated in terms of error between the model and the training data and in terms of convergence. The Sobol SIs are calculated using the calibrated response surface, and their convergence is examined. The Sobol SIs reveal that, of the four turbulence parameters examined in this paper, the variance caused by the Kaimal length scale and the nonstationarity parameter is negligible. Thus, the findings in this paper represent the first systematic evidence that stochastic wind turbine load response statistics can be modeled purely by mean wind speed and turbulence intensity.
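Once a response surface is calibrated, Sobol indices are cheap to estimate by Monte Carlo. A hedged sketch using the Saltelli pick-freeze estimator for first-order indices, with a toy polynomial standing in for the calibrated surface (both the estimator choice and the surrogate are assumptions, not the paper's exact procedure):

```python
import numpy as np

# First-order Sobol indices from a cheap surrogate via the pick-freeze
# (Saltelli-style) estimator. The surrogate is a toy polynomial standing in
# for a calibrated response surface of a turbine load.
def sobol_first_order(model, n, dim, rng):
    A = rng.uniform(size=(n, dim))
    B = rng.uniform(size=(n, dim))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    s = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]              # "freeze" column i from the B sample
        s[i] = np.mean(fB * (model(ABi) - fA)) / var
    return s

# toy surrogate: output dominated by x0 and x1, negligible x2 and x3,
# mirroring the finding that two of the four inputs carry the variance
surrogate = lambda x: 4 * x[:, 0] + 2 * x[:, 1] ** 2 + 0.01 * x[:, 2]
si = sobol_first_order(surrogate, 20000, 4, np.random.default_rng(1))
```

With this surrogate, `si[0]` and `si[1]` dominate while `si[2]` and `si[3]` are near zero, the same qualitative pattern the paper reports for mean wind speed and turbulence intensity versus the other two inputs.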
Meeting the challenge of food and energy security.
Karp, Angela; Richter, Goetz M
2011-06-01
Growing crops for bioenergy or biofuels is increasingly viewed as conflicting with food production. However, energy use continues to rise, and food production requires fuel inputs, which have increased with intensification. Focussing on the question of food or fuel is thus not helpful. The bigger, more pertinent, challenge is how the increasing demands for food and energy can be met in the future, particularly when water and land availability will be limited. Energy crop production systems differ greatly in environmental impact. The use of high-input food crops for liquid transport fuels (first-generation biofuels) needs to be phased out and replaced by the use of crop residues and low-input perennial crops (second/advanced-generation biofuels) with multiple environmental benefits. More research effort is needed to improve yields of biomass crops grown on lower grade land, and maximum value should be extracted through the exploitation of co-products and integrated biorefinery systems. Policy must continually emphasize the changes needed and tie incentives to improved greenhouse gas reduction and environmental performance of biofuels.
Flight Test of Orthogonal Square Wave Inputs for Hybrid-Wing-Body Parameter Estimation
NASA Technical Reports Server (NTRS)
Taylor, Brian R.; Ratnayake, Nalin A.
2011-01-01
As part of an effort to improve emissions, noise, and performance of next generation aircraft, it is expected that future aircraft will use distributed, multi-objective control effectors in a closed-loop flight control system. Correlation challenges associated with parameter estimation will arise with this expected aircraft configuration. The research presented in this paper focuses on addressing the correlation problem with an appropriate input design technique in order to determine individual control surface effectiveness. This technique was validated through flight-testing an 8.5-percent-scale hybrid-wing-body aircraft demonstrator at the NASA Dryden Flight Research Center (Edwards, California). An input design technique that uses mutually orthogonal square wave inputs for de-correlation of control surfaces is proposed. Flight-test results are compared with prior flight-test results for a different maneuver style.
Groundwater vulnerability to pollution mapping of Ranchi district using GIS
NASA Astrophysics Data System (ADS)
Krishna, R.; Iqbal, J.; Gorai, A. K.; Pathak, G.; Tuluri, F.; Tchounwou, P. B.
2015-12-01
Groundwater pollution due to anthropogenic activities is one of the major environmental problems in urban and industrial areas. The present study demonstrates an integrated approach with GIS and the DRASTIC model to derive a groundwater vulnerability to pollution map. The model considers seven hydrogeological factors [depth to water table (D), net recharge (R), aquifer media (A), soil media (S), topography or slope (T), impact of vadose zone (I) and hydraulic conductivity (C)] for generating the groundwater vulnerability to pollution map. The model was applied for assessing the groundwater vulnerability to pollution in Ranchi district, Jharkhand, India. The model was validated by comparing the model output (vulnerability indices) with the observed nitrate concentrations in groundwater in the study area. Nitrate was selected because the major sources of nitrate in groundwater are anthropogenic in nature. Groundwater samples were collected from 30 wells/tube wells distributed in the study area and analyzed in the laboratory for nitrate concentration. A sensitivity analysis of the integrated model was performed to evaluate the influence of individual parameters on the groundwater vulnerability index. New weights were computed for each input parameter to understand the influence of the individual hydrogeological factors on the vulnerability indices in the study area. The aquifer vulnerability maps generated in this study can be used for environmental planning and groundwater management.
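The DRASTIC index itself is a weighted sum of the seven factor ratings. A minimal sketch using the standard DRASTIC weights (the per-cell ratings below are illustrative values, not data from the study):

```python
# DRASTIC vulnerability index: weighted sum of rated hydrogeological factors.
# The weights are the standard DRASTIC weights; the ratings (1-10) for the
# example cell are illustrative, not data from the study.
WEIGHTS = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}

def drastic_index(ratings):
    """Sum of weight * rating over the seven factors."""
    return sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS)

cell = {"D": 7, "R": 6, "A": 8, "S": 6, "T": 10, "I": 8, "C": 4}
score = drastic_index(cell)   # 157 for this example cell
```

Higher indices mark cells where a contaminant introduced at the surface is more likely to reach the aquifer; sensitivity analysis amounts to recomputing the index with perturbed weights.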
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Wei; Chen, Gaoqiang; Chen, Jian
Reduced-activation ferritic/martensitic (RAFM) steels are an important class of structural materials for fusion reactor internals developed in recent years because of their improved irradiation resistance. However, they can suffer from welding-induced property degradation. In this paper, a solid-phase joining technology, friction stir welding (FSW), was adopted to join the RAFM steel Eurofer 97, and different FSW parameters/heat inputs were chosen to produce welds. FSW response parameters, joint microstructures and microhardness were investigated to reveal relationships among welding heat input, weld structure characterization and mechanical properties. In general, FSW heat input results in high hardness inside the stir zone, mostly due to a martensitic transformation. It is possible to produce friction stir welds similar to, but not exactly matching, base metal hardness when using low power input, because of other hardening mechanisms. Further, post-weld heat treatment (PWHT) is a very effective way to reduce FSW stir zone hardness values.
NASA Astrophysics Data System (ADS)
Haller, Julian; Wilkens, Volker
2012-11-01
For power levels up to 200 W and sonication times up to 60 s, the electrical power, the voltage and the electrical impedance (more exactly: the ratio of RMS voltage and RMS current) have been measured for a piezocomposite high intensity therapeutic ultrasound (HITU) transducer with integrated matching network, two piezoceramic HITU transducers with external matching networks and for a passive dummy 50 Ω load. The electrical power and the voltage were measured during high power application with an inline power meter and an RMS voltage meter, respectively, and the complex electrical impedance was indirectly measured with a current probe, a 100:1 voltage probe and a digital scope. The results clearly show that the input RMS voltage and the input RMS power change unequally during the application. Hence, the indication of only the electrical input power or only the voltage as the input parameter may not be sufficient for reliable characterizations of ultrasound transducers for high power applications in some cases.
Macroscopic singlet oxygen model incorporating photobleaching as an input parameter
NASA Astrophysics Data System (ADS)
Kim, Michele M.; Finlay, Jarod C.; Zhu, Timothy C.
2015-03-01
A macroscopic singlet oxygen model for photodynamic therapy (PDT) has been used extensively to calculate the reacted singlet oxygen concentration for various photosensitizers. The four photophysical parameters (ξ, σ, β, δ) and threshold singlet oxygen dose ([1O2]r,sh) can be found for various drugs and drug-light intervals using a fitting algorithm. The input parameters for this model include the fluence, photosensitizer concentration, optical properties, and necrosis radius. An additional input variable of photobleaching was implemented in this study to optimize the results. Photobleaching was measured by using the pre-PDT and post-PDT sensitizer concentrations. Using the RIF model of murine fibrosarcoma, mice were treated with a linear source with fluence rates from 12 to 150 mW/cm and total fluences from 24 to 135 J/cm. The two main drugs investigated were benzoporphyrin derivative monoacid ring A (BPD) and 2-[1-hexyloxyethyl]-2-devinyl pyropheophorbide-a (HPPH). Previously published photophysical parameters were fine-tuned and verified using photobleaching as the additional fitting parameter. Furthermore, photobleaching can be used as an indicator of the robustness of the model for the particular mouse experiment by comparing the experimental and model-calculated photobleaching ratio.
Gaussian beam profile shaping apparatus, method therefor and evaluation thereof
Dickey, Fred M.; Holswade, Scott C.; Romero, Louis A.
1999-01-01
A method and apparatus maps a Gaussian beam into a beam with a uniform irradiance profile by exploiting the Fourier transform properties of lenses. A phase element imparts a design phase onto an input beam and the output optical field from a lens is then the Fourier transform of the input beam and the phase function from the phase element. The phase element is selected in accordance with a dimensionless parameter which is dependent upon the radius of the incoming beam, the desired spot shape, the focal length of the lens and the wavelength of the input beam. This dimensionless parameter can also be used to evaluate the quality of a system. In order to control the radius of the incoming beam, optics such as a telescope can be employed. The size of the target spot and the focal length can be altered by exchanging the transform lens, but the dimensionless parameter will remain the same. The quality of the system, and hence the value of the dimensionless parameter, can be altered by exchanging the phase element. The dimensionless parameter provides design guidance, system evaluation, and indication as to how to improve a given system.
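The dimensionless parameter referred to is commonly written in the beam-shaping literature as beta = 2*sqrt(2)*pi*r0*y0/(f*lambda), with r0 the input beam radius, y0 the target spot half-width, f the focal length of the transform lens, and lambda the wavelength. A sketch (the formula is taken from that literature as an assumption; the numeric values are illustrative):

```python
import math

# Dimensionless shaping parameter for Gaussian-to-flattop beam shaping,
# commonly written beta = 2*sqrt(2)*pi*r0*y0 / (f*lambda). Larger beta
# generally permits a steeper-edged, flatter output profile. The numbers
# below are illustrative, not from the patent.
def beta(r0, y0, focal_length, wavelength):
    return 2 * math.sqrt(2) * math.pi * r0 * y0 / (focal_length * wavelength)

# 2 mm beam radius, 0.5 mm target half-width, 500 mm lens, 532 nm light
b = beta(r0=2e-3, y0=0.5e-3, focal_length=0.5, wavelength=532e-9)

# beta scales linearly with input beam radius: doubling r0 doubles beta,
# which is why a telescope controlling r0 changes the achievable quality
assert math.isclose(beta(4e-3, 0.5e-3, 0.5, 532e-9), 2 * b)
```

This matches the text's observation that exchanging the transform lens (changing f and y0 together) can leave beta unchanged, while resizing the input beam changes it.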
NASA Astrophysics Data System (ADS)
Naik, Deepak kumar; Maity, K. P.
2018-03-01
Plasma arc cutting (PAC) is a high-temperature thermal cutting process employed for cutting high-strength materials that are difficult to cut by any other manufacturing process. The process uses a highly energized plasma arc to cut any conducting material with better dimensional accuracy in less time. This research work presents the effect of the process parameters on the dimensional accuracy of the PAC process. The input process parameters selected were arc voltage, standoff distance and cutting speed. A rectangular plate of 304L stainless steel of 10 mm thickness was taken as the workpiece; stainless steel is a very extensively used material in manufacturing industries. Linear dimensions were measured following Taguchi's L16 orthogonal array design approach. Three levels were selected for each process parameter. In all experiments, the clockwise cut direction was followed. The results obtained through measurement were further analyzed. Analysis of variance (ANOVA) and analysis of means (ANOM) were performed to evaluate the effect of each process parameter. The ANOVA analysis reveals the effect of the input process parameters on the linear dimension along the X axis, and the results give the optimal process parameter settings for that dimension. The investigation clearly shows that a specific range of input process parameters achieves improved machinability.
Ramdani, Sofiane; Bonnet, Vincent; Tallon, Guillaume; Lagarde, Julien; Bernard, Pierre Louis; Blain, Hubert
2016-08-01
Entropy measures are often used to quantify the regularity of postural sway time series. Recent methodological developments have provided both multivariate and multiscale approaches allowing the extraction of complexity features from physiological signals; see "Dynamical complexity of human responses: A multivariate data-adaptive framework," Bulletin of the Polish Academy of Sciences: Technical Sciences, vol. 60, p. 433, 2012. The resulting entropy measures are good candidates for the analysis of bivariate postural sway signals exhibiting nonstationarity and multiscale properties. These methods are dependent on several input parameters, such as the embedding parameters. Using two data sets collected from institutionalized frail older adults, we numerically investigate the behavior of a recent multivariate and multiscale entropy estimator; see "Multivariate multiscale entropy: A tool for complexity analysis of multichannel data," Physical Review E, vol. 84, p. 061918, 2011. We propose criteria for the selection of the input parameters. Using these optimal parameters, we statistically compare the multivariate and multiscale entropy values of postural sway data of non-faller subjects to those of fallers. The two groups are discriminated by the resulting measures over multiple time scales. We also demonstrate that the typical parameter settings proposed in the literature lead to entropy measures that do not distinguish the two groups. This last result confirms the importance of selecting appropriate input parameters.
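The multiscale part of such estimators starts from coarse-graining each channel at every scale; the entropy itself (multivariate sample entropy) is then computed per scale. A minimal sketch of just the coarse-graining step, applied to a synthetic two-channel record standing in for bivariate sway data:

```python
import numpy as np

# Coarse-graining step underlying multiscale entropy: at scale s, each
# channel is replaced by non-overlapping window means of length s. The
# entropy estimator (multivariate sample entropy) is then applied per
# scale; only the coarse-graining is sketched here.
def coarse_grain(signal, scale):
    """signal: (n_samples, n_channels) array -> coarse-grained copy at `scale`."""
    n = signal.shape[0] // scale
    return signal[: n * scale].reshape(n, scale, -1).mean(axis=1)

rng = np.random.default_rng(0)
sway = rng.standard_normal((1000, 2))   # synthetic two-channel sway record
cg = coarse_grain(sway, 5)              # scale-5 series, shape (200, 2)
```

For white noise, averaging shrinks the variance at coarser scales, which is one reason entropy-versus-scale curves are informative about signal structure.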
EFFECTS OF CORRELATED PROBABILISTIC EXPOSURE MODEL INPUTS ON SIMULATED RESULTS
In recent years, more probabilistic models have been developed to quantify aggregate human exposures to environmental pollutants. The impact of correlation among inputs in these models is an important issue, which has not been resolved. Obtaining correlated data and implementi...
NASA Astrophysics Data System (ADS)
McCallum, James L.; Engdahl, Nicholas B.; Ginn, Timothy R.; Cook, Peter G.
2014-03-01
Residence time distributions (RTDs) have been used extensively for quantifying flow and transport in subsurface hydrology. In geochemical approaches, environmental tracer concentrations are used in conjunction with simple lumped parameter models (LPMs). Conversely, numerical simulation techniques require large amounts of parameterization, and the estimated RTDs are limited by the associated uncertainties. In this study, we apply a nonparametric deconvolution approach to estimate RTDs using environmental tracer concentrations. The model is based only on the assumption that flow is steady enough that the observed concentrations are well approximated by linear superposition of the input concentrations with the RTD; that is, the convolution integral holds. Even with large amounts of environmental tracer concentration data, the entire shape of an RTD remains highly nonunique. However, accurate estimates of mean ages and in some cases prediction of young portions of the RTD may be possible. The most useful type of data was found to be a time series of tritium, owing to the sharp variations in its atmospheric concentrations and its short half-life. Conversely, the use of CFC compounds with smoothly varying atmospheric concentrations was more prone to nonuniqueness. This work highlights the benefits and limitations of using environmental tracer data to estimate whole RTDs with either LPMs or numerical simulation. However, the ability of the nonparametric approach developed here to correct for mixing biases in mean ages appears promising.
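The convolution assumption that the deconvolution inverts can be sketched directly; for tritium the kernel also carries radioactive decay. All series and the exponential RTD below are synthetic stand-ins, not data from the study:

```python
import numpy as np

# Forward model the deconvolution approach inverts: observed tracer
# concentration = input history convolved with the RTD, with radioactive
# decay applied over the residence time (relevant for tritium).
def convolve_tracer(c_in, rtd, dt, half_life=None):
    decay = np.ones_like(rtd)
    if half_life is not None:
        tau = np.arange(len(rtd)) * dt
        decay = np.exp(-np.log(2) * tau / half_life)
    # discrete form of c_out(t) = integral of c_in(t - tau) g(tau) exp(-lambda tau) d tau
    return np.convolve(c_in, rtd * decay)[: len(c_in)] * dt

dt = 1.0                                  # years
t = np.arange(200) * dt
rtd = np.exp(-t / 20.0) / 20.0            # exponential LPM, mean age 20 yr
c_in = np.ones(200)                       # constant input concentration
c_out = convolve_tracer(c_in, rtd, dt)    # approaches the input level
```

With a normalized RTD and no decay, a constant input is reproduced at steady state; adding the tritium half-life (about 12.32 yr) lowers the output, which is the leverage that makes tritium time series informative.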
Modeling Input Errors to Improve Uncertainty Estimates for Sediment Transport Model Predictions
NASA Astrophysics Data System (ADS)
Jung, J. Y.; Niemann, J. D.; Greimann, B. P.
2016-12-01
Bayesian methods using Markov chain Monte Carlo algorithms have recently been applied to sediment transport models to assess the uncertainty in the model predictions due to the parameter values. Unfortunately, the existing approaches can only attribute overall uncertainty to the parameters. This limitation is critical because no model can produce accurate forecasts if forced with inaccurate input data, even if the model is well founded in physical theory. In this research, an existing Bayesian method is modified to consider the potential errors in input data during the uncertainty evaluation process. The input error is modeled using Gaussian distributions, and the means and standard deviations are treated as uncertain parameters. The proposed approach is tested by coupling it to the Sedimentation and River Hydraulics - One Dimension (SRH-1D) model and simulating a 23-km reach of the Tachia River in Taiwan. The Wu equation in SRH-1D is used for computing the transport capacity for a bed material load of non-cohesive material. Three types of input data are considered uncertain: (1) the input flowrate at the upstream boundary, (2) the water surface elevation at the downstream boundary, and (3) the water surface elevation at a hydraulic structure in the middle of the reach. The benefits of modeling the input errors in the uncertainty analysis are evaluated by comparing the accuracy of the most likely forecast and the coverage of the observed data by the credible intervals to those of the existing method. The results indicate that the internal boundary condition has the largest uncertainty among those considered. Overall, the uncertainty estimates from the new method are notably different from those of the existing method for both the calibration and forecast periods.
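The augmentation idea above, treating an input error as an extra sampled parameter with a Gaussian prior, can be sketched with a toy forward model and random-walk Metropolis. Everything here (the forward model, noise levels, step size) is a stand-in for SRH-1D and the paper's actual sampler:

```python
import numpy as np

# Sketch of the augmented-parameter idea: the bias of an uncertain input
# (e.g. the upstream flowrate) becomes an extra parameter with a Gaussian
# prior, sampled jointly with the model parameter k.
def log_posterior(theta, obs, forward, input_prior_sd):
    k, input_bias = theta
    pred = forward(k, input_bias)
    log_lik = -0.5 * np.sum((obs - pred) ** 2) / 0.1**2
    log_prior = -0.5 * (input_bias / input_prior_sd) ** 2
    return log_lik + log_prior

rng = np.random.default_rng(3)
forward = lambda k, b: k * (np.arange(5) + 1.0 + b)   # toy forward model
obs = forward(2.0, 0.3) + rng.normal(0, 0.05, 5)      # truth has input bias 0.3

theta = np.array([1.0, 0.0])
lp = log_posterior(theta, obs, forward, input_prior_sd=1.0)
samples = []
for _ in range(5000):                                 # random-walk Metropolis
    prop = theta + rng.normal(0, 0.05, 2)
    lp_prop = log_posterior(prop, obs, forward, 1.0)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
post = np.array(samples)[2500:]                       # discard burn-in
```

The posterior over `input_bias` then feeds directly into the credible intervals, which is how the method attributes part of the forecast uncertainty to the input data rather than to the model parameters.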
Pouplin, Samuel; Roche, Nicolas; Antoine, Jean-Yves; Vaugier, Isabelle; Pottier, Sandra; Figere, Marjorie; Bensmail, Djamel
2017-06-01
To determine whether activation of the frequency-of-use and automatic learning parameters of word prediction software has an impact on text input speed. Forty-five participants with cervical spinal cord injury between C4 and C8, AIS A or B, agreed to participate in this study. Participants were separated into two groups: a high-lesion group for participants with a lesion level at or above C5 (AIS A or B) and a low-lesion group for participants with a lesion between C6 and C8 (AIS A or B). A single evaluation session was carried out for each participant. Text input speed was evaluated during three copying tasks:
• without word prediction software (WITHOUT condition)
• with automatic learning of words and frequency of use deactivated (NOT_ACTIV condition)
• with automatic learning of words and frequency of use activated (ACTIV condition)
Results: Text input speed was significantly higher in the WITHOUT than the NOT_ACTIV (p < 0.001) or ACTIV conditions (p = 0.02) for participants with low lesions. Text input speed was significantly higher in the ACTIV than in the NOT_ACTIV (p = 0.002) or WITHOUT (p < 0.001) conditions for participants with high lesions. Use of word prediction software with frequency of use and automatic learning activated increased text input speed in participants with high-level tetraplegia. For participants with low-level tetraplegia, the use of word prediction software with frequency of use and automatic learning activated only decreased the number of errors. Implications for rehabilitation: Access to technology can be difficult for persons with disabilities such as cervical spinal cord injury (SCI). Several methods have been developed to increase text input speed, such as word prediction software. This study shows that a parameter of word prediction software (frequency of use) affected text input speed in persons with cervical SCI and differed according to the level of the lesion.
• For persons with high-level lesions, our results suggest that this parameter must be activated so that text input speed is increased.
• For persons in the low-lesion group, this parameter must be activated so that the number of errors is decreased.
• In all cases, activation of the frequency-of-use parameter is essential to improve the efficiency of the word prediction software.
• Health-related professionals should use these results in their clinical practice for better results and therefore better patient satisfaction.
Environmental Alterations of Epigenetics Prior to the Birth
Lo, Chiao-Ling; Zhou, Feng C.
2014-01-01
The etiology of many brain diseases remains elusive to date, even after intensive investigation of genomic background and symptomatology from the day of birth. Emerging evidence indicates that a third factor, epigenetics prior to birth, can exert profound influence on the development and functioning of the brain and on many neurodevelopmental syndromes. This chapter reviews how aversive environmental exposure of parents might predispose or increase the vulnerability of offspring to neurodevelopmental deficits through alteration of epigenetics. These epigenetics-altering environmental factors are discussed in the categories of addictive agents, nutrition or diet, prescription medicine, environmental pollutants, and stress. Epigenetic alterations induced by these aversive environmental factors cover all aspects of epigenetics, including DNA methylation, histone modification, non-coding RNA, and chromatin modification. Next, the mechanisms by which these environmental inputs influence epigenetics are discussed. Finally, how environmentally altered epigenetic marks affect neurodevelopment is exemplified by alcohol-induced fetal alcohol syndrome. It is hoped that a thorough understanding of the nature of prenatal epigenetic inputs will enable researchers to better unravel neurodevelopmental deficits, late-onset neuropsychiatric diseases, and idiosyncratic mental disorders. PMID:25131541
Beekhuizen, Johan; Heuvelink, Gerard B M; Huss, Anke; Bürgi, Alfred; Kromhout, Hans; Vermeulen, Roel
2014-11-01
With the increased availability of spatial data and computing power, spatial prediction approaches have become a standard tool for exposure assessment in environmental epidemiology. However, such models are largely dependent on accurate input data. Uncertainties in the input data can therefore have a large effect on model predictions, but are rarely quantified. With Monte Carlo simulation we assessed the effect of input uncertainty on the prediction of radio-frequency electromagnetic fields (RF-EMF) from mobile phone base stations at 252 receptor sites in Amsterdam, The Netherlands. The impact on ranking and classification was determined by computing the Spearman correlations and weighted Cohen's Kappas (based on tertiles of the RF-EMF exposure distribution) between modelled values and RF-EMF measurements performed at the receptor sites. The uncertainty in modelled RF-EMF levels was large with a median coefficient of variation of 1.5. Uncertainty in receptor site height, building damping and building height contributed most to model output uncertainty. For exposure ranking and classification, the heights of buildings and receptor sites were the most important sources of uncertainty, followed by building damping, antenna- and site location. Uncertainty in antenna power, tilt, height and direction had a smaller impact on model performance. We quantified the effect of input data uncertainty on the prediction accuracy of an RF-EMF environmental exposure model, thereby identifying the most important sources of uncertainty and estimating the total uncertainty stemming from potential errors in the input data. This approach can be used to optimize the model and better interpret model output. Copyright © 2014 Elsevier Inc. All rights reserved.
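A minimal sketch of this kind of Monte Carlo input-uncertainty propagation, using a toy exposure model and made-up error distributions rather than the paper's RF-EMF propagation model; the coefficient of variation summarizes output uncertainty, and Spearman correlation scores rank agreement:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

def exposure_model(height, damping, distance):
    # Toy stand-in for a path-loss style model (not the paper's model):
    # field falls off with distance and building damping, rises with height.
    return height / (distance * (1.0 + damping))

# Nominal inputs at 3 hypothetical receptor sites
height = np.array([5.0, 15.0, 30.0])
damping = np.array([0.2, 0.5, 0.1])
distance = np.array([100.0, 250.0, 60.0])

n = 10_000
# Perturb each input with assumed (purely illustrative) error distributions
h = height * rng.lognormal(0.0, 0.1, (n, 3))
d = np.clip(damping + rng.normal(0.0, 0.05, (n, 3)), 0, None)
r = distance * rng.lognormal(0.0, 0.05, (n, 3))
sims = exposure_model(h, d, r)

# Coefficient of variation per site summarizes output uncertainty
cv = sims.std(axis=0) / sims.mean(axis=0)
print("CV per site:", np.round(cv, 3))

# Rank agreement of individual Monte Carlo draws with the nominal prediction
nominal = exposure_model(height, damping, distance)
rho = np.mean([spearmanr(sims[i], nominal)[0] for i in range(200)])
print("mean Spearman rho vs nominal:", round(rho, 3))
```

In the paper the same idea is run per input source (heights, damping, locations, antenna settings) to rank their contributions to output uncertainty.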
Real-time Ensemble Forecasting of Coronal Mass Ejections using the WSA-ENLIL+Cone Model
NASA Astrophysics Data System (ADS)
Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; MacNeice, P. J.; Rastaetter, L.; Kuznetsova, M. M.; Odstrcil, D.
2013-12-01
Ensemble forecasting of coronal mass ejections (CMEs) provides significant information in that it estimates the spread, or uncertainty, in CME arrival-time predictions arising from uncertainties in determining CME input parameters. Ensemble modeling of CME propagation in the heliosphere is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL+Cone model available at the Community Coordinated Modeling Center (CCMC). SWRC is an in-house research-based operations team at the CCMC which provides interplanetary space weather forecasting for NASA's robotic missions and performs real-time model validation. A distribution of n (routinely n=48) CME input parameters is generated using the CCMC Stereo CME Analysis Tool (StereoCAT), which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations, yielding an ensemble of solar wind parameters at various locations of interest (satellites or planets), including a probability distribution of CME shock arrival times (for hits) and geomagnetic storm strength (for Earth-directed hits). Ensemble simulations have been performed experimentally in real time at the CCMC since January 2013. We present the results of ensemble simulations for a total of 15 CME events, 10 of which were performed in real time. The observed CME arrival was within the range of ensemble arrival-time predictions for 5 of the 12 ensemble runs containing hits. For each of the twelve ensembles predicting hits, the average arrival-time prediction was compared with the actual arrival time, yielding an average absolute error of 8.20 hours across the twelve ensembles, which is comparable to current forecasting errors. Considerations for the accuracy of ensemble CME arrival-time predictions include the initial distribution of CME input parameters, particularly its mean and spread.
When the observed arrival is not within the predicted range, the result still allows prediction errors caused by the tested CME input parameters to be ruled out. Prediction errors can also arise from ambient model parameters, such as the accuracy of the solar wind background, and other limitations. Additionally, the ensemble modeling setup was used to complete a parametric event case study of the sensitivity of the CME arrival-time prediction to free parameters of the ambient solar wind model and the CME.
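The arrival-time statistics described (whether the observed arrival falls inside the ensemble range, and the absolute error of the ensemble-average prediction, as in the quoted 8.20-hour figure) reduce to a few lines; the arrival times below are hypothetical:

```python
import numpy as np

# Hypothetical ensemble of predicted CME shock arrival times for one event,
# in hours relative to an arbitrary epoch (illustrative numbers only).
predicted = np.array([52.1, 55.3, 49.8, 58.0, 53.6, 51.2, 56.4, 50.5])
observed = 54.0

# Was the observed arrival within the ensemble's predicted range?
hit_in_range = predicted.min() <= observed <= predicted.max()

# Ensemble-average prediction and its absolute error
mean_prediction = predicted.mean()
abs_error = abs(mean_prediction - observed)
spread = predicted.max() - predicted.min()

print(f"in range: {hit_in_range}, |error| = {abs_error:.2f} h, "
      f"spread = {spread:.1f} h")
```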
Toward Scientific Numerical Modeling
NASA Technical Reports Server (NTRS)
Kleb, Bil
2007-01-01
Ultimately, scientific numerical models need quantified output uncertainties so that modeling can evolve to better match reality. Documenting model input uncertainties and verifying that numerical models are translated into code correctly, however, are necessary first steps toward that goal. Without known input parameter uncertainties, model sensitivities are all one can determine, and without code verification, output uncertainties are simply not reliable. To address these two shortcomings, two proposals are offered: (1) an unobtrusive mechanism to document input parameter uncertainties in situ and (2) an adaptation of the Scientific Method to numerical model development and deployment. Because these two steps require changes in the computational simulation community to bear fruit, they are presented in terms of the Beckhard-Harris-Gleicher change model.
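One way to read proposal (1), documenting input parameter uncertainties in situ, is to attach an uncertainty and a provenance note to each parameter at its point of definition. This is a hedged sketch, not Kleb's actual mechanism; the `Param` type and its field names are invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Param:
    """A model input documented with its uncertainty where it is defined."""
    value: float
    uncertainty: float   # one-sigma absolute uncertainty
    source: str          # provenance note, so the estimate is auditable

# Uncertainties recorded in situ, next to the values they qualify
wall_temp = Param(300.0, 15.0, "thermocouple spec sheet")
emissivity = Param(0.85, 0.05, "expert estimate, unvalidated")

def relative_uncertainty(p: Param) -> float:
    return p.uncertainty / p.value

for p in (wall_temp, emissivity):
    print(f"{p.source}: {p.value} ± {p.uncertainty} "
          f"({relative_uncertainty(p):.1%})")
```

The point is that a later sensitivity or uncertainty study can harvest these annotations mechanically instead of hunting for undocumented constants.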
Reservoir computing with a single time-delay autonomous Boolean node
NASA Astrophysics Data System (ADS)
Haynes, Nicholas D.; Soriano, Miguel C.; Rosin, David P.; Fischer, Ingo; Gauthier, Daniel J.
2015-02-01
We demonstrate reservoir computing with a physical system using a single autonomous Boolean logic element with time-delay feedback. The system generates a chaotic transient with a window of consistency lasting between 30 and 300 ns, which we show is sufficient for reservoir computing. We then characterize the dependence of computational performance on system parameters to find the best operating point of the reservoir. When the best parameters are chosen, the reservoir is able to classify short input patterns with performance that decreases over time. In particular, we show that four distinct input patterns can be classified for 70 ns, even though the inputs are only provided to the reservoir for 7.5 ns.
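The reservoir-computing scheme, a fixed dynamical system whose transient state is read out by a trained linear classifier, can be sketched in software. This generic echo-state-style stand-in (random recurrent map, least-squares readout) only illustrates the idea; it is not a model of the time-delay Boolean node:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy software "reservoir": a fixed random recurrent map driven by the input.
N = 50
W = rng.normal(0, 1 / np.sqrt(N), (N, N))
w_in = rng.normal(0, 1, N)

def reservoir_state(u):
    x = np.zeros(N)
    for u_t in u:                      # drive the reservoir with the pattern
        x = np.tanh(W @ x + w_in * u_t)
    return x                           # transient state encodes input history

# Four distinct short input patterns, as in the experiment
patterns = [np.array(p, float) for p in ([0, 0], [0, 1], [1, 0], [1, 1])]
X = np.array([reservoir_state(p) for p in patterns])
X = np.hstack([X, np.ones((4, 1))])    # bias feature for the linear readout

# Linear readout trained by least squares against one-hot class targets;
# only this readout is trained, the reservoir itself stays fixed.
coef, *_ = np.linalg.lstsq(X, np.eye(4), rcond=None)
pred = np.argmax(X @ coef, axis=1)
print("predicted classes:", pred)
```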
Evaluation of Clear Sky Models for Satellite-Based Irradiance Estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, Manajit; Gotseff, Peter
2013-12-01
This report describes an intercomparison of three popular broadband clear sky solar irradiance model results with measured data, as well as satellite-based model clear sky results compared to measured clear sky data. The authors conclude that one of the popular clear sky models (the Bird clear sky model developed by Richard Bird and Roland Hulstrom) could serve as a more accurate replacement for current satellite-model clear sky estimations. Additionally, the analysis of the model results with respect to model input parameters indicates that, rather than climatological, annual, or monthly mean input data, higher-time-resolution input parameters improve the general clear sky model performance.
Computer program for single input-output, single-loop feedback systems
NASA Technical Reports Server (NTRS)
1976-01-01
Additional work is reported on a completely automatic computer program for the design of single input/output, single loop feedback systems with parameter uncertainty, to satisfy time domain bounds on the system response to step commands and disturbances. The inputs to the program are basically the specified time-domain response bounds, the form of the constrained plant transfer function, and the ranges of the uncertain parameters of the plant. The program output consists of the transfer functions of the two free compensation networks, in the form of the coefficients of the numerator and denominator polynomials, and the data on the prescribed bounds and the extremes actually obtained for the system response to commands and disturbances.
A Predictor Analysis Framework for Surface Radiation Budget Reprocessing Using Design of Experiments
NASA Astrophysics Data System (ADS)
Quigley, Patricia Allison
Earth's Radiation Budget (ERB) is an accounting of all incoming energy from the sun and outgoing energy reflected and radiated to space by Earth's surface and atmosphere. The National Aeronautics and Space Administration (NASA)/Global Energy and Water Cycle Experiment (GEWEX) Surface Radiation Budget (SRB) project produces and archives long-term datasets representative of this energy exchange system on a global scale. The data comprise the longwave and shortwave radiative components of the system and are algorithmically derived from satellite and atmospheric assimilation products and acquired atmospheric data. They are stored as 3-hourly, daily, monthly/3-hourly, and monthly averages of 1° x 1° grid cells. Input parameters used by the algorithms are a key source of variability in the resulting output data sets. Sensitivity studies have been conducted to estimate the effects this variability has on the output data sets using linear techniques. This entails varying one input parameter at a time while keeping all others constant, or increasing all input parameters by equal random percentages, in effect changing input values for every cell, for every three-hour period, and for every day in each month. This equates to almost 11 million independent changes without ever taking into consideration the interactions or dependencies among the input parameters. A more comprehensive method is proposed here for evaluating the shortwave algorithm to identify both the input parameters and the parameter interactions that most significantly affect the output data. This research utilized designed experiments that systematically and simultaneously varied all of the input parameters of the shortwave algorithm. A D-optimal design of experiments (DOE) was chosen to accommodate the 14 types of atmospheric properties computed by the algorithm and to reduce the number of trials required by a full factorial study from millions to 128.
A modified version of the algorithm was made available for testing such that global calculations of the algorithm were tuned to accept information for a single temporal and spatial point and for one month of averaged data. The points were from each of four atmospherically distinct regions to include the Amazon Rainforest, Sahara Desert, Indian Ocean and Mt. Everest. The same design was used for all of the regions. Least squares multiple regression analysis of the results of the modified algorithm identified those parameters and parameter interactions that most significantly affected the output products. It was found that Cosine solar zenith angle was the strongest influence on the output data in all four regions. The interaction of Cosine Solar Zenith Angle and Cloud Fraction had the strongest influence on the output data in the Amazon, Sahara Desert and Mt. Everest Regions, while the interaction of Cloud Fraction and Cloudy Shortwave Radiance most significantly affected output data in the Indian Ocean region. Second order response models were built using the resulting regression coefficients. A Monte Carlo simulation of each model extended the probability distribution beyond the initial design trials to quantify variability in the modeled output data.
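The analysis pipeline, fit a second-order response model (main effects, interactions, quadratics) to designed-experiment trials by least squares, then Monte Carlo the fitted model, can be sketched as follows. The two factors and the response function are illustrative stand-ins, not the SRB shortwave algorithm or its 14 atmospheric properties:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative two-factor stand-ins for design factors such as
# cosine solar zenith angle (mu0) and cloud fraction (cf).
def shortwave_proxy(mu0, cf):
    # Hypothetical "true" response used to generate trial results
    return 1000 * mu0 * (1 - 0.7 * cf) + rng.normal(0, 5, np.shape(mu0))

# Design trials (a tiny grid here; the study used a 128-run D-optimal design)
mu0, cf = np.meshgrid(np.linspace(0.1, 1.0, 6), np.linspace(0.0, 1.0, 6))
mu0, cf = mu0.ravel(), cf.ravel()
y = shortwave_proxy(mu0, cf)

# Second-order response model: intercept, main effects, interaction, quadratics
A = np.column_stack([np.ones_like(mu0), mu0, cf, mu0 * cf, mu0**2, cf**2])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

# Monte Carlo the fitted response model over assumed input distributions
m = rng.uniform(0.1, 1.0, 50_000)
c = rng.uniform(0.0, 1.0, 50_000)
sims = np.column_stack([np.ones_like(m), m, c, m * c, m**2, c**2]) @ beta
print(f"modeled output: mean={sims.mean():.1f}, sd={sims.std():.1f}")
```

The fitted coefficients play the role of the study's regression coefficients; the Monte Carlo step extends the variability estimate beyond the design trials, as described above.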
Lü, Changwei; He, Jiang; Wang, Bing
2018-02-01
The chemistry of sedimentary organic phosphorus (OP) and its fraction distribution in sediments are greatly influenced by environmental conditions such as terrestrial inputs and runoffs. The linkage of OP with environmental conditions was analyzed on the basis of OP spatial and historical distributions in lake sediments. The redundancy analysis and OP spatial distribution results suggested that both NaOH-OP (OP extracted by NaOH) and Re-OP (residual OP) in surface sediments from the selected 13 lakes reflected the gradient effects of environmental conditions and the autochthonous and/or allochthonous inputs driven by latitude zonality in China. The lake level and salinity of Lake Hulun and the runoff and precipitation of its drainage basin were reconstructed on the basis of the geochemistry index. This work showed that a gradient in weather conditions presented by the latitude zonality in China impacts the OP accumulation through multiple drivers and in many ways. The drivers are mainly precipitation and temperature, governing organic matter (OM) production, degradation rate and transportation in the watershed. Over a long temporal dimension (4000 years), the vertical distributions of Re-OP and NaOH-OP based on a dated sediment profile from HLH were largely regulated by the autochthonous and/or allochthonous inputs, which depended on the environmental and climate conditions and anthropogenic activities in the drainage basin. This work provides useful environmental geochemistry information to understand the inherent linkage of OP fractionation with environmental conditions and lake evolution. Copyright © 2017. Published by Elsevier B.V.
Support vector machines-based modelling of seismic liquefaction potential
NASA Astrophysics Data System (ADS)
Pal, Mahesh
2006-08-01
This paper investigates the potential of a support vector machine (SVM)-based classification approach to assess liquefaction potential from actual standard penetration test (SPT) and cone penetration test (CPT) field data. SVMs are based on statistical learning theory and have been found to work well in comparison to neural networks in several other applications. Both CPT and SPT field data sets are used with SVMs to predict the occurrence and non-occurrence of liquefaction based on different input parameter combinations. With the SPT and CPT data sets, highest accuracies of 96% and 97%, respectively, were achieved. This suggests that SVMs can effectively be used to model the complex relationship between different soil parameters and liquefaction potential. Several other combinations of input variables were used to assess the influence of different input parameters on liquefaction potential. The proposed approach suggests that neither the normalized cone resistance value is required with CPT data nor the calculation of the standardized SPT value with SPT data. Further, SVMs require few user-defined parameters and provide better performance in comparison to the neural network approach.
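A minimal sketch of SVM-based liquefaction classification in the spirit described, using synthetic stand-ins for SPT-style inputs and a hypothetical labeling rule (the paper uses actual field data, not this rule):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Synthetic stand-ins for SPT-style inputs: blow count (N-value), cyclic
# stress ratio (CSR), and fines content. The labeling rule is invented.
n = 400
n_val = rng.uniform(2, 40, n)
csr = rng.uniform(0.05, 0.5, n)
fines = rng.uniform(0, 50, n)
liquefied = (100 * csr - n_val - 0.1 * fines > 0).astype(int)

X = np.column_stack([n_val, csr, fines])
X_tr, X_te, y_tr, y_te = train_test_split(X, liquefied,
                                          test_size=0.25, random_state=0)

# RBF-kernel SVM with feature standardization (input scales differ ~100x)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"test accuracy: {acc:.2f}")
```

Swapping feature subsets in and out of `X`, as the paper does with input variable combinations, shows which parameters actually carry predictive weight.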
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xuesong; Liang, Faming; Yu, Beibei
2011-11-09
Estimating uncertainty of hydrologic forecasting is valuable to water resources and other relevant decision making processes. Recently, Bayesian Neural Networks (BNNs) have proven to be powerful tools for quantifying uncertainty of streamflow forecasting. In this study, we propose a Markov Chain Monte Carlo (MCMC) framework to incorporate the uncertainties associated with input, model structure, and parameters into BNNs. This framework allows the structure of the neural networks to change by removing or adding connections between neurons and enables scaling of input data by using rainfall multipliers. The results show that the new BNNs outperform BNNs that only consider uncertainties associated with parameters and model structure. Critical evaluation of the posterior distributions of neural network weights, number of effective connections, rainfall multipliers, and hyper-parameters shows that the assumptions held in our BNNs are not well supported. Further understanding of the characteristics of different uncertainty sources and including output error in the MCMC framework are expected to enhance the application of neural networks for uncertainty analysis of hydrologic forecasting.
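One ingredient of the framework, treating a rainfall multiplier as an extra parameter sampled by MCMC, can be illustrated in isolation with a random-walk Metropolis sampler on a toy rainfall-runoff relation. Everything here (the model, prior, and data) is an invented illustration, not the paper's BNN:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy rainfall-runoff relation: runoff = a * rainfall, but the observed
# rainfall under-reports the true rainfall by an unknown multiplier
# (truly 1.25 in this synthetic data).
rain_obs = rng.uniform(5, 50, 60)
runoff = 0.4 * (1.25 * rain_obs) + rng.normal(0, 0.5, 60)

def log_post(m, a=0.4, sigma=0.5):
    # Gaussian likelihood plus a weak lognormal prior keeping m near 1
    resid = runoff - a * m * rain_obs
    return -0.5 * np.sum((resid / sigma) ** 2) - 0.5 * (np.log(m) / 0.3) ** 2

# Random-walk Metropolis over the rainfall multiplier
m, lp, chain = 1.0, log_post(1.0), []
for _ in range(5000):
    prop = m + rng.normal(0, 0.02)
    if prop > 0:
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
            m, lp = prop, lp_prop
    chain.append(m)

post = np.array(chain[1000:])  # discard burn-in
print(f"posterior multiplier: {post.mean():.3f} ± {post.std():.3f}")
```

The paper's framework samples network weights, structure indicators, and hyper-parameters jointly with such multipliers; this sketch isolates only the multiplier to show the mechanism.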
Generative Representations for Evolving Families of Designs
NASA Technical Reports Server (NTRS)
Hornby, Gregory S.
2003-01-01
Since typical evolutionary design systems encode only a single artifact with each individual, each time the objective changes a new set of individuals must be evolved. When this objective varies in a way that can be parameterized, a more general method is to use a representation in which a single individual encodes an entire class of artifacts. In addition to saving time by preventing the need for multiple evolutionary runs, the evolution of parameter-controlled designs can create families of artifacts with the same style and a reuse of parts between members of the family. In this paper an evolutionary design system is described which uses a generative representation to encode families of designs. Because a generative representation is an algorithmic encoding of a design, its input parameters are a way to control aspects of the design it generates. By evaluating individuals multiple times with different input parameters the evolutionary design system creates individuals in which the input parameter controls specific aspects of a design. This system is demonstrated on two design substrates: neural-networks which solve the 3/5/7-parity problem and three-dimensional tables of varying heights.
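The idea of a generative representation whose input parameter controls one aspect of every design it produces can be sketched with a toy, hand-written "genome". The paper evolves such programs rather than writing them by hand; this table generator is purely illustrative:

```python
# A toy generative representation: the genome is a procedure, and its input
# parameter (height) steers one aspect of every design in the family,
# while style (flat surface, four legs) is reused across family members.

def table_genome(height: int, width: int = 4):
    """Generate a 'table' design: a flat surface supported by four legs,
    returned as a list of unit-cube coordinates."""
    surface = [(x, z, height) for x in range(width) for z in range(width)]
    legs = [(x, z, h)
            for (x, z) in [(0, 0), (0, width - 1),
                           (width - 1, 0), (width - 1, width - 1)]
            for h in range(height)]
    return surface + legs

# One encoded individual yields a family of same-style designs
short_table = table_genome(height=3)
tall_table = table_genome(height=8)
print(len(short_table), len(tall_table))
```

Evaluating the same genome at several parameter values, as the evolutionary system does, rewards individuals whose parameter genuinely controls the intended design aspect.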
NASA Technical Reports Server (NTRS)
Kiang, R.; Adimi, F.; Nigro, J.
2007-01-01
Meteorological and environmental parameters important to malaria transmission include temperature, relative humidity, precipitation, and vegetation conditions. These parameters can most conveniently be obtained using remote sensing. Selected provinces and districts in Thailand and Indonesia are used to illustrate how remotely sensed meteorological and environmental parameters may enhance capabilities for malaria surveillance and control. Hindcasts based on these environmental parameters have shown good agreement with epidemiological records.
Dallas, Lorna J; Devos, Alexandre; Fievet, Bruno; Turner, Andrew; Lyons, Brett P; Jha, Awadhesh N
2016-05-01
Accurate dosimetry is critically important for ecotoxicological and radioecological studies on the potential effects of environmentally relevant radionuclides, such as tritium ((3)H). Previous studies have used basic dosimetric equations to estimate dose from (3)H exposure in ecologically important organisms, such as marine mussels. This study compares four different methods of estimating dose to adult mussels exposed to 1 or 15 MBq L(-1) tritiated water (HTO) under laboratory conditions. These methods were (1) an equation converting seawater activity concentrations to dose rate with fixed parameters; (2) input into the ERICA tool of seawater activity concentrations only; (3) input into the ERICA tool of estimated whole-organism activity concentrations (woTACs), comprising dry activity plus estimated tissue free water tritium (TFWT) activity (TFWT volume × seawater activity concentration); and (4) input into the ERICA tool of measured whole-organism activity concentrations, comprising dry activity plus measured TFWT activity (TFWT volume × TFWT activity concentration). Methods 3 and 4 are recommended for future ecotoxicological experiments as they produce values for individual animals and are not reliant on transfer predictions (estimation of a concentration ratio). Method 1 may be suitable if measured whole-organism concentrations are not available, as it produced results between those of methods 3 and 4. As there are technical complications in accurately measuring TFWT, we recommend that future radiotoxicological studies on mussels or other aquatic invertebrates measure whole-organism activity in non-dried tissues (i.e. incorporating TFWT and dry activity as one, rather than as separate fractions) and input these data into the ERICA tool. Copyright © 2016 Elsevier Ltd. All rights reserved.
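The difference between methods 3 and 4 is just which TFWT activity concentration enters the whole-organism sum: the seawater concentration (assumed equilibration) or the measured one. A sketch with made-up numbers; the dose conversion factor is a hypothetical placeholder, not an ERICA tool coefficient:

```python
# Sketch of the whole-organism activity estimates behind methods 3 and 4.
# All numbers and the dose conversion factor are illustrative placeholders.

seawater_conc = 1.0e6       # Bq/L (the 1 MBq/L HTO exposure)
dry_activity = 2.0e3        # Bq in dried tissue of one mussel (assumed)
tfwt_volume = 0.8e-3        # L of tissue free water in one mussel (assumed)
tfwt_conc_measured = 7.5e5  # Bq/L measured in tissue free water (assumed)

# Method 3: TFWT assumed to equilibrate with the seawater concentration
wo_activity_m3 = dry_activity + tfwt_volume * seawater_conc
# Method 4: TFWT activity concentration actually measured
wo_activity_m4 = dry_activity + tfwt_volume * tfwt_conc_measured

dcf = 1.0e-5  # hypothetical internal dose conversion, uGy/h per Bq
for name, a in [("method 3", wo_activity_m3), ("method 4", wo_activity_m4)]:
    print(f"{name}: {a:.0f} Bq -> {a * dcf:.3f} uGy/h")
```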
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pei, Zongrui; Stocks, George Malcolm
The sensitivity in predicting glide behaviour of dislocations has been a long-standing problem in the framework of the Peierls-Nabarro model. The predictions of both the model itself and the analytic formulas based on it are too sensitive to the input parameters. In order to reveal the origin of this important problem in materials science, a new empirical-parameter-free formulation is proposed in the same framework. Unlike previous formulations, it includes only a small set of parameters, all of which can be determined by convergence tests. Under special conditions the new formulation reduces to its classic counterpart. In the light of this formulation, new relationships between Peierls stresses and the input parameters are identified, where the sensitivity is greatly reduced or even removed.
System and Method for Providing Model-Based Alerting of Spatial Disorientation to a Pilot
NASA Technical Reports Server (NTRS)
Johnson, Steve (Inventor); Conner, Kevin J (Inventor); Mathan, Santosh (Inventor)
2015-01-01
A system and method monitor aircraft state parameters, for example, aircraft movement and flight parameters, apply those inputs to a spatial disorientation model, and make a prediction of when a pilot may become spatially disoriented. Once the system predicts a potentially disoriented pilot, the sensitivity for alerting the pilot to conditions exceeding a threshold can be increased, allowing an earlier alert to mitigate the possibility of an incorrect control input.
Particle parameter analyzing system. [x-y plotter circuits and display
NASA Technical Reports Server (NTRS)
Hansen, D. O.; Roy, N. L. (Inventor)
1969-01-01
An X-Y plotter circuit apparatus is described which displays an input pulse carrying particle parameter information (a pulse that would ordinarily appear on an oscilloscope screen as a rectangular pulse) as a single dot positioned on the screen where the upper right-hand corner of the input pulse would have appeared. If another event occurs and is to be displayed, the apparatus replaces the dot with a short horizontal line.
Chasin, Marshall; Russo, Frank A.
2004-01-01
Historically, the primary concern in hearing aid design and fitting has been optimization for speech inputs. Increasingly, however, other types of inputs are being investigated, and this is certainly the case for music. Whether the hearing aid wearer is a musician or merely someone who likes to listen to music, the electronic and electro-acoustic parameters described can be optimized for music as well as for speech. That is, a hearing aid optimally set for music can be optimally set for speech, even though the converse is not necessarily true. Similarities and differences between speech and music as inputs to a hearing aid are described. Many of these lead to the specification of a set of optimal electro-acoustic characteristics. Parameters such as the peak input-limiting level, compression settings (both compression ratio and knee-points), and the number of channels can all deleteriously affect music perception through hearing aids. In other cases, it is not clear how to set parameters such as noise reduction and feedback control mechanisms. Regardless of the existence of a "music program," unless the various electro-acoustic parameters are available in a hearing aid, music fidelity will almost always be less than optimal. There are many unanswered questions and hypotheses in this area. Future research by engineers, researchers, clinicians, and musicians will aid in the clarification of these questions and their ultimate solutions. PMID:15497032
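The role of the compression ratio and knee-point can be made concrete with a static input/output curve. This is a minimal sketch of a generic wide-dynamic-range compression curve; the gain, knee-point, and ratio values are illustrative assumptions, not any product's settings:

```python
def compressed_gain_db(input_db: float, gain_db: float = 20.0,
                       knee_db: float = 50.0, ratio: float = 2.0) -> float:
    """Static input/output curve of a simple compressor: linear gain below
    the knee-point, compressed growth (1 dB of output per `ratio` dB of
    input) above it. A generic sketch, not a specific hearing aid."""
    if input_db <= knee_db:
        return input_db + gain_db
    return knee_db + gain_db + (input_db - knee_db) / ratio

# Average speech level (65 dB SPL) vs a loud music peak (100 dB SPL):
# the higher input is compressed far more, which is why ratio and
# knee-point settings chosen for speech can flatten music dynamics.
speech_out = compressed_gain_db(65.0)
music_out = compressed_gain_db(100.0)
print(f"speech: {speech_out} dB out, music peak: {music_out} dB out")
```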
NASA Astrophysics Data System (ADS)
Yoon, Sangpil; Wang, Yingxiao; Shung, K. K.
2016-03-01
An acoustic-transfection technique has been developed for the first time by integrating a high-frequency ultrasonic transducer with a fluorescence microscope. High-frequency ultrasound with a center frequency over 150 MHz can focus the acoustic field into a confined area with a diameter of 10 μm or less. This focusing capability was used to perturb the lipid bilayer of the cell membrane to induce intracellular delivery of macromolecules. Single-cell-level imaging was performed to investigate the behavior of a targeted single cell after acoustic-transfection. A FRET-based Ca2+ biosensor was used to monitor the intracellular concentration of Ca2+ after acoustic-transfection, and the fluorescence intensity of propidium iodide (PI) was used to observe the influx of PI molecules. We varied peak-to-peak voltages and pulse duration to optimize the input parameters of the acoustic pulse. Input parameters that induce strong perturbations of the cell membrane were found, and size-dependent intracellular delivery of macromolecules was explored. To increase the amount of delivered molecules, we applied several acoustic pulses, and the intensity of PI fluorescence increased stepwise. Finally, the optimized input parameters of the acoustic-transfection system were used to deliver pMax-E2F1 plasmid into HeLa cells, and GFP expression was confirmed 24 hours after intracellular delivery.
Solar energy system economic evaluation for Seeco Lincoln, Lincoln, Nebraska
NASA Technical Reports Server (NTRS)
1980-01-01
The economic analysis of the solar energy system installed at Lincoln, Nebraska is developed for this and four other sites typical of a wide range of environmental and economic conditions in the continental United States. The analysis is based on the technical and economic models in the f-chart design procedure, with inputs based on the characteristics of the installed system and local conditions. The results are expressed in terms of the economic parameters of present worth of system cost over a projected twenty-year life, life-cycle savings, year of positive savings, and year of payback for the optimized solar energy system at each of the analysis sites. The sensitivity of the economic evaluation to uncertainties in constituent system and economic variables is also investigated.
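The economic parameters named above (present worth, life-cycle savings, year of payback) come from a discounted cash-flow calculation that can be sketched in a few lines. All dollar amounts and rates below are illustrative assumptions, not the report's actual inputs:

```python
# Minimal life-cycle economics sketch in the spirit of f-chart outputs.
# All inputs are illustrative assumptions.

def present_worth(amount, rate, year):
    return amount / (1 + rate) ** year

system_cost = 12_000.0       # installed system cost at year 0 (assumed)
annual_fuel_saving = 1_100.0 # first-year conventional fuel saving (assumed)
fuel_escalation = 0.08       # fuel price escalation rate (assumed)
discount_rate = 0.06         # discount rate (assumed)
life_years = 20              # projected system life

# Present worth of escalating fuel savings over the system life
pw_savings = sum(
    present_worth(annual_fuel_saving * (1 + fuel_escalation) ** y,
                  discount_rate, y)
    for y in range(1, life_years + 1))
life_cycle_savings = pw_savings - system_cost

# Payback year: first year cumulative discounted savings cover the cost
cumulative, payback_year = -system_cost, None
for y in range(1, life_years + 1):
    cumulative += present_worth(
        annual_fuel_saving * (1 + fuel_escalation) ** y, discount_rate, y)
    if payback_year is None and cumulative >= 0:
        payback_year = y

print(f"life-cycle savings: ${life_cycle_savings:,.0f}, "
      f"payback year: {payback_year}")
```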
Parametric analysis of ATT configurations.
NASA Technical Reports Server (NTRS)
Lange, R. H.
1972-01-01
This paper describes the results of a Lockheed parametric analysis of the performance, environmental factors, and economics of an advanced commercial transport envisioned for operation in the post-1985 time period. The design parameters investigated include cruise speeds from Mach 0.85 to Mach 1.0, passenger capacities from 200 to 500, ranges of 2800 to 5500 nautical miles, and noise level criteria. NASA high performance configurations and alternate configurations are operated over domestic and international route structures. Indirect and direct costs and return on investment are determined for approximately 40 candidate aircraft configurations. The candidate configurations are input to an aircraft sizing and performance program which includes a subroutine for noise criteria. Comparisons are made between preferred configurations on the basis of maximum return on investment as a function of payload, range, and design cruise speed.
An Approach to Risk-Based Design Incorporating Damage Tolerance Analyses
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.; Glaessgen, Edward H.; Sleight, David W.
2002-01-01
Incorporating risk-based design as an integral part of spacecraft development is becoming more and more common. Assessment of uncertainties associated with design parameters and environmental aspects such as loading provides increased knowledge of the design and its performance. Results of such studies can contribute to mitigating risk through a system-level assessment. Understanding the risk of an event occurring, the probability of its occurrence, and the consequences of its occurrence can lead to robust, reliable designs. This paper describes an approach to risk-based structural design incorporating damage-tolerance analysis. The application of this approach to a candidate Earth-entry vehicle is described. The emphasis of the paper is on describing an approach for establishing damage-tolerant structural response inputs to a system-level probabilistic risk assessment.
The development of a whole-body algorithm
NASA Technical Reports Server (NTRS)
Kay, F. J.
1973-01-01
The whole-body algorithm is envisioned as a mathematical model that utilizes human physiology to simulate the behavior of vital body systems. The objective of this model is to determine the response of selected body parameters within these systems to various input perturbations, or stresses. Perturbations of interest are exercise, chemical unbalances, gravitational changes and other abnormal environmental conditions. This model provides for a study of man's physiological response in various space applications, underwater applications, normal and abnormal workloads and environments, and the functioning of the system with physical impairments or decay of functioning components. Many methods or approaches to the development of a whole-body algorithm are considered. Of foremost concern is the determination of the subsystems to be included, the detail of the subsystems and the interaction between the subsystems.
Atmospheric Science Data Center
2017-01-13
... grid. Model inputs of cloud amounts and other atmospheric state parameters are also available in some of the data sets. Primary inputs to ... Analysis (SMOBA), an assimilation product from NOAA's Climate Prediction Center. SRB products are reformatted for the use of ...
Economic Input-Output Life Cycle Assessment of Water Reuse Strategies in Residential Buildings
This paper evaluates the environmental sustainability and economic feasibility of four water reuse designs through economic input-output life cycle assessments (EIO-LCA) and benefit/cost analyses. The water reuse designs include: 1. Simple Greywater Reuse System for Landscape Ir...
Marine Mammal Habitat in Ecuador: Seasonal Abundance and Environmental Distribution
2010-06-01
...derived macronutrients) is enhanced by iron inputs derived from the island platform. The confluence of the Equatorial Undercurrent and Peru Current...is initiated by the subsurface...
NASA Astrophysics Data System (ADS)
Zazzo, Antoine; Balasse, Marie; Patterson, William P.
2005-07-01
We present the first high-resolution carbon isotope and carbonate content profiles generated through the thickness of enamel from a steer fed a C3- then C4-dominant food. Carbonate contents decrease by ~2 wt% from the enamel surface to the innermost enamel layer, and each carbon isotope profile shows a mixture of enamel portions mineralized over several months. Downward and outward increasing contributions of C4 food to the enamel δ13C values reveal two components of the mineralization gradient: a vertical component from the tip of the tooth crown to the neck, and a horizontal component from the enamel-dentine junction to the outer enamel. We use our results to infer mineralization parameters for bovines and to calculate expected isotopic attenuations for an array of environmental inputs and microsampling strategies, using the model developed by Passey and Cerling [Geochim. Cosmochim. Acta 66 (2002) 3225-3234]. Although it seems unlikely that any strategy will perfectly isolate discrete time slices, sampling the innermost enamel layer might offer the advantage of significantly reducing the isotope damping that would become independent of the structure of the input signal.
Landslide Susceptibility Index Determination Using Artificial Neural Network
NASA Astrophysics Data System (ADS)
Kawabata, D.; Bandibas, J.; Urai, M.
2004-12-01
The occurrence of landslides is the result of the interaction of complex and diverse environmental factors. Geomorphic features, rock types, and geologic structure are especially important base factors of landslide occurrence. Generating a landslide susceptibility index by defining the relationship between landslide occurrence and these base factors using conventional mathematical and statistical methods is very difficult and inaccurate. This study focuses on generating a landslide susceptibility index using artificial neural networks in the Southern Japanese Alps. The training data are geomorphic parameters (e.g. altitude, slope and aspect), geologic parameters (e.g. rock type, distance from a geologic boundary and geologic dip-strike angle), and landslides. An artificial neural network structure and training scheme are formulated to generate the index. Data from areas with and without landslide occurrences are used to train the network. The network is trained to output 1 when the input data are from areas with landslides and 0 when no landslide occurred. The trained network generates an output ranging from 0 to 1, reflecting the possibility of landslide occurrence based on the input data. Output values nearer to 1 mean a higher possibility of landslide occurrence. The artificial neural network model is incorporated into GIS software to generate a landslide susceptibility map.
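The training scheme described, a network taught to output 1 for landslide areas and 0 otherwise, with its continuous output read as a susceptibility index, can be sketched as follows. The features, labels, and the labeling rule are synthetic illustrations, not the study's Southern Japanese Alps data:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)

# Synthetic stand-ins for geomorphic/geologic inputs: slope, altitude,
# distance from a geologic boundary. The failure rule below is invented.
n = 600
slope = rng.uniform(0, 45, n)            # degrees
altitude = rng.uniform(200, 3000, n)     # m
dist_boundary = rng.uniform(0, 2000, n)  # m

# Hypothetical rule: steep slopes near geologic boundaries fail more often
p_slide = 1 / (1 + np.exp(-(0.15 * slope - 0.002 * dist_boundary - 2)))
landslide = (rng.uniform(size=n) < p_slide).astype(int)

# Normalize inputs, then train the network on 0/1 landslide labels
X = np.column_stack([slope / 45, altitude / 3000, dist_boundary / 2000])
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X, landslide)

# The continuous 0-1 output is read as a landslide susceptibility index
steep_near = net.predict_proba([[40 / 45, 0.5, 100 / 2000]])[0, 1]
gentle_far = net.predict_proba([[5 / 45, 0.5, 1900 / 2000]])[0, 1]
print(f"susceptibility steep/near: {steep_near:.2f}, "
      f"gentle/far: {gentle_far:.2f}")
```

Mapping `predict_proba` over every grid cell of a study area gives the susceptibility map that the study exports to GIS software.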
78 FR 58520 - U.S. Environmental Solutions Toolkit
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-24
... notice sets forth a request for input from U.S. businesses capable of exporting their goods or services... and foreign end-users of environmental technologies. The Toolkit outlines U.S. approaches to a series of environmental problems and highlights participating U.S. vendors of relevant U.S. technologies. The Toolkit will...
Data for Environmental Modeling (D4EM): Background and Applications of Data Automation
The Data for Environmental Modeling (D4EM) project demonstrates the development of a comprehensive set of open source software tools that overcome obstacles to accessing data needed by automating the process of populating model input data sets with environmental data available fr...
Non-intrusive parameter identification procedure user's guide
NASA Technical Reports Server (NTRS)
Hanson, G. D.; Jewell, W. F.
1983-01-01
Written in standard FORTRAN, NAS is capable of identifying linear as well as nonlinear relations between input and output parameters; the only restriction is that the input/output relation be linear with respect to the unknown coefficients of the estimation equations. The output of the identification algorithm can be specified to be in either the time domain (i.e., the estimation equation coefficients) or in the frequency domain (i.e., a frequency response of the estimation equation). The frame length ("window") over which the identification procedure is to take place can be specified to be any portion of the input time history, thereby allowing the freedom to start and stop the identification procedure within a time history. There also is an option which allows a sliding window, which gives a moving average over the time history. The NAS software also includes the ability to identify several assumed solutions simultaneously for the same or different input data.
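The frame-based identification described above (relations linear in the unknown coefficients, estimated over a sliding window) can be sketched as follows. This is not NAS itself: the model y = a·u + b·u², the data, and the window/step sizes are invented for illustration of the linear-in-coefficients restriction.

```python
import numpy as np

# Sketch of sliding-window least-squares identification. The input/output
# relation may be nonlinear in u, but must be linear in the unknown
# coefficients (a, b) — the same restriction the abstract states.
rng = np.random.default_rng(1)
a_true, b_true = 2.0, -0.5
u = rng.uniform(-1, 1, 500)
y = a_true * u + b_true * u**2 + 0.01 * rng.normal(size=500)

def identify(u_win, y_win):
    """Least-squares estimate of (a, b) over one frame ("window")."""
    A = np.column_stack([u_win, u_win**2])  # regressors: linear in (a, b)
    coef, *_ = np.linalg.lstsq(A, y_win, rcond=None)
    return coef

# Sliding window: re-identify every 50 samples, giving a moving average of
# the coefficient estimates over the time history.
window = 100
estimates = [identify(u[s:s + window], y[s:s + window])
             for s in range(0, len(u) - window + 1, 50)]
print(np.round(np.mean(estimates, axis=0), 3))
```

Starting and stopping the identification within the time history, as the guide describes, corresponds to choosing the range of window start indices.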
Kuo, Li-Jung; Louchouarn, Patrick; Herbert, Bruce E; Brandenberger, Jill M; Wade, Terry L; Crecelius, Eric
2011-04-01
Reconstructions of 250 years of historical inputs of two distinct types of black carbon (soot/graphitic black carbon (GBC) and char-BC) were conducted on sediment cores from two basins of Puget Sound, WA. Signatures of polycyclic aromatic hydrocarbons (PAHs) were also used to support the historical reconstructions of BC to this system. Down-core maxima in GBC and combustion-derived PAHs occurred in the 1940s in the cores from the Puget Sound Main Basin, whereas in Hood Canal such a peak was observed in the 1970s, showing basin-specific differences in inputs of combustion byproducts. This system showed relatively higher inputs from softwood combustion than the northeastern U.S. The historical variations in char-BC concentrations were consistent with shifts in climate indices, suggesting an influence of climate oscillations on wildfire events. Environmental loading of combustion byproducts thus appears to be a complex function of urbanization, fuel usage, combustion technology, environmental policies, and climate conditions. Copyright © 2010 Elsevier Ltd. All rights reserved.
Lutchen, K R
1990-08-01
A sensitivity analysis based on weighted least-squares regression is presented to evaluate alternative methods for fitting lumped-parameter models to respiratory impedance data. The goal is to maintain parameter accuracy simultaneously with practical experiment design. The analysis focuses on predicting parameter uncertainties using a linearized approximation for joint confidence regions. Applications are with four-element parallel and viscoelastic models for 0.125- to 4-Hz data and a six-element model with separate tissue and airway properties for input and transfer impedance data from 2-64 Hz. The criterion function form was evaluated by comparing parameter uncertainties when data are fit as magnitude and phase, dynamic resistance and compliance, or real and imaginary parts of input impedance. The proper choice of weighting can make all three criterion variables comparable. For the six-element model, parameter uncertainties were predicted when both input impedance and transfer impedance are acquired and fit simultaneously. A fit to both data sets from 4 to 64 Hz could reduce parameter estimate uncertainties considerably from those achievable by fitting either alone. For the four-element models, use of an independent, but noisy, measure of static compliance was assessed as a constraint on model parameters. This may allow acceptable parameter uncertainties for a minimum frequency of 0.275-0.375 Hz rather than 0.125 Hz. This reduces data acquisition requirements from a 16- to a 5.33- to 8-s breath holding period. These results are approximations, and the impact of using the linearized approximation for the confidence regions is discussed.
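The general recipe in this abstract — fit a lumped-parameter model to impedance data by weighted least squares, then approximate joint parameter uncertainty from the linearized design — can be sketched as below. This is a hedged stand-in, not the paper's models: a two-element resistance-elastance model Z(ω) = R − jE/ω (elastance E = 1/C, so the fit is exactly linear) replaces the four- and six-element respiratory models, and all numbers are invented.

```python
import numpy as np

# Weighted least-squares fit of impedance data as real and imaginary parts,
# with a linearized parameter covariance sigma^2 * (A^T W A)^(-1). The
# choice of W is where the paper's criterion-function comparison enters.
rng = np.random.default_rng(2)
R_true, C_true = 2.0, 0.05
f = np.linspace(0.125, 4.0, 30)                    # Hz
w = 2 * np.pi * f
Z = R_true - 1j / (w * C_true)                     # model: Z = R - j/(wC)
data = Z + 0.05 * (rng.normal(size=f.size) + 1j * rng.normal(size=f.size))

# Stack real and imaginary parts of the data as the fit criterion variables.
y = np.concatenate([data.real, data.imag])
A = np.zeros((2 * f.size, 2))
A[:f.size, 0] = 1.0                                # d Re(Z) / d R
A[f.size:, 1] = -1.0 / w                           # d Im(Z) / d E
W = np.eye(2 * f.size)                             # equal weights here

p = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)      # p = [R, E]
resid = y - A @ p
sigma2 = resid @ resid / (2 * f.size - 2)
cov = sigma2 * np.linalg.inv(A.T @ W @ A)          # linearized covariance
R_est, C_est = p[0], 1.0 / p[1]
print(round(float(R_est), 3), round(float(C_est), 4),
      np.round(1.96 * np.sqrt(np.diag(cov)), 3))   # ~95% intervals on (R, E)
```

Predicting how the ±1.96·sd intervals shrink as frequency range or criterion variables change is the kind of experiment-design question the paper addresses.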
NASA Astrophysics Data System (ADS)
Arnold, B. W.; Gardner, P.
2013-12-01
Calibration of groundwater flow models for the purpose of evaluating flow and aquifer heterogeneity typically uses observations of hydraulic head in wells and appropriate boundary conditions. Environmental tracers have a wide variety of decay rates and input signals in recharge, resulting in a potentially broad source of additional information to constrain flow rates and heterogeneity. A numerical study was conducted to evaluate the reduction in uncertainty during model calibration using observations of various environmental tracers and combinations of tracers. A synthetic data set was constructed by simulating steady groundwater flow and transient tracer transport in a high-resolution, 2-D aquifer with heterogeneous permeability and porosity using the PFLOTRAN software code. Data on pressure and tracer concentration were extracted at well locations and then used as observations for automated calibration of a flow and transport model using the pilot point method and the PEST code. Optimization runs were performed to estimate parameter values of permeability at 30 pilot points in the model domain for cases using 42 observations of: 1) pressure, 2) pressure and CFC11 concentrations, 3) pressure and Ar-39 concentrations, and 4) pressure, CFC11, Ar-39, tritium, and He-3 concentrations. Results show significantly lower uncertainty, as indicated by the 95% linear confidence intervals, in permeability values at the pilot points for cases including observations of environmental tracer concentrations. The average linear uncertainty range for permeability at the pilot points using pressure observations alone is 4.6 orders of magnitude, using pressure and CFC11 concentrations is 1.6 orders of magnitude, using pressure and Ar-39 concentrations is 0.9 order of magnitude, and using pressure, CFC11, Ar-39, tritium, and He-3 concentrations is 1.0 order of magnitude. 
Data on Ar-39 concentrations result in the greatest parameter uncertainty reduction because its half-life of 269 years is similar to the range of transport times (hundreds to thousands of years) in the heterogeneous synthetic aquifer domain. The slightly higher uncertainty range for the case using all of the environmental tracers simultaneously is probably due to structural errors in the model introduced by the pilot point regularization scheme. It is concluded that maximum information and uncertainty reduction for constraining a groundwater flow model is obtained using an environmental tracer whose half-life is well matched to the range of transport times through the groundwater flow system. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
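The abstract's conclusion, that a tracer constrains travel times best when its half-life is comparable to those times, can be illustrated with the decay relation alone. For a pure decay tracer, C(t)/C0 = exp(−ln2 · t / t_half); ages are resolvable where the remaining fraction is intermediate rather than ~1 or ~0. The 500-year example age below is an illustrative value within the "hundreds to thousands of years" range stated above.

```python
import math

# Fraction of a decaying tracer remaining after a given groundwater age.
tracers = {"tritium": 12.3, "Ar-39": 269.0, "C-14": 5730.0}  # half-lives, yr

def fraction_remaining(age_yr, t_half_yr):
    return math.exp(-math.log(2.0) * age_yr / t_half_yr)

age = 500.0  # an illustrative transport time
for name, t_half in tracers.items():
    print(name, round(fraction_remaining(age, t_half), 3))
```

Tritium is essentially fully decayed (no signal left) and C-14 barely decayed (little contrast), while Ar-39 retains an intermediate, age-sensitive fraction, matching the result that Ar-39 gave the greatest uncertainty reduction.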
Ma, Dun-Chao; Hu, Shan-Ying; Chen, Ding-Jiang; Li, You-Run
2012-04-01
Substance flow analysis was used to construct a model to analyze changes in China's phosphorus (P) consumption structure from 1980 to 2008 and their influences on environmental phosphorus loads; the correlation between several socioeconomic factors and phosphorus consumption pollution was then investigated. It was found that per capita phosphorus nutrient inputs of urban life and rural life climbed to 1.20 kg·a⁻¹ and 0.99 kg·a⁻¹ from 0.83 kg·a⁻¹ and 0.75 kg·a⁻¹ respectively, while the phosphorus recycling ratio of urban life fell from 62.6% to 15.6%. P inputs of animal husbandry and planting also kept increasing, but the recycling ratio of the former decreased from 67.5% to 40.5%, while much of the P input of the latter was left in agricultural soil. Correlation coefficients were all above 0.90, indicating that population, urbanization level, and the development levels of planting and animal husbandry were important drivers of P consumption pollution in China. The Environmental Kuznets curve showed that China remains in an early development stage, promoting economic growth at the expense of environmental quality. This study demonstrates that China's P consumption system is being transformed into a linear and open structure, and that P nutrient loss and environmental P loads are increasing continually.
Pergola, M; D'Amico, M; Celano, G; Palese, A M; Scuderi, A; Di Vita, G; Pappalardo, G; Inglese, P
2013-10-15
The island of Sicily has a long-standing tradition of citrus growing. We evaluated the sustainability of orange and lemon orchards, under organic and conventional farming, using an energy, environmental and economic analysis of the whole production cycle based on a life cycle assessment approach. These orchard systems differ only in a few of the inputs used and the duration of the various agricultural operations. The energy consumed in the production cycle was calculated by multiplying the quantity of inputs used by energy conversion factors drawn from the literature. The production costs were calculated considering all internal costs, including equipment, materials, wages, and costs of working capital. The performance of the two systems (organic and conventional) was compared over a period of fifty years. The results, based on unit surface area (ha) production, demonstrate the stronger sustainability of the organic over the conventional system, both in terms of energy consumption and environmental impact, especially for lemons. The sustainability of organic systems is mainly due to the use of environmentally friendly crop inputs (fertilizers, no use of synthetic products, etc.). In terms of production costs, the conventional management systems were more expensive, and both systems were heavily influenced by wages. In terms of kg of final product, the organic production system showed better environmental and energy performance. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Shin, K. H.; Kim, K. H.; Ki, S. J.; Lee, H. G.
2017-12-01
A vulnerability assessment tool at the Tier 1 level, although not often used for regulatory purposes, helps establish pollution prevention and management strategies in areas of potential environmental concern such as soil and ground water. In this study, the Neural Network Pattern Recognition Tool embedded in MATLAB was used to allow initial screening of soil and groundwater pollution based on data compiled across about 1000 previously contaminated sites in Korea. The input variables included a series of parameters tightly related to downward movement of water and contaminants through soil and ground water, whereas multiple classes were assigned to the sum of concentrations of major pollutants detected. Results showed that, in accordance with diverse pollution indices for soil and ground water, pollution levels in both media were strongly modulated by site-specific characteristics such as intrinsic soil and other geologic properties, in addition to pollution sources and rainfall. However, classification accuracy was very sensitive to the number of classes defined as well as the types of variables incorporated, requiring careful selection of input variables and output categories. We therefore believe that the proposed methodology can be used not only to modify existing pollution indices so that they are more suitable for addressing local vulnerability, but also to develop a unique assessment tool to support decision making based on locally or nationally available data. This study was funded by a grant from the GAIA project (2016000560002), Korea Environmental Industry & Technology Institute, Republic of Korea.
Artificial neural network model for ozone concentration estimation and Monte Carlo analysis
NASA Astrophysics Data System (ADS)
Gao, Meng; Yin, Liting; Ning, Jicai
2018-07-01
Air pollution in the urban atmosphere directly affects public health; it is therefore essential to predict air pollutant concentrations. Air quality is a complex function of emissions, meteorology and topography, and artificial neural networks (ANNs) provide a sound framework for relating these variables. In this study, we investigated the feasibility of using an ANN model with meteorological parameters as input variables to predict ozone concentration in the urban area of Jinan, a metropolis in Northern China. We first found that the architecture of the network of neurons had little effect on the predictive capability of the ANN model. A parsimonious ANN model with 6 routinely monitored meteorological parameters and one temporal covariate (the category of day, i.e. working day, legal holiday or regular weekend) as input variables was identified, where the 7 input variables were selected following a forward selection procedure. Compared with the benchmarking ANN model with 9 meteorological and photochemical parameters as input variables, the predictive capability of the parsimonious ANN model was acceptable. Its predictive capability was also verified in terms of warning success ratio during pollution episodes. Finally, uncertainty and sensitivity analyses were performed based on Monte Carlo simulations (MCS). It was concluded that the ANN could properly predict the ambient ozone level. Maximum temperature, atmospheric pressure, sunshine duration and maximum wind speed were identified as the predominant input variables significantly influencing the prediction of ambient ozone concentrations.
NASA Astrophysics Data System (ADS)
Han, Feng; Zheng, Yi
2018-06-01
Significant input uncertainty is a major source of error in watershed water quality (WWQ) modeling. It remains challenging to address input uncertainty in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived considering a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments were performed based on a synthetic nitrate pollution case and on a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead modeling-based management decisions if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach to the joint inference of parameters and inputs.
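The shape of the formal likelihood described above (lag-1 autocorrelated, heteroscedastic residuals) can be sketched as follows. This is a simplified illustration, not the BAIPU likelihood: it uses a Gaussian density in place of the full Skew Exponential Power distribution (to which the SEP reduces for zero skewness/kurtosis parameters), and all rates and coefficients are invented.

```python
import numpy as np

# Residuals are whitened with a lag-1 autocorrelation coefficient phi, then
# scaled by a heteroscedastic standard deviation that grows with the
# simulated value; a Gaussian density stands in for the SEP.
def log_likelihood(obs, sim, phi, sigma0, sigma1):
    e = obs - sim                          # raw residuals
    a = e[1:] - phi * e[:-1]               # remove lag-1 autocorrelation
    sigma = sigma0 + sigma1 * sim[1:]      # sd scales with simulated output
    return float(np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                        - 0.5 * (a / sigma) ** 2))

rng = np.random.default_rng(3)
sim = 5.0 + np.sin(np.linspace(0, 6, 200))            # toy "model output"
e = np.zeros(200)
for t in range(1, 200):                               # generate AR(1) errors
    e[t] = 0.6 * e[t - 1] + (0.1 + 0.05 * sim[t]) * rng.normal()
obs = sim + e

# The likelihood prefers the generating phi over ignoring the correlation.
print(log_likelihood(obs, sim, 0.6, 0.1, 0.05) >
      log_likelihood(obs, sim, 0.0, 0.1, 0.05))
```

In an MCMC setting such as DREAM(ZS), this function would be evaluated for each proposed parameter (and input) realization.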
Evaluation of FEM engineering parameters from insitu tests
DOT National Transportation Integrated Search
2001-12-01
The study looked critically at insitu test methods (SPT, CPT, DMT, and PMT) as a means for developing finite element constitutive model input parameters. The first phase of the study examined insitu test derived parameters with laboratory triaxial te...
ENVIRONMENTAL STATISTICS INITIATIVE
EPA's Center of Excellence (COE) for Environmental Computational Science is intended to integrate cutting-edge science and emerging information technology (IT) solutions for input to the decision-making process. Complementing the research goals of EPA's COE, the NERL has initiat...
A robust momentum management and attitude control system for the space station
NASA Technical Reports Server (NTRS)
Speyer, J. L.; Rhee, Ihnseok
1991-01-01
A game theoretic controller is synthesized for momentum management and attitude control of the Space Station in the presence of uncertainties in the moments of inertia. Full state information is assumed since attitude rates are assumed to be very accurately measured. By an input-output decomposition of the uncertainty in the system matrices, the parameter uncertainties in the dynamic system are represented as an unknown gain associated with an internal feedback loop (IFL). The input and output matrices associated with the IFL form directions through which the uncertain parameters affect system response. If the quadratic form of the IFL output augments the cost criterion, then enhanced parameter robustness is anticipated. By considering the input and the input disturbance from the IFL as two noncooperative players, a linear-quadratic differential game is constructed. The solution in the form of a linear controller is used for synthesis. Inclusion of the external disturbance torques results in a dynamic feedback controller which consists of conventional PID (proportional integral derivative) control and cyclic disturbance rejection filters. It is shown that the game theoretic design allows large variations in the inertias in directions of importance.
Enhancement of CFD validation exercise along the roof profile of a low-rise building
NASA Astrophysics Data System (ADS)
Deraman, S. N. C.; Majid, T. A.; Zaini, S. S.; Yahya, W. N. W.; Abdullah, J.; Ismail, M. A.
2018-04-01
The aim of this study is to enhance the validation of a CFD exercise along the roof profile of a low-rise building. An isolated gabled-roof house with a 26.6° roof pitch was simulated to obtain the pressure coefficients around the house. Validation of CFD analysis against experimental data requires many input parameters. This study performed CFD simulation based on the data from a previous study; where the input parameters were not clearly stated, new input parameters were established from the open literature. The numerical simulations were performed in FLUENT 14.0 by applying the Computational Fluid Dynamics (CFD) approach based on the steady RANS equations together with the RNG k-ɛ model. The CFD results were then analysed using quantitative tests (statistical analysis) and compared with CFD results from the previous study. The statistical analysis results from the ANOVA test and error measures showed that the CFD results from the current study produced good agreement and exhibited the smallest error compared to the previous study. All the input data used in this study can be extended to other types of CFD simulation involving wind flow over an isolated single-storey house.
NASA Astrophysics Data System (ADS)
Korelin, Ivan A.; Porshnev, Sergey V.
2018-05-01
A model of a non-stationary queuing system (NQS) is described. The input of this model receives a flow of requests with input rate λ(t) = λ_det(t) + λ_rnd(t), where λ_det(t) is a deterministic function of time and λ_rnd(t) is a random function. The parameters of λ_det(t) and λ_rnd(t) were identified on the basis of statistical information on visitor flows collected from various Russian football stadiums. Statistical modeling of the NQS was carried out and average dependences were obtained for: the length of the queue of requests waiting for service, the average wait time for service, and the number of visitors admitted to the stadium over time. It is shown that these dependencies can be characterized by the following parameters: the number of visitors admitted by the time of the match; the time required to serve all incoming visitors; the maximum value; and the argument at which the studied dependence reaches its maximum. The dependences of these parameters on the energy ratio of the deterministic and random components of the input rate are investigated.
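A statistical simulation of this kind of non-stationary queue can be sketched in a few lines. This is a hedged illustration, not the paper's fitted model: the Gaussian-shaped deterministic surge toward kickoff, the gamma-distributed random rate component, and the gate capacity are all invented, whereas the paper identifies λ_det and λ_rnd from stadium entry data.

```python
import numpy as np

# Minute-by-minute simulation of a non-stationary queue: Poisson arrivals
# with rate lambda_det(t) + lambda_rnd(t), served by gates of capacity mu.
rng = np.random.default_rng(4)
T = 120                                  # minutes before kickoff
mu = 40                                  # visitors the gates serve per minute
t = np.arange(T)
lam_det = 60 * np.exp(-((t - 90) / 25.0) ** 2)     # deterministic surge
lam_rnd = rng.gamma(shape=2.0, scale=2.0, size=T)  # random component >= 0

queue, served_total, queue_hist = 0, 0, []
for k in range(T):
    queue += rng.poisson(lam_det[k] + lam_rnd[k])  # arrivals this minute
    served = min(queue, mu)                        # gates work at capacity
    queue -= served
    served_total += served
    queue_hist.append(queue)

# Summary parameters of the kind the abstract characterizes: admitted total,
# peak queue length, and the time at which the peak occurs.
print(served_total, max(queue_hist), int(np.argmax(queue_hist)))
```

Averaging such runs over many random seeds would give the average statistical dependences the abstract describes.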
On the fusion of tuning parameters of fuzzy rules and neural network
NASA Astrophysics Data System (ADS)
Mamuda, Mamman; Sathasivam, Saratha
2017-08-01
Learning a fuzzy rule-based system with a neural network can lead to a precise and valuable understanding of several problems. Fuzzy logic offers a simple way to reach a definite conclusion based upon vague, ambiguous, imprecise, noisy or missing input information. Conventional learning algorithms for tuning the parameters of fuzzy rules using training input-output data usually end in a weak firing state, which weakens the fuzzy rule and makes it insecure for a multiple-input fuzzy system. In this paper, we introduce a new learning algorithm for tuning the parameters of the fuzzy rules together with a radial basis function neural network (RBFNN) on training input-output data, based on the gradient descent method. The new learning algorithm addresses the problem of weak firing under the conventional method. We illustrate the efficiency of our new learning algorithm by means of numerical examples. MATLAB R2014(a) software was used to simulate our results. The results show that the new learning method has the advantage of training the fuzzy rules without tampering with the fuzzy rule table, which allows a membership function of a rule to be used more than once in the fuzzy rule base.
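The RBFNN ingredient combined with the fuzzy rules above can be sketched as follows. This is a hedged illustration of gradient-descent tuning of an RBF network's output weights only, not the paper's fused algorithm: the centers, width, toy data, and learning rate are all invented.

```python
import numpy as np

# Gradient-descent tuning of RBFNN output weights on toy input-output data.
rng = np.random.default_rng(5)
X = np.linspace(-3, 3, 80)
y = np.sin(X)                              # toy training target

centers = np.linspace(-3, 3, 10)           # fixed Gaussian basis centers
width = 0.8
Phi = np.exp(-((X[:, None] - centers[None, :]) / width) ** 2)  # RBF features

w = np.zeros(10)
lr = 0.5
for _ in range(5000):
    err = Phi @ w - y
    w -= lr * Phi.T @ err / len(X)         # gradient of mean squared error

mse = float(np.mean((Phi @ w - y) ** 2))
print(round(mse, 5))
```

In the paper's scheme the same gradient-descent machinery also adjusts the fuzzy-rule parameters, leaving the rule table itself untouched.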
NASA Technical Reports Server (NTRS)
Napolitano, Marcello R.
1996-01-01
This progress report presents the results of an investigation focused on parameter identification for the NASA F/A-18 HARV. This aircraft was used in the high alpha research program at the NASA Dryden Flight Research Center. In this study the longitudinal and lateral-directional stability derivatives are estimated from flight data using the Maximum Likelihood method coupled with a Newton-Raphson minimization technique. The objective is to estimate an aerodynamic model describing the aircraft dynamics over a range of angle of attack from 5 deg to 60 deg. The mathematical model is built using the traditional static and dynamic derivative buildup. Flight data used in this analysis were from a variety of maneuvers. The longitudinal maneuvers included large amplitude multiple doublets, optimal inputs, frequency sweeps, and pilot pitch stick inputs. The lateral-directional maneuvers consisted of large amplitude multiple doublets, optimal inputs and pilot stick and rudder inputs. The parameter estimation code pEst, developed at NASA Dryden, was used in this investigation. Results of the estimation process from alpha = 5 deg to alpha = 60 deg are presented and discussed.
NASA Astrophysics Data System (ADS)
Hussain, Kamal; Pratap Singh, Satya; Kumar Datta, Prasanta
2013-11-01
A numerical investigation is presented to show the dependence of the patterning effect (PE) of an amplified signal in a bulk semiconductor optical amplifier (SOA) and an optical bandpass filter based amplifier on various input signal and filter parameters, considering the cases of both including and excluding intraband effects in the SOA model. The simulation shows that the variation of PE with input energy has a characteristic nature which is similar for both cases. However, the variation of PE with pulse width is quite different for the two cases, PE being independent of the pulse width when intraband effects are neglected in the model. We find a simple relationship between the PE and the signal pulse width. Using a simple treatment we study the effect of amplified spontaneous emission (ASE) on PE and find that ASE has almost no effect on the PE in the range of energy considered here. The optimum filter parameters are determined to obtain an acceptable extinction ratio greater than 10 dB and a PE less than 1 dB for the amplified signal over a wide range of input signal energies and bit-rates.
Robust momentum management and attitude control system for the Space Station
NASA Technical Reports Server (NTRS)
Rhee, Ihnseok; Speyer, Jason L.
1992-01-01
A game theoretic controller is synthesized for momentum management and attitude control of the Space Station in the presence of uncertainties in the moments of inertia. Full state information is assumed since attitude rates are assumed to be very accurately measured. By an input-output decomposition of the uncertainty in the system matrices, the parameter uncertainties in the dynamic system are represented as an unknown gain associated with an internal feedback loop (IFL). The input and output matrices associated with the IFL form directions through which the uncertain parameters affect system response. If the quadratic form of the IFL output augments the cost criterion, then enhanced parameter robustness is anticipated. By considering the input and the input disturbance from the IFL as two noncooperative players, a linear-quadratic differential game is constructed. The solution in the form of a linear controller is used for synthesis. Inclusion of the external disturbance torques results in a dynamic feedback controller which consists of conventional PID (proportional integral derivative) control and cyclic disturbance rejection filters. It is shown that the game theoretic design allows large variations in the inertias in directions of importance.
NASA Astrophysics Data System (ADS)
Handley, Heather K.; Turner, Simon; Afonso, Juan C.; Dosseto, Anthony; Cohen, Tim
2013-02-01
Quantifying the rates of landscape evolution in response to climate change is inhibited by the difficulty of dating the formation of continental detrital sediments. We present uranium isotope data for Cooper Creek palaeochannel sediments from the Lake Eyre Basin in semi-arid South Australia in order to attempt to determine the formation ages, and hence residence times, of the sediments. To calculate the amount of recoil loss of 234U, a key input parameter used in the comminution approach, we use two suggested methods (weighted geometric and surface area measurement with an incorporated fractal correction) and typical assumed input parameter values found in the literature. The calculated recoil loss factors and comminution ages are highly dependent on the method of recoil loss factor determination and the chosen assumptions. To appraise the ramifications of the assumptions inherent in the comminution age approach and determine the individual and combined comminution age uncertainties associated with each variable, Monte Carlo simulations were conducted for a synthetic sediment sample. Using a reasonable associated uncertainty for each input factor and including variations in the source rock and measured (234U/238U) ratios, the total combined uncertainty on comminution age in our simulation (for both methods of recoil loss factor estimation) can amount to ±220-280 ka. The modelling shows that small changes in assumed input values translate into large effects on absolute comminution age. To improve the accuracy of the technique and provide meaningful absolute comminution ages, much tighter constraints are required on the assumptions for input factors such as the fraction of α-recoil lost 234Th and the initial (234U/238U) ratio of the source material. 
In order to be able to directly compare calculated comminution ages produced by different research groups, the standardisation of pre-treatment procedures, recoil loss factor estimation and assumed input parameter values is required. We suggest a set of input parameter values for such a purpose. Additional considerations for calculating comminution ages of sediments deposited within large, semi-arid drainage basins are discussed.
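The Monte Carlo propagation described above can be sketched with the standard comminution-age relation A(t) = (1 − f) + (A0 − (1 − f))·exp(−λ234·t), solved for t. This is a hedged illustration: the uncertainty ranges assigned to the recoil loss fraction f, the source ratio A0, and the measured ratio A are invented for a synthetic sample, not the paper's calibrated values.

```python
import numpy as np

# Monte Carlo propagation of input-factor uncertainty into comminution age.
rng = np.random.default_rng(6)
lam234 = np.log(2.0) / 245250.0            # 234U decay constant, 1/yr
N = 100_000

f = rng.normal(0.10, 0.02, N)              # recoil loss fraction (assumed)
A0 = rng.normal(1.00, 0.005, N)            # initial (234U/238U) of source
A = rng.normal(0.95, 0.003, N)             # measured (234U/238U)

sec = 1.0 - f                              # value approached at t -> infinity
valid = (A > sec) & (A0 > sec)             # age defined only above this
t = -np.log((A[valid] - sec[valid]) / (A0[valid] - sec[valid])) / lam234

lo, hi = np.percentile(t / 1000.0, [2.5, 97.5])   # ka
print(round(float(np.median(t / 1000.0))), round(float(hi - lo)))
```

Even modest uncertainty on f dominates the spread of ages, echoing the abstract's finding that small changes in assumed input values translate into large effects on absolute comminution age.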
The Use and Abuse of Limits of Detection in Environmental Analytical Chemistry
Brown, Richard J. C.
2008-01-01
The limit of detection (LoD) serves as an important method performance measure that is useful for the comparison of measurement techniques and the assessment of likely signal-to-noise performance, especially in environmental analytical chemistry. However, the LoD is only truly related to the precision characteristics of the analytical instrument employed for the analysis and the content of analyte in the blank sample. This article discusses how other criteria, such as sampling volume, can serve to distort the quoted LoD artificially and make comparison between various analytical methods inequitable. In order to compare LoDs between methods properly, it is necessary to state clearly all of the input parameters relating to the measurements that have been used in the calculation of the LoD. Additionally, the article argues that the use of LoDs in contexts other than the comparison of the attributes of analytical methods, in particular when reporting analytical results, may be confusing, less informative than quoting the actual result with an accompanying statement of uncertainty, and may act to bias descriptive statistics. PMID:18690384
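The sampling-volume distortion the article describes can be shown numerically. This is a hedged illustration using one common LoD convention, 3 × (sd of blank signal) / (calibration slope); the blank readings and slope below are invented.

```python
import statistics

# Instrument-level LoD from blank precision and calibration slope, then the
# same LoD quoted per assumed sampling volume: the quoted concentration LoD
# improves ten-fold just by assuming a ten-fold larger sampled volume.
blank_signals = [0.021, 0.019, 0.023, 0.020, 0.018, 0.022, 0.021]  # a.u.
slope = 0.50            # signal per ng of analyte (illustrative calibration)

sd_blank = statistics.stdev(blank_signals)
lod_mass_ng = 3 * sd_blank / slope            # instrument-level LoD, ng

for volume_m3 in (1.0, 10.0):                 # assumed air sampling volumes
    print(volume_m3, round(lod_mass_ng / volume_m3, 5))  # ng/m3
```

The instrument-level LoD is identical in both rows; only the assumed volume differs, which is why the article insists that such input parameters be stated alongside any quoted LoD.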
Paracousti-UQ: A Stochastic 3-D Acoustic Wave Propagation Algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Preston, Leiph
Acoustic full waveform algorithms, such as Paracousti, provide deterministic solutions in complex, 3-D variable environments. In reality, environmental and source characteristics are often only known in a statistical sense. Thus, to fully characterize the expected sound levels within an environment, this uncertainty in environmental and source factors should be incorporated into the acoustic simulations. Performing Monte Carlo (MC) simulations is one method of assessing this uncertainty, but it can quickly become computationally intractable for realistic problems. An alternative method, using the technique of stochastic partial differential equations (SPDE), allows computation of the statistical properties of output signals at a fraction of the computational cost of MC. Paracousti-UQ solves the SPDE system of 3-D acoustic wave propagation equations and provides estimates of the uncertainty of the output simulated wave field (e.g., amplitudes, waveforms) based on estimated probability distributions of the input medium and source parameters. This report describes the derivation of the stochastic partial differential equations, their implementation, and comparison of Paracousti-UQ results with MC simulations using simple models.
75 FR 44930 - Stakeholder Input; Revisions to Water Quality Standards Regulation
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-30
... Input; Revisions to Water Quality Standards Regulation AGENCY: Environmental Protection Agency. ACTION... national rulemaking to make a limited set of targeted changes to EPA's water quality standards regulation... rulemaking, and to hear views from the public regarding possible changes to EPA's water quality standards...
Monitoring water quality in Northwest Atlantic coastal waters using dinoflagellate cysts
Nutrient pollution is a major environmental problem in many coastal waters around the US. Determining the total input of nutrients to estuaries is a challenge. One method to evaluate nutrient input is through nutrient loading models. Another method relies upon using indicators as...
From field to region yield predictions in response to pedo-climatic variations in Eastern Canada
NASA Astrophysics Data System (ADS)
JÉGO, G.; Pattey, E.; Liu, J.
2013-12-01
The increase in global population, coupled with new pressures to produce energy and bioproducts from agricultural land, requires an increase in crop productivity. However, the influence of climate and soil variations on crop production and environmental performance is not fully understood and accounted for in defining more sustainable and economical management strategies. Regional crop modeling can be a great tool for understanding the impact of climate variations on crop production, for planning grain handling and for assessing the impact of agriculture on the environment, but it is often limited by the availability of input data. The STICS ("Simulateur mulTIdisciplinaire pour les Cultures Standard") crop model, developed by INRA (France), is a functional crop model which has a built-in module to optimize several input parameters by minimizing the difference between calculated and measured output variables, such as Leaf Area Index (LAI). The STICS crop model was adapted to the short growing season of the Mixedwood Plains Ecozone using field experiment results, to predict biomass and yield of soybean, spring wheat and corn. To minimize the number of inferences required for regional applications, 'generic' cultivars rather than specific ones have been calibrated in STICS. After the calibration of several model parameters, the root mean square error (RMSE) of yield and biomass predictions ranged from 10% to 30% for the three crops. A bit more scattering was obtained for LAI (20%
NASA Astrophysics Data System (ADS)
Vijaya Ramnath, B.; Sharavanan, S.; Jeykrishnan, J.
2017-03-01
Nowadays, quality plays a vital role in all products. Hence, developments in manufacturing processes focus on the fabrication of composites with high dimensional accuracy while incurring low manufacturing cost. In this work, an investigation of machining parameters has been performed on a jute-flax hybrid composite. Two important response characteristics, surface roughness and material removal rate, are optimized by employing three machining input parameters. The input variables considered are drill bit diameter, spindle speed and feed rate. Machining is done on a CNC vertical drilling machine at different levels of the drilling parameters. Taguchi's L16 orthogonal array is used for optimizing individual tool parameters. Analysis of variance is used to find the significance of individual parameters. The simultaneous optimization of the process parameters is done by grey relational analysis. The results of this investigation show that spindle speed and drill bit diameter have the greatest effect on material removal rate and surface roughness, followed by feed rate.
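Grey relational analysis, as used above for simultaneous optimization, can be sketched as follows. The response values, the distinguishing coefficient ζ = 0.5, and the four runs shown are illustrative assumptions, not the paper's L16 data:

```python
import numpy as np

# Hypothetical responses for 4 runs: material removal rate (larger-better)
# and surface roughness Ra (smaller-better). Values are illustrative.
mrr = np.array([12.0, 18.5, 15.2, 21.0])
ra  = np.array([3.2, 2.1, 2.8, 1.9])

def normalize(x, larger_better):
    """Map each response onto [0, 1], 1 = ideal."""
    if larger_better:
        return (x - x.min()) / (x.max() - x.min())
    return (x.max() - x) / (x.max() - x.min())

def grey_coeff(norm, zeta=0.5):
    """Grey relational coefficient with distinguishing coefficient zeta."""
    delta = 1.0 - norm                    # deviation from the ideal sequence
    return (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# Grey relational grade: mean coefficient over both responses
grade = (grey_coeff(normalize(mrr, True)) + grey_coeff(normalize(ra, False))) / 2
best_run = int(np.argmax(grade))          # run closest to ideal on both criteria
print(grade, best_run)
```

The run with the highest grade is the multi-response optimum; ANOVA on the grades then apportions the influence of each input parameter.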
Engine control techniques to account for fuel effects
Kumar, Shankar; Frazier, Timothy R.; Stanton, Donald W.; Xu, Yi; Bunting, Bruce G.; Wolf, Leslie R.
2014-08-26
A technique for engine control to account for fuel effects including providing an internal combustion engine and a controller to regulate operation thereof, the engine being operable to combust a fuel to produce an exhaust gas; establishing a plurality of fuel property inputs; establishing a plurality of engine performance inputs; generating engine control information as a function of the fuel property inputs and the engine performance inputs; and accessing the engine control information with the controller to regulate at least one engine operating parameter.
NASA Astrophysics Data System (ADS)
Caruso, T.; Rühl, J.; Sciortino, R.; Marra, F. P.; La Scalia, G.
2014-10-01
The Common Agricultural Policy of the European Union grants subsidies for olive production. Areas of intensified olive farming will be of major importance for the increasing demand for oil production in the next decades, and countries with a high ratio of intensively and super-intensively managed olive groves will be more competitive than others, since they are able to reduce production costs. It can be estimated that about 25-40% of Sicilian oliviculture must be defined as "marginal". Modern olive cultivation systems, which permit the mechanization of pruning and harvest operations, are limited. Agronomists, landscape planners, policy decision-makers and other professionals have a growing need for accurate and cost-effective information on land use in general and agronomic parameters in particular. The availability of high spatial resolution imagery has enabled researchers to propose analysis tools at the agricultural parcel and tree level. In our study, we test the performance of WorldView-2 imagery for the detection of olive groves and the delineation of olive tree crowns, using an object-oriented approach to image classification in combined use with LIDAR data. We selected two sites, which differ in their environmental conditions and in the agronomic parameters of their olive grove cultivation. The main advantages of the proposed methodology are the small quantity of input data required and its potential for automation. However, it should be applied in other study areas to test whether the good accuracy assessment results can be confirmed. Data extracted by the proposed methodology can be used as input for decision-making support systems for olive grove management.
Positional glow curve simulation for thermoluminescent detector (TLD) system design
NASA Astrophysics Data System (ADS)
Branch, C. J.; Kearfott, K. J.
1999-02-01
Multi- and thin-element dosimeters, variable heating rate schemes, and glow-curve analysis have been employed to improve environmental and personnel dosimetry using thermoluminescent detectors (TLDs). Detailed analysis of the effects of errors and optimization of techniques would be highly desirable. However, an understanding of the relationship between TL light production, light attenuation, and precise heating schemes is made difficult by the experimental challenges involved in measuring positional TL light production and temperature variations as a function of time. This work reports the development of a general-purpose computer code, the thermoluminescent detector simulator TLD-SIM, to simulate the heating of any TLD type using a variety of conventional and experimental heating methods, including pulsed focused or unfocused lasers with Gaussian or uniform cross sections, planchet, hot gas, hot finger, optical, infrared, or electrical heating. TLD-SIM has been used to study the impact on TL light production of varying the input parameters, which include: detector composition, heat capacity, heat conductivity, physical size, and density; trapped electron density, the frequency factor of oscillation of electrons in the traps, and the trap-conduction band potential energy difference; heating scheme source terms and heat transfer boundary conditions; and TL light scatter and attenuation coefficients. Temperature profiles and glow curves as a function of position and time, as well as the corresponding temporally and/or spatially integrated glow values, may be plotted while varying any of the input parameters. Examples illustrating TLD system functions, including glow curve variability, will be presented. The flexible capabilities of TLD-SIM promise to enable improved TLD system design.
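The trap parameters listed above (trapped electron density, frequency factor, trap depth) enter glow-curve simulation through the standard first-order (Randall-Wilkins) kinetics. A minimal sketch with illustrative parameter values, not TLD-SIM itself, is:

```python
import numpy as np

# First-order (Randall-Wilkins) glow-curve kinetics. Parameter values
# below are illustrative, chosen only to produce a plausible peak.
k = 8.617e-5          # Boltzmann constant, eV/K
E = 1.0               # trap depth (trap-conduction band energy), eV
s = 1e12              # frequency factor, 1/s
beta = 1.0            # linear heating rate, K/s
n0 = 1.0              # initial trapped-electron density (normalized)

T = np.linspace(300.0, 600.0, 3000)          # temperature ramp, K
rate = s * np.exp(-E / (k * T))              # escape probability per second
# Cumulative trap depletion: (s / beta) * integral of exp(-E/kT') dT'
depletion = np.cumsum(rate) * (T[1] - T[0]) / beta
intensity = n0 * rate * np.exp(-depletion)   # TL intensity vs temperature

T_peak = T[np.argmax(intensity)]
print(f"glow peak near {T_peak:.0f} K")
```

Sweeping E, s, or the heating rate beta shifts the peak position and shape, which is the kind of parameter study the abstract describes.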
Silvestro, Paolo Cosmo; Pignatti, Stefano; Yang, Hao; Yang, Guijun; Pascucci, Simone; Castaldi, Fabio; Casa, Raffaele
2017-01-01
Process-based models can be usefully employed for the assessment of the field- and regional-scale impact of drought on crop yields. However, in many instances, especially when they are used at the regional scale, it is necessary to identify the parameters and input variables that most influence the outputs and to assess how their influence varies when climatic and environmental conditions change. In this work, two different crop models able to represent yield response to water, Aquacrop and SAFYE, were compared with the aim of quantifying their complexity and plasticity through Global Sensitivity Analysis (GSA), using the Morris and EFAST (Extended Fourier Amplitude Sensitivity Test) techniques, for moderately to strongly water-limited climate scenarios. Although the rankings of the sensitivity indices were influenced by the scenarios used, the correlation among the rankings, higher for SAFYE than for Aquacrop and assessed by the top-down correlation coefficient (TDCC), revealed clear patterns. Parameters and input variables related to phenology and to water stress physiological processes were found to be the most influential for Aquacrop. For SAFYE, it was found that water stress could be inferred indirectly from the processes regulating leaf growth described in the original SAFY model. SAFYE has a lower complexity and plasticity than Aquacrop, making it more suitable for less data-demanding regional-scale applications when the only objective is the assessment of crop yield and no detailed information is sought on the mechanisms of the stress factors limiting it.
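The top-down correlation coefficient used above to compare sensitivity rankings across scenarios is the Pearson correlation of Savage scores. A minimal sketch, with hypothetical parameter rankings, is:

```python
import numpy as np

def savage_scores(ranks):
    """Savage scores weight agreement at the top of a ranking:
    rank i (1 = most influential) gets sum_{j=i..n} 1/j."""
    n = len(ranks)
    # tail[i] = sum of 1/j for j from i+1 to n (0-based index)
    tail = np.cumsum(1.0 / np.arange(1, n + 1)[::-1])[::-1]
    return tail[np.asarray(ranks) - 1]

def tdcc(ranks_a, ranks_b):
    """Top-down correlation coefficient between two rankings."""
    sa, sb = savage_scores(ranks_a), savage_scores(ranks_b)
    return np.corrcoef(sa, sb)[0, 1]

# Hypothetical rankings of five parameters from two GSA scenarios
scenario_1 = [1, 2, 3, 4, 5]
scenario_2 = [2, 1, 3, 5, 4]   # disagrees, but mostly near the bottom
print(f"TDCC = {tdcc(scenario_1, scenario_2):.3f}")
```

Unlike a plain rank correlation, disagreements among the least influential parameters barely lower the TDCC, which is why it suits sensitivity-ranking comparisons.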
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, J.E.; Roussin, R.W.; Gilpin, H.
A version of the CRAC2 computer code applicable for use in analyses of consequences and risks of reactor accidents in case work for environmental statements has been implemented for use on the Nuclear Regulatory Commission Data General MV/8000 computer system. Input preparation is facilitated through the use of an interactive computer program which operates on an IBM personal computer. The resulting CRAC2 input deck is transmitted to the MV/8000 by using an error-free file transfer mechanism. To facilitate the use of CRAC2 at NRC, relevant background material on input requirements and model descriptions has been extracted from four reports: ''Calculations of Reactor Accident Consequences,'' Version 2, NUREG/CR-2326 (SAND81-1994); ''CRAC2 Model Descriptions,'' NUREG/CR-2552 (SAND82-0342); ''CRAC Calculations for Accident Sections of Environmental Statements,'' NUREG/CR-2901 (SAND82-1693); and ''Sensitivity and Uncertainty Studies of the CRAC2 Computer Code,'' NUREG/CR-4038 (ORNL-6114). When this background information is combined with instructions on the input processor, this report provides a self-contained guide for preparing CRAC2 input data with a specific orientation toward applications on the MV/8000. 8 refs., 11 figs., 10 tabs.
Sand dredging and environmental efficiency of artisanal fishermen in Lagos state, Nigeria.
Sowunmi, Fatai A; Hogarh, Jonathan N; Agbola, Peter O; Atewamba, Calvin
2016-03-01
Environmentally detrimental input (water turbidity) and conventional production inputs were considered within the framework of stochastic frontier analysis to estimate the technical and environmental efficiencies of fishermen in sand dredging and non-dredging areas. Environmental efficiency was low among fishermen in the sand dredging areas. Educational status and experience in fishing and sand dredging were the factors influencing environmental efficiency in the sand dredging areas. The average quantity of fish caught per labour-hour was higher among fishermen in the non-dredging areas. Fishermen in the fishing communities around the dredging areas travelled long distances in order to reduce the negative effect of sand dredging on their fishing activity. The study affirmed large household sizes among fishermen. Regulating the activities of sand dredgers by restricting licenses for sand dredging to non-fishing communities, as well as intensifying family planning campaigns in fishing communities to reduce the negative effect of large household sizes on fishing, is imperative for the sustainability of artisanal fishing.
Automated Structural Optimization System (ASTROS). Volume 1. Theoretical Manual
1988-12-01
corresponding frequency list are given by Equation C-9. The second set of parameters is the frequency list used in solving Equation C-3 to obtain the response... vector u(ω). This frequency list is: ω = 2πf0, 2πf1, 2πf2, ..., 2πfn (C-20). The frequency lists ω̂ and ω are not necessarily equal. While setting... alternative methods are used to input the frequency list ω. For the first method, the frequency list ω is input via two parameters: Δf (C-21
Thermophysical properties of hydrophobised lime plaster - Experimental analysis of moisture effect
NASA Astrophysics Data System (ADS)
Pavlíková, Milena; Pernicová, Radka; Pavlík, Zbyšek
2016-07-01
Lime plasters are the most popular finishing materials in the renewal of historical buildings and cultural monuments. Because of their limited durability, new materials and design solutions are investigated in order to improve plaster performance in harmful environmental conditions. For practical use, the plaster's mechanical resistivity and compatibility with the substrate are the most decisive material parameters. However, the plaster's hygric and thermal parameters, which affect the overall hygrothermal function of the renovated structures, are also of particular importance. On this account, the effect of moisture content on the thermophysical properties of a newly designed lime plaster containing a hydrophobic admixture is analysed in the paper. For comparative purposes, reference lime and cement-lime plasters are tested as well. Basic characterization of the tested materials is done using bulk density, matrix density, and porosity measurements. Thermal conductivity and volumetric heat capacity over a broad range of moisture content are measured experimentally using a transient impulse method. The obtained data reveal a significant increase in both studied thermal parameters with increasing moisture content and give information on the plasters' behaviour in a highly humid environment and/or in the case of possible direct contact with liquid water. The measured material parameters will be stored in a material database, where they can be used as input data for computational modelling of coupled heat and moisture transport in this type of porous building material.
Gussmann, Maya; Kirkeby, Carsten; Græsbøll, Kaare; Farre, Michael; Halasa, Tariq
2018-07-14
Intramammary infections (IMI) in dairy cattle lead to economic losses for farmers, both through reduced milk production and disease control measures. We present the first strain-, cow- and herd-specific bio-economic simulation model of intramammary infections in a dairy cattle herd. The model can be used to investigate the cost-effectiveness of different prevention and control strategies against IMI. The objective of this study was to describe a transmission framework, which simulates spread of IMI causing pathogens through different transmission modes. These include the traditional contagious and environmental spread and a new opportunistic transmission mode. In addition, the within-herd transmission dynamics of IMI causing pathogens were studied. Sensitivity analysis was conducted to investigate the influence of input parameters on model predictions. The results show that the model is able to represent various within-herd levels of IMI prevalence, depending on the simulated pathogens and their parameter settings. The parameters can be adjusted to include different combinations of IMI causing pathogens at different prevalence levels, representing herd-specific situations. The model is most sensitive to varying the transmission rate parameters and the strain-specific recovery rates from IMI. It can be used for investigating both short term operational and long term strategic decisions for the prevention and control of IMI in dairy cattle herds. Copyright © 2018 Elsevier Ltd. All rights reserved.
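A minimal sketch of an SIS-type within-herd model combining two of the transmission modes described above, contagious (pressure proportional to the number of infected cows) and environmental (constant background pressure). All rates, the herd size, and the initial state are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

rng = np.random.default_rng(1)

n_cows, days = 100, 365
beta_contagious = 0.002      # infection pressure per infected cow per day
beta_environment = 0.0005    # background environmental pressure per day
recovery = 0.01              # recovery rate per infected cow per day

infected = 5                 # initial number of infected cows
history = []
for day in range(days):
    # Daily infection probability from combined contagious + environmental pressure
    p_infect = 1 - np.exp(-(beta_contagious * infected + beta_environment))
    new_cases = rng.binomial(n_cows - infected, p_infect)
    cures = rng.binomial(infected, 1 - np.exp(-recovery))
    infected += new_cases - cures
    history.append(infected)

print(f"prevalence after one year: {history[-1] / n_cows:.2f}")
```

Running many replicates while varying the transmission and recovery rates is the kind of sensitivity analysis the abstract reports as most influential on model predictions.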
Integrated modeling approach for optimal management of water, energy and food security nexus
NASA Astrophysics Data System (ADS)
Zhang, Xiaodong; Vesselinov, Velimir V.
2017-03-01
Water, energy and food (WEF) are inextricably interrelated. Effective planning and management of limited WEF resources to meet current and future socioeconomic demands for sustainable development is challenging. WEF production/delivery may also produce environmental impacts; as a result, greenhouse-gas emission control will impact WEF nexus management as well. Nexus management for WEF security necessitates integrated tools for predictive analysis that are capable of identifying the tradeoffs among various sectors and generating cost-effective planning and management strategies and policies. To address these needs, we have developed an integrated model analysis framework and tool called WEFO. WEFO provides a multi-period socioeconomic model for predicting how to satisfy WEF demands based on model inputs representing production costs, socioeconomic demands, and environmental controls. WEFO is applied to quantitatively analyze the interrelationships and trade-offs among system components including energy supply, electricity generation, water supply-demand, and food production, as well as mitigation of environmental impacts. WEFO is demonstrated on a hypothetical nexus management problem consistent with real-world management scenarios. Model parameters are analyzed using global sensitivity analysis and their effects on total system cost are quantified. The obtained results demonstrate how these types of analyses can be helpful for decision-makers and stakeholders to make cost-effective decisions for optimal WEF management.
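The kind of cost-minimizing allocation under cross-sector constraints that such nexus tools perform can be illustrated with a toy linear program. The decision variables, costs, and constraints below are invented for illustration and are not WEFO's actual formulation:

```python
from scipy.optimize import linprog

# Toy WEF allocation: meet a water demand at minimum cost by choosing
# between groundwater and desalination, where desalination also draws on
# a limited energy budget. All coefficients are illustrative.
# Decision variables: x0 = groundwater (Mm3), x1 = desalinated water (Mm3)
cost = [1.0, 4.0]                        # cost per Mm3 of each source

# Constraints in A_ub @ x <= b_ub form:
A_ub = [[-1.0, -1.0],                    # demand: x0 + x1 >= 50 Mm3
        [0.0, 3.0],                      # energy: 3 GWh/Mm3 * x1 <= 90 GWh
        [1.0, 0.0]]                      # aquifer limit: x0 <= 30 Mm3
b_ub = [-50.0, 90.0, 30.0]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)   # cheap groundwater at its limit, desalination fills the rest
```

Re-solving while perturbing the cost and constraint coefficients mimics the global sensitivity analysis of total system cost described in the abstract.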
Norms and values in sociohydrological models
NASA Astrophysics Data System (ADS)
Roobavannan, Mahendran; van Emmerik, Tim H. M.; Elshafei, Yasmina; Kandasamy, Jaya; Sanderson, Matthew R.; Vigneswaran, Saravanamuthu; Pande, Saket; Sivapalan, Murugesu
2018-02-01
Sustainable water resources management relies on understanding how societies and water systems coevolve. Many place-based sociohydrology (SH) modeling studies use proxies, such as environmental degradation, to capture key elements of the social component of system dynamics. Parameters of assumed relationships between environmental degradation and the human response to it are usually obtained through calibration. Since these relationships are not yet underpinned by social-science theories, confidence in the predictive power of such place-based sociohydrologic models remains low. The generalizability of SH models therefore requires major advances in incorporating more realistic relationships, underpinned by appropriate hydrological and social-science data and theories. The latter is a critical input, since human culture - especially values and norms arising from it - influences behavior and the consequences of behaviors. This paper reviews a key social-science theory that links cultural factors to environmental decision-making, assesses how to better incorporate social-science insights to enhance SH models, and raises important questions to be addressed in moving forward. This is done in the context of recent progress in sociohydrological studies and the gaps that remain to be filled. The paper concludes with a discussion of challenges and opportunities in terms of generalization of SH models and the use of available data to allow future prediction and model transfer to ungauged basins.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirchhoff, Denis; Montano, Marcelo; Ranieri, Victor Eduardo Lima
2007-05-15
This article discusses the limitations and implications to environmental management issues posed by the Environmental Licensing approach adopted in Sao Paulo State. In Brazil, Environmental Impact Assessment (EIA) is an essential precondition to the Environmental Licensing of activities and, in fact, it has been the most important and required tool for the licensing of projects. However, in 1994 the State of Sao Paulo implemented a simplified instrument called a 'Preliminary Environmental Report' in order to make the environmental licensing process faster. Since then, the Preliminary Environmental Report (PER) has had the role of indicating whether an EIA needs to be elaborated upon or not. The positives and negatives regarding technical, institutional and legal aspects related to the use of Preliminary Environmental Reports (rather than EIA) are discussed using the case study of a high-pressure natural gas pipeline between the cities of Sao Carlos and Porto Ferreira in the State of Sao Paulo. The main conclusion is that the Environmental Licensing process in Sao Paulo should not use PERs as the sole input to decision making about proposed activities, since the PER approach does not guarantee that the proposed activity is environmentally suitable, does not address locational issues or comparison of alternatives, and risk assessment issues are not considered in the earliest stages of assessment.
Soil organic carbon sequestration and tillage systems in Mediterranean environments
NASA Astrophysics Data System (ADS)
Francaviglia, Rosa; Di Bene, Claudia; Marchetti, Alessandro; Farina, Roberta
2016-04-01
Soil carbon sequestration is of special interest in Mediterranean areas, where rainfed cropping systems are prevalent, inputs of organic matter to soils are low and mostly rely on crop residues, while losses are high due to climatic and anthropic factors such as intensive and non-conservative farming practices. The adoption of reduced or no-tillage systems, characterized by a lower soil disturbance in comparison with conventional tillage, has proved to be positively effective on soil organic carbon (SOC) conservation and other physical and chemical processes, parameters or functions, e.g. erosion, compaction, ion retention and exchange, buffering capacity, water retention and aggregate stability. Moreover, soil biological and biochemical processes are usually improved by the reduction of tillage intensity. The work deals with results available in the scientific literature related to field experiments on arable crops performed in Italy, Greece, Morocco and Spain. Data were organized in a dataset containing the main environmental parameters (altitude, temperature, rainfall), soil tillage system information (conventional, minimum and no-tillage), soil parameters (bulk density, pH, particle size distribution and texture), crop type, rotation, management and length of the experiment in years, and initial SOCi and final SOCf stocks. Sampling sites are located between 33° 00' and 43° 32' latitude N, 2-860 m a.s.l., with mean annual temperature and rainfall in the range 10.9-19.6° C and 355-900 mm. SOC data, expressed in t C ha-1, have been evaluated both in terms of the Carbon Sequestration Rate, given by [(SOCf-SOCi)/length in years], and as the percentage change in comparison with the initial value [(SOCf-SOCi)/SOCi*100]. Data variability due to the different environmental, soil and crop management conditions that influence SOC sequestration and losses will be examined.
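The two SOC metrics defined above can be computed directly; the stock values below are illustrative, not data from the cited experiments:

```python
# Illustrative SOC stocks (t C / ha), not data from the cited field experiments
soc_initial, soc_final, years = 45.0, 48.6, 12

# Carbon Sequestration Rate: (SOCf - SOCi) / length in years
rate = (soc_final - soc_initial) / years              # t C ha^-1 yr^-1

# Percentage change relative to the initial stock: (SOCf - SOCi) / SOCi * 100
pct_change = (soc_final - soc_initial) / soc_initial * 100

print(f"sequestration rate: {rate:.2f} t C/ha/yr")    # 0.30
print(f"relative change:    {pct_change:.1f} %")      # 8.0
```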
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Yiqi; Ahlström, Anders; Allison, Steven D.
Soil carbon (C) is a critical component of Earth system models (ESMs) and its diverse representations are a major source of the large spread across models in the terrestrial C sink from the 3rd to 5th assessment reports of the Intergovernmental Panel on Climate Change (IPCC). Improving soil C projections is of a high priority for Earth system modeling in the future IPCC and other assessments. To achieve this goal, we suggest that (1) model structures should reflect real-world processes, (2) parameters should be calibrated to match model outputs with observations, and (3) external forcing variables should accurately prescribe the environmental conditions that soils experience. Firstly, most soil C cycle models simulate C input from litter production and C release through decomposition. The latter process has traditionally been represented by 1st-order decay functions, regulated primarily by temperature, moisture, litter quality, and soil texture. While this formulation well captures macroscopic SOC dynamics, better understanding is needed of their underlying mechanisms as related to microbial processes, depth-dependent environmental controls, and other processes that strongly affect soil C dynamics. Secondly, incomplete use of observations in model parameterization is a major cause of bias in soil C projections from ESMs. Optimal parameter calibration with both pool- and flux-based datasets through data assimilation is among the highest priorities for near-term research to reduce biases among ESMs. Thirdly, external variables are represented inconsistently among ESMs, leading to differences in modeled soil C dynamics. We recommend the implementation of traceability analyses to identify how external variables and model parameterizations influence SOC dynamics in different ESMs.
Overall, projections of the terrestrial C sink can be substantially improved when reliable datasets are available to select the most representative model structure, constrain parameters, and prescribe forcing fields.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1993-08-01
Before disposing of transuranic radioactive waste in the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with applicable long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories is conducting iterative performance assessments (PAs) of the WIPP for the DOE to provide interim guidance while preparing for a final compliance evaluation. This volume of the 1992 PA contains results of uncertainty and sensitivity analyses with respect to migration of gas and brine from the undisturbed repository. Additional information about the 1992 PA is provided in other volumes. Volume 1 contains an overview of WIPP PA and results of a preliminary comparison with 40 CFR 191, Subpart B. Volume 2 describes the technical basis for the performance assessment, including descriptions of the linked computational models used in the Monte Carlo analyses. Volume 3 contains the reference data base and values for input parameters used in consequence and probability modeling. Volume 4 contains uncertainty and sensitivity analyses with respect to the EPA's Environmental Standards for the Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191, Subpart B). Finally, guidance derived from the entire 1992 PA is presented in Volume 6. Results of the 1992 uncertainty and sensitivity analyses indicate that, conditional on the modeling assumptions and the assigned parameter-value distributions, the most important parameters for which uncertainty has the potential to affect gas and brine migration from the undisturbed repository are: initial liquid saturation in the waste, anhydrite permeability, biodegradation-reaction stoichiometry, gas-generation rates for both corrosion and biodegradation under inundated conditions, and the permeability of the long-term shaft seal.
NASA Astrophysics Data System (ADS)
Małoszewski, P.; Zuber, A.
1982-06-01
Three new lumped-parameter models have been developed for the interpretation of environmental radioisotope data in groundwater systems. Two of these models combine other simpler models, i.e. the piston flow model is combined either with the exponential model (exponential distribution of transit times) or with the linear model (linear distribution of transit times). The third model is based on a new solution to the dispersion equation which represents real systems more adequately than the conventional solution generally applied so far. The applicability of the models was tested by the reinterpretation of several known case studies (Modry Dul, Cheju Island, Rasche Spring and Grafendorf). It has been shown that two of these models, i.e. the exponential-piston flow model and the dispersive model, give better fitting than other simpler models. Thus, the obtained values of turnover times are more reliable, whereas the additional fitting parameter gives some information about the structure of the system. In the examples considered, in spite of a lower number of fitting parameters, the new models gave practically the same fitting as the multiparameter finite-state mixing-cell models. It has been shown that in the case of a constant tracer input a prior physical knowledge of the groundwater system is indispensable for determining the turnover time. The piston flow model commonly used for age determinations by the 14C method is an approximation applicable only in cases of low dispersion. In some cases the stable-isotope method aids in the interpretation of systems containing mixed waters of different ages. However, when the 14C method is used for mixed-water systems a serious mistake may arise from neglecting the different bicarbonate contents in particular water components.
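For the exponential model mentioned above, the output tracer concentration is the convolution of the input history with the transit-time distribution g(τ) = (1/T)·exp(−τ/T), weighted by radioactive decay. A minimal numerical sketch, with an illustrative turnover time and a constant tritium input:

```python
import numpy as np

T_turnover = 20.0                      # mean turnover time, years (illustrative)
lam = np.log(2) / 12.32                # tritium decay constant, 1/yr

tau = np.arange(0.0, 200.0, 0.1)       # transit times, years
g = (1.0 / T_turnover) * np.exp(-tau / T_turnover)   # exponential model

def output_concentration(c_in, t):
    """C_out(t) = integral over tau of C_in(t - tau) * g(tau) * exp(-lam*tau)."""
    integrand = c_in(t - tau) * g * np.exp(-lam * tau)
    dtau = tau[1] - tau[0]
    # trapezoid rule over the discretized transit times
    return float(np.sum((integrand[:-1] + integrand[1:]) / 2.0) * dtau)

# Constant 10 TU input: the analytic result is 10 / (1 + lam * T_turnover),
# illustrating why a constant input alone cannot fix T without prior knowledge
c_out = output_concentration(lambda t_past: np.full_like(t_past, 10.0), 2020.0)
print(f"{c_out:.2f} TU")
```

Swapping in a different g(τ) (piston flow, exponential-piston flow, dispersive) changes only the weighting function, which is how the lumped-parameter models in the abstract differ from one another.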
Toward more realistic projections of soil carbon dynamics by Earth system models
Luo, Y.; Ahlström, Anders; Allison, Steven D.; Batjes, Niels H.; Brovkin, V.; Carvalhais, Nuno; Chappell, Adrian; Ciais, Philippe; Davidson, Eric A.; Finzi, Adien; Georgiou, Katerina; Guenet, Bertrand; Hararuk, Oleksandra; Harden, Jennifer; He, Yujie; Hopkins, Francesca; Jiang, L.; Koven, Charles; Jackson, Robert B.; Jones, Chris D.; Lara, M.; Liang, J.; McGuire, A. David; Parton, William; Peng, Changhui; Randerson, J.; Salazar, Alejandro; Sierra, Carlos A.; Smith, Matthew J.; Tian, Hanqin; Todd-Brown, Katherine E. O; Torn, Margaret S.; van Groenigen, Kees Jan; Wang, Ying; West, Tristram O.; Wei, Yaxing; Wieder, William R.; Xia, Jianyang; Xu, Xia; Xu, Xiaofeng; Zhou, T.
2016-01-01
Soil carbon (C) is a critical component of Earth system models (ESMs), and its diverse representations are a major source of the large spread across models in the terrestrial C sink from the third to fifth assessment reports of the Intergovernmental Panel on Climate Change (IPCC). Improving soil C projections is of a high priority for Earth system modeling in the future IPCC and other assessments. To achieve this goal, we suggest that (1) model structures should reflect real-world processes, (2) parameters should be calibrated to match model outputs with observations, and (3) external forcing variables should accurately prescribe the environmental conditions that soils experience. First, most soil C cycle models simulate C input from litter production and C release through decomposition. The latter process has traditionally been represented by first-order decay functions, regulated primarily by temperature, moisture, litter quality, and soil texture. While this formulation well captures macroscopic soil organic C (SOC) dynamics, better understanding is needed of their underlying mechanisms as related to microbial processes, depth-dependent environmental controls, and other processes that strongly affect soil C dynamics. Second, incomplete use of observations in model parameterization is a major cause of bias in soil C projections from ESMs. Optimal parameter calibration with both pool- and flux-based data sets through data assimilation is among the highest priorities for near-term research to reduce biases among ESMs. Third, external variables are represented inconsistently among ESMs, leading to differences in modeled soil C dynamics. We recommend the implementation of traceability analyses to identify how external variables and model parameterizations influence SOC dynamics in different ESMs. 
Overall, projections of the terrestrial C sink can be substantially improved when reliable data sets are available to select the most representative model structure, constrain parameters, and prescribe forcing fields.
NASA Astrophysics Data System (ADS)
Hanasaki, N.; Kanae, S.; Oki, T.; Masuda, K.; Motoya, K.; Shirakawa, N.; Shen, Y.; Tanaka, K.
2008-07-01
To assess global water resources from the perspective of subannual variation in water availability and water use, an integrated water resources model was developed. In a companion report, we presented the global meteorological forcing input used to drive the model and its six modules, namely, the land surface hydrology module, the river routing module, the crop growth module, the reservoir operation module, the environmental flow requirement module, and the anthropogenic withdrawal module. Here, we present the results of the model application and global water resources assessments. First, the timing and volume of simulated agricultural water use were examined because agricultural use accounts for approximately 85% of total consumptive water withdrawal in the world. The estimated crop calendar showed good agreement with earlier reports for wheat, maize, and rice in major countries of production. In major countries, the error in the planting date was ±1 month, but there were some exceptional cases. The estimated irrigation water withdrawal also showed fair agreement with country statistics, but tended to be underestimated in countries in the Asian monsoon region. The results indicate the validity of the model and the input meteorological forcing because site-specific parameter tuning was not used in the series of simulations. Finally, global water resources were assessed on a subannual basis using a newly devised index. This index located water-stressed regions that were undetected in earlier studies. These regions, which are indicated by a gap in the subannual distribution of water availability and water use, include the Sahel, the Asian monsoon region, and southern Africa. The simulation results show that the reservoir operations of major reservoirs (>1 km3) and the allocation of environmental flow requirements can alter the population under high water stress by approximately -11% to +5% globally. 
The integrated model is applicable to assessments of various global environmental projections such as climate change.
VISIR-I: small vessels - least-time nautical routes using wave forecasts
NASA Astrophysics Data System (ADS)
Mannarini, Gianandrea; Pinardi, Nadia; Coppini, Giovanni; Oddo, Paolo; Iafrati, Alessandro
2016-05-01
A new numerical model for the on-demand computation of optimal ship routes based on sea-state forecasts has been developed. The model, named VISIR (discoVerIng Safe and effIcient Routes), is designed to support decision-makers when planning a marine voyage. The first version of the system, VISIR-I, considers medium and small motor vessels with lengths of up to a few tens of metres and a displacement hull. The model comprises three components: a route optimization algorithm, a mechanical model of the ship, and a processor of the environmental fields. The optimization algorithm is based on a graph-search method with time-dependent edge weights. The algorithm is also able to compute a voluntary ship speed reduction. The ship model accounts for calm-water and added wave resistance by making use of just the principal particulars of the vessel as input parameters. It also checks the optimal route for parametric roll, pure loss of stability, and surf-riding/broaching-to hazard conditions. The processor of the environmental fields employs significant wave height, wave spectrum peak period, and wave direction forecast fields as input. The topological issues of coastal navigation (islands, peninsulas, narrow passages) are addressed. Examples of VISIR-I routes in the Mediterranean Sea are provided. The optimal route may be longer in terms of miles sailed and yet faster and safer than the geodetic route between the same departure and arrival locations. Time savings of up to 2.7 % and route lengthening of up to 3.2 % are found for the case studies analysed. However, there is no upper bound on the magnitude of the changes in such route metrics, which, especially in the case of extreme sea states, can be much greater. Route diversions result from the safety constraints and from the fact that the algorithm takes into account the full temporal evolution and spatial variability of the environmental fields.
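The core optimization step described above, a graph search with time-dependent edge weights, can be sketched as a Dijkstra-like label-setting search in which the time needed to traverse an edge depends on the departure time at its tail node. The toy graph, node names, and travel-time functions below are illustrative assumptions, not VISIR's actual data structures, and FIFO (non-overtaking) edge weights are assumed.

```python
import heapq

def time_dependent_route(graph, start, goal, t0=0.0):
    """Dijkstra-like search where edge traversal time depends on the
    departure time (e.g. an evolving sea state along the edge).
    graph: {node: [(neighbor, travel_time_fn), ...]} where
    travel_time_fn(t) gives the hours needed when departing at time t."""
    best = {start: t0}
    frontier = [(t0, start, [start])]
    while frontier:
        t, node, path = heapq.heappop(frontier)
        if node == goal:
            return t, path
        if t > best.get(node, float("inf")):
            continue  # stale label
        for nxt, ttf in graph.get(node, []):
            arrival = t + ttf(t)  # FIFO edge weights assumed
            if arrival < best.get(nxt, float("inf")):
                best[nxt] = arrival
                heapq.heappush(frontier, (arrival, nxt, path + [nxt]))
    return None

# Toy example: edge A->B slows down after t=1 (e.g. a rising sea state),
# so the longer A->C->B route becomes optimal for late departures.
graph = {
    "A": [("B", lambda t: 1.0 if t < 1 else 5.0), ("C", lambda t: 1.0)],
    "C": [("B", lambda t: 1.5)],
}
eta, route = time_dependent_route(graph, "A", "B", t0=2.0)
```

Departing at t0 = 2.0, the direct edge costs 5 h while the detour via C arrives at t = 4.5, so the search returns the longer-but-faster route, mirroring the abstract's observation that optimal routes can be longer in miles yet faster than the geodetic route.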
NASA Technical Reports Server (NTRS)
Morelli, E. A.
1996-01-01
Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for closed-loop parameter identification purposes, specifically for lateral linear model parameter estimation at 30, 45, and 60 degrees angle of attack, using the Actuated Nose Strakes for Enhanced Rolling (ANSER) control law in Strake (S) mode and Strake/Thrust Vectoring (STV) mode. Each maneuver is to be realized by applying square wave inputs to specific pilot station controls using the On-Board Excitation System (OBES). Maneuver descriptions and complete specification of the time/amplitude points defining each input are included, along with plots of the input time histories.
Milosevic, Igor; Naunovic, Zorana
2013-10-01
This article presents a process of evaluation and selection of the most favourable location for a sanitary landfill facility from three alternative locations, by applying a multi-criteria decision-making (MCDM) method. An incorrect choice of location for a landfill facility can have a significant negative economic and environmental impact, such as the pollution of air, ground and surface waters. The aim of this article is to present several improvements in the practical process of landfill site selection using the VIKOR MCDM compromise ranking method integrated with a fuzzy analytic hierarchy process approach for determining the evaluation criteria weighting coefficients. The VIKOR method focuses on ranking and selecting from a set of alternatives in the presence of conflicting and non-commensurable (different units) criteria, and on proposing a compromise solution that is closest to the ideal solution. The work shows that valuable site ranking lists can be obtained using the VIKOR method, which is a suitable choice when there is a large number of relevant input parameters.
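The VIKOR ranking step described above can be sketched as follows: compute, for each alternative, the weighted group utility S, the individual regret R, and the compromise index Q, then rank by Q. The site scores and weights below are invented for illustration, and all criteria are assumed benefit-type; the paper's fuzzy-AHP weighting step is not reproduced.

```python
def vikor(scores, weights, v=0.5):
    """Rank alternatives with the VIKOR compromise method.
    scores[j][i]: performance of alternative j on criterion i
    (benefit-type criteria with non-degenerate ranges assumed)."""
    n_crit = len(weights)
    f_best = [max(a[i] for a in scores) for i in range(n_crit)]
    f_worst = [min(a[i] for a in scores) for i in range(n_crit)]
    S, R = [], []
    for alt in scores:
        d = [weights[i] * (f_best[i] - alt[i]) / (f_best[i] - f_worst[i])
             for i in range(n_crit)]
        S.append(sum(d))  # group utility (weighted sum of distances)
        R.append(max(d))  # individual regret (worst single criterion)
    s_min, s_max, r_min, r_max = min(S), max(S), min(R), max(R)
    # v weights group utility against individual regret
    Q = [v * (S[j] - s_min) / (s_max - s_min)
         + (1 - v) * (R[j] - r_min) / (r_max - r_min)
         for j in range(len(scores))]
    return S, R, Q

# Three hypothetical landfill sites scored on three benefit criteria.
sites = [[0.7, 0.5, 0.9], [0.9, 0.6, 0.4], [0.5, 0.9, 0.6]]
S, R, Q = vikor(sites, weights=[0.5, 0.3, 0.2])
best_site = Q.index(min(Q))  # smallest Q ranks first
```

In a full VIKOR application the Q-ranking is additionally checked for "acceptable advantage" and "acceptable stability" conditions before declaring a single compromise solution; those checks are omitted here for brevity.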
A humidity sensing organic-inorganic composite for environmental monitoring.
Ahmad, Zubair; Zafar, Qayyum; Sulaiman, Khaulah; Akram, Rizwan; Karimov, Khasan S
2013-03-14
In this paper, we present the effect of varying humidity levels on the electrical parameters, and their multi-frequency response, for an organic-inorganic composite (PEPC+NiPc+Cu2O)-based humidity sensor. Silver thin films (thickness ~200 nm) were first deposited on plasma-cleaned glass substrates by the physical vapor deposition (PVD) technique. A pair of rectangular silver electrodes was formed by patterning the silver film using a standard optical lithography technique. An active layer of the organic-inorganic composite for humidity sensing was then spin coated to cover the separation between the silver electrodes. The electrical characterization of the sensor was performed as a function of relative humidity level and frequency of the AC input signal. The sensor showed reversible changes in its capacitance with variations in humidity level. A maximum sensitivity of ~31.6 pF/%RH at 100 Hz in capacitive mode of operation was attained. The aim of this study was to increase the sensitivity of previously reported humidity sensors using PEPC and NiPc, which has been successfully achieved.
Sustainability assessment of shielded metal arc welding (SMAW) process
NASA Astrophysics Data System (ADS)
Alkahla, Ibrahim; Pervaiz, Salman
2017-09-01
Shielded metal arc welding (SMAW) is one of the most commonly employed material joining processes, utilized in various industrial sectors such as marine, ship-building, automotive, aerospace, construction and petrochemicals. Increasing pressure on the manufacturing sector demands that the welding process be sustainable in nature. The SMAW process incorporates several types of input and output streams. The sustainability concerns associated with the SMAW process are linked to these various input and output streams, such as the electrical energy requirement, input material consumption, slag formation, fume emission and hazardous working conditions associated with human health and occupational safety. To enhance the environmental performance of SMAW welding, there is a need to characterize the process under the broad framework of sustainability. Most of the available literature focuses on the technical and economic aspects of the welding process; the environmental and social aspects are rarely addressed. This study reviews the SMAW process with respect to the triple bottom line (economic, environmental and social) sustainability approach. Finally, the study concludes with recommendations for achieving an economical and sustainable SMAW welding process.
Uncertainties of parameterized surface downward clear-sky shortwave and all-sky longwave radiation.
NASA Astrophysics Data System (ADS)
Gubler, S.; Gruber, S.; Purves, R. S.
2012-06-01
As many environmental models rely on simulating the energy balance at the Earth's surface based on parameterized radiative fluxes, knowledge of the inherent model uncertainties is important. In this study we evaluate one parameterization of clear-sky direct, diffuse and global shortwave downward radiation (SDR) and diverse parameterizations of clear-sky and all-sky longwave downward radiation (LDR). In a first step, SDR is estimated based on measured input variables and estimated atmospheric parameters for hourly time steps during the years 1996 to 2008. Model behaviour is validated using the high-quality measurements of six Alpine Surface Radiation Budget (ASRB) stations in Switzerland covering different elevations, and measurements of the Swiss Alpine Climate Radiation Monitoring network (SACRaM) in Payerne. In a next step, twelve clear-sky LDR parameterizations are calibrated using the ASRB measurements. One of the best-performing parameterizations is selected to estimate all-sky LDR, where cloud transmissivity is estimated using measured and modeled global SDR during daytime. In a last step, the performance of several interpolation methods is evaluated to determine the cloud transmissivity at night. We show that clear-sky direct, diffuse and global SDR is adequately represented by the model when using measurements of the atmospheric parameters precipitable water and aerosol content at Payerne. If the atmospheric parameters are estimated and used as fixed values, the relative mean bias deviance (MBD) and the relative root mean squared deviance (RMSD) of the clear-sky global SDR scatter between -2 and 5%, and 7 and 13%, across the six locations. The small errors in clear-sky global SDR can be attributed to compensating effects of modeled direct and diffuse SDR, since an overestimation of aerosol content in the atmosphere results in underestimating the direct, but overestimating the diffuse, SDR. 
Calibration of the LDR parameterizations to local conditions reduces MBD and RMSD strongly compared to using the published values of the parameters, resulting in relative MBD and RMSD of less than 5% and 10%, respectively, for the best parameterizations. The best results for estimating cloud transmissivity during nighttime were obtained by linearly interpolating the average of the cloud transmissivity of the four hours of the preceding afternoon and the following morning. Model uncertainty can be caused by different errors such as code implementation, errors in input data and in estimated parameters, etc. The influence of the latter (errors in input data and model parameter uncertainty) on model outputs is determined using Monte Carlo simulation. Model uncertainty is provided as the relative standard deviation σrel of the simulated frequency distributions of the model outputs. An optimistic estimate of the relative uncertainty σrel resulted in 10% for the clear-sky direct, 30% for diffuse, 3% for global SDR, and 3% for the fitted all-sky LDR.
Proceedings: Sixth Annual Workshop on Meteorological and Environmental Inputs to Aviation Systems
NASA Technical Reports Server (NTRS)
Frost, W. (Editor); Camp, D. W. (Editor); Hershman, L. W. (Editor)
1983-01-01
The topics of interaction of the atmosphere with aviation systems, the better definition and implementation of services to operators, and the collection and interpretation of data for establishing operational criteria relating the total meteorological inputs from the atmospheric sciences to the needs of aviation communities were addressed.
DOT National Transportation Integrated Search
2009-02-01
The resilient modulus (MR) input parameters in the Mechanistic-Empirical Pavement Design Guide (MEPDG) program have a significant effect on the projected pavement performance. The MEPDG program uses three different levels of inputs depending on the d...
Influential input classification in probabilistic multimedia models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.
1999-05-01
Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.
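A minimal sketch of this kind of Monte Carlo influence analysis (not the authors' actual method): sample each input from its assigned distribution, propagate the samples through the model, and rank inputs by the magnitude of their rank correlation with the outcome. The toy fate model, parameter names, and distributions below are hypothetical.

```python
import math
import random

def rank(xs):
    """Positions of each element in the sorted order (no ties assumed)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos
    return r

def spearman(x, y):
    """Spearman rank correlation via Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    m = (n - 1) / 2  # mean of ranks 0..n-1
    cov = sum((a - m) * (b - m) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - m) ** 2 for a in rx))
    sy = math.sqrt(sum((b - m) ** 2 for b in ry))
    return cov / (sx * sy)

def influence_ranking(model, dists, n=2000, seed=1):
    """Monte Carlo propagation: sample inputs, evaluate the model, and
    rank inputs by |Spearman correlation| with the model outcome."""
    rng = random.Random(seed)
    samples = {k: [d(rng) for _ in range(n)] for k, d in dists.items()}
    out = [model({k: samples[k][i] for k in dists}) for i in range(n)]
    return sorted(((k, abs(spearman(samples[k], out))) for k in dists),
                  key=lambda kv: -kv[1])

# Hypothetical fate model: outcome dominated by emission rate E,
# weakly affected by mixing depth d, unaffected by a dummy input z.
model = lambda p: p["E"] * 10 + p["d"] * 0.1
dists = {"E": lambda r: r.uniform(1, 2),
         "d": lambda r: r.uniform(1, 2),
         "z": lambda r: r.uniform(1, 2)}
ranking = influence_ranking(model, dists)
```

The ranking identifies the small influential subset (here just E) on which distribution-building effort should be concentrated, which is the resource-allocation point the abstract makes.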
Calibration of two complex ecosystem models with different likelihood functions
NASA Astrophysics Data System (ADS)
Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán
2014-05-01
The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate-change-induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of process-based models are decisive regarding the model output. At the same time, there are several input parameters for which accurate values are hard to obtain directly from experiments or for which no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can be experienced if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance, the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of the terrestrial ecosystems (in this research the further developed version of Biome-BGC is used, referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via calculating a likelihood function (degree of goodness-of-fit between simulated and measured data). 
In our research, different likelihood function formulations were used in order to examine the effect of the different model goodness metrics on calibration. The different likelihoods are different functions of RMSE (root mean squared error) weighted by measurement uncertainty: exponential / linear / quadratic / linear normalized by correlation. As a first calibration step, sensitivity analysis was performed in order to select the influential parameters which have a strong effect on the output data. In the second calibration step, only the sensitive parameters were calibrated (optimal values and confidence intervals were calculated). In the case of PaSim, more parameters were found responsible for 95% of the output data variance than in the case of BBGC MuSo. Analysis of the results of the optimized models revealed that the exponential likelihood estimation proved to be the most robust (best model simulation with optimized parameters, highest confidence interval increase). The cross-validation of the model simulations can help in constraining the highly uncertain greenhouse gas budget of grasslands.
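The abstract names several RMSE-based likelihood formulations without giving their exact functional forms, so the following sketch uses generic exponential, linear, and quadratic variants as illustrative stand-ins; the scaling parameters sigma and r_max are assumptions, not values from the study.

```python
import math

def rmse(sim, obs):
    """Root mean squared error between simulated and observed series."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

# Illustrative likelihood shapes: all decrease as RMSE grows, but at
# different rates, which is what makes the calibration outcome depend
# on the chosen goodness-of-fit metric.
def likelihood_exponential(sim, obs, sigma):
    return math.exp(-rmse(sim, obs) / sigma)

def likelihood_linear(sim, obs, sigma, r_max=10.0):
    return max(0.0, 1.0 - rmse(sim, obs) / (sigma * r_max))

def likelihood_quadratic(sim, obs, sigma, r_max=10.0):
    return max(0.0, 1.0 - (rmse(sim, obs) / (sigma * r_max)) ** 2)

obs = [1.0, 2.0, 3.0]
good = [1.1, 2.0, 2.9]   # close fit
bad = [3.0, 0.0, 5.0]    # poor fit
L_good = likelihood_exponential(good, obs, sigma=0.5)
L_bad = likelihood_exponential(bad, obs, sigma=0.5)
```

Because the exponential form penalizes large residuals smoothly without ever reaching zero, it tends to keep more parameter sets in play during Bayesian updating, one plausible reason a study might find it the most robust choice.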
Origin of the sensitivity in modeling the glide behaviour of dislocations
Pei, Zongrui; Stocks, George Malcolm
2018-03-26
The sensitivity in predicting glide behaviour of dislocations has been a long-standing problem in the framework of the Peierls-Nabarro model. The predictions of both the model itself and the analytic formulas based on it are too sensitive to the input parameters. In order to reveal the origin of this important problem in materials science, a new empirical-parameter-free formulation is proposed in the same framework. Unlike previous formulations, it includes only a limited small set of parameters, all of which can be determined by convergence tests. Under special conditions the new formulation is reduced to its classic counterpart. In the light of this formulation, new relationships between Peierls stresses and the input parameters are identified, where the sensitivity is greatly reduced or even removed.
Effect of Burnishing Parameters on Surface Finish
NASA Astrophysics Data System (ADS)
Shirsat, Uddhav; Ahuja, Basant; Dhuttargaon, Mukund
2017-08-01
Burnishing is a cold working process in which hard balls are pressed against the surface, which is compressed and then plasticized, resulting in improved surface finish. It is a high-quality finishing process that is becoming increasingly popular, since the surface quality of a product improves its aesthetic appearance. Workpieces made of aluminum were subjected to the burnishing process, with kerosene used as a lubricant. In this study, factors affecting the burnishing process, such as burnishing force, speed, feed, workpiece diameter and ball diameter, are considered as input parameters, while surface finish is considered as the output parameter. Experiments are designed using a 2⁵ factorial design in order to analyze the relationship between input and output parameters. The ANOVA technique and F-test are used for further analysis.
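A 2⁵ factorial design of the kind described can be generated and screened for main effects as sketched below. The response values are synthetic (a made-up linear model in which only two factors matter), not the paper's experimental data; a real analysis would follow with ANOVA and F-tests on measured surface finish.

```python
from itertools import product

def full_factorial(n_factors):
    """All 2^n combinations of low (-1) / high (+1) factor levels."""
    return list(product([-1, 1], repeat=n_factors))

def main_effects(design, responses):
    """Main effect of each factor: mean response at the +1 level
    minus mean response at the -1 level (valid for balanced designs)."""
    n = len(design[0])
    effects = []
    for f in range(n):
        hi = [y for row, y in zip(design, responses) if row[f] == 1]
        lo = [y for row, y in zip(design, responses) if row[f] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# Five burnishing factors (force, speed, feed, workpiece dia., ball dia.)
# with a synthetic response in which only force (f0) and feed (f2) matter.
design = full_factorial(5)
responses = [3.0 + 1.5 * row[0] - 0.8 * row[2] for row in design]
effects = main_effects(design, responses)
```

The 32-run design is balanced, so each main effect is estimated independently of the others; factors with near-zero effects would then be dropped before fitting a response model.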
Guidelines for the Selection of Near-Earth Thermal Environment Parameters for Spacecraft Design
NASA Technical Reports Server (NTRS)
Anderson, B. J.; Justus, C. G.; Batts, G. W.
2001-01-01
Thermal analysis and design of Earth orbiting systems requires specification of three environmental thermal parameters: the direct solar irradiance, Earth's local albedo, and outgoing longwave radiance (OLR). In the early 1990s, data sets from the Earth Radiation Budget Experiment were analyzed on behalf of the Space Station Program to provide an accurate description of these parameters as a function of averaging time along the orbital path. This information, documented in SSP 30425 and, in more generic form, in NASA/TM-4527, enabled the specification of the proper thermal parameters for systems of various thermal response time constants. However, working with the engineering community and the SSP-30425 and TM-4527 products over a number of years revealed difficulties in interpretation and application of this material. For this reason it was decided to develop this guidelines document to help resolve these issues of practical application. In the process, the data were extensively reprocessed and a new computer code, the Simple Thermal Environment Model (STEM), was developed to simplify the process of selecting the parameters for input into extreme hot and cold thermal analyses and design specifications. In doing so, greatly improved values for the cold-case OLR for high-inclination orbits were derived. Thermal parameters for satellites in low, medium, and high inclination low-Earth orbit and with various system thermal time constants are recommended for analysis of extreme hot and cold conditions. Practical information as to the interpretation and application of the information and an introduction to the STEM are included. Complete documentation for STEM is found in the user's manual, in preparation.
Calibration under uncertainty for finite element models of masonry monuments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Atamturktur, Sezer,; Hemez, Francois,; Unal, Cetin
2010-02-01
Historical unreinforced masonry buildings often include features such as load bearing unreinforced masonry vaults and their supporting framework of piers, fill, buttresses, and walls. The masonry vaults of such buildings are among the most vulnerable structural components and certainly among the most challenging to analyze. The versatility of finite element (FE) analyses in incorporating various constitutive laws, as well as practically all geometric configurations, has resulted in the widespread use of the FE method for the analysis of complex unreinforced masonry structures over the last three decades. However, an FE model is only as accurate as its input parameters, and there are two fundamental challenges while defining FE model input parameters: (1) material properties and (2) support conditions. The difficulties in defining these two aspects of the FE model arise from the lack of knowledge in the common engineering understanding of masonry behavior. As a result, engineers are unable to define these FE model input parameters with certainty, and, inevitably, uncertainties are introduced to the FE model.
NASA Astrophysics Data System (ADS)
Yan, Zilin; Kim, Yongtae; Hara, Shotaro; Shikazono, Naoki
2017-04-01
The Potts Kinetic Monte Carlo (KMC) model, proven to be a robust tool to study all stages of the sintering process, is an ideal tool to analyze the microstructure evolution of electrodes in solid oxide fuel cells (SOFCs). Due to the nature of this model, the input parameters of KMC simulations, such as simulation temperatures and attempt frequencies, are difficult to identify. We propose a rigorous and efficient approach to facilitate the input parameter calibration process using artificial neural networks (ANNs). The trained ANN drastically reduces the number of trial-and-error KMC simulations. The KMC simulation using the calibrated input parameters predicts the microstructures of a La0.6Sr0.4Co0.2Fe0.8O3 cathode material during sintering, showing both qualitative and quantitative congruence with real 3D microstructures obtained by focused ion beam scanning electron microscopy (FIB-SEM) reconstruction.
Real-Time Stability and Control Derivative Extraction From F-15 Flight Data
NASA Technical Reports Server (NTRS)
Smith, Mark S.; Moes, Timothy R.; Morelli, Eugene A.
2003-01-01
A real-time, frequency-domain, equation-error parameter identification (PID) technique was used to estimate stability and control derivatives from flight data. This technique is being studied to support adaptive control system concepts currently being developed by NASA (National Aeronautics and Space Administration), academia, and industry. This report describes the basic real-time algorithm used for this study and implementation issues for onboard usage as part of an indirect-adaptive control system. A confidence measures system for automated evaluation of PID results is discussed. Results calculated using flight data from a modified F-15 aircraft are presented. Test maneuvers included pilot input doublets and automated inputs at several flight conditions. Estimated derivatives are compared to aerodynamic model predictions. The data indicate that the real-time PID used for this study performs well enough to be used for onboard parameter estimation. For suitable test inputs, the parameter estimates converged rapidly to sufficient levels of accuracy. The confidence measures devised were moderately successful.
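The frequency-domain equation-error idea can be illustrated for a scalar system xdot = a*x + b*u: Fourier-transform the measured signals at a few frequencies, replace differentiation by multiplication by jw (plus a finite-interval boundary term), and solve a small linear least-squares problem for the derivatives. This is a simplified, batch-mode illustration under those assumptions, not NASA's onboard recursive algorithm; the frequencies, signals, and system constants are invented.

```python
import cmath
import math

def dft_at(signal, dt, freqs):
    """Finite Fourier transform (Riemann sum) of a sampled signal
    at a chosen set of frequencies in Hz."""
    return [sum(x * cmath.exp(-2j * math.pi * f * k * dt)
                for k, x in enumerate(signal)) * dt for f in freqs]

def estimate_ab(x, u, dt, freqs):
    """Equation-error fit of xdot = a*x + b*u in the frequency domain:
    a*X + b*U = jw*X + boundary term, stacked over the chosen
    frequencies and solved as a real 2x2 least-squares problem."""
    T = len(x) * dt
    X, U = dft_at(x, dt, freqs), dft_at(u, dt, freqs)
    # finite-interval transform of xdot: jw*X + x(T)e^{-jwT} - x(0)
    Y = [2j * math.pi * f * Xi
         + x[-1] * cmath.exp(-2j * math.pi * f * T) - x[0]
         for f, Xi in zip(freqs, X)]
    # normal equations for the real parameters (a, b)
    s_xx = sum(abs(Xi) ** 2 for Xi in X)
    s_uu = sum(abs(Ui) ** 2 for Ui in U)
    s_xu = sum((Xi.conjugate() * Ui).real for Xi, Ui in zip(X, U))
    s_xy = sum((Xi.conjugate() * Yi).real for Xi, Yi in zip(X, Y))
    s_uy = sum((Ui.conjugate() * Yi).real for Ui, Yi in zip(U, Y))
    det = s_xx * s_uu - s_xu ** 2
    return ((s_uu * s_xy - s_xu * s_uy) / det,
            (s_xx * s_uy - s_xu * s_xy) / det)

# Simulate xdot = -2x + 3u driven by a doublet-like input, then
# recover the derivatives (a, b) from the sampled signals.
dt, n = 0.005, 4000
u = [1.0 if k * dt < 5 else -1.0 for k in range(n)]
x, xs = 0.0, []
for k in range(n):
    xs.append(x)
    x += dt * (-2.0 * x + 3.0 * u[k])
a, b = estimate_ab(xs, u, dt, freqs=[0.02, 0.05, 0.1, 0.2, 0.3])
```

Working in the frequency domain at a handful of frequencies is what makes the approach attractive for real-time use: the transforms can be updated recursively each sample, and no numerical differentiation of noisy signals is required.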
Astrobiological complexity with probabilistic cellular automata.
Vukotić, Branislav; Ćirković, Milan M
2012-08-01
The search for extraterrestrial life and intelligence constitutes one of the major endeavors in science, but has as yet been quantitatively modeled only rarely, and then in a cursory and superficial fashion. We argue that probabilistic cellular automata (PCA) represent the best quantitative framework for modeling the astrobiological history of the Milky Way and its Galactic Habitable Zone. The relevant astrobiological parameters are to be modeled as the elements of the input probability matrix for the PCA kernel. With the underlying simplicity of the cellular automata constructs, this approach enables a quick analysis of the large and ambiguous space of input parameters. We perform a simple clustering analysis of typical astrobiological histories with a "Copernican" choice of input parameters and discuss the relevant boundary conditions of practical importance for planning and guiding empirical astrobiological and SETI projects. In addition to showing how the present framework is adaptable to more complex situations and updated observational databases from current and near-future space missions, we demonstrate how numerical results could offer a cautious rationale for continuation of practical SETI searches.
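The mechanics of a probabilistic cellular automaton are simple to sketch: each cell's next state is drawn from a distribution conditioned on its current state and its neighbourhood. The 2D grid, the transition probabilities, and their interpretation below are invented stand-ins for the paper's astrobiological input probability matrix.

```python
import random

def pca_step(grid, transition, rng):
    """One synchronous update of a probabilistic cellular automaton on a
    toroidal grid: transition(state, live_neighbours) returns the
    probability that the cell is in state 1 at the next step."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            live = sum(grid[(i + di) % n][(j + dj) % n]
                       for di in (-1, 0, 1) for dj in (-1, 0, 1)
                       if (di, dj) != (0, 0))
            p = transition(grid[i][j], live)
            new[i][j] = 1 if rng.random() < p else 0
    return new

def transition(state, live):
    # Illustrative kernel: spontaneous emergence (0.01), neighbour-driven
    # colonization (0.1 per inhabited neighbour), persistence 0.95.
    # These probabilities are stand-ins for astrobiological inputs.
    return 0.95 if state == 1 else min(1.0, 0.01 + 0.1 * live)

rng = random.Random(42)
grid = [[0] * 20 for _ in range(20)]
grid[10][10] = 1  # a single seeded 'inhabited' site
for _ in range(30):
    grid = pca_step(grid, transition, rng)
occupied = sum(map(sum, grid))
```

Sweeping the transition probabilities and clustering the resulting histories, as the abstract describes, then amounts to rerunning this loop over a grid of kernel parameters, which is cheap precisely because each update is so simple.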
Douglas, P; Tyrrel, S F; Kinnersley, R P; Whelan, M; Longhurst, P J; Walsh, K; Pollard, S J T; Drew, G H
2016-12-15
Bioaerosols are released in elevated quantities from composting facilities and are associated with negative health effects, although dose-response relationships are not well understood, and require improved exposure classification. Dispersion modelling has great potential to improve exposure classification, but has not yet been extensively used or validated in this context. We present a sensitivity analysis of the ADMS dispersion model specific to input parameter ranges relevant to bioaerosol emissions from open windrow composting. This analysis provides an aid for model calibration by prioritising parameter adjustment and targeting independent parameter estimation. Results showed that predicted exposure was most sensitive to the wet and dry deposition modules and the majority of parameters relating to emission source characteristics, including pollutant emission velocity, source geometry and source height. This research improves understanding of the accuracy of model input data required to provide more reliable exposure predictions. Copyright © 2016. Published by Elsevier Ltd.
Neuner, B; Berger, K
2010-11-01
Apart from individual resources and individual risk factors, environmental socioeconomic factors are determinants of individual health and illness. The aim of this investigation was to evaluate the association of small-area environmental socioeconomic parameters (proportion of the population aged 14 years or younger, proportion of married citizens, proportion of unemployed, and the number of private cars per inhabitant) with individual socioeconomic parameters (education, income, unemployment, social class and the country of origin) in Dortmund, a major city in Germany. After splitting the small-area environmental socioeconomic parameters of 62 statistical administration units into quintiles, differences in the distribution of individual social parameters were evaluated using adjusted tests for trend. Overall, 1,312 study participants (mean age 53.6 years, 52.9% women) were included. Independently of age and gender, individual social parameters were unequally distributed across areas with different small-area environmental socioeconomic parameters. A place of birth abroad and social class were significantly associated with all small-area environmental socioeconomic parameters. If the impact of environmental socioeconomic parameters on individual health or illness is determined, the unequal small-area distribution of individual social parameters should be considered. © Georg Thieme Verlag KG Stuttgart · New York.
Cocoa based agroforestry: An economic perspective in resource scarcity conflict era
NASA Astrophysics Data System (ADS)
Jumiyati, S.; Arsyad, M.; Rajindra; Pulubuhu, D. A. T.; Hadid, A.
2018-05-01
Agricultural development aimed at food self-sufficiency through increased production alone has caused environmental disasters, as the exploitation of natural resources leads to resource scarcity. This paper describes the optimization of land area, revenue, cost (production inputs), income and use of production inputs based on economic and ecological aspects. Sustainable farming that integrates environmental and economic considerations can be achieved through farmers' decision making aimed at optimizing revenue on the basis of cost optimization via a cocoa-based agroforestry model, thereby contributing to resource conflict resolution.
NASA Astrophysics Data System (ADS)
Gassmann, Matthias; Olsson, Oliver; Höper, Heinrich; Hamscher, Gerd; Kümmerer, Klaus
2016-04-01
The simulation of reactive transport in the aquatic environment is hampered by the ambiguity of environmental fate process conceptualizations for a specific substance in the literature. Concepts are usually identified by experimental studies and inverse modelling under controlled laboratory conditions in order to reduce environmental uncertainties such as uncertain boundary conditions and input data. However, since environmental conditions affect substance behaviour, a re-evaluation may be necessary under environmental conditions, which may in turn be affected by uncertainties. Using a combination of experimental data and simulations of the leaching behaviour of the veterinary antibiotic sulfamethazine (SMZ; synonym: sulfadimidine) and the hydrological tracer bromide (Br) in a field lysimeter, we re-evaluated the sorption concepts of both substances under uncertain field conditions. Data from a field lysimeter experiment, in which both substances were applied twice a year with manure and sampled at the bottom of two lysimeters over three subsequent years, were used for model set-up and evaluation. The total amounts of leached SMZ and Br were 22 μg and 129 mg, respectively. A reactive transport model was parameterized to the conditions of the two lysimeters, which were filled with monoliths (depth 2 m, area 1 m²) of a sandy soil with a low pH value under which bromide is sorptive. We used different sorption concepts such as constant and organic-carbon-dependent sorption coefficients and instantaneous and kinetic sorption equilibrium. Combining the sorption concepts resulted in four scenarios per substance with different equations for sorption equilibrium and sorption kinetics. The GLUE (Generalized Likelihood Uncertainty Estimation) method was applied to each scenario using parameter ranges found in experimental and modelling studies. The parameter spaces for each scenario were sampled using a Latin hypercube method, which was refined around local model efficiency maxima.
Results of the cumulative SMZ leaching simulations suggest that the best conceptualization is instantaneous sorption to organic carbon, which is consistent with the literature. The best Nash-Sutcliffe efficiency (Neff) was 0.96, and the 5th and 95th percentiles of the uncertainty estimation were 18 and 27 μg. In contrast, both scenarios of kinetic Br sorption had similar results (Neff = 0.99, uncertainty bounds 110-176 mg and 112-176 mg) but were clearly better than the instantaneous sorption scenarios. Therefore, only the concept of sorption kinetics could be identified for Br modelling, whereas both tested sorption equilibrium coefficient concepts performed equally well. The reasons for this specific case of equifinality may be uncertainties of model input data under field conditions or an insensitivity of the sorption equilibrium method due to the relatively low adsorption of Br. Our results show that it may be possible to identify, or at least falsify, specific sorption concepts under uncertain field conditions using a long-term leaching experiment and modelling methods. Cases of environmental fate concept equifinality raise the possibility of future model structure uncertainty analysis using an ensemble of models with different environmental fate concepts.
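The GLUE workflow described above can be sketched in a few lines. This is an illustrative toy, not the authors' set-up: the linear leaching model, parameter range, behavioral threshold, and plain uniform sampling (in place of the refined Latin hypercube) are all assumptions.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; <= 0 is no better than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def glue_bounds(obs, model, param_sets, threshold=0.5, q=(5, 95)):
    """Keep 'behavioral' parameter sets (Neff >= threshold) and return the best
    efficiency plus percentile bounds of the behavioral predictions."""
    effs = np.array([nash_sutcliffe(obs, model(p)) for p in param_sets])
    behavioral = np.array([model(p) for p, e in zip(param_sets, effs) if e >= threshold])
    lo, hi = np.percentile(behavioral, q, axis=0)
    return effs.max(), lo, hi

# toy linear 'leaching' model: cumulative mass = rate * time, with true rate 2.0
rng = np.random.default_rng(0)
t = np.arange(1.0, 11.0)
obs = 2.0 * t
best, lo, hi = glue_bounds(obs, lambda p: p * t, rng.uniform(0.5, 3.5, 200))
```

Full GLUE typically weights behavioral predictions by their likelihood; plain percentiles are used here for brevity.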
Uncertainty in predictions of oil spill trajectories in a coastal zone
NASA Astrophysics Data System (ADS)
Sebastião, P.; Guedes Soares, C.
2006-12-01
A method is introduced to determine the uncertainties in the predictions of oil spill trajectories using a classic oil spill model. The method considers the output of the oil spill model as a function of random variables, the input parameters, and calculates the standard deviation of the output results, which provides a measure of the uncertainty of the model resulting from the uncertainties of the input parameters. In addition to the single trajectory calculated by the oil spill model using the mean values of the parameters, a band of trajectories can be defined when various simulations are run taking into account the uncertainties of the input parameters. This band defines envelopes of the trajectories that are likely to be followed by the spill given the uncertainties of the input. The method was applied to an oil spill that occurred in 1989 near Sines on the southwestern coast of Portugal. The model represented well the distinction between a wind-driven part that remained offshore and a tide-driven part that went ashore. For both parts, the method defined two trajectory envelopes, one calculated exclusively with the wind fields and the other using wind and tidal currents. In both cases a reasonable approximation to the observed results was obtained. The envelope of likely trajectories obtained with the uncertainty modelling proved to give a better interpretation of the trajectories simulated by the oil spill model.
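The uncertainty-band idea generalizes readily: sample the uncertain inputs, run the trajectory model for each draw, and take the spread of outcomes as the uncertainty measure. A minimal sketch with a toy drift model follows; the 3% wind factor, input means, and standard deviations are illustrative assumptions, not the paper's values.

```python
import numpy as np

def drift_endpoint(wind_speed, wind_dir, cur_speed, cur_dir,
                   hours=24.0, wind_factor=0.03):
    """Toy endpoint model: surface drift = 3% of the wind vector plus the
    tidal/residual current vector, integrated over a fixed duration."""
    vw = wind_factor * wind_speed * np.array([np.cos(wind_dir), np.sin(wind_dir)])
    vc = cur_speed * np.array([np.cos(cur_dir), np.sin(cur_dir)])
    return (vw + vc) * hours * 3600.0  # metres east/north of the release point

# treat each input as a random variable around its (assumed) mean
rng = np.random.default_rng(42)
ends = np.array([
    drift_endpoint(rng.normal(8.0, 1.5),    # wind speed (m/s)
                   rng.normal(0.6, 0.2),    # wind direction (rad)
                   rng.normal(0.15, 0.05),  # current speed (m/s)
                   rng.normal(2.0, 0.3))    # current direction (rad)
    for _ in range(1000)
])
mean_end = ends.mean(axis=0)  # close to the trajectory from the mean inputs
spread = ends.std(axis=0)     # std of the output: the uncertainty measure
```

Repeating this at every time step, rather than only at the endpoint, yields the band of trajectories whose extremes define the envelope.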
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2012-01-01
The development of benchmark examples for quasi-static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for Abaqus/Standard. The example is based on a finite element model of a Double-Cantilever Beam specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, a quasi-static benchmark example was created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall the results are encouraging, but further assessment for mixed-mode delamination is required.
Development of Benchmark Examples for Static Delamination Propagation and Fatigue Growth Predictions
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2011-01-01
The development of benchmark examples for static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of an End-Notched Flexure (ENF) specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, static benchmark examples were created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during stable delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall, the results are encouraging but further assessment for mixed-mode delamination is required.
Modeling and Analysis of CNC Milling Process Parameters on Al3030 based Composite
NASA Astrophysics Data System (ADS)
Gupta, Anand; Soni, P. K.; Krishna, C. M.
2018-04-01
The machining of Al3030-based composites on Computer Numerical Control (CNC) high-speed milling machines has assumed importance because of their wide application in the aerospace, marine, and automotive industries. Industries mainly focus on surface irregularities, material removal rate (MRR), and tool wear rate (TWR), which usually depend on input process parameters, namely cutting speed, feed (mm/min), depth of cut, and step-over ratio. Many researchers have worked in this area, but very few have taken the step-over ratio (radial depth of cut) as one of the input variables. In this work, the machining characteristics of Al3030 are studied on a high-speed CNC milling machine over the speed range of 3000 to 5000 rpm. Step-over ratio, depth of cut, and feed rate are the other input variables considered. A total of nine experiments are conducted according to a Taguchi L9 orthogonal array. The machining is carried out on a high-speed CNC milling machine using a flat end mill of 10 mm diameter. Flatness, MRR, and TWR are taken as output parameters. Flatness is measured using a portable Coordinate Measuring Machine (CMM). Linear regression models are developed using Minitab 18 software, and the results are validated by conducting a selected additional set of experiments. The selection of input process parameters to obtain the best machining outputs is the key contribution of this work.
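For readers unfamiliar with the design, the Taguchi L9(3^4) array assigns four three-level factors to nine runs so that every factor level appears a balanced number of times; a linear model can then be fitted to the measured responses by least squares. A sketch follows: the array is the standard L9, but the response values and factor assignment are hypothetical, invented for illustration.

```python
import numpy as np

# Standard Taguchi L9(3^4) orthogonal array: 9 runs, 4 three-level factors
# (e.g. speed, feed, depth of cut, step-over ratio), levels coded 0/1/2.
L9 = np.array([
    [0, 0, 0, 0],
    [0, 1, 1, 1],
    [0, 2, 2, 2],
    [1, 0, 1, 2],
    [1, 1, 2, 0],
    [1, 2, 0, 1],
    [2, 0, 2, 1],
    [2, 1, 0, 2],
    [2, 2, 1, 0],
])

# Hypothetical measured response (e.g. MRR) for the nine runs; a linear
# model, as fitted by Minitab's regression, reduces to least squares.
y = np.array([10.0, 14.0, 18.0, 13.0, 17.0, 12.0, 16.0, 11.0, 15.0])
X = np.column_stack([np.ones(9), L9])      # intercept + four factor columns
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The balance of the array means each factor's main effect can be estimated from level averages, which is what makes nine runs sufficient for four factors.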
Removing Visual Bias in Filament Identification: A New Goodness-of-fit Measure
NASA Astrophysics Data System (ADS)
Green, C.-E.; Cunningham, M. R.; Dawson, J. R.; Jones, P. A.; Novak, G.; Fissel, L. M.
2017-05-01
Different combinations of input parameters to filament identification algorithms, such as disperse and filfinder, produce numerous different output skeletons. The skeletons are a one-pixel-wide representation of the filamentary structure in the original input image. However, these output skeletons may not necessarily be a good representation of that structure. Furthermore, a given skeleton may not be as good a representation as another. Previously, there has been no mathematical “goodness-of-fit” measure to compare output skeletons to the input image; thus far this has been assessed visually, introducing visual bias. We propose the application of the mean structural similarity index (MSSIM) as a mathematical goodness-of-fit measure. We describe the use of the MSSIM to find the output skeletons that are the most mathematically similar to the original input image (the optimum, or “best,” skeletons) for a given algorithm, and independently of the algorithm. This measure makes possible systematic parameter studies, aimed at finding the subset of input parameter values returning optimum skeletons. It can also be applied to the output of non-skeleton-based filament identification algorithms, such as the Hessian matrix method. The MSSIM removes the need to visually examine thousands of output skeletons, and eliminates the visual bias, subjectivity, and limited reproducibility inherent in that process, representing a major improvement upon existing techniques. Importantly, it also allows further automation in the post-processing of output skeletons, which is crucial in this era of “big data.”
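A simplified MSSIM can be computed directly: slide a window over both images, evaluate the SSIM formula per window, and average. The sketch below uses uniform (unweighted) windows, whereas the standard formulation uses Gaussian weighting; the window size and stabilizing constants assume images scaled to [0, 1].

```python
import numpy as np

def mssim(x, y, win=7, c1=1e-4, c2=9e-4):
    """Mean structural similarity over sliding windows (simplified SSIM:
    uniform windows, images assumed scaled to [0, 1])."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    vals = []
    for i in range(x.shape[0] - win + 1):
        for j in range(x.shape[1] - win + 1):
            a = x[i:i + win, j:j + win]
            b = y[i:i + win, j:j + win]
            ma, mb = a.mean(), b.mean()
            va, vb = a.var(), b.var()
            cov = ((a - ma) * (b - mb)).mean()
            # SSIM: luminance/contrast/structure comparison for this window
            vals.append(((2 * ma * mb + c1) * (2 * cov + c2)) /
                        ((ma ** 2 + mb ** 2 + c1) * (va + vb + c2)))
    return float(np.mean(vals))
```

Scoring each candidate skeleton image against the input image and ranking by this value would select the optimum skeleton without visual inspection.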
NASA Technical Reports Server (NTRS)
Long, S. A. T.
1975-01-01
The effects of various experimental parameters on the displacement errors in the triangulation solution of an elongated object in space due to pointing uncertainties in the lines of sight have been determined. These parameters were the number and location of observation stations, the object's location in latitude and longitude, and the spacing of the input data points on the azimuth-elevation image traces. The displacement errors due to uncertainties in the coordinates of a moving station have been determined as functions of the number and location of the stations. The effects of incorporating the input data from additional cameras at one of the stations were also investigated.
Sobol' sensitivity analysis for stressor impacts on honeybee ...
We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol', to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more
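First-order Sobol' indices can be estimated with a Saltelli-style pick-freeze scheme: evaluate the model on two independent sample matrices and on hybrid matrices that swap in one column at a time. A self-contained sketch on a toy linear model follows; VarroaPop itself is not reproduced here, and the model, coefficients, and sample size are illustrative assumptions.

```python
import numpy as np

def sobol_first_order(model, n_params, n=100_000, seed=0):
    """Saltelli-style Monte Carlo estimate of first-order Sobol' indices
    for a model with independent U(0, 1) inputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, n_params))
    B = rng.random((n, n_params))
    yA, yB = model(A), model(B)
    var = np.concatenate([yA, yB]).var()
    s = np.empty(n_params)
    for i in range(n_params):
        ABi = A.copy()
        ABi[:, i] = B[:, i]           # freeze column i from B, rest from A
        # Saltelli (2010) estimator of Var(E[Y | X_i]) / Var(Y)
        s[i] = np.mean(yB * (model(ABi) - yA)) / var
    return s

# toy response: for a linear model the analytic indices are a_i^2 / sum(a_j^2)
a = np.array([4.0, 2.0, 1.0])
S = sobol_first_order(lambda X: X @ a, 3)
```

For this toy model the analytic indices are a_i^2 / Σ a_j^2 (about 0.76, 0.19, 0.05), which the Monte Carlo estimate approaches as n grows.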
Robust input design for nonlinear dynamic modeling of AUV.
Nouri, Nowrouz Mohammad; Valadi, Mehrdad
2017-09-01
Input design has a dominant role in developing the dynamic model of autonomous underwater vehicles (AUVs) through system identification. Optimal input design is the process of generating informative inputs that can be used to build a good-quality dynamic model of an AUV. In optimal input design, the desired input signal depends on the unknown system that is to be identified. In this paper, an input design approach that is robust to uncertainties in model parameters is used. The Bayesian robust design strategy is applied to design input signals for dynamic modeling of AUVs. The employed approach can design multiple inputs and apply constraints on an AUV system's inputs and outputs. Particle swarm optimization (PSO) is employed to solve the constrained robust optimization problem. The presented algorithm is used to design the input signals for an AUV, and the estimate obtained by robust input design is compared with that of the optimal input design. According to the results, the proposed input design satisfies both robustness of constraints and optimality. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
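A minimal PSO applied to a penalized constrained problem illustrates the optimization step. This is a generic textbook PSO, not the paper's algorithm; the objective, penalty weight, and swarm settings are assumptions made for the example.

```python
import numpy as np

def pso(f, bounds, n=40, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimizer: inertia w, cognitive c1, social c2."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    x = rng.uniform(lo, hi, (n, len(lo)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()           # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# toy constrained problem via quadratic penalty:
# minimize (x-1)^2 + (y-2)^2  subject to  x + y <= 2.5
f = lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2 \
    + 100.0 * max(0.0, p[0] + p[1] - 2.5) ** 2
best, val = pso(f, [(-5, 5), (-5, 5)])
```

The exact constrained minimum is at (0.75, 1.75) with objective 0.125; the penalty formulation lets an unconstrained optimizer like PSO approach it closely.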
NASA Astrophysics Data System (ADS)
Wu, Z. Y.; Zhang, L.; Wang, X. M.; Munger, J. W.
2015-07-01
Small pollutant concentration gradients between levels above a plant canopy result in large uncertainties in estimated air-surface exchange fluxes when using existing micrometeorological gradient methods, including the aerodynamic gradient method (AGM) and the modified Bowen ratio method (MBR). A modified micrometeorological gradient method (MGM) is proposed in this study for estimating O3 dry deposition fluxes over a forest canopy using concentration gradients between a level above and a level below the canopy top, taking advantage of the relatively large gradients between these levels due to significant pollutant uptake in the top layers of the canopy. The new method is compared with the AGM and MBR methods and is also evaluated using eddy-covariance (EC) flux measurements collected at the Harvard Forest Environmental Measurement Site, Massachusetts, during 1993-2000. All three gradient methods (AGM, MBR, and MGM) produced diurnal cycles of O3 dry deposition velocity (Vd(O3)) similar to those of the EC measurements, with the MGM method being the closest in magnitude to the EC measurements. The multi-year average Vd(O3) differed significantly between these methods, with the AGM, MBR, and MGM methods being 2.28, 1.45, and 1.18 times that of the EC, respectively. Sensitivity experiments identified several input parameters for the MGM method as first-order parameters that affect the estimated Vd(O3). A 10% uncertainty in the wind speed attenuation coefficient or canopy displacement height can cause about 10% uncertainty in the estimated Vd(O3). An unrealistic leaf area density vertical profile can cause an uncertainty of a factor of 2.0 in the estimated Vd(O3). Other input parameters or formulas for stability functions caused an uncertainty of only a few percent. The new method provides an alternative approach to monitoring/estimating long-term deposition fluxes of similar pollutants over tall canopies.
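Under neutral stability the aerodynamic gradient method reduces to a one-line flux formula. The sketch below omits the stability corrections and canopy-specific terms that the study shows matter most; the concentrations, measurement heights, and friction velocity are hypothetical values chosen for illustration.

```python
import numpy as np

KARMAN = 0.4  # von Karman constant

def agm_flux(c1, c2, z1, z2, ustar):
    """Aerodynamic gradient method flux under neutral stability:
    F = -k * u* * (c2 - c1) / ln(z2/z1). Negative flux = deposition."""
    return -KARMAN * ustar * (c2 - c1) / np.log(z2 / z1)

def deposition_velocity(flux, c_ref):
    """Dry deposition velocity from the (downward) flux and a reference level."""
    return -flux / c_ref

# hypothetical O3 concentrations (ppb) at two levels above a forest canopy
flux = agm_flux(40.0, 42.0, z1=24.0, z2=29.0, ustar=0.5)
vd = deposition_velocity(flux, 40.0)   # ~0.05 m/s, i.e. a few cm/s
```

Because deposition removes O3 at the surface, concentration increases with height; the resulting flux is negative (downward), and dividing by the reference concentration gives Vd.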
Extrapolation of sonic boom pressure signatures by the waveform parameter method
NASA Technical Reports Server (NTRS)
Thomas, C. L.
1972-01-01
The waveform parameter method of sonic boom extrapolation is derived and shown to be equivalent to the F-function method. A computer program based on the waveform parameter method is presented and discussed, with a sample case demonstrating program input and output.
Prediction and assimilation of surf-zone processes using a Bayesian network: Part II: Inverse models
Plant, Nathaniel G.; Holland, K. Todd
2011-01-01
A Bayesian network model has been developed to simulate a relatively simple problem of wave propagation in the surf zone (detailed in Part I). Here, we demonstrate that this Bayesian model can provide both inverse modeling and data-assimilation solutions for predicting offshore wave heights and depth estimates given limited wave-height and depth information from an onshore location. The inverse method is extended to allow data assimilation using observational inputs that are not compatible with deterministic solutions of the problem. These inputs include sand bar positions (instead of bathymetry) and estimates of the intensity of wave breaking (instead of wave-height observations). Our results indicate that wave breaking information is essential to reduce prediction errors. In many practical situations, this information could be provided from a shore-based observer or from remote-sensing systems. We show that various combinations of the assimilated inputs significantly reduce the uncertainty in the estimates of water depths and wave heights in the model domain. Application of the Bayesian network model to new field data demonstrated significant predictive skill (R2 = 0.7) for the inverse estimate of a month-long time series of offshore wave heights. The Bayesian inverse results include uncertainty estimates that were shown to be most accurate when uncertainty in the inputs (e.g., depth and tuning parameters) was taken into account. Furthermore, the inverse modeling was extended to directly estimate tuning parameters associated with the underlying wave-process model. The inverse estimates of the model parameters not only showed an offshore wave-height dependence consistent with the results of previous studies, but their uncertainty estimates also explain previously reported variations in the model parameters.
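The inverse step can be illustrated with Bayes' rule on a discretized variable: a toy forward model maps offshore wave height to an onshore observation, and the posterior over offshore heights is proportional to likelihood times prior. The forward model, grid, and noise level below are invented for illustration and are not the Part I wave model.

```python
import numpy as np

# discretized offshore wave heights (m) with a uniform prior
H = np.linspace(0.5, 4.0, 8)
prior = np.full(H.size, 1.0 / H.size)

def forward(h):
    """Toy forward model: onshore height after depth-limited breaking."""
    return np.minimum(h, 1.5) * 0.9

def posterior(obs, sigma=0.2):
    """Bayes' rule on the grid: P(H | obs) proportional to P(obs | H) P(H)."""
    like = np.exp(-0.5 * ((obs - forward(H)) / sigma) ** 2)
    post = like * prior
    return post / post.sum()

p = posterior(0.8)  # posterior over offshore heights given an onshore reading
```

Note that all offshore heights above the breaking limit map to the same onshore value, so the observation alone cannot distinguish them; this degeneracy is one reason additional wave-breaking information reduces the uncertainty of the inverse estimate.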
Structural equation modeling in environmental risk assessment.
Buncher, C R; Succop, P A; Dietrich, K N
1991-01-01
Environmental epidemiology requires effective models that take individual observations of environmental factors and connect them into meaningful patterns. Single-factor relationships have given way to multivariable analyses; simple additive models have been augmented by multiplicative (logistic) models. Each of these steps has produced greater enlightenment and understanding. Models that allow for factors causing outputs that can affect later outputs with putative causation working at several different time points (e.g., linkage) are not commonly used in the environmental literature. Structural equation models are a class of covariance structure models that have been used extensively in economics/business and social science but are still little used in the realm of biostatistics. Path analysis in genetic studies is one simplified form of this class of models. We have been using these models in a study of the health and development of infants who have been exposed to lead in utero and in the postnatal home environment. These models require as input the directionality of the relationship and then produce fitted models for multiple inputs causing each factor and the opportunity to have outputs serve as input variables into the next phase of the simultaneously fitted model. Some examples of these models from our research are presented to increase familiarity with this class of models. Use of these models can provide insight into the effect of changing an environmental factor when assessing risk. The usual cautions concerning believing a model, believing causation has been proven, and the assumptions that are required for each model are operative.
DEPOT: A Database of Environmental Parameters, Organizations and Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
CARSON,SUSAN D.; HUNTER,REGINA LEE; MALCZYNSKI,LEONARD A.
2000-12-19
The Database of Environmental Parameters, Organizations, and Tools (DEPOT) has been developed by the Department of Energy (DOE) as a central warehouse for access to data essential for environmental risk assessment analyses. Initial efforts have concentrated on groundwater and vadose zone transport data and bioaccumulation factors. DEPOT seeks to provide a source of referenced data that, wherever possible, includes the level of uncertainty associated with these parameters. Based on the amount of data available for a particular parameter, uncertainty is expressed as a standard deviation or a distribution function. DEPOT also provides DOE site-specific performance assessment data, pathway-specific transport data, and links to environmental regulations, disposal site waste acceptance criteria, other environmental parameter databases, and environmental risk assessment models.
The next green movement: Plant biology for the environment and sustainability.
Jez, Joseph M; Lee, Soon Goo; Sherp, Ashley M
2016-09-16
From domestication and breeding to the genetic engineering of crops, plants provide food, fuel, fibers, and feedstocks for our civilization. New research and discoveries aim to reduce the inputs needed to grow crops and to develop plants for environmental and sustainability applications. Faced with population growth and changing climate, the next wave of innovation in plant biology integrates technologies and approaches that span from molecular to ecosystem scales. Recent efforts to engineer plants for better nitrogen and phosphorus use, enhanced carbon fixation, and environmental remediation and to understand plant-microbiome interactions showcase exciting future directions for translational plant biology. These advances promise new strategies for the reduction of inputs to limit environmental impacts and improve agricultural sustainability. Copyright © 2016, American Association for the Advancement of Science.