Douglas, P; Tyrrel, S F; Kinnersley, R P; Whelan, M; Longhurst, P J; Walsh, K; Pollard, S J T; Drew, G H
2016-12-15
Bioaerosols are released in elevated quantities from composting facilities and are associated with negative health effects, although dose-response relationships are not well understood and require improved exposure classification. Dispersion modelling has great potential to improve exposure classification, but has not yet been extensively used or validated in this context. We present a sensitivity analysis of the ADMS dispersion model specific to input parameter ranges relevant to bioaerosol emissions from open windrow composting. This analysis provides an aid for model calibration by prioritising parameter adjustment and targeting independent parameter estimation. Results showed that predicted exposure was most sensitive to the wet and dry deposition modules and the majority of parameters relating to emission source characteristics, including pollutant emission velocity, source geometry and source height. This research improves understanding of the accuracy of model input data required to provide more reliable exposure predictions. Copyright © 2016. Published by Elsevier Ltd.
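A minimal one-at-a-time sensitivity screen of the kind reported can be sketched as follows; a toy ground-level-concentration function stands in for ADMS, and all parameter names and ranges are hypothetical:

```python
# One-at-a-time (OAT) sensitivity sketch: perturb each input by +/-50% around
# a baseline and report the relative change in predicted concentration.
# The crude plume-style toy function below stands in for ADMS.
import numpy as np

base = {"emission_rate": 1e6, "source_height": 2.0, "exit_velocity": 5.0}

def toy_conc(p):
    """Crude ground-level concentration surrogate at a fixed receptor."""
    rise = 0.4 * p["exit_velocity"]          # toy plume rise, m
    h_eff = p["source_height"] + rise
    return p["emission_rate"] * np.exp(-0.5 * (h_eff / 10.0) ** 2)

c0 = toy_conc(base)
for name in base:
    for factor in (0.5, 1.5):
        p = dict(base)
        p[name] = base[name] * factor
        print(f"{name} x{factor}: {toy_conc(p) / c0 - 1:+.1%}")
```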
The successful use of the Exposure Related Dose Estimating Model (ERDEM) for assessment of dermal exposure of humans to OP pesticides requires the input of representative and comparable input parameters. In the specific case of dermal exposure, regional anatomical variation in...
Sobol' sensitivity analysis for stressor impacts on honeybee ...
We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol', to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more ...
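As a rough illustration of the variance-based Sobol' approach described above (not the authors' VarroaPop code; the three inputs, their ranges, and the colony model are hypothetical stand-ins), SALib can compute first- and total-order indices:

```python
# Illustrative Sobol' decomposition with SALib; a toy surrogate replaces
# a VarroaPop run, and the input ranges are assumptions.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["queen_strength", "forager_lifespan_d", "pesticide_ld50_ug"],
    "bounds": [[1, 5], [4, 16], [0.01, 1.0]],
}

X = saltelli.sample(problem, 1024)      # (N*(2D+2), D) design matrix

def colony_size(x):                     # toy stand-in for a full simulation
    q, f, ld50 = x
    return 10000 * q * f / (1.0 + 0.05 / ld50)

Y = np.apply_along_axis(colony_size, 1, X)
Si = sobol.analyze(problem, Y)          # first-, second-, total-order indices
print(Si["S1"], Si["ST"])
```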
Guidance to select and prepare input values for OPP's aquatic exposure models. Intended to improve the consistency in modeling the fate of pesticides in the environment and quality of OPP's aquatic risk assessments.
Zheng, Jiajia; Huynh, Trang; Gasparon, Massimo; Ng, Jack; Noller, Barry
2013-12-01
Lead from historical mining and mineral processing activities may pose potential human health risks if materials with high concentrations of bioavailable lead minerals are released to the environment. Since the Joint Expert Committee on Food Additives of the Food and Agriculture Organization/World Health Organization withdrew the Provisional Tolerable Weekly Intake of lead in 2011, an alternative method was required for lead exposure assessment. This study evaluated the potential lead hazard to young children (0-7 years) from a historical mining location in a semi-arid area using the U.S. EPA Integrated Exposure Uptake Biokinetic (IEUBK) Model, with selected site-specific input data. The study assessed lead exposure via the inhalation pathway for children living in a location affected by lead mining activities, with specific reference to semi-arid conditions, and compared it with the ingestion pathway using the physiologically based extraction test for gastro-intestinal simulation. Sensitivity analysis for major IEUBK input parameters was conducted. Three groups of input parameters were classified according to the results of predicted blood concentrations. The modelled lead absorption attributed to the inhalation route was lower than 2 % (mean ± SE, 0.9 % ± 0.1 %) of all lead intake routes, demonstrating inhalation to be a less significant exposure pathway to children's blood lead than ingestion. Whilst dermal exposure was negligible, diet and ingestion of soil and dust were the dominant parameters in terms of children's blood lead prediction. The exposure assessment identified the changing role of dietary intake when house lead loadings varied. Recommendations were also made to conduct comprehensive site-specific human health risk assessment in future studies of lead exposure under a semi-arid climate.
Parameter uncertainty and variability in evaluative fate and exposure models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hertwich, E.G.; McKone, T.E.; Pease, W.S.
The human toxicity potential, a weighting scheme used to evaluate toxic emissions for life cycle assessment and toxics release inventories, is based on potential dose calculations and toxicity factors. This paper evaluates the variance in potential dose calculations that can be attributed to the uncertainty in chemical-specific input parameters as well as the variability in exposure factors and landscape parameters. A knowledge of the uncertainty allows us to assess the robustness of a decision based on the toxicity potential; a knowledge of the sources of uncertainty allows one to focus resources if the uncertainty is to be reduced. The potential dose of 236 chemicals was assessed. The chemicals were grouped by dominant exposure route, and a Monte Carlo analysis was conducted for one representative chemical in each group. The variance is typically one to two orders of magnitude. For comparison, the point estimates in potential dose for 236 chemicals span ten orders of magnitude. Most of the variance in the potential dose is due to chemical-specific input parameters, especially half-lives, although exposure factors such as fish intake and the source of drinking water can be important for chemicals whose dominant exposure is through indirect routes. Landscape characteristics are generally of minor importance.
Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha
2007-01-01
Background: The estimation of health impacts often involves uncertain input variables and assumptions that have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with the model and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Methods: Life-expectancy of the Helsinki metropolitan area population and the change in life-expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and the parameter uncertainties were (iii) exposure-response coefficients for different mortality outcomes and (iv) exposure estimates for different age groups. The monetary value of the years-of-life-lost and the relative importance of the uncertainties related to monetary valuation were predicted to compare the relative importance of the monetary valuation on the health effect uncertainties. Results: The magnitude of the health effects costs depended mostly on discount rate, exposure-response coefficient, and plausibility of the cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental and infant mortality) and lag had only minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in fine particle impact assessment when compared with other uncertainties. Conclusion: When estimating life-expectancy, the estimates used for the cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful assessment, while complicated lag estimates can be omitted without any major effect on the results. PMID:17714598
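A minimal sketch of the rank-order-correlation sensitivity screen the authors describe, with the life-table model replaced by a toy output function and illustrative input distributions:

```python
# Rank-order-correlation sensitivity sketch; distributions and the output
# function are placeholders, not the study's life-table model.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 5000
inputs = {
    "exposure_response": rng.lognormal(np.log(0.006), 0.4, n),  # per ug/m3
    "discount_rate":     rng.uniform(0.0, 0.05, n),
    "plausibility":      rng.uniform(0.5, 1.0, n),  # cardiopulmonary outcome
}
# Toy stand-in for the monetised life-table output
output = (inputs["exposure_response"] * inputs["plausibility"]
          / (1.0 + 20 * inputs["discount_rate"]))

for name, x in inputs.items():
    rho, _ = spearmanr(x, output)
    print(f"{name:20s} Spearman rho = {rho:+.2f}")
```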
78 FR 76521 - Risk-Based Capital Guidelines; Market Risk
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-18
... "[t]his means that the change has no practical impact on the rules that apply to the provision of... delinquency of the underlying exposures as discussed below. Among the inputs to the SSFA is the "W" parameter, which increases the capital requirements for a securitization exposure when delinquencies in the...
Conditions affecting boundary response to messages out of awareness.
Fisher, S
1976-05-01
Multiple studies evaluated the role of the following parameters in mediating the effects of auditory subliminal inputs upon the body boundary: being made aware that exposure to subliminal stimuli is occurring, nature of the priming preliminary to the input, length of exposure, competing sensory input, use of specialized content messages, tolerance for unrealistic experience, and masculinity-femininity. A test-retest design was typically employed that involved measuring the baseline Barrier score with the Holtzman blots and then ascertaining the Barrier change when responding to a second series of Holtzman blots at the same time that subliminal input was occurring. Complex results emerged that defined in considerably new detail what facilitates and blocks the boundary-disrupting effects of subliminal messages in men and, to a lesser degree, in women.
Between-User Reliability of Tier 1 Exposure Assessment Tools Used Under REACH.
Lamb, Judith; Galea, Karen S; Miller, Brian G; Hesse, Susanne; Van Tongeren, Martie
2017-10-01
When applying simple screening (Tier 1) tools to estimate exposure to chemicals in a given exposure situation under the Registration, Evaluation, Authorisation and restriction of CHemicals Regulation 2006 (REACH), users must select from several possible input parameters. Previous studies have suggested that results from exposure assessments using expert judgement and from the use of modelling tools can vary considerably between assessors. This study aimed to investigate the between-user reliability of Tier 1 tools. A remote-completion exercise and an in-person workshop were used to identify and evaluate tool parameters and factors such as user demographics that may be potentially associated with between-user variability. Participants (N = 146) generated dermal and inhalation exposure estimates (N = 4066) from specified workplace descriptions ('exposure situations') and Tier 1 tool combinations (N = 20). Interactions between users, tools, and situations were investigated and described. Systematic variation associated with individual users was minor compared with random between-user variation. Although variation was observed in the choices made for the majority of input parameters, differing choices of Process Category ('PROC') code/activity descriptor and dustiness level had the greatest impact on the resultant exposure estimates. Exposure estimates ranging over several orders of magnitude were generated for the same exposure situation by different tool users. Such unpredictable between-user variation will reduce consistency within REACH processes and could result in underestimation or overestimation of exposure, risking worker ill-health or the implementation of unnecessary risk controls, respectively. Implementation of additional support and quality control systems for all tool users is needed to reduce between-assessor variation and so ensure both the protection of worker health and avoidance of unnecessary business risk management expenditure. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
Use-Exposure Relationships of Pesticides for Aquatic Risk Assessment
Luo, Yuzhou; Spurlock, Frank; Deng, Xin; Gill, Sheryl; Goh, Kean
2011-01-01
Field-scale environmental models have been widely used in aquatic exposure assessments of pesticides. Those models usually require a large set of input parameters and separate simulations for each pesticide in evaluation. In this study, a simple use-exposure relationship is developed based on regression analysis of stochastic simulation results generated from the Pesticide Root-Zone Model (PRZM). The developed mathematical relationship estimates edge-of-field peak concentrations of pesticides from aerobic soil metabolism half-life (AERO), organic carbon-normalized soil sorption coefficient (KOC), and application rate (RATE). In a case study of California crop scenarios, the relationships explained 90–95% of the variances in the peak concentrations of dissolved pesticides as predicted by PRZM simulations for a 30-year period. KOC was identified as the governing parameter in determining the relative magnitudes of pesticide exposures in a given crop scenario. The results of model application also indicated that the effects of chemical fate processes such as partitioning and degradation on pesticide exposure were similar among crop scenarios, while the cross-scenario variations were mainly associated with the landscape characteristics, such as organic carbon contents and curve numbers. With a minimum set of input data, the use-exposure relationships proposed in this study could be used in screening procedures for potential water quality impacts from the off-site movement of pesticides. PMID:21483772
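A sketch of how such a use-exposure regression can be derived, assuming log-linear dependence on KOC, aerobic half-life and application rate; the PRZM outputs are replaced here by synthetic data:

```python
# Fitting a use-exposure regression: log peak concentration on log KOC,
# log AERO half-life and log application rate. Synthetic stand-in data.
import numpy as np

rng = np.random.default_rng(0)
n = 500
koc  = rng.lognormal(5, 1, n)      # L/kg-OC
aero = rng.lognormal(3, 1, n)      # days
rate = rng.uniform(0.1, 5, n)      # kg a.i./ha

# synthetic "PRZM" peak concentrations with noise
log_c = (1.0 - 0.8 * np.log(koc) + 0.4 * np.log(aero) + np.log(rate)
         + rng.normal(0, 0.2, n))

X = np.column_stack([np.ones(n), np.log(koc), np.log(aero), np.log(rate)])
beta, *_ = np.linalg.lstsq(X, log_c, rcond=None)
r2 = 1 - np.sum((log_c - X @ beta)**2) / np.sum((log_c - log_c.mean())**2)
print(beta, r2)   # fitted coefficients and variance explained
```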
Petroleum Release Assessment and Impacts of Weather Extremes
Contaminated ground water and vapor intrusion are two major exposure pathways of concern at petroleum release sites. EPA has recently developed a model for petroleum vapor intrusion, called PVIScreen, which incorporates variability and uncertainty in input parameters. This ap...
USDA-ARS?s Scientific Manuscript database
The USEPA Office of Pesticide Programs (OPP) reviewed most of its human and ecological exposure assessment models for conventional pesticides to evaluate which inputs and parameters may be affected by changing climate conditions. To illustrate the approach used for considering potential effects of c...
FACTORS INFLUENCING TOTAL DIETARY EXPOSURE OF YOUNG CHILDREN
A deterministic model was developed to identify critical input parameters to assess dietary intake of young children. The model was used as a framework for understanding important factors in data collection and analysis. Factors incorporated included transfer efficiencies of pest...
FACTORS INFLUENCING TOTAL DIETARY EXPOSURES OF YOUNG CHILDREN
A deterministic model was developed to identify the critical input parameters needed to assess dietary intakes of young children. The model was used as a framework for understanding the important factors in data collection and data analysis. Factors incorporated into the model i...
Cancer Risk Assessment in Welders Under Different Exposure Scenarios.
Barkhordari, Abolfazl; Zare Sakhvidi, Mohammad Javad; Zare Sakhvidi, Fariba; Halvani, Gholamhossein; Firoozichahak, Ali; Shirali, GholamAbbas
2014-05-01
Welders' exposure to nickel and hexavalent chromium in welding fumes is associated with increased cancer risk. In this study we calculated the cancer risk due to exposure to these compounds in welders. The role of welders' exposure parameters in the derived incremental lifetime cancer risk was determined by stochastic modeling of cancer risk. Input parameters were determined by field investigation of Iranian welders in 2013 and literature review. The 90% upper-bound cancer risk due to hexavalent chromium and nickel exposure was in the range of 6.03E-03 to 2.12E-02 and 7.18E-03 to 2.61E-02, respectively. Scenario analysis showed that asthmatic and project welders are at significantly higher cancer risk in comparison with other welders (P<0.05). Shift duration was responsible for 37% and 33% of variance for hexavalent chromium and nickel, respectively. Welders are at high and unacceptable risk of cancer. Control measures according to the scenario analysis findings are advisable.
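A hedged sketch of this kind of stochastic risk calculation, using the standard US EPA exposure-concentration form for inhaled carcinogens; all distributions and the unit risk value are illustrative placeholders, not the study's inputs:

```python
# Stochastic incremental-lifetime-cancer-risk sketch for an inhaled carcinogen.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
C   = rng.lognormal(np.log(1.0), 0.7, n)  # Cr(VI) in fume, ug/m3 (assumed)
ET  = rng.triangular(6, 8, 12, n)         # h/day (shift duration, assumed)
EF  = rng.uniform(200, 300, n)            # days/year (assumed)
ED  = rng.uniform(10, 35, n)              # working years (assumed)
AT  = 70 * 365 * 24                       # lifetime averaging time, hours
IUR = 1.2e-2                              # per ug/m3; an often-cited Cr(VI)
                                          # inhalation unit risk (assumption)

EC   = C * ET * EF * ED / AT              # lifetime-averaged concentration
ilcr = IUR * EC
print("90th percentile ILCR:", np.quantile(ilcr, 0.9))
```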
Terrestrial Microcosm Evaluation of Two Army Smoke-Producing Compounds.
1988-01-29
a greenhouse under natural or controlled photoperiods (depending on the time of year) with rainfall input simulated. Parameters monitored ... Sixty intact soil-core microcosms that had been extracted from an undisturbed (for many years) field site were set up in a greenhouse under strict ... tests. The 60 cores were divided equally between two greenhouse bays, 30 cores for exposure to RP/BR and 30 cores for exposure to WP. Within each group
Preliminary calculation of solar cosmic ray dose to the female breast in space mission
NASA Technical Reports Server (NTRS)
Shavers, Mark; Poston, John W.; Atwell, William; Hardy, Alva C.; Wilson, John W.
1991-01-01
No regulatory dose limits are specifically assigned for the radiation exposure of female breasts during manned space flight. However, the relatively high radiosensitivity of the glandular tissue of the breasts and its potential exposure to solar flare protons on short- and long-term missions mandate a priori estimation of the associated risks. A model for estimating exposure within the breast is developed for use in future NASA missions. The female breast and torso geometry is represented by a simple interim model. A recently developed proton dose-buildup procedure is used for estimating doses. The model considers geomagnetic shielding, magnetic-storm conditions, spacecraft shielding, and body self-shielding. Inputs to the model include proton energy spectra, spacecraft orbital parameters, STS orbiter-shielding distribution at a given position, and a single parameter allowing for variation in breast size.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demou, Evangelia; Hellweg, Stefanie; Wilson, Michael P.
2009-05-01
We evaluated three exposure models with data obtained from measurements among workers who use "aerosol" solvent products in the vehicle repair industry and with field experiments using these products to simulate the same exposure conditions. The three exposure models were the: 1) homogeneously-mixed-one-box model, 2) multi-zone model, and 3) eddy-diffusion model. Temporally differentiated real-time breathing zone volatile organic compound (VOC) concentration measurements, integrated far-field area samples, and simulated experiments were used in estimating parameters, such as emission rates, diffusivity, and near-field dimensions. We assessed differences in model input requirements and their efficacy for predictive modeling. The one-box model was not able to resemble the temporal profile of exposure concentrations, but it performed well concerning time-weighted exposure over extended time periods. However, this model required an adjustment for spatial concentration gradients. Multi-zone models and diffusion-models may solve this problem. However, we found that the reliable use of both these models requires extensive field data to appropriately define pivotal parameters such as diffusivity or near-field dimensions. We conclude that it is difficult to apply these models for predicting VOC exposures in the workplace. However, for comparative exposure scenarios in life-cycle assessment they may be useful.
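The homogeneously-mixed one-box model has a closed-form solution under a constant emission rate; a minimal sketch with assumed room and emission parameters:

```python
# One-box (well-mixed) model sketch: dC/dt = G/V - (Q/V) C, C(0) = 0.
# All parameter values are assumptions, not the study's measurements.
import numpy as np

G = 120.0    # VOC emission rate, mg/min (assumed)
Q = 2.5      # ventilation rate, m3/min (assumed)
V = 60.0     # room volume, m3 (assumed)

def c_box(t_min):
    """Well-mixed concentration (mg/m3) after t minutes."""
    return (G / Q) * (1.0 - np.exp(-Q * t_min / V))

t = np.arange(0, 61, 5)
twa_15 = np.trapz(c_box(np.linspace(0, 15, 151)), dx=0.1) / 15  # 15-min TWA
print(c_box(t), twa_15)
```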
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiong, Z; Vijayan, S; Rana, V
2015-06-15
Purpose: A system was developed that automatically calculates the organ and effective dose for individual fluoroscopically-guided procedures using a log of the clinical exposure parameters. Methods: We have previously developed a dose tracking system (DTS) to provide a real-time color-coded 3D-mapping of skin dose. This software produces a log file of all geometry and exposure parameters for every x-ray pulse during a procedure. The data in the log files is input into PCXMC, a Monte Carlo program that calculates organ and effective dose for projections and exposure parameters set by the user. We developed a MATLAB program to read data from the log files produced by the DTS and to automatically generate the definition files in the format used by PCXMC. The processing is done at the end of a procedure after all exposures are completed. Since there are thousands of exposure pulses with various parameters for fluoroscopy, DA and DSA and at various projections, the data for exposures with similar parameters is grouped prior to entry into PCXMC to reduce the number of Monte Carlo calculations that need to be performed. Results: The software developed automatically transfers data from the DTS log file to PCXMC and runs the program for each grouping of exposure pulses. When the dose from all exposure events are calculated, the doses for each organ and all effective doses are summed to obtain procedure totals. For a complicated interventional procedure, the calculations can be completed on a PC without manual intervention in less than 30 minutes depending on the level of data grouping. Conclusion: This system allows organ dose to be calculated for individual procedures for every patient without tedious calculations or data entry so that estimates of stochastic risk can be obtained in addition to the deterministic risk estimate provided by the DTS. Partial support from NIH grant R01EB002873 and Toshiba Medical Systems Corp.
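A sketch of the grouping step described above, binning pulses by similar technique factors so each group needs only one Monte Carlo run; the log fields and bin widths are hypothetical:

```python
# Grouping per-pulse exposure records before Monte Carlo dose calculation.
# Column names and bin widths are illustrative, not the real DTS schema.
import pandas as pd

log = pd.DataFrame({   # hypothetical per-pulse log
    "mode":  ["fluoro"] * 4 + ["DSA"] * 2,
    "kvp":   [78, 79, 81, 78, 90, 91],
    "gantry_angle": [3, 4, 27, 5, 0, 1],
    "mas":   [0.5, 0.5, 0.6, 0.5, 8.0, 8.2],
})
log["kvp_bin"]   = (log["kvp"] // 5) * 5             # 5-kV bins
log["angle_bin"] = (log["gantry_angle"] // 10) * 10  # 10-degree bins

groups = (log.groupby(["mode", "kvp_bin", "angle_bin"])
             .agg(pulses=("mas", "size"), total_mas=("mas", "sum"))
             .reset_index())
# One PCXMC definition file per row of `groups`, scaled by total_mas
print(len(log), "pulses ->", len(groups), "Monte Carlo runs")
```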
Radomyski, Artur; Giubilato, Elisa; Ciffroy, Philippe; Critto, Andrea; Brochot, Céline; Marcomini, Antonio
2016-11-01
The study is focused on applying uncertainty and sensitivity analysis to support the application and evaluation of large exposure models where a significant number of parameters and complex exposure scenarios might be involved. The recently developed MERLIN-Expo exposure modelling tool was applied to probabilistically assess the ecological and human exposure to PCB 126 and 2,3,7,8-TCDD in the Venice lagoon (Italy). The 'Phytoplankton', 'Aquatic Invertebrate', 'Fish', 'Human intake' and PBPK models available in MERLIN-Expo library were integrated to create a specific food web to dynamically simulate bioaccumulation in various aquatic species and in the human body over individual lifetimes from 1932 until 1998. MERLIN-Expo is a high tier exposure modelling tool allowing propagation of uncertainty on the model predictions through Monte Carlo simulation. Uncertainty in model output can be further apportioned between parameters by applying built-in sensitivity analysis tools. In this study, uncertainty has been extensively addressed in the distribution functions to describe the data input and the effect on model results by applying sensitivity analysis techniques (screening Morris method, regression analysis, and variance-based method EFAST). In the exposure scenario developed for the Lagoon of Venice, the concentrations of 2,3,7,8-TCDD and PCB 126 in human blood turned out to be mainly influenced by a combination of parameters (half-lives of the chemicals, body weight variability, lipid fraction, food assimilation efficiency), physiological processes (uptake/elimination rates), environmental exposure concentrations (sediment, water, food) and eating behaviours (amount of food eaten). In conclusion, this case study demonstrated feasibility of MERLIN-Expo to be successfully employed in integrated, high tier exposure assessment. Copyright © 2016 Elsevier B.V. All rights reserved.
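The Morris screening step mentioned above can also be sketched with SALib; the MERLIN-Expo food-web model is replaced by a toy function and the four inputs are illustrative:

```python
# Morris elementary-effects screening sketch (illustrative inputs).
import numpy as np
from SALib.sample import morris as morris_sample
from SALib.analyze import morris

problem = {
    "num_vars": 4,
    "names": ["half_life_d", "body_weight_kg", "lipid_fraction", "intake_kg_d"],
    "bounds": [[100, 3000], [50, 100], [0.1, 0.4], [0.2, 1.5]],
}
X = morris_sample.sample(problem, N=200, num_levels=4)

def blood_conc(x):          # toy surrogate for the PBPK output
    t_half, bw, lipid, intake = x
    return intake * lipid * t_half / bw

Y = np.apply_along_axis(blood_conc, 1, X)
Si = morris.analyze(problem, X, Y, num_levels=4)
for n, mu, sg in zip(problem["names"], Si["mu_star"], Si["sigma"]):
    print(f"{n:16s} mu* = {mu:8.2f}  sigma = {sg:8.2f}")
```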
Automation of PCXMC and ImPACT for NASA Astronaut Medical Imaging Dose and Risk Tracking
NASA Technical Reports Server (NTRS)
Bahadori, Amir; Picco, Charles; Flores-McLaughlin, John; Shavers, Mark; Semones, Edward
2011-01-01
To automate astronaut organ and effective dose calculations from occupational X-ray and computed tomography (CT) examinations incorporating PCXMC and ImPACT tools and to estimate the associated lifetime cancer risk per the National Council on Radiation Protection & Measurements (NCRP) using MATLAB(R). Methods: NASA follows guidance from the NCRP on its operational radiation safety program for astronauts. NCRP Report 142 recommends that astronauts be informed of the cancer risks from reported exposures to ionizing radiation from medical imaging. MATLAB(R) code was written to retrieve exam parameters for medical imaging procedures from a NASA database, calculate associated dose and risk, and return results to the database, using the Microsoft .NET Framework. This code interfaces with the PCXMC executable and emulates the ImPACT Excel spreadsheet to calculate organ doses from X-rays and CTs, respectively, eliminating the need to utilize the PCXMC graphical user interface (except for a few special cases) and the ImPACT spreadsheet. Results: Using MATLAB(R) code to interface with PCXMC and replicate ImPACT dose calculation allowed for rapid evaluation of multiple medical imaging exams. The user inputs the exam parameter data into the database and runs the code. Based on the imaging modality and input parameters, the organ doses are calculated. Output files are created for record, and organ doses, effective dose, and cancer risks associated with each exam are written to the database. Annual and post-flight exposure reports, which are used by the flight surgeon to brief the astronaut, are generated from the database. Conclusions: Automating PCXMC and ImPACT for evaluation of NASA astronaut medical imaging radiation procedures allowed for a traceable and rapid method for tracking projected cancer risks associated with over 12,000 exposures. This code will be used to evaluate future medical radiation exposures, and can easily be modified to accommodate changes to the risk calculation procedure.
Padilla, Lauren; Winchell, Michael; Peranginangin, Natalia; Grant, Shanique
2017-11-01
Wheat crops and the major wheat-growing regions of the United States are not included in the 6 crop- and region-specific scenarios developed by the US Environmental Protection Agency (USEPA) for exposure modeling with the Pesticide Root Zone Model conceptualized for groundwater (PRZM-GW). The present work augments the current scenarios by defining appropriately vulnerable PRZM-GW scenarios for high-producing spring and winter wheat-growing regions that are appropriate for use in refined pesticide exposure assessments. Initial screening-level modeling was conducted for all wheat areas across the conterminous United States as defined by multiple years of the Cropland Data Layer land-use data set. Soil, weather, groundwater temperature, evaporation depth, and crop growth and management practices were characterized for each wheat area from publicly and nationally available data sets and converted to input parameters for PRZM. Approximately 150 000 unique combinations of weather, soil, and input parameters were simulated with PRZM for an herbicide applied for postemergence weed control in wheat. The resulting postbreakthrough average herbicide concentrations in a theoretical shallow aquifer were ranked to identify states with the largest regions of relatively vulnerable wheat areas. For these states, input parameters resulting in near-90th-percentile postbreakthrough average concentrations corresponding to significant wheat areas with shallow depth to groundwater formed the basis for 4 new spring wheat scenarios and 4 new winter wheat scenarios to be used in PRZM-GW simulations. Spring wheat scenarios were identified in North Dakota, Montana, Washington, and Texas. Winter wheat scenarios were identified in Oklahoma, Texas, Kansas, and Colorado. Postbreakthrough average herbicide concentrations in the new scenarios were lower than those of all the USEPA's original 6 scenarios except Florida Potato and Georgia Coastal Peanuts, and the new scenarios better represented regions dominated by wheat crops. Integr Environ Assess Manag 2017;13:992-1006. © 2017 The Authors. Integrated Environmental Assessment and Management published by Wiley Periodicals, Inc. on behalf of Society of Environmental Toxicology & Chemistry (SETAC).
van der Velde-Koerts, Trijntje; Breysse, Nicolas; Pattingre, Lauriane; Hamey, Paul Y; Lutze, Jason; Mahieu, Karin; Margerison, Sam; Ossendorp, Bernadette C; Reich, Hermine; Rietveld, Anton; Sarda, Xavier; Vial, Gaelle; Sieke, Christian
2018-06-03
In 2015 a scientific workshop was held in Geneva, where updating the International Estimate of Short-Term Intake (IESTI) equations was suggested. This paper studies the effects of the proposed changes in residue inputs, large portions, variability factors and unit weights on the overall short-term dietary exposure estimate. Depending on the IESTI case equation, a median increase in estimated overall exposure by a factor of 1.0-6.8 was observed when the current IESTI equations are replaced by the proposed IESTI equations. The highest increase in the estimated exposure arises from the replacement of the median residue (STMR) by the maximum residue limit (MRL) for bulked and blended commodities (case 3 equations). The change in the large-portion parameter does not have a significant impact on the estimated exposure. The use of large portions derived from the general population covering all age groups and bodyweights should be avoided when large portions are not expressed on an individual bodyweight basis. Replacement of the highest residue (HR) by the MRL and removal of the unit weight each increase the estimated exposure for small-, medium- and large-sized commodities (case 1, case 2a or case 2b equations). However, within the EU framework, lowering of the variability factor from 7 or 5 to 3 counterbalances the effect of changes in other parameters, resulting in an estimated overall exposure change for the EU situation of a factor of 0.87-1.7 and 0.6-1.4 for IESTI case 2a and case 2b equations, respectively.
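For orientation, the commonly cited JMPR forms of the IESTI case equations can be written as below (simplified; the actual guidance covers more cases and conventions). The example shows how lowering the variability factor from 7 to 3 counterbalances other changes in a case 2a calculation:

```python
# Simplified IESTI case equations (commonly cited JMPR forms; illustrative).
def iesti_case1(lp_kg, hr, bw):             # unit weight below 25 g
    return lp_kg * hr / bw                  # mg/kg bw/day

def iesti_case2a(lp_kg, hr, u_kg, nu, bw):  # unit weight < large portion
    return (u_kg * hr * nu + (lp_kg - u_kg) * hr) / bw

def iesti_case3(lp_kg, stmr, bw):           # bulked/blended commodities
    return lp_kg * stmr / bw

# Example: 0.3 kg unit, 1 kg large portion, HR 0.5 mg/kg, STMR 0.1 mg/kg,
# 60 kg adult; compare variability factors 7 and 3.
print("case 1:", round(iesti_case1(1.0, 0.5, 60), 4))
print("case 3:", round(iesti_case3(1.0, 0.1, 60), 4))
for nu in (7, 3):
    print("case 2a, nu =", nu, ":", round(iesti_case2a(1.0, 0.5, 0.3, nu, 60), 4))
```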
The STScI STIS Pipeline V: Cosmic Ray Rejection
NASA Astrophysics Data System (ADS)
Baum, Stefi; Hsu, J. C.; Hodge, Phil; Ferguson, Harry
1996-07-01
In this ISR we describe calstis-2, the calstis calibration module that combines CRSPLIT exposures to produce a single cosmic ray rejected image. Cosmic ray rejection in the STIS pipeline will follow the same basic philosophy as does the STSDAS task crrej - a series of separate CRSPLIT exposures are combined to produce a single summed image, where discrepant values (differing by some number of sigma from the guess value) are discarded in forming the output image. The calstis pipeline is able to perform this cosmic ray rejection because the individually commanded exposures are associated together into a single dataset by TRANS and generic conversion. crrej will also exist as a task in STSDAS to allow users to re-run the cosmic ray rejection, altering the input parameters.
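A minimal sigma-clipping combine in the spirit of crrej (the real calstis-2 also scales by exposure time and uses a fuller noise model):

```python
# Sigma-clipping combine of CRSPLIT exposures: median guess, reject pixels
# deviating by more than `sigmas` under a simple Poisson noise model, sum.
import numpy as np

def crreject(stack, sigmas=4.0):
    """stack: (n_exposures, ny, nx) array of equal CRSPLIT exposures (counts)."""
    guess = np.median(stack, axis=0)
    noise = np.sqrt(np.clip(guess, 1.0, None))     # simple Poisson noise model
    good = np.abs(stack - guess) < sigmas * noise  # accepted-pixel mask
    n_good = np.maximum(good.sum(axis=0), 1)
    mean = (stack * good).sum(axis=0) / n_good
    return mean * stack.shape[0]                   # scale to summed exposure

stack = np.random.poisson(100, (3, 64, 64)).astype(float)
stack[1, 10, 10] += 5000                 # inject a cosmic-ray hit
print(crreject(stack)[10, 10])           # close to 300; the hit is rejected
```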
Prioritizing Risks and Uncertainties from Intentional Release of Selected Category A Pathogens
Hong, Tao; Gurian, Patrick L.; Huang, Yin; Haas, Charles N.
2012-01-01
This paper synthesizes available information on five Category A pathogens (Bacillus anthracis, Yersinia pestis, Francisella tularensis, Variola major and Lassa) to develop quantitative guidelines for how environmental pathogen concentrations may be related to human health risk in an indoor environment. An integrated model of environmental transport and human health exposure to biological pathogens is constructed which 1) includes the effects of environmental attenuation, 2) considers fomite contact exposure as well as inhalational exposure, and 3) includes an uncertainty analysis to identify key input uncertainties, which may inform future research directions. The findings provide a framework for developing the many different environmental standards that are needed for making risk-informed response decisions, such as when prophylactic antibiotics should be distributed, and whether or not a contaminated area should be cleaned up. The approach is based on the assumption of uniform mixing in environmental compartments and is thus applicable to areas sufficiently removed in time and space from the initial release that mixing has produced relatively uniform concentrations. Results indicate that when pathogens are released into the air, risk from inhalation is the main component of the overall risk, while risk from ingestion (dermal contact for B. anthracis) is the main component of the overall risk when pathogens are present on surfaces. Concentrations sampled from untracked floor, walls and the filter of heating ventilation and air conditioning (HVAC) system are proposed as indicators of previous exposure risk, while samples taken from touched surfaces are proposed as indicators of future risk if the building is reoccupied. A Monte Carlo uncertainty analysis is conducted and input-output correlations used to identify important parameter uncertainties. An approach is proposed for integrating these quantitative assessments of parameter uncertainty with broader, qualitative considerations to identify future research priorities. PMID:22412915
Ladtap XL Version 2017: A Spreadsheet For Estimating Dose Resulting From Aqueous Releases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minter, K.; Jannik, T.
LADTAP XL© is an EXCEL© spreadsheet used to estimate dose to offsite individuals and populations resulting from routine and accidental releases of radioactive materials to the Savannah River. LADTAP XL© contains two worksheets: LADTAP and IRRIDOSE. The LADTAP worksheet estimates dose for environmental pathways including external exposure resulting from recreational activities on the Savannah River and internal exposure resulting from ingestion of water, fish, and invertebrates originating from the Savannah River. IRRIDOSE estimates offsite dose to individuals and populations from irrigation of foodstuffs with contaminated water from the Savannah River. In 2004, a complete description of the LADTAP XL© code and an associated user's manual was documented in LADTAP XL©: A Spreadsheet for Estimating Dose Resulting from Aqueous Release (WSRC-TR-2004-00059), and revised input parameters, dose coefficients, and radionuclide decay constants were incorporated into LADTAP XL© Version 2013 (SRNL-STI-2011-00238). LADTAP XL© Version 2017 is a slight modification of Version 2013, with minor changes for more user-friendly parameter inputs and organization, updates to the time conversion factors used within the dose calculations, and a fix for an issue with the expected time build-up parameter referenced within the population shoreline dose calculations. This manual has been produced to update the code description, document verification of the models, and provide an updated user's manual. LADTAP XL© Version 2017 has been verified by Minter (2017) and is ready for use at the Savannah River Site (SRS).
Informing Selection of Nanomaterial Concentrations for ...
Little justification is generally provided for selection of in vitro assay testing concentrations for engineered nanomaterials (ENMs). Selection of concentration levels for hazard evaluation based on real-world exposure scenarios is desirable. We reviewed published ENM concentrations measured in air in manufacturing and R&D labs to identify input levels for estimating ENM mass retained in the human lung using the Multiple-Path Particle Dosimetry (MPPD) model. Model input parameters were individually varied to estimate alveolar mass retained for different particle sizes (5-1000 nm), aerosol concentrations (0.1, 1 mg/m3), aspect ratios (2, 4, 10, 167), and exposure durations (24 hours and a working lifetime). The calculated lung surface concentrations were then converted to in vitro solution concentrations. Modeled alveolar mass retained after 24 hours is most affected by activity level and aerosol concentration. Alveolar retention for Ag and TiO2 nanoparticles and CNTs for a working lifetime (45 years) exposure duration is similar to high-end concentrations (~ 30-400 μg/mL) typical of in vitro testing reported in the literature. Analyses performed are generally applicable to provide ENM testing concentrations for in vitro hazard screening studies though further research is needed to improve the approach. Understanding the relationship between potential real-world exposures and in vitro test concentrations will facilitate interpretation of toxicological results.
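A back-of-envelope version of the dose conversion described, from modelled alveolar burden to an equivalent in vitro concentration; every value here is an assumed placeholder, not an MPPD output:

```python
# Alveolar burden -> surface dose -> equivalent in vitro concentration.
retained_ug   = 1.0e7   # working-lifetime alveolar mass retained, ug (assumed;
                        # ~10 g, plausible for decades near 1 mg/m3)
alv_area_cm2  = 102e4   # ~102 m2 human alveolar surface (literature value)
surface_dose  = retained_ug / alv_area_cm2          # ug/cm2

well_area_cm2 = 0.32    # 96-well plate well (assumed test geometry)
media_vol_ml  = 0.1     # 100 uL of medium per well (assumed)
in_vitro_ug_ml = surface_dose * well_area_cm2 / media_vol_ml
print(f"{surface_dose:.2f} ug/cm2 -> {in_vitro_ug_ml:.1f} ug/mL")
# ~31 ug/mL here, which lands in the ~30-400 ug/mL range quoted above.
```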
Huizer, Daan; Huijbregts, Mark A J; van Rooij, Joost G M; Ragas, Ad M J
2014-08-01
The coherence between occupational exposure limits (OELs) and their corresponding biological limit values (BLVs) was evaluated for 2-propanol and acetone. A generic human PBPK model was used to predict internal concentrations after inhalation exposure at the level of the OEL. The fraction of workers with predicted internal concentrations lower than the BLV, i.e. the 'false negatives', was taken as a measure of incoherence. The impact of variability and uncertainty in input parameters was separated by means of nested Monte Carlo simulation. Depending on the exposure scenario considered, the median fraction of the population for which the limit values were incoherent ranged from 2% to 45%. Parameter importance analysis showed that body weight was the main factor contributing to interindividual variability in blood and urine concentrations and that the metabolic parameters Vmax and Km were the most important sources of uncertainty. This study demonstrates that the OELs and BLVs for 2-propanol and acetone are not fully coherent, i.e. enforcement of BLVs may result in OELs being violated. In order to assess the acceptability of this "incoherence", a maximum population fraction at risk of exceeding the OEL should be specified as well as a minimum level of certainty in predicting this fraction. Copyright © 2014 Elsevier Inc. All rights reserved.
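A sketch of the nested (two-dimensional) Monte Carlo design the authors use to separate uncertainty from variability, with the PBPK model replaced by a toy internal-concentration function:

```python
# Nested Monte Carlo: the outer loop samples uncertain metabolic parameters,
# the inner loop samples interindividual variability. Toy model, assumed values.
import numpy as np

rng = np.random.default_rng(7)
n_outer, n_inner = 200, 1000
blv = 25.0                                 # biological limit value (assumed)

frac_below = []
for _ in range(n_outer):                   # uncertainty: Vmax, Km
    vmax = rng.lognormal(np.log(10.0), 0.3)
    km   = rng.lognormal(np.log(5.0), 0.3)
    bw   = rng.normal(75, 12, n_inner)     # variability: body weight
    conc = 2000.0 / (bw * (1 + vmax / km)) # toy internal-concentration model
    frac_below.append(np.mean(conc < blv)) # 'false negatives' at the OEL

print("median fraction below BLV:", np.median(frac_below))
```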
Zhou, Bin; Zhao, Bin
2014-01-01
It is difficult to evaluate and compare interventions for reducing exposure to air pollutants, including polycyclic aromatic hydrocarbons (PAHs), a widely found air pollutant in both indoor and outdoor air. This study presents the first application of the Monte Carlo population exposure assessment model to quantify the effects of different intervention strategies on inhalation exposure to PAHs and the associated lung cancer risk. The method was applied to the population in Beijing, China, in the year 2006. Several intervention strategies were designed and studied, including atmospheric cleaning, smoking prohibition indoors, use of clean fuel for cooking, enhancing ventilation while cooking and use of indoor cleaners. Their performances were quantified by population attributable fraction (PAF) and potential impact fraction (PIF) of lung cancer risk, and the changes in indoor PAH concentrations and annual inhalation doses were also calculated and compared. The results showed that atmospheric cleaning and use of indoor cleaners were the two most effective interventions. The sensitivity analysis showed that several input parameters had major influence on the modeled PAH inhalation exposure and the rankings of different interventions. The ranking was reasonably robust for the remaining majority of parameters. The method itself can be extended to other pollutants and in different places. It enables the quantitative comparison of different intervention strategies and would benefit intervention design and relevant policy making.
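The two summary metrics used above have standard closed forms; a sketch assuming a single exposed fraction p and relative risk RR:

```python
# Population attributable fraction (PAF) and potential impact fraction (PIF)
# in their standard single-exposure-category forms.
def paf(p, rr):
    """PAF for exposure prevalence p and relative risk rr."""
    return p * (rr - 1.0) / (1.0 + p * (rr - 1.0))

def pif(p_before, p_after, rr):
    """PIF when an intervention shifts exposure prevalence."""
    before = 1.0 + p_before * (rr - 1.0)
    after  = 1.0 + p_after * (rr - 1.0)
    return (before - after) / before

# e.g. an intervention halving the exposed fraction (illustrative numbers)
print(paf(0.6, 1.8), pif(0.6, 0.3, 1.8))
```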
Brandsch, Rainer
2017-10-01
Migration modelling provides reliable migration estimates from food-contact materials (FCM) to food or food simulants, based on mass-transfer parameters such as diffusion and partition coefficients for the individual materials. In most cases, mass-transfer parameters are not readily available from the literature and are therefore estimated with a given uncertainty. Historically, uncertainty was accounted for by upper-limit concepts, which turned out to be of limited applicability because they grossly overestimate migration. Probabilistic migration modelling makes it possible to consider the uncertainty of the mass-transfer parameters as well as other model inputs. With respect to a functional barrier, the most important parameters are the diffusion properties of the functional barrier and its thickness. A software tool that accepts distributions as inputs and applies Monte Carlo methods, i.e., random sampling from the input distributions of the relevant parameters (diffusion coefficient and layer thickness), predicts migration results with associated uncertainty and confidence intervals. The capabilities of probabilistic migration modelling are presented through three case studies: (1) sensitivity analysis, (2) functional barrier efficiency and (3) validation by experimental testing. Based on the migration predicted by probabilistic modelling and the related exposure estimates, safety evaluation of new materials in the context of existing or new packaging concepts is possible, and associated migration risks and potential safety concerns can be identified at an early stage of packaging development. Furthermore, dedicated selection of materials exhibiting the required functional barrier efficiency under application conditions becomes feasible. Validation of the migration risk assessment by probabilistic migration modelling through a minimum of dedicated experimental testing is strongly recommended.
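A Monte Carlo sketch of the functional-barrier case, combining an assumed lag time through the barrier with the standard semi-infinite-source short-time migration solution; all values and distributions are illustrative:

```python
# Probabilistic functional-barrier sketch: sample diffusion coefficient and
# barrier thickness, compute migration into food after a 10-day contact.
import numpy as np

rng = np.random.default_rng(3)
n = 50_000
c0  = 500.0                                    # mg/kg migrant behind barrier (assumed)
rho = 1.0                                      # polymer density, g/cm3 (assumed)
D   = rng.lognormal(np.log(1e-12), 0.8, n)     # barrier diffusion coeff., cm2/s
L   = rng.normal(50e-4, 10e-4, n).clip(10e-4)  # barrier thickness, cm (~50 um)
t   = 10 * 24 * 3600                           # contact time, s

t_lag = L**2 / (6.0 * D)                       # lag time to cross the barrier
t_eff = np.clip(t - t_lag, 0.0, None)
m_a = 2.0 * (c0 * 1e-3) * rho * np.sqrt(D * t_eff / np.pi) * 100.0  # mg/dm2
conc = m_a * 6.0                               # EU convention: 6 dm2 per kg food

print("P(breakthrough):", np.mean(conc > 0))   # barriers breached in time t
print("99th pct migration:", np.quantile(conc, 0.99), "mg/kg")
```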
Effects of control inputs on the estimation of stability and control parameters of a light airplane
NASA Technical Reports Server (NTRS)
Cannaday, R. L.; Suit, W. T.
1977-01-01
The maximum likelihood parameter estimation technique was used to determine the values of stability and control derivatives from flight test data for a low-wing, single-engine, light airplane. Several input forms were used during the tests to investigate the consistency of parameter estimates as it relates to inputs. These consistencies were compared by using the ensemble variance and estimated Cramer-Rao lower bound. In addition, the relationship between inputs and parameter correlations was investigated. Results from the stabilator inputs are inconclusive, but the sequence of rudder input followed by aileron input, or aileron followed by rudder, gave more consistent estimates than did rudder or ailerons individually. Also, square-wave inputs appeared to provide slightly improved consistency in the parameter estimates when compared to sine-wave inputs.
Control and optimization system
Xinsheng, Lou
2013-02-12
A system for optimizing a power plant includes a chemical loop having an input for receiving an input parameter (270) and an output for outputting an output parameter (280), a control system operably connected to the chemical loop and having a multiple controller part (230) comprising a model-free controller. The control system receives the output parameter (280), optimizes the input parameter (270) based on the received output parameter (280), and outputs an optimized input parameter (270) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.
System and method for motor parameter estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luhrs, Bin; Yan, Ting
2014-03-18
A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters for the electric motor and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.
Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1997-01-01
A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.
van der Voort, M; Van Meensel, J; Lauwers, L; Van Huylenbroeck, G; Charlier, J
2016-02-01
Efficiency analysis is used for assessing links between technical efficiency (TE) of livestock farms and animal diseases. However, previous studies often do not make the link with the allocation of inputs and mainly present average effects that ignore the often huge differences among farms. In this paper, we studied the relationship between exposure to gastrointestinal (GI) nematode infections, the TE and the input allocation on dairy farms. Although the traditional cost allocative efficiency (CAE) indicator adequately measures how a given input allocation differs from the cost-minimising input allocation, it does not represent the unique input allocation of farms. Similar CAE scores may be obtained for farms with different input allocations. Therefore, we propose an adjusted allocative efficiency index (AAEI) to measure the unique input allocation of farms. Combining this AAEI with the TE score allows determining the unique input-output position of each farm. The method is illustrated by estimating efficiency scores using data envelopment analysis (DEA) on a sample of 152 dairy farms in Flanders for which both accountancy and parasitic monitoring data were available. Three groups of farms with a different input-output position can be distinguished based on cluster analysis: (1) technically inefficient farms, with a relatively low use of concentrates per 100 l milk and a high exposure to infection, (2) farms with an intermediate TE, relatively high use of concentrates per 100 l milk and a low exposure to infection, (3) farms with the highest TE, relatively low roughage use per 100 l milk and a relatively high exposure to infection. Correlation analysis indicates for each group how the level of exposure to GI nematodes is associated or not with improved economic performance. The results suggest that improving both the economic performance and exposure to infection seems only of interest for highly TE farms. The findings indicate that current farm recommendations regarding GI nematode infections could be improved by also accounting for the allocation of inputs on the farm.
Estimates of galactic cosmic ray shielding requirements during solar minimum
NASA Technical Reports Server (NTRS)
Townsend, Lawrence W.; Nealy, John E.; Wilson, John W.; Simonsen, Lisa C.
1990-01-01
Estimates of radiation risk from galactic cosmic rays are presented for manned interplanetary missions. The calculations use the Naval Research Laboratory cosmic ray spectrum model as input into the Langley Research Center galactic cosmic ray transport code. This transport code, which transports both heavy ions and nucleons, can be used with any number of layers of target material, consisting of up to five different arbitrary constituents per layer. Calculated galactic cosmic ray fluxes, dose and dose equivalents behind various thicknesses of aluminum, water and liquid hydrogen shielding are presented for the solar minimum period. Estimates of risk to the skin and the blood-forming organs (BFO) are made using 0-cm and 5-cm depth dose/dose equivalent values, respectively, for water. These results indicate that at least 3.5 g/sq cm (3.5 cm) of water, or 6.5 g/sq cm (2.4 cm) of aluminum, or 1.0 g/sq cm (14 cm) of liquid hydrogen shielding is required to reduce the annual exposure below the currently recommended BFO limit of 0.5 Sv. Because of large uncertainties in fragmentation parameters and the input cosmic ray spectrum, these exposure estimates may be uncertain by as much as a factor of 2 or more. The effects of these potential exposure uncertainties or shield thickness requirements are analyzed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, L.S.; Gmur, N.F.; Da Costa, F.
1977-08-01
Initial injury to adaxial leaf surfaces of Phaseolus vulgaris and Helianthus annuus occurred near trichomes and stomata after exposure to simulated sulfate acid rain. Lesion frequency was not correlated with density of either stomata or trichomes but was correlated with degree of leaf expansion. The number of lesions per unit area increased with total leaf area. Results suggest that characteristics of the leaf indumentum such as development of trichomes and guard cells and/or cuticle thickness near these structures may be involved in lesion development. Adaxial epidermal cell collapse was the first event in lesion development. Palisade cells and eventually spongy mesophyll cells collapsed after continued, daily exposure to simulated rain of low pH. Lesion development on Phaseolus vulgaris followed a specific course of events after exposure to simulated rain of known composition, application rate, drop size frequency, drop velocities, and frequency of exposures. These results allow development of further experiments to observe accurately other parameters, such as nutrient inputs and nutrient leaching from foliage, after exposure to simulated sulfate acid rain.
Space Radiation and Manned Mission: Interface Between Physics and Biology
NASA Astrophysics Data System (ADS)
Hei, Tom
2012-07-01
The natural radiation environment in space consists of a mixed field of high-energy protons, heavy ions, electrons and alpha particles. Travel to the International Space Station and any planned establishment of colonies elsewhere in the solar system imply radiation exposure to the crew, a major concern for space agencies. With shielding, the radiation exposure level in manned space missions is likely to be chronic, low dose irradiation. Traditionally, our knowledge of biological effects of cosmic radiation in deep space is almost exclusively derived from ground-based accelerator experiments with heavy ions in animal or in vitro models. Radiobiological effects of low doses of ionizing radiation are subject to modulation by various parameters including bystander effects, adaptive response, genomic instability and genetic susceptibility of the exposed individuals. Radiation dosimetry and modeling will provide confirmatory input in areas where data are difficult to acquire experimentally. However, modeling is only as good as the quality of its input data. This lecture will discuss the interdependent nature of physics and biology in assessing the radiobiological response to space radiation.
Listen, Listen, Listen and Listen: Building a Comprehension Corpus and Making It Comprehensible
ERIC Educational Resources Information Center
Mordaunt, Owen G.; Olson, Daniel W.
2010-01-01
Listening comprehension input is necessary for language learning and acculturation. One approach to developing listening comprehension skills is through exposure to massive amounts of naturally occurring spoken language input. But exposure to this input is not enough; learners also need to make the comprehension corpus meaningful to their learning…
Kumblad, L; Kautsky, U; Naeslund, B
2006-01-01
In safety assessments of nuclear facilities, a wide range of radioactive isotopes and their potential hazard to a large assortment of organisms and ecosystem types over long time scales need to be considered. Models used for these purposes have typically employed approaches based on generic reference organisms, stylised environments and transfer functions for biological uptake based exclusively on bioconcentration factors (BCFs). These models are non-mechanistic and embody no understanding of uptake and transport processes in the environment, which is a severe limitation when assessing real ecosystems. In this paper, ecosystem models are suggested as a method to include site-specific data and to facilitate the modelling of dynamic systems. An aquatic ecosystem model for the environmental transport of radionuclides is presented and discussed. With this model, driven and constrained by site-specific carbon dynamics and three radionuclide-specific mechanisms ((i) radionuclide uptake by plants, (ii) excretion by animals, and (iii) adsorption to organic surfaces), it was possible to estimate the radionuclide concentrations in all components of the modelled ecosystem with only two radionuclide-specific input parameters (BCF for plants and Kd). The importance of radionuclide-specific mechanisms for the exposure of organisms was examined, and probabilistic and sensitivity analyses were performed to assess the uncertainties related to ecosystem input parameters. Verification of the model suggests that it produces results analogous to empirically derived data for more than 20 different radionuclides.
NASA Astrophysics Data System (ADS)
Stenemo, Fredrik; Lindahl, Anna M. L.; Gärdenäs, Annemieke; Jarvis, Nicholas
2007-08-01
Several simple index methods that use easily accessible data have been developed and included in decision-support systems to estimate pesticide leaching across larger areas. However, these methods often lack important process descriptions (e.g. macropore flow), which brings into question their reliability. Descriptions of macropore flow have been included in simulation models, but these are too complex and demanding for spatial applications. To resolve this dilemma, a neural network simulation meta-model of the dual-permeability macropore flow model MACRO was created for pesticide groundwater exposure assessment. The model was parameterized using pedotransfer functions that require as input the clay and sand content of the topsoil and subsoil, and the topsoil organic carbon content. The meta-model also requires the topsoil pesticide half-life and the soil organic carbon sorption coefficient as input. A fully connected feed-forward multilayer perceptron classification network with two hidden layers, linked to fully connected feed-forward multilayer perceptron neural networks with one hidden layer, trained on sub-sets of the target variable, was shown to be a suitable meta-model for the intended purpose. A Fourier amplitude sensitivity test showed that the model output (the 80th percentile average yearly pesticide concentration at 1 m depth for a 20 year simulation period) was sensitive to all input parameters. The two input parameters related to pesticide characteristics (i.e. soil organic carbon sorption coefficient and topsoil pesticide half-life) were the most influential, but texture in the topsoil was also quite important since it was assumed to control the mass exchange coefficient that regulates the strength of macropore flow. This is in contrast to models based on the advection-dispersion equation where soil texture is relatively unimportant. The use of the meta-model is exemplified with a case-study where the spatial variability of pesticide leaching is mapped for a small field. It was shown that the area of the field that contributes most to leaching depends on the properties of the compound in question. It is concluded that the simulation meta-model of MACRO should prove useful for mapping relative pesticide leaching risks at large scales.
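As a rough illustration of the meta-modelling idea, the sketch below fits a small feed-forward network to hypothetical (input, output) pairs standing in for full MACRO runs. It uses scikit-learn's MLPRegressor rather than the paper's two-stage classification/regression architecture, and both the input ranges and the stand-in response are invented.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical training set: five meta-model inputs per row (soil texture,
# organic carbon, pesticide half-life, sorption coefficient), with the
# target standing in for the simulated leachate concentration from MACRO.
rng = np.random.default_rng(0)
X = rng.uniform([5, 5, 0.5, 1, 10], [60, 80, 5.0, 300, 1000], size=(500, 5))
y = np.log10(1e-3 + X[:, 3] / (X[:, 4] * (1 + X[:, 0])))  # stand-in response

meta_model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0),
)
meta_model.fit(X, y)

# Once trained, a prediction costs microseconds instead of a full
# dual-permeability simulation, which is what enables spatial mapping.
print(meta_model.predict([[25, 40, 2.0, 60, 120]]))
```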
Optimization Under Uncertainty for Electronics Cooling Design
NASA Astrophysics Data System (ADS)
Bodla, Karthik K.; Murthy, Jayathi Y.; Garimella, Suresh V.
Optimization under uncertainty is a powerful methodology used in design and optimization to produce robust, reliable designs. Such an optimization methodology, employed when the input quantities of interest are uncertain, produces output uncertainties, helping the designer choose input parameters that would result in satisfactory thermal solutions. Apart from providing basic statistical information such as mean and standard deviation in the output quantities, auxiliary data from an uncertainty based optimization, such as local and global sensitivities, help the designer decide the input parameter(s) to which the output quantity of interest is most sensitive. This helps the design of experiments based on the most sensitive input parameter(s). A further crucial output of such a methodology is the solution to the inverse problem - finding the allowable uncertainty range in the input parameter(s), given an acceptable uncertainty range in the output quantity of interest...
Sensitivity analysis of the near-road dispersion model RLINE - An evaluation at Detroit, Michigan
NASA Astrophysics Data System (ADS)
Milando, Chad W.; Batterman, Stuart A.
2018-05-01
The development of accurate and appropriate exposure metrics for health effect studies of traffic-related air pollutants (TRAPs) remains challenging and important given that traffic has become the dominant urban exposure source and that exposure estimates can affect estimates of associated health risk. Exposure estimates obtained using dispersion models can overcome many of the limitations of monitoring data, and such estimates have been used in several recent health studies. This study examines the sensitivity of exposure estimates produced by dispersion models to meteorological, emission and traffic allocation inputs, focusing on applications to health studies examining near-road exposures to TRAP. Daily average concentrations of CO and NOx predicted using the Research Line source model (RLINE) and a spatially and temporally resolved mobile source emissions inventory are compared to ambient measurements at near-road monitoring sites in Detroit, MI, and are used to assess the potential for exposure measurement error in cohort and population-based studies. Sensitivity of exposure estimates is assessed by comparing nominal and alternative model inputs using statistical performance evaluation metrics and three sets of receptors. The analysis shows considerable sensitivity to meteorological inputs; generally the best performance was obtained using data specific to each monitoring site. An updated emission factor database provided some improvement, particularly at near-road sites, while the use of site-specific diurnal traffic allocations did not improve performance compared to simpler default profiles. Overall, this study highlights the need for appropriate inputs, especially meteorological inputs, to dispersion models aimed at estimating near-road concentrations of TRAPs. It also highlights the potential for systematic biases that might affect analyses that use concentration predictions as exposure measures in health studies.
NASA Technical Reports Server (NTRS)
Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.
1991-01-01
A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to the optimization and space-searching capabilities of genetic algorithms, they were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
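A minimal sketch of genetic-algorithm input selection under stated assumptions: candidate inputs are encoded as a bitmask, fitness is the cross-validated score of a cheap linear approximator standing in for the neural network, and truncation selection, one-point crossover and bit-flip mutation evolve the population. None of this mirrors the SSME implementation details.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 12))  # 12 hypothetical candidate sensor inputs
y = 2 * X[:, 0] - X[:, 3] + 0.5 * X[:, 7] + rng.normal(scale=0.1, size=200)

def fitness(mask):
    """Cross-validated R^2 of a linear surrogate on the selected inputs."""
    if not mask.any():
        return -np.inf
    return cross_val_score(LinearRegression(), X[:, mask], y, cv=3).mean()

pop = rng.integers(0, 2, size=(30, 12)).astype(bool)  # random bitmasks
for generation in range(40):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]           # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, 12)
        child = np.concatenate([a[:cut], b[cut:]])    # one-point crossover
        child ^= rng.random(12) < 0.05                # bit-flip mutation
        children.append(child)
    pop = np.array(children)

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected inputs:", np.flatnonzero(best))
```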
Residential magnetic fields predicted from wiring configurations: I. Exposure model.
Bowman, J D; Thomas, D C; Jiang, L; Jiang, F; Peters, J M
1999-10-01
A physically based model for residential magnetic fields from electric transmission and distribution wiring was developed to reanalyze the Los Angeles study of childhood leukemia by London et al. For this exposure model, magnetic field measurements were fitted to a function of wire configuration attributes that was derived from a multipole expansion of the Law of Biot and Savart. The model parameters were determined by nonlinear regression techniques, using wiring data, distances, and the geometric mean of the ELF magnetic field magnitude from 24-h bedroom measurements taken at 288 homes during the epidemiologic study. The best fit to the measurement data was obtained with separate models for the two major utilities serving Los Angeles County. This model's predictions produced a correlation of 0.40 with the measured fields, an improvement on the 0.27 correlation obtained with the Wertheimer-Leeper (WL) wire code. For the leukemia risk analysis in a companion paper, the regression model predicts exposures to the 24-h geometric mean of the ELF magnetic fields in Los Angeles homes where only wiring data and distances have been obtained. Since these input parameters for the exposure model usually do not change for many years, the predicted magnetic fields will be stable over long time periods, just like the WL code. If the geometric mean is not the exposure metric associated with cancer, this regression technique could be used to estimate long-term exposures to temporal variability metrics and other characteristics of the ELF magnetic field which may be cancer risk factors.
ERIC Educational Resources Information Center
Prévost, Philippe; Strik, Nelleke; Tuller, Laurie
2014-01-01
This study investigates how derivational complexity interacts with first language (L1) properties, second language (L2) input, age of first exposure to the target language, and length of exposure in child L2 acquisition. We compared elicited production of "wh"-questions in French in two groups of 15 participants each, one with L1 English…
Customized Corneal Cross-Linking-A Mathematical Model.
Caruso, Ciro; Epstein, Robert L; Ostacolo, Carmine; Pacente, Luigi; Troisi, Salvatore; Barbaro, Gaetano
2017-05-01
To improve the safety, reproducibility, and depth of effect of corneal cross-linking, the ultraviolet A (UV-A) exposure time and fluence were customized according to corneal thickness. Twelve human corneas were used for the experimental protocol. They were soaked using a transepithelial (EPI-ON) technique using riboflavin with the permeation enhancer vitamin E-tocopheryl polyethylene glycol succinate. The corneas were then placed on microscope slides and irradiated at 3 mW/cm² for 30 minutes. The UV-A output parameters were measured to build a new equation describing the time-dependent loss of endothelial protection induced by riboflavin during cross-linking, as well as a pachymetry-dependent and exposure time-dependent prescription for input UV-A fluence. The proposed equation was used to establish graphs prescribing the maximum UV-A fluence input versus exposure time that always maintains corneal endothelium exposure below toxicity limits. Analysis modifying the Lambert-Beer law for riboflavin oxidation leads to graphs of the maximum safe level of UV-A radiation fluence versus the time applied and thickness of the treated cornea. These graphs prescribe UV-A fluence levels below 1.8 mW/cm² for corneas of thickness 540 μm down to 1.2 mW/cm² for corneas of thickness 350 μm. Irradiation times are typically below 15 minutes. The experimental and mathematical analyses establish the basis for graphs that prescribe maximum safe fluence and UV-A exposure time for corneas of different thicknesses. Because this clinically tested protocol specifies a corneal surface clear of shielding riboflavin during UV-A irradiation, it allows for shorter UV-A irradiation time and lower fluence than in the Dresden protocol.
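A minimal sketch of the underlying Lambert-Beer attenuation step, with a hypothetical effective absorption coefficient; the paper's actual equation additionally models the time-dependent loss of riboflavin shielding, which is not reproduced here.

```python
import numpy as np

def endothelial_irradiance(surface_mw_cm2, thickness_um, absorption_per_um=0.005):
    """Lambert-Beer attenuation of UV-A through a riboflavin-soaked cornea.
    absorption_per_um is a hypothetical effective absorption coefficient,
    not a value taken from the paper."""
    return surface_mw_cm2 * np.exp(-absorption_per_um * thickness_um)

for thickness in (350, 450, 540):        # corneal thickness in um
    for fluence in (1.2, 1.8, 3.0):      # surface irradiance in mW/cm^2
        at_endo = endothelial_irradiance(fluence, thickness)
        print(f"{thickness} um, {fluence} mW/cm^2 -> {at_endo:.3f} mW/cm^2 at endothelium")
```

The thinner the cornea, the less the beam is attenuated before reaching the endothelium, which is why the prescribed maximum surface fluence drops with pachymetry.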
Practical input optimization for aircraft parameter estimation experiments. Ph.D. Thesis, 1990
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1993-01-01
The object of this research was to develop an algorithm for the design of practical, optimal flight test inputs for aircraft parameter estimation experiments. A general, single pass technique was developed which allows global optimization of the flight test input design for parameter estimation using the principles of dynamic programming, with the input forms limited to square waves only. Provision was made for practical constraints on the input, including amplitude constraints, control system dynamics, and selected input frequency range exclusions. In addition, the input design was accomplished while imposing output amplitude constraints required by model validity and considerations of safety during the flight test. The algorithm has multiple input design capability, with optional inclusion of a constraint that only one control moves at a time, so that a human pilot can implement the inputs. It is shown that the technique can be used to design experiments for estimation of open loop model parameters from closed loop flight test data. The report includes a new formulation of the optimal input design problem, a description of a new approach to the solution, and a summary of the characteristics of the algorithm, followed by three example applications of the new technique which demonstrate the quality and expanded capabilities of the input designs produced by the new technique. In all cases, the new input design approach showed significant improvement over previous input design methods in terms of achievable parameter accuracies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finley, C; Dave, J
Purpose: To evaluate implementation of AAPM TG-150’s draft recommendations via a parameter study for testing the performance of digital image receptors. Methods: Flat field images were acquired from 9 calibrated digital image receptors associated with 9 new portable digital radiography systems (Carestream Health, Inc.) based on the draft recommendations and manufacturer-specified calibration conditions (set of 4 images at input detector air kerma ranging from 1 to 25 µGy). Effects of exposure response function (linearized and logarithmic), ‘Presentation Intent Type’ (‘For Processing’ and ‘For Presentation’), detector orientation with respect to the anode-cathode axis (4 orientations; 90° rotations per iteration), different ROI sizes (5×5–40×40 mm²) and elimination of varying dimensions of image border (0 mm, i.e., without boundary elimination, to 150 mm) on signal, noise, signal-to-noise ratio (SNR) and the associated nonuniformities were evaluated. Images were analyzed in Matlab and quantities were compared using ANOVA. Results: Signal, noise and SNR values averaged over 9 systems with default parameter values in the draft recommendations were 4837.2±139.4, 19.7±0.9 and 246.4±10.1 (mean ± standard deviation), respectively (at input detector air kerma: 12.5 µGy). Signal, noise and SNR showed characteristic dependency on exposure response function and on ‘Presentation Intent Type’. These values were not affected by ROI size and detector orientation, but analysis showed that eliminating the edge pixels along the boundary was required for the noise parameter (coefficient of variation range for noise: 72%–106% and 3%–4% without and with boundary elimination, respectively). Local and global nonuniformities showed a similar dependence on the need for boundary elimination. Interestingly, computed non-uniformities showed agreement with manufacturer-reported values except for noise non-uniformities in two units; artifacts were seen in images from these two units, highlighting the importance of independent evaluations. Conclusion: The effect of different parameters on performance characterization of digital image receptors was evaluated based on TG-150’s draft recommendations.
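A sketch of the core flat-field analysis on a synthetic frame: mean signal, noise and SNR are pooled over non-overlapping ROIs after optionally discarding a pixel border, the step the study found necessary for stable noise estimates. The frame, ROI size and border width here are all arbitrary choices, not TG-150 values.

```python
import numpy as np

def flat_field_stats(image, border_px=0, roi_px=70):
    """Signal, noise and SNR from a flat-field image, optionally discarding
    a border of edge pixels first, then tiling the usable area with
    non-overlapping square ROIs and pooling their statistics."""
    if border_px:
        image = image[border_px:-border_px, border_px:-border_px]
    h, w = image.shape
    signals, noises = [], []
    for r in range(0, h - roi_px + 1, roi_px):
        for c in range(0, w - roi_px + 1, roi_px):
            block = image[r:r + roi_px, c:c + roi_px]
            signals.append(block.mean())
            noises.append(block.std(ddof=1))
    signal, noise = np.mean(signals), np.mean(noises)
    return signal, noise, signal / noise

# Synthetic flat-field frame: uniform signal plus Gaussian noise
rng = np.random.default_rng(2)
frame = rng.normal(loc=4800, scale=20, size=(2000, 2500))
print(flat_field_stats(frame))                 # with edge pixels included
print(flat_field_stats(frame, border_px=150))  # after boundary elimination
```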
MODELING OF HUMAN EXPOSURE TO IN-VEHICLE PM2.5 FROM ENVIRONMENTAL TOBACCO SMOKE
Cao, Ye; Frey, H. Christopher
2012-01-01
Environmental tobacco smoke (ETS) is estimated to be a significant contributor to in-vehicle human exposure to fine particulate matter of 2.5 µm or smaller (PM2.5). A critical assessment was conducted of a mass balance model for estimating PM2.5 concentration with smoking in a motor vehicle. Recommendations for the range of inputs to the mass-balance model are given based on a literature review. Sensitivity analysis was used to determine which inputs should be prioritized for data collection. Air exchange rate (ACH) and deposition rate have wider relative ranges of variation than other inputs, representing inter-individual variability in operations and inter-vehicle variability in performance, respectively. Cigarette smoking and emission rates, and vehicle interior volume, are also key inputs. The in-vehicle ETS mass balance model was incorporated into the Stochastic Human Exposure and Dose Simulation for Particulate Matter (SHEDS-PM) model to quantify the potential magnitude and variability of in-vehicle exposures to ETS. The in-vehicle exposure also takes into account near-road incremental PM2.5 concentration from on-road emissions. Results of the probabilistic study indicate that ETS is a key contributor to in-vehicle average and high-end exposure. Factors that mitigate in-vehicle ambient PM2.5 exposure lead to higher in-vehicle ETS exposure, and vice versa. PMID:23060732
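At steady state, the well-mixed mass balance described above reduces to a one-box formula; a minimal sketch with illustrative (not study) parameter values:

```python
def in_vehicle_pm25(emission_ug_min, volume_m3, ach_per_h, k_dep_per_h,
                    ambient_ug_m3=10.0, penetration=1.0):
    """Steady-state in-cabin PM2.5 from a well-mixed mass balance:
    V dC/dt = E + P*ACH*V*C_amb - (ACH + k_dep)*V*C, solved for dC/dt = 0.
    All parameter values used below are illustrative, not the paper's."""
    emission_ug_h = emission_ug_min * 60.0
    source = emission_ug_h / volume_m3 + penetration * ach_per_h * ambient_ug_m3
    return source / (ach_per_h + k_dep_per_h)

# One cigarette emitting ~1000 ug/min of PM2.5 in a 3 m^3 cabin
for ach in (5, 20, 60):  # air exchange spans closed windows to open windows
    c = in_vehicle_pm25(1000, 3.0, ach, k_dep_per_h=1.0)
    print(f"ACH = {ach}/h -> C = {c:.0f} ug/m^3")
```

The strong dependence on ACH in this toy calculation mirrors the paper's finding that air exchange rate dominates the sensitivity ranking.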
Dallas, Lorna J; Devos, Alexandre; Fievet, Bruno; Turner, Andrew; Lyons, Brett P; Jha, Awadhesh N
2016-05-01
Accurate dosimetry is critically important for ecotoxicological and radioecological studies on the potential effects of environmentally relevant radionuclides, such as tritium (³H). Previous studies have used basic dosimetric equations to estimate dose from ³H exposure in ecologically important organisms, such as marine mussels. This study compares four different methods of estimating dose to adult mussels exposed to 1 or 15 MBq L⁻¹ tritiated water (HTO) under laboratory conditions. These methods were (1) an equation converting seawater activity concentrations to dose rate with fixed parameters; (2) input into the ERICA tool of seawater activity concentrations only; (3) input into the ERICA tool of estimated whole organism activity concentrations (woTACs), comprising dry activity plus estimated tissue free water tritium (TFWT) activity (TFWT volume × seawater activity concentration); and (4) input into the ERICA tool of measured whole organism activity concentrations, comprising dry activity plus measured TFWT activity (TFWT volume × TFWT activity concentration). Methods 3 and 4 are recommended for future ecotoxicological experiments as they produce values for individual animals and are not reliant on transfer predictions (estimation of concentration ratio). Method 1 may be suitable if measured whole organism concentrations are not available, as it produced results between those of methods 3 and 4. As there are technical complications in accurately measuring TFWT, we recommend that future radiotoxicological studies on mussels or other aquatic invertebrates measure whole organism activity in non-dried tissues (i.e. incorporating TFWT and dry activity as one, rather than as separate fractions) and input these data into the ERICA tool. Copyright © 2016 Elsevier Ltd. All rights reserved.
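A minimal sketch contrasting dosimetry methods 3 and 4, with the ERICA dose-conversion step collapsed into a single hypothetical coefficient and all activity values invented for illustration:

```python
# Hypothetical internal dose coefficient, uGy/h per Bq/kg; ERICA derives
# this from organism geometry and radiation type, which is skipped here.
DCC = 5.8e-5

def whole_organism_activity(dry_bq_kg, tfwt_vol_l_kg, water_bq_l):
    """Whole-organism activity = dry-tissue activity + tissue free water
    tritium (TFWT). Method 3 uses the seawater activity concentration for
    the water fraction; method 4 uses the measured TFWT concentration."""
    return dry_bq_kg + tfwt_vol_l_kg * water_bq_l

dry = 2.0e4            # Bq/kg, hypothetical dry-tissue activity
tfwt_vol = 0.8         # L of tissue water per kg mussel, hypothetical
seawater = 1.0e6       # Bq/L exposure concentration (the 1 MBq/L treatment)
tfwt_measured = 7.5e5  # Bq/L measured in tissue water, hypothetical

for label, water in [("method 3 (estimated TFWT)", seawater),
                     ("method 4 (measured TFWT)", tfwt_measured)]:
    activity = whole_organism_activity(dry, tfwt_vol, water)
    print(f"{label}: {activity:.2e} Bq/kg -> {activity * DCC:.2f} uGy/h")
```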
EFFECTS OF CORRELATED PROBABILISTIC EXPOSURE MODEL INPUTS ON SIMULATED RESULTS
In recent years, more probabilistic models have been developed to quantify aggregate human exposures to environmental pollutants. The impact of correlation among inputs in these models is an important issue, which has not been resolved. Obtaining correlated data and implementi...
Ashworth, Danielle C.; Fuller, Gary W.; Toledano, Mireille B.; Font, Anna; Elliott, Paul; Hansell, Anna L.; de Hoogh, Kees
2013-01-01
Background. Research to date on health effects associated with incineration has found limited evidence of health risks, but many previous studies have been constrained by poor exposure assessment. This paper provides a comparative assessment of atmospheric dispersion modelling and distance from source (a commonly used proxy for exposure) as exposure assessment methods for pollutants released from incinerators. Methods. Distance from source and the atmospheric dispersion model ADMS-Urban were used to characterise ambient exposures to particulates from two municipal solid waste incinerators (MSWIs) in the UK. Additionally an exploration of the sensitivity of the dispersion model simulations to input parameters was performed. Results. The model output indicated extremely low ground level concentrations of PM10, with maximum concentrations of <0.01 μg/m3. Proximity and modelled PM10 concentrations for both MSWIs at postcode level were highly correlated when using continuous measures (Spearman correlation coefficients ~ 0.7) but showed poor agreement for categorical measures (deciles or quintiles, Cohen's kappa coefficients ≤ 0.5). Conclusion. To provide the most appropriate estimate of ambient exposure from MSWIs, it is essential that incinerator characteristics, magnitude of emissions, and surrounding meteorological and topographical conditions are considered. Reducing exposure misclassification is particularly important in environmental epidemiology to aid detection of low-level risks. PMID:23935644
NASA Astrophysics Data System (ADS)
Gmuender, T.
2017-02-01
Different chemical photo-reactive emulsions are used in screen printing for stencil production. Depending on the bandwidth, optical power and depth of field of the optical system, the reaction/exposure speed varies. In this paper, the emulsions are first categorized and validated. A mathematical model is then developed and adapted on the basis of heuristic experience to estimate the exposure speed under the influence of digitally modulated ultraviolet (UV) light. The main intention is to use the technical specifications (intended wavelength, exposure time, distance to the stencil, electrical power, stencil configuration) given in the emulsion data sheet, originally written down with an uncertainty factor for end users operating large projector arc lamps and photo films. These five parameters are the inputs to a mathematical formula whose output is the exposure speed of the Computer to Screen (CTS) machine, calculated for each emulsion/stencil setup. The importance of this work lies in the possibility of rating, with just a few boundary parameters, the performance and capacity of an exposure system used in screen printing instead of running a long test series for each emulsion/stencil configuration.
Optimizing noise control strategy in a forging workshop.
Razavi, Hamideh; Ramazanifar, Ehsan; Bagherzadeh, Jalal
2014-01-01
In this paper, a computer program based on a genetic algorithm is developed to find an economic solution for noise control in a forging workshop. Initially, input data, including characteristics of sound sources, human exposure, abatement techniques, and production plans, are inserted into the model. Using sound pressure levels at working locations, the operators who are at higher risk are identified and picked out for the next step. The program is devised in MATLAB such that the parameters can be easily defined and changed for comparison. The final results are structured into 4 sections that specify an appropriate abatement method for each operator and machine, minimum allowance time for high-risk operators, required damping material for enclosures, and minimum total cost of these treatments. The validity of input data in addition to proper settings in the optimization model ensures the final solution is practical and economically reasonable.
Covey, Curt; Lucas, Donald D.; Tannahill, John; ...
2013-07-01
Modern climate models contain numerous input parameters, each with a range of possible values. Since the volume of parameter space increases exponentially with the number of parameters N, it is generally impossible to directly evaluate a model throughout this space even if just 2-3 values are chosen for each parameter. Sensitivity screening algorithms, however, can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. This can aid both model development and the uncertainty quantification (UQ) process. Here we report results from a parameter sensitivity screening algorithm hitherto untested in climate modeling, the Morris one-at-a-time (MOAT) method. This algorithm drastically reduces the computational cost of estimating sensitivities in a high dimensional parameter space because the sample size grows linearly rather than exponentially with N. It nevertheless samples over much of the N-dimensional volume and allows assessment of parameter interactions, unlike traditional elementary one-at-a-time (EOAT) parameter variation. We applied both EOAT and MOAT to the Community Atmosphere Model (CAM), assessing CAM’s behavior as a function of 27 uncertain input parameters related to the boundary layer, clouds, and other subgrid scale processes. For radiation balance at the top of the atmosphere, EOAT and MOAT rank most input parameters similarly, but MOAT identifies a sensitivity that EOAT underplays for two convection parameters that operate nonlinearly in the model. MOAT’s ranking of input parameters is robust to modest algorithmic variations, and it is qualitatively consistent with model development experience. Supporting information is also provided at the end of the full text of the article.
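A sketch of MOAT screening on a toy stand-in for the climate model, assuming the SALib Python library; the parameter names, bounds and toy response are invented for illustration.

```python
import numpy as np
from SALib.sample.morris import sample as morris_sample
from SALib.analyze.morris import analyze as morris_analyze

# Toy stand-in for an expensive model: three uncertain parameters, the
# third interacting nonlinearly with the first (the case MOAT can flag).
problem = {
    "num_vars": 3,
    "names": ["entrainment", "autoconversion", "cape_timescale"],
    "bounds": [[0.1, 1.0], [0.5, 5.0], [1800.0, 14400.0]],
}

def toy_model(x):
    return 2.0 * x[0] + 0.3 * x[1] + 1e-4 * x[2] * x[0] ** 2

X = morris_sample(problem, N=50, num_levels=4)  # cost grows linearly with N
Y = np.apply_along_axis(toy_model, 1, X)
Si = morris_analyze(problem, X, Y, num_levels=4)

# mu_star ranks overall influence; a large sigma flags nonlinearity or
# interaction, which elementary one-at-a-time variation would miss.
for name, mu, sg in zip(problem["names"], Si["mu_star"], Si["sigma"]):
    print(f"{name:16s} mu* = {mu:8.3f}  sigma = {sg:8.3f}")
```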
Reconstruction of neuronal input through modeling single-neuron dynamics and computations
NASA Astrophysics Data System (ADS)
Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok
2016-06-01
Mathematical models provide a mathematical description of neuron activity, which can better understand and quantify neural computations and corresponding biophysical mechanisms evoked by stimulus. In this paper, based on the output spike train evoked by the acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system to achieve the reconstruction of neuronal input. The reconstruction process is divided into two steps: First, considering the neuronal spiking event as a Gamma stochastic process. The scale parameter and the shape parameter of Gamma process are, respectively, defined as two spiking characteristics, which are estimated by a state-space method. Then, leaky integrate-and-fire (LIF) model is used to mimic the response system and the estimated spiking characteristics are transformed into two temporal input parameters of LIF model, through two conversion formulas. We test this reconstruction method by three different groups of simulation data. All three groups of estimates reconstruct input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that under three different frequencies of acupuncture stimulus conditions, estimated input parameters have an obvious difference. The higher the frequency of the acupuncture stimulus is, the higher the accuracy of reconstruction is.
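A minimal sketch of the forward half of this pipeline, assuming a simple Euler-Maruyama LIF simulation: two input parameters drive a noisy LIF neuron, and the resulting interspike intervals are moment-matched to a Gamma process, the statistic the reconstruction works from. All parameter values are invented.

```python
import numpy as np

def lif_spike_train(i_mean, i_sigma, t_max=2.0, dt=1e-4,
                    tau=0.02, v_th=1.0, v_reset=0.0):
    """Euler-Maruyama simulation of a leaky integrate-and-fire neuron;
    i_mean and i_sigma stand in for the two temporal input parameters
    recovered from the Gamma spiking statistics in the paper."""
    rng = np.random.default_rng(3)
    v, spikes = 0.0, []
    for step in range(int(t_max / dt)):
        drive = i_mean + i_sigma * rng.normal() / np.sqrt(dt)
        v += dt * (-v / tau + drive)
        if v >= v_th:
            spikes.append(step * dt)
            v = v_reset
    return np.array(spikes)

spikes = lif_spike_train(i_mean=60.0, i_sigma=2.0)
isi = np.diff(spikes)
# Moment-matched Gamma ISI parameters: shape = mean^2/var, scale = var/mean
shape, scale = isi.mean() ** 2 / isi.var(), isi.var() / isi.mean()
print(f"{len(spikes)} spikes, Gamma shape = {shape:.2f}, scale = {scale:.4f} s")
```

Inverting this mapping, from observed (shape, scale) back to (i_mean, i_sigma), is the reconstruction step the paper performs via its two conversion formulas.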
Integrated controls design optimization
Lou, Xinsheng; Neuschaefer, Carl H.
2015-09-01
A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225) and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant, and others are related to the cost of the plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.
Alexander, Joshua M.
2016-01-01
By varying parameters that control nonlinear frequency compression (NFC), this study examined how different ways of compressing inaudible mid- and/or high-frequency information at lower frequencies influences perception of consonants and vowels. Twenty-eight listeners with mild to moderately severe hearing loss identified consonants and vowels from nonsense syllables in noise following amplification via a hearing aid simulator. Low-pass filtering and the selection of NFC parameters fixed the output bandwidth at a frequency representing a moderately severe (3.3 kHz, group MS) or a mild-to-moderate (5.0 kHz, group MM) high-frequency loss. For each group (n = 14), effects of six combinations of NFC start frequency (SF) and input bandwidth [by varying the compression ratio (CR)] were examined. For both groups, the 1.6 kHz SF significantly reduced vowel and consonant recognition, especially as CR increased; whereas, recognition was generally unaffected if SF increased at the expense of a higher CR. Vowel recognition detriments for group MS were moderately correlated with the size of the second formant frequency shift following NFC. For both groups, significant improvement (33%–50%) with NFC was confined to final /s/ and /z/ and to some VCV tokens, perhaps because of listeners' limited exposure to each setting. No set of parameters simultaneously maximized recognition across all tokens. PMID:26936574
NASA Astrophysics Data System (ADS)
Hagan, Nicole; Robins, Nicholas; Hsu-Kim, Heileen; Halabi, Susan; Morris, Mark; Woodall, George; Zhang, Tong; Bacon, Allan; Richter, Daniel De B.; Vandenberg, John
2011-12-01
Detailed Spanish records of mercury use and silver production during the colonial period in Potosí, Bolivia were evaluated to estimate atmospheric emissions of mercury from silver smelting. Mercury was used in the silver production process in Potosí and nearly 32,000 metric tons of mercury were released to the environment. AERMOD was used in combination with the estimated emissions to approximate historical air concentrations of mercury from colonial mining operations during 1715, a year of relatively low silver production. Source characteristics were selected from archival documents, colonial maps and images of silver smelters in Potosí and a base case of input parameters was selected. Input parameters were varied to understand the sensitivity of the model to each parameter. Modeled maximum 1-h concentrations were most sensitive to stack height and diameter, whereas an index of community exposure was relatively insensitive to uncertainty in input parameters. Modeled 1-h and long-term concentrations were compared to inhalation reference values for elemental mercury vapor. Estimated 1-h maximum concentrations within 500 m of the silver smelters consistently exceeded present-day occupational inhalation reference values. Additionally, the entire community was estimated to have been exposed to levels of mercury vapor that exceed present-day acute inhalation reference values for the general public. Estimated long-term maximum concentrations of mercury were predicted to substantially exceed the EPA Reference Concentration for areas within 600 m of the silver smelters. A concentration gradient predicted by AERMOD was used to select soil sampling locations along transects in Potosí. Total mercury in soils ranged from 0.105 to 155 mg kg⁻¹, among the highest levels reported for surface soils in the scientific literature. The correlation between estimated air concentrations and measured soil concentrations will guide future research to determine the extent to which the current community of Potosí and vicinity is at risk of adverse health effects from historical mercury contamination.
Parameterization models for pesticide exposure via crop consumption.
Fantke, Peter; Wieland, Peter; Juraske, Ronnie; Shaddick, Gavin; Itoiz, Eva Sevigné; Friedrich, Rainer; Jolliet, Olivier
2012-12-04
An approach for estimating human exposure to pesticides via consumption of six important food crops is presented that can be used to extend multimedia models applied in health risk and life cycle impact assessment. We first assessed the variation of model output (pesticide residues per kg applied) as a function of model input variables (substance, crop, and environmental properties) including their possible correlations using matrix algebra. We identified five key parameters responsible for between 80% and 93% of the variation in pesticide residues, namely time between substance application and crop harvest, degradation half-lives in crops and on crop surfaces, overall residence times in soil, and substance molecular weight. Partition coefficients also play an important role for fruit trees and tomato (Kow), potato (Koc), and lettuce (Kaw, Kow). Focusing on these parameters, we develop crop-specific models by parametrizing a complex fate and exposure assessment framework. The parametric models thereby reflect the framework's physical and chemical mechanisms and predict pesticide residues in harvest using linear combinations of crop, crop surface, and soil compartments. Parametric model results correspond well with results from the complex framework for 1540 substance-crop combinations with total deviations between a factor 4 (potato) and a factor 66 (lettuce). Predicted residues also correspond well with experimental data previously used to evaluate the complex framework. Pesticide mass in harvest can finally be combined with reduction factors accounting for food processing to estimate human exposure from crop consumption. All parametric models can be easily implemented into existing assessment frameworks.
Kinetic analysis of single molecule FRET transitions without trajectories
NASA Astrophysics Data System (ADS)
Schrangl, Lukas; Göhring, Janett; Schütz, Gerhard J.
2018-03-01
Single molecule Förster resonance energy transfer (smFRET) is a popular tool to study biological systems that undergo topological transitions on the nanometer scale. smFRET experiments typically require recording of long smFRET trajectories and subsequent statistical analysis to extract parameters such as the states' lifetimes. Alternatively, analysis of probability distributions exploits the shapes of smFRET distributions at well chosen exposure times and hence works without the acquisition of time traces. Here, we describe a variant that utilizes statistical tests to compare experimental datasets with Monte Carlo simulations. For a given model, parameters are varied to cover the full realistic parameter space. As output, the method yields p-values which quantify the likelihood for each parameter setting to be consistent with the experimental data. The method provides suitable results even if the actual lifetimes differ by an order of magnitude. We also demonstrated the robustness of the method to inaccurately determined input parameters. As proof of concept, the new method was applied to the determination of transition rate constants for Holliday junctions.
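A crude sketch of the compare-simulations-to-data idea: a toy occupancy model stands in for the photon-level Monte Carlo, and a two-sample Kolmogorov-Smirnov test stands in for the statistical comparison (the paper's exact test is not specified here). Rate constants, FRET efficiencies and noise levels are all invented.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(4)

def simulate_fret(k12, k21, n=2000, exposure=0.05, e1=0.3, e2=0.8):
    """Toy Monte Carlo of apparent FRET efficiencies for a two-state
    system: each frame's E is set by the fraction of the exposure spent
    in state 2, crudely modeled with a Beta distribution, plus noise."""
    frac2 = rng.beta(k12 * exposure + 0.5, k21 * exposure + 0.5, size=n)
    return np.clip(e1 + (e2 - e1) * frac2 + rng.normal(0, 0.05, n), 0, 1)

experimental = simulate_fret(k12=30.0, k21=10.0)  # stand-in for real data

# Scan the rate-constant grid; high p-values mark parameter settings
# consistent with the experimental distribution, as in the paper.
for k12 in (10.0, 30.0, 90.0):
    for k21 in (3.0, 10.0, 30.0):
        p = ks_2samp(experimental, simulate_fret(k12, k21)).pvalue
        print(f"k12 = {k12:5.1f}  k21 = {k21:5.1f}  p = {p:.3f}")
```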
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schalk, W.W. III
Early actions of emergency responders during hazardous material releases are intended to assess contamination and potential public exposure. As measurements are collected, an integration of model calculations and measurements can assist in better understanding the situation. This study applied a high resolution version of the operational 3-D numerical models used by Lawrence Livermore National Laboratory to a limited meteorological and tracer data set to assist in the interpretation of the dispersion pattern on a 140 km scale. The data set was collected from a tracer release during the morning surface inversion and transition period in the complex terrain of the Snake River Plain near Idaho Falls, Idaho in November 1993 by the United States Air Force. Sensitivity studies were conducted to determine model input parameters that best represented the study environment. These studies showed that mixing and boundary layer heights, atmospheric stability, and rawinsonde data are the most important model input parameters affecting wind field generation and tracer dispersion. Numerical models and limited measurement data were used to interpret dispersion patterns through the use of data analysis, model input determination, and sensitivity studies. Comparison of the best-estimate calculation to measurement data showed that model results compared well with the aircraft data, but had moderate success with the few surface measurements taken. The moderate success of the surface measurement comparison may be due to limited downward mixing of the tracer as a result of the model resolution determined by the domain size selected to study the overall plume dispersion.
Using gridded multimedia model to simulate spatial fate of Benzo[a]pyrene on regional scale.
Liu, Shijie; Lu, Yonglong; Wang, Tieyu; Xie, Shuangwei; Jones, Kevin C; Sweetman, Andrew J
2014-02-01
Predicting the environmental multimedia fate is an essential step in the process of assessing the human exposure and health impacts of chemicals released into the environment. Multimedia fate models have been widely applied to calculate the fate and distribution of chemicals in the environment, which can serve as input to a human exposure model. In this study, a grid based multimedia fugacity model at regional scale was developed together with a case study modeling the fate and transfer of Benzo[a]pyrene (BaP) in the Bohai coastal region, China. Based on estimated emissions and an in-situ survey in 2008, the BaP concentrations in air, vegetation, soil, fresh water, fresh water sediment and coastal water as well as the transfer fluxes were derived under the steady-state assumption. The model results were validated through comparison between the measured and modeled concentrations of BaP. The model results indicated that the predicted concentrations of BaP in air, fresh water, soil and sediment generally agreed with field observations. Model predictions suggest that soil was the dominant sink of BaP in terrestrial systems. Flows from air to soil, vegetation and coastal water were the three major inter-media transport pathways of BaP. Most of the BaP entering the sea was transferred by air flow, which was also the crucial driving force in the spatial distribution processes of BaP. The Yellow River, Liaohe River and Daliao River played an important role in the spatial transformation processes of BaP. Compared with advection outflow, degradation was more important in removal processes of BaP. Sensitivities of the model estimates to input parameters were tested. The results showed that emission rates, compartment dimensions, transport velocity and degradation rates of BaP were the most influential parameters for the model output. Monte Carlo simulation was carried out to determine parameter uncertainty, from which the coefficients of variation for the estimated BaP concentrations in air and soil were computed as 0.46 and 1.53, respectively. The model outputs, BaP concentrations in the multimedia environment, can be used for human exposure and risk assessment in the Bohai coastal region. The results also provide significant indicators on the likely dominant fate, influence range of emission and transport processes determining the behavior of BaP in the Bohai coastal region, which is instrumental in human exposure and risk assessment in the region. © 2013.
Fuzzy logic controller optimization
Sepe, Jr., Raymond B; Miller, John Michael
2004-03-23
A method is provided for optimizing a rotating induction machine system fuzzy logic controller. The fuzzy logic controller has at least one input and at least one output. Each input accepts a machine system operating parameter. Each output produces at least one machine system control parameter. The fuzzy logic controller generates each output based on at least one input and on fuzzy logic decision parameters. Optimization begins by obtaining a set of data relating each control parameter to at least one operating parameter for each machine operating region. A model is constructed for each machine operating region based on the machine operating region data obtained. The fuzzy logic controller is simulated with at least one created model in a feedback loop from a fuzzy logic output to a fuzzy logic input. Fuzzy logic decision parameters are optimized based on the simulation.
Bizios, Dimitrios; Heijl, Anders; Hougaard, Jesper Leth; Bengtsson, Boel
2010-02-01
To compare the performance of two machine learning classifiers (MLCs), artificial neural networks (ANNs) and support vector machines (SVMs), with input based on retinal nerve fibre layer thickness (RNFLT) measurements by optical coherence tomography (OCT), on the diagnosis of glaucoma, and to assess the effects of different input parameters. We analysed Stratus OCT data from 90 healthy persons and 62 glaucoma patients. Performance of MLCs was compared using conventional OCT RNFLT parameters plus novel parameters such as minimum RNFLT values, 10th and 90th percentiles of measured RNFLT, and transformations of A-scan measurements. For each input parameter and MLC, the area under the receiver operating characteristic curve (AROC) was calculated. There were no statistically significant differences between ANNs and SVMs. The best AROCs for both ANN (0.982, 95% CI: 0.966-0.999) and SVM (0.989, 95% CI: 0.979-1.0) were based on input of transformed A-scan measurements. Our SVM trained on this input performed better than ANNs or SVMs trained on any of the single RNFLT parameters (p ≤ 0.038). The performance of ANNs and SVMs trained on minimum thickness values and the 10th and 90th percentiles were at least as good as ANNs and SVMs with input based on the conventional RNFLT parameters. No differences between ANN and SVM were observed in this study. Both MLCs performed very well, with similar diagnostic performance. Input parameters have a larger impact on diagnostic performance than the type of machine classifier. Our results suggest that parameters based on transformed A-scan thickness measurements of the RNFL processed by machine classifiers can improve OCT-based glaucoma diagnosis.
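A minimal sketch of the ANN-versus-SVM comparison on synthetic stand-ins for A-scan RNFLT profiles, assuming scikit-learn; the group sizes follow the study (90 healthy, 62 glaucoma) but the thickness distributions are invented.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical input: 256 A-scan thickness values per eye, thinner on
# average for the glaucoma class (labels: 1 = glaucoma, 0 = healthy)
rng = np.random.default_rng(5)
healthy = rng.normal(100, 15, size=(90, 256))
glaucoma = rng.normal(80, 20, size=(62, 256))
X = np.vstack([healthy, glaucoma])
y = np.array([0] * 90 + [1] * 62)

for name, clf in [("SVM", SVC(probability=True)),
                  ("ANN", MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000))]:
    model = make_pipeline(StandardScaler(), clf)
    scores = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
    print(f"{name}: AROC = {roc_auc_score(y, scores):.3f}")
```

Swapping in the study's alternative input parameters (minimum RNFLT, percentiles, conventional sector averages) amounts to changing how X is constructed, which is exactly the comparison the paper reports as mattering more than the classifier type.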
NASA Astrophysics Data System (ADS)
Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke
2017-04-01
Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input has been represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied in an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty does have a considerable effect on the model predictions and parameter distributions. Additionally, our approach also provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. It can be concluded from our approach that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.
Mokhtari, Amirhossein; Christopher Frey, H; Zheng, Junyu
2006-11-01
Sensitivity analyses of exposure or risk models can help identify the most significant factors to aid in risk management or to prioritize additional research to reduce uncertainty in the estimates. However, sensitivity analysis is challenged by non-linearity, interactions between inputs, and multiple days or time scales. Selected sensitivity analysis methods are evaluated with respect to their applicability to human exposure models with such features using a testbed. The testbed is a simplified version of a US Environmental Protection Agency's Stochastic Human Exposure and Dose Simulation (SHEDS) model. The methods evaluated include the Pearson and Spearman correlation, sample and rank regression, analysis of variance, Fourier amplitude sensitivity test (FAST), and Sobol's method. The first five methods are known as "sampling-based" techniques, whereas the latter two methods are known as "variance-based" techniques. The main objective of the test cases was to identify the main and total contributions of individual inputs to the output variance. Sobol's method and FAST directly quantified these measures of sensitivity. Results show that sensitivity of an input typically changed when evaluated under different time scales (e.g., daily versus monthly). All methods provided similar insights regarding less important inputs; however, Sobol's method and FAST provided more robust insights with respect to sensitivity of important inputs compared to the sampling-based techniques. Thus, the sampling-based methods can be used in a screening step to identify unimportant inputs, followed by application of more computationally intensive refined methods to a smaller set of inputs. The implications of time variation in sensitivity results for risk management are briefly discussed.
Yang, Jian-Feng; Zhao, Zhen-Hua; Zhang, Yu; Zhao, Li; Yang, Li-Ming; Zhang, Min-Ming; Wang, Bo-Yin; Wang, Ting; Lu, Bao-Chun
2016-04-07
To investigate the feasibility of a dual-input two-compartment tracer kinetic model for evaluating tumorous microvascular properties in advanced hepatocellular carcinoma (HCC). From January 2014 to April 2015, we prospectively measured and analyzed pharmacokinetic parameters [transfer constant (Ktrans), plasma flow (Fp), permeability surface area product (PS), efflux rate constant (kep), extravascular extracellular space volume ratio (ve), blood plasma volume ratio (vp), and hepatic perfusion index (HPI)] using dual-input two-compartment tracer kinetic models [a dual-input extended Tofts model and a dual-input 2-compartment exchange model (2CXM)] in 28 consecutive HCC patients. A well-known consensus that HCC is a hypervascular tumor supplied by the hepatic artery and the portal vein was used as a reference standard. A paired Student's t-test and a nonparametric paired Wilcoxon rank sum test were used to compare the equivalent pharmacokinetic parameters derived from the two models, and Pearson correlation analysis was also applied to observe the correlations among all equivalent parameters. The tumor size and pharmacokinetic parameters were tested by Pearson correlation analysis, while correlations among stage, tumor size and all pharmacokinetic parameters were assessed by Spearman correlation analysis. The Fp value was greater than the PS value (Fp = 1.07 mL/mL per minute, PS = 0.19 mL/mL per minute) in the dual-input 2CXM; HPI was 0.66 and 0.63 in the dual-input extended Tofts model and the dual-input 2CXM, respectively. There were no significant differences in the kep, vp, or HPI between the dual-input extended Tofts model and the dual-input 2CXM (P = 0.524, 0.569, and 0.622, respectively). All equivalent pharmacokinetic parameters, except for ve, were correlated in the two dual-input two-compartment pharmacokinetic models; both Fp and PS in the dual-input 2CXM were correlated with Ktrans derived from the dual-input extended Tofts model (P = 0.002, r = 0.566; P = 0.002, r = 0.570); kep, vp, and HPI between the two kinetic models were positively correlated (P = 0.001, r = 0.594; P = 0.0001, r = 0.686; P = 0.04, r = 0.391, respectively). In the dual-input extended Tofts model, ve was significantly less than that in the dual-input 2CXM (P = 0.004), and no significant correlation was seen between the two tracer kinetic models (P = 0.156, r = 0.276). Neither tumor size nor tumor stage was significantly correlated with any of the pharmacokinetic parameters obtained from the two models (P > 0.05). A dual-input two-compartment pharmacokinetic model (a dual-input extended Tofts model and a dual-input 2CXM) can be used in assessing the microvascular physiopathological properties before the treatment of advanced HCC. The dual-input extended Tofts model may be more stable in measuring the ve; however, the dual-input 2CXM may be more detailed and accurate in measuring microvascular permeability.
Beekhuizen, Johan; Heuvelink, Gerard B M; Huss, Anke; Bürgi, Alfred; Kromhout, Hans; Vermeulen, Roel
2014-11-01
With the increased availability of spatial data and computing power, spatial prediction approaches have become a standard tool for exposure assessment in environmental epidemiology. However, such models are largely dependent on accurate input data. Uncertainties in the input data can therefore have a large effect on model predictions, but are rarely quantified. With Monte Carlo simulation we assessed the effect of input uncertainty on the prediction of radio-frequency electromagnetic fields (RF-EMF) from mobile phone base stations at 252 receptor sites in Amsterdam, The Netherlands. The impact on ranking and classification was determined by computing the Spearman correlations and weighted Cohen's Kappas (based on tertiles of the RF-EMF exposure distribution) between modelled values and RF-EMF measurements performed at the receptor sites. The uncertainty in modelled RF-EMF levels was large with a median coefficient of variation of 1.5. Uncertainty in receptor site height, building damping and building height contributed most to model output uncertainty. For exposure ranking and classification, the heights of buildings and receptor sites were the most important sources of uncertainty, followed by building damping, antenna- and site location. Uncertainty in antenna power, tilt, height and direction had a smaller impact on model performance. We quantified the effect of input data uncertainty on the prediction accuracy of an RF-EMF environmental exposure model, thereby identifying the most important sources of uncertainty and estimating the total uncertainty stemming from potential errors in the input data. This approach can be used to optimize the model and better interpret model output. Copyright © 2014 Elsevier Inc. All rights reserved.
Parametric analysis of parameters for electrical-load forecasting using artificial neural networks
NASA Astrophysics Data System (ADS)
Gerber, William J.; Gonzalez, Avelino J.; Georgiopoulos, Michael
1997-04-01
Accurate total system electrical load forecasting is a necessary part of resource management for power generation companies. The better the hourly load forecast, the more closely the power generation assets of the company can be configured to minimize the cost. Automating this process is a profitable goal and neural networks should provide an excellent means of doing the automation. However, prior to developing such a system, the optimal set of input parameters must be determined. The approach of this research was to determine what those inputs should be through a parametric study of potentially good inputs. Input parameters tested were ambient temperature, total electrical load, the day of the week, humidity, dew point temperature, daylight savings time, length of daylight, season, forecast light index and forecast wind velocity. For testing, a limited number of temperatures and total electrical loads were used as a basic reference input parameter set. Most parameters showed some forecasting improvement when added individually to the basic parameter set. Significantly, major improvements were exhibited with the day of the week, dew point temperatures, additional temperatures and loads, forecast light index and forecast wind velocity.
NASA Astrophysics Data System (ADS)
Koniczek, Martin; El-Mohri, Youcef; Antonuk, Larry E.; Liang, Albert; Zhao, Qihua; Jiang, Hao
2011-03-01
A decade after the clinical introduction of active matrix, flat-panel imagers (AMFPIs), the performance of this technology continues to be limited by the relatively large additive electronic noise of these systems - resulting in significant loss of detective quantum efficiency (DQE) under conditions of low exposure or high spatial frequencies. An increasingly promising approach for overcoming such limitations involves the incorporation of in-pixel amplification circuits, referred to as active pixel architectures (AP) - based on low-temperature polycrystalline silicon (poly-Si) thin-film transistors (TFTs). In this study, a methodology for theoretically examining the limiting noise and DQE performance of circuits employing 1-stage in-pixel amplification is presented. This methodology involves sophisticated SPICE circuit simulations along with cascaded systems modeling. In these simulations, a device model based on the RPI poly-Si TFT model is used with additional controlled current sources corresponding to thermal and flicker (1/f) noise. From measurements of transfer and output characteristics (as well as current noise densities) performed upon individual, representative poly-Si TFT test devices, model parameters suitable for these simulations are extracted. The input stimuli and operating-point-dependent scaling of the current sources are derived from the measured current noise densities (for flicker noise), or from fundamental equations (for thermal noise). Noise parameters obtained from the simulations, along with other parametric information, are input to a cascaded systems model of an AP imager design to provide estimates of DQE performance. In this paper, this method of combining circuit simulations and cascaded systems analysis to predict the lower limits on additive noise (and upper limits on DQE) for large area AP imagers with signal levels representative of those generated at fluoroscopic exposures is described, and initial results are reported.
NASA Technical Reports Server (NTRS)
Hughes, D. L.; Ray, R. J.; Walton, J. T.
1985-01-01
The calculated value of net thrust of an aircraft powered by a General Electric F404-GE-400 afterburning turbofan engine was evaluated for its sensitivity to various input parameters. The effects of a 1.0-percent change in each input parameter on the calculated value of net thrust with two calculation methods are compared. This paper presents the results of these comparisons and also gives the estimated accuracy of the overall net thrust calculation as determined from the influence coefficients and estimated parameter measurement accuracies.
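A hedged sketch of this influence-coefficient approach (the thrust expression and accuracy figures below are placeholders, not F404 values): perturb each input by 1.0 percent, record the percent change in net thrust, and root-sum-square the products of the influence coefficients and the measurement accuracies.

import math

def net_thrust(p):  # stand-in for a net thrust calculation method
    return p["mdot"] * p["v_exit"] - p["drag"]

nominal = {"mdot": 65.0, "v_exit": 550.0, "drag": 4000.0}     # assumed
accuracy_pct = {"mdot": 0.5, "v_exit": 0.3, "drag": 1.0}      # assumed 1-sigma %

f0 = net_thrust(nominal)
total_var = 0.0
for name, value in nominal.items():
    bumped = dict(nominal, **{name: value * 1.01})    # +1.0-percent change
    influence = (net_thrust(bumped) - f0) / f0 * 100  # % thrust per 1% input
    total_var += (influence * accuracy_pct[name]) ** 2
print(f"Estimated net-thrust uncertainty: {math.sqrt(total_var):.2f} %")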
Thors, B; Hansson, B; Törnevik, C
2009-07-07
In this paper, a procedure is proposed for generating simple and practical compliance boundaries for mobile communication base station antennas. The procedure is based on a set of formulae for estimating the specific absorption rate (SAR) in certain directions around a class of common base station antennas. The formulae, given for both whole-body and localized SAR, require as input the frequency, the transmitted power and knowledge of antenna-related parameters such as dimensions, directivity and half-power beamwidths. With knowledge of the SAR in three key directions it is demonstrated how simple and practical compliance boundaries can be generated outside of which the exposure levels do not exceed certain limit values. The conservativeness of the proposed procedure is discussed based on results from numerical radio frequency (RF) exposure simulations with human body phantoms from the recently developed Virtual Family.
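For illustration only (these are not the paper's SAR formulae; the power, gain and reference level are assumptions): a simple far-field estimate of the front-direction distance at which the power density falls to a reference level.

import math

P_tx = 20.0             # transmitted power, W (assumed)
gain = 10**(17.0/10)    # antenna directivity, 17 dBi (assumed)
S_limit = 10.0          # reference power-density level, W/m^2 (assumed)

# S(d) = P*G / (4*pi*d^2)  ->  d = sqrt(P*G / (4*pi*S_limit))
d_front = math.sqrt(P_tx * gain / (4 * math.pi * S_limit))
print(f"Front compliance distance: {d_front:.2f} m")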
Uncertainty Modeling of Pollutant Transport in Atmosphere and Aquatic Route Using Soft Computing
NASA Astrophysics Data System (ADS)
Datta, D.
2010-10-01
Hazardous radionuclides are released as pollutants into the atmospheric and aquatic environment (ATAQE) during the normal operation of nuclear power plants. Atmospheric and aquatic dispersion models are routinely used to assess the impact on the ATAQE of radionuclide releases from nuclear facilities or of hazardous chemical releases from chemical plants. The effect of exposure to the hazardous nuclides or chemicals is measured in terms of risk. Uncertainty modeling is an integral part of the risk assessment. This paper focuses on uncertainty modeling of pollutant transport in the atmospheric and aquatic environment using soft computing. Soft computing is adopted because of the lack of information on the parameters of the corresponding models; it uses fuzzy set theory to explore the uncertainty of the model parameters, and this type of uncertainty is known as epistemic uncertainty. Each uncertain input parameter of the model is described by a triangular membership function.
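A minimal sketch of triangular-membership uncertainty propagation via alpha-cuts, with an invented one-parameter dilution model and made-up parameter ranges:

import numpy as np

def tri_alpha_cut(lo, mode, hi, alpha):
    # interval of a triangular membership function at membership level alpha
    return lo + alpha*(mode - lo), hi - alpha*(hi - mode)

def concentration(Q, u):   # toy model: release rate / (wind speed * mixing)
    return Q / (u * 100.0)

for alpha in (0.0, 0.5, 1.0):
    Q_lo, Q_hi = tri_alpha_cut(0.8, 1.0, 1.3, alpha)   # release rate, g/s
    u_lo, u_hi = tri_alpha_cut(1.5, 3.0, 6.0, alpha)   # wind speed, m/s
    # the toy model is monotonic, so output extremes occur at the endpoints
    c_lo = concentration(Q_lo, u_hi)
    c_hi = concentration(Q_hi, u_lo)
    print(f"alpha={alpha:.1f}: concentration in [{c_lo:.4f}, {c_hi:.4f}] g/m^3")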
Consumer behaviour survey for assessing exposure from consumer products: a feasibility study.
Schneider, Klaus; Recke, Selina; Kaiser, Eva; Götte, Sebastian; Berkefeld, Henrike; Lässig, Juliane; Rüdiger, Thomas; Lindtner, Oliver; Oltmanns, Jan
2018-05-23
Evaluating chemical exposures from consumer products is an essential part of chemical safety assessments under REACH and may also be important to demonstrate compliance with consumer product legislation. Modelling of consumer exposure needs input information on the substance (e.g. vapour pressure), the product(s) containing the substance (e.g. concentration) and on consumer behaviour (e.g. use frequency and amount of product used). This feasibility study in Germany investigated methods for conducting a consumer survey in order to identify and retrieve information on frequency, duration, use amounts and use conditions for six example product types (four mixtures, two articles): hand dishwashing liquid, cockpit spray, fillers, paints and lacquers, shoes made of rubber or plastic, and ball-pens/pencils. Retrospective questionnaire methods (Consumer Product Questionnaire (CPQ), and Recall-Foresight Questionnaire (RFQ)) as well as protocol methods (written reporting by participants and video documentation) were used. A combination of retrospective questionnaire and written protocol methods was identified to provide valid information in a resource-efficient way. Relevant information, which can readily be used in exposure modelling, was obtained for all parameters and product types investigated. Based on the observations in this feasibility study, recommendations are given for designing a large consumer survey.
Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models
NASA Astrophysics Data System (ADS)
Rothenberger, Michael J.
This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input-output measurements, and is the approach used in this dissertation. Research in the literature studies optimal current input shaping for high-order electrochemical battery models and focuses on offline laboratory cycling. While this body of research highlights improvements in identifiability through optimal input shaping, each optimal input is a function of nominal parameters, which creates a tautology. The parameter values must be known a priori to determine the optimal input for maximizing estimation speed and accuracy. The system identification literature presents multiple studies containing methods that avoid the challenges of this tautology, but these methods are absent from the battery parameter estimation domain. The gaps in the above literature are addressed in this dissertation through the following five novel and unique contributions. First, this dissertation optimizes the parameter identifiability of a thermal battery model, which Sergio Mendoza experimentally validates through a close collaboration with this dissertation's author. Second, this dissertation extends input-shaping optimization to a linear and nonlinear equivalent-circuit battery model and illustrates the substantial improvements in Fisher identifiability for a periodic optimal signal when compared against automotive benchmark cycles. Third, this dissertation presents an experimental validation study of the simulation work in the previous contribution. 
The estimation study shows that the automotive benchmark cycles either converge more slowly than the optimized cycle or, for certain parameters, not at all. Fourth, this dissertation examines how automotive battery packs with additional power electronic components that dynamically route current to individual cells/modules can be used for parameter identifiability optimization. While the user and vehicle supervisory controller dictate the current demand for these packs, the optimized internal allocation of current still improves identifiability. Finally, this dissertation presents a robust Bayesian sequential input shaping optimization study to maximize the conditional Fisher information of the battery model parameters without prior knowledge of the nominal parameter set. This iterative algorithm only requires knowledge of the prior parameter distributions to converge to the optimal input trajectory.
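The Fisher-information machinery at the core of this line of work can be sketched compactly (assuming a first-order RC equivalent-circuit model with made-up nominal values; not the dissertation's code): build output sensitivities for a candidate current input and inspect the Fisher information matrix (FIM).

import numpy as np

def voltage(theta, i_in, dt=1.0):
    R0, R1, C1 = theta
    v_rc, out = 0.0, []
    for i in i_in:
        v_rc += dt * (-v_rc/(R1*C1) + i/C1)   # RC branch state update (Euler)
        out.append(3.7 - i*R0 - v_rc)          # OCV assumed constant at 3.7 V
    return np.array(out)

theta0 = np.array([0.05, 0.03, 2000.0])        # nominal R0, R1, C1 (assumed)
current = 2.0 * np.sin(np.linspace(0, 20, 500))  # candidate input trajectory
sigma = 1e-3                                     # voltage noise, V (assumed)

# finite-difference output sensitivities, one column per parameter
S = np.column_stack([
    (voltage(theta0 + np.eye(3)[k]*theta0*1e-4, current)
     - voltage(theta0, current)) / (theta0[k]*1e-4)
    for k in range(3)])
fim = S.T @ S / sigma**2
print("FIM eigenvalues:", np.linalg.eigvalsh(fim))

Small or widely spread eigenvalues flag poorly identifiable parameter directions; input shaping seeks trajectories that raise the smallest ones.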
History of Inuit Community Exposure to Lead, Cadmium, and Mercury in Sewage Lake Sediments
Hermanson, Mark H.; Brozowski, James R.
2005-01-01
Exposure to lead, cadmium, and mercury is known to be high in many arctic Inuit communities. These metals are emitted from industrial and urban sources, are distributed by long-range atmospheric transport to remote regions, and are found in Inuit country foods. Current community exposure to these metals can be measured in food, but feces and urine are also excellent indicators of total exposure from ingestion and inhalation because a high percentage of each metal is excreted. Bulk domestic sewage or its residue in a waste treatment system is a good substitute measure. Domestic waste treatment systems that accumulate metals in sediment provide an accurate historical record of changes in ingestion or inhalation. We collected sediment cores from an arctic lake used for facultative domestic sewage treatment to identify the history of community exposure to Pb, Cd, and Hg. Cores were dated and fluxes were measured for each metal. A nearby lake was sampled to measure combined background and atmospheric inputs, which were subtracted from sewage lake data. Pb, Cd, and Hg inputs from sewage grew rapidly after the onset of waste disposal in the late 1960s and exceeded the rate of population growth in the contributing community from 1970 to 1990. The daily per-person Pb input in 1990 (720,000 ng/person per day) exceeded the tolerable daily intake (TDI) level. The Cd input (48,000 ng/person per day) and Hg input (19,000 ng/person per day) were below the respective TDI levels at the time. PMID:16203239
ERIC Educational Resources Information Center
Bedore, Lisa M.; Pena, Elizabeth D.; Griffin, Zenzi M.; Hixon, J. Gregory
2016-01-01
This study evaluates the effects of Age of Exposure to English (AoEE) and Current Input/Output on language performance in a cross-sectional sample of Spanish-English bilingual children. First- (N = 586) and third-graders (N = 298) who spanned a wide range of bilingual language experience participated. Parents and teachers provided information…
Analysis of uncertainties in Monte Carlo simulated organ dose for chest CT
NASA Astrophysics Data System (ADS)
Muryn, John S.; Morgan, Ashraf G.; Segars, W. P.; Liptak, Chris L.; Dong, Frank F.; Primak, Andrew N.; Li, Xiang
2015-03-01
In Monte Carlo simulation of organ dose for a chest CT scan, many input parameters are required (e.g., half-value layer of the x-ray energy spectrum, effective beam width, and anatomical coverage of the scan). The input parameter values are provided by the manufacturer, measured experimentally, or determined based on typical clinical practices. The goal of this study was to assess the uncertainties in Monte Carlo simulated organ dose as a result of using input parameter values that deviate from the truth (clinical reality). Organ dose from a chest CT scan was simulated for a standard-size female phantom using a set of reference input parameter values (treated as the truth). To emulate the situation in which the input parameter values used by the researcher may deviate from the truth, additional simulations were performed in which errors were purposefully introduced into the input parameter values, and the effects on organ dose per CTDIvol were analyzed. Our study showed that when errors in half-value layer were within ±0.5 mm Al, the errors in organ dose per CTDIvol were less than 6%. Errors in effective beam width of up to 3 mm had a negligible effect (<2.5%) on organ dose. In contrast, when the assumed anatomical center of the patient deviated from the true anatomical center by 5 cm, organ dose errors of up to 20% were introduced. Lastly, when the assumed extra scan length was longer by 4 cm than the true value, dose errors of up to 160% were found. The results answer an important question: to what level of accuracy must each input parameter be determined in order to obtain accurate organ dose results?
Community-Based Decision-Making: Application of Web ...
Living, working, and going to school near roadways has been associated with a number of adverse health effects, including asthma exacerbation, cardiovascular impairment, and respiratory symptoms. In the United States, 30%-45% of urban populations live or work in the near-road environment, with a greater percentage of minority and low-income residents living in areas with highly-trafficked roadways. Near-road studies typically use surrogates of exposure to evaluate potential causality of health effects, including proximity, traffic counts, or total length of roads within a given radius. In contrast, simplified models provide an opportunity to examine how changes in input parameters, such as vehicle counts or speeds, can affect air quality. Simplified or reduced-form models typically retain the same or similar algorithms most responsible for characterizing uncertainty in more sophisticated models. The Community Line Source modeling system (C-LINE) allows users to explore what-if scenarios such as increases in diesel trucks or total traffic; examine hot spot conditions and areas for further study; determine ideal monitor placement locations; or evaluate air quality changes due to traffic re-routing. This presentation describes the input parameters, analytical procedures, visualization routines, and software considerations for C-LINE, and an example application for Newport News, Virginia. Results include scenarios related to port development and resulting traffic
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprung, J.L.; Jow, H-N; Rollstin, J.A.
1990-12-01
Estimation of offsite accident consequences is the customary final step in a probabilistic assessment of the risks of severe nuclear reactor accidents. Recently, the Nuclear Regulatory Commission reassessed the risks of severe accidents at five US power reactors (NUREG-1150). Offsite accident consequences for NUREG-1150 source terms were estimated using the MELCOR Accident Consequence Code System (MACCS). Before these calculations were performed, most MACCS input parameters were reviewed, and for each parameter reviewed, a best-estimate value was recommended. This report presents the results of these reviews. Specifically, recommended values and the basis for their selection are presented for MACCS atmospheric and biospheric transport, emergency response, food pathway, and economic input parameters. Dose conversion factors and health effect parameters are not reviewed in this report. 134 refs., 15 figs., 110 tabs.
Modeled occupational exposures to gas-phase medical laser-generated air contaminants.
Lippert, Julia F; Lacey, Steven E; Jones, Rachael M
2014-01-01
Exposure monitoring data indicate the potential for substantive exposure to laser-generated air contaminants (LGAC); however, the diversity of medical lasers and their applications limits generalization from direct workplace monitoring. Emission rates of seven previously reported gas-phase constituents of medical LGAC were determined experimentally and used in a semi-empirical two-zone model to estimate a range of plausible occupational exposures for health care staff. Single-source emission rates were estimated from emission chamber measurements using a one-compartment mass balance model at steady state. Clinical facility parameters such as room size and ventilation rate were based on standard ventilation and environmental conditions required for a laser surgical facility in compliance with regulatory agencies. All input variables in the model, including point-source emission rates, were varied over appropriate distributions in a Monte Carlo simulation to generate a range of time-weighted average (TWA) concentrations in the near- and far-field zones of the room, in a conservative approach inclusive of all contributing factors, to inform future predictive models. The concentrations were assessed for risk, and the highest values were at least three orders of magnitude lower than the relevant occupational exposure limits (OELs). Estimated values do not appear to present a significant exposure hazard within the conditions of our emission rate estimates.
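A minimal sketch of the steady-state two-zone calculation with Monte Carlo inputs (all distributions below are invented placeholders, not the paper's measured emission rates):

import numpy as np

rng = np.random.default_rng(1)
n = 10000
G = rng.lognormal(mean=np.log(0.05), sigma=0.5, size=n)  # emission, mg/min
Q = rng.uniform(10, 30, n)      # room ventilation rate, m^3/min
beta = rng.uniform(2, 8, n)     # near/far-field interzonal airflow, m^3/min

C_far = G / Q                   # far-field steady-state concentration
C_near = C_far + G / beta       # near field adds a local source term
print(f"near-field 95th pct: {np.percentile(C_near, 95):.4f} mg/m^3")
print(f"far-field  95th pct: {np.percentile(C_far, 95):.4f} mg/m^3")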
Gilsenan, M B; Lambe, J; Gibney, M J
2003-11-01
A key component of a food chemical exposure assessment using probabilistic analysis is the selection of the most appropriate input distribution to represent exposure variables. The study explored the type of parametric distribution that could be used to model variability in food consumption data likely to be included in a probabilistic exposure assessment of food additives. The goodness-of-fit of a range of continuous distributions to observed data of 22 food categories expressed as average daily intakes among consumers from the North-South Ireland Food Consumption Survey was assessed using the BestFit distribution fitting program. The lognormal distribution was most commonly accepted as a plausible parametric distribution to represent food consumption data when food intakes were expressed as absolute intakes (16/22 foods) and as intakes per kg body weight (18/22 foods). Results from goodness-of-fit tests were accompanied by lognormal probability plots for a number of food categories. The influence on food additive intake of using a lognormal distribution to model food consumption input data was assessed by comparing modelled intake estimates with observed intakes. Results from the present study advise some level of caution about the use of a lognormal distribution as a mode of input for food consumption data in probabilistic food additive exposure assessments and the results highlight the need for further research in this area.
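A small sketch of the distribution-fitting step (here with scipy in place of the BestFit program, and synthetic intakes standing in for the survey data):

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
intakes = rng.lognormal(mean=np.log(150), sigma=0.6, size=400)  # g/day

shape, loc, scale = stats.lognorm.fit(intakes, floc=0)  # fix location at 0
ks_stat, p_value = stats.kstest(intakes, "lognorm", args=(shape, loc, scale))
print(f"GM={scale:.1f} g/day, GSD={np.exp(shape):.2f}, KS p={p_value:.3f}")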
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKone, T.E.; Enoch, K.G.
2002-08-01
CalTOX has been developed as a set of spreadsheet models and spreadsheet data sets to assist in assessing human exposures from continuous releases to multiple environmental media, i.e. air, soil, and water. It has also been used for waste classification and for setting soil clean-up levels at uncontrolled hazardous wastes sites. The modeling components of CalTOX include a multimedia transport and transformation model, multi-pathway exposure scenario models, and add-ins to quantify and evaluate uncertainty and variability. All parameter values used as inputs to CalTOX are distributions, described in terms of mean values and a coefficient of variation, rather than as point estimates or plausible upper values such as most other models employ. This probabilistic approach allows both sensitivity and uncertainty analyses to be directly incorporated into the model operation. This manual provides CalTOX users with a brief overview of the CalTOX spreadsheet model and provides instructions for using the spreadsheet to make deterministic and probabilistic calculations of source-dose-risk relationships.
Using model order tests to determine sensory inputs in a motion study
NASA Technical Reports Server (NTRS)
Repperger, D. W.; Junker, A. M.
1977-01-01
In the study of motion effects on tracking performance, a problem of interest is the determination of what sensory inputs a human uses in controlling his tracking task. In the approach presented here, a simple canonical model (PID, or proportional-integral-derivative, structure) is used to model the human's input-output time series. A study of significant changes in the reduction of the output-error loss functional is conducted as different permutations of parameters are considered. Since this canonical model includes parameters which are related to inputs to the human (such as the error signal, its derivative and its integral), the study of model order is equivalent to the study of which sensory inputs are being used by the tracker. The parameters that have the greatest effect on significantly reducing the loss function are thus obtained. In this manner the identification procedure converts the problem of testing for model order into the problem of determining sensory inputs.
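A hedged sketch of the model-order test (synthetic signals; the PID-structure regressors stand in for candidate sensory inputs): compare the residual loss as terms are added.

import numpy as np

rng = np.random.default_rng(3)
t = np.arange(0, 60, 0.1)
e = np.sin(0.5*t) + 0.1*rng.standard_normal(t.size)   # tracking error signal
de = np.gradient(e, t)                                 # derivative term
ie = np.cumsum(e) * 0.1                                # integral term
u = 2.0*e + 0.8*de + 0.05*rng.standard_normal(t.size)  # operator output

def loss(cols):
    X = np.column_stack(cols)
    coef, res, *_ = np.linalg.lstsq(X, u, rcond=None)
    return float(res[0]) if res.size else float(np.sum((u - X @ coef)**2))

for name, cols in [("P", [e]), ("P+D", [e, de]), ("P+I+D", [e, ie, de])]:
    print(f"{name:6s} loss = {loss(cols):.2f}")

A candidate input is deemed "used" when adding its regressor reduces the loss significantly, e.g. as judged by an F-test.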
Modal Parameter Identification of a Flexible Arm System
NASA Technical Reports Server (NTRS)
Barrington, Jason; Lew, Jiann-Shiun; Korbieh, Edward; Wade, Montanez; Tantaris, Richard
1998-01-01
In this paper an experiment is designed for the modal parameter identification of a flexible arm system. This experiment uses a function generator to provide the input signal and an oscilloscope to record input and output response data. For each vibrational mode, many sets of sine-wave inputs with frequencies close to the natural frequency of the arm system are used to excite the vibration of that mode. A least-squares technique is then used to analyze the experimental input/output data to obtain the identified parameters for the mode. The identified results are compared with the analytical model obtained by applying finite element analysis.
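A minimal sketch of the least-squares step for one excitation frequency (synthetic data; the amplitude and phase values are invented):

import numpy as np

rng = np.random.default_rng(4)
fs, f_exc = 500.0, 4.8            # sample rate and excitation frequency, Hz
t = np.arange(0, 2, 1/fs)
y = 0.9*np.sin(2*np.pi*f_exc*t - 0.7) + 0.02*rng.standard_normal(t.size)

# y ~ a*sin(wt) + b*cos(wt): linear in (a, b), solved by least squares
w = 2*np.pi*f_exc
X = np.column_stack([np.sin(w*t), np.cos(w*t)])
(a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
amp, phase = np.hypot(a, b), np.arctan2(b, a)
print(f"amplitude={amp:.3f}, phase={phase:.3f} rad at {f_exc} Hz")

Repeating the fit over a grid of excitation frequencies near the mode traces out the response peak from which the modal parameters are identified.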
Certification Testing Methodology for Composite Structure. Volume 2. Methodology Development
1986-10-01
The required inputs are the residual strength Weibull shape parameter (ALPR), the fatigue life Weibull shape parameter (ALPL), the sample size (N) and the fatigue test duration (T). The accompanying FORTRAN routine prompts for each value in turn ('PLEASE INPUT STRENGTH ALPHA', 'PLEASE INPUT LIFE ALPHA', 'PLEASE INPUT SAMPLE SIZE', 'PLEASE INPUT TEST DURATION'), reads them with list-directed READ statements, and forms the reciprocals ALPRI = 1.0/ALPR and ALPLI = 1.0/ALPL and the ratio RALP = ALPL/ALPR for the subsequent calculation.
NASA Technical Reports Server (NTRS)
Kanning, G.
1975-01-01
A digital computer program written in FORTRAN is presented that implements the system identification theory for deterministic systems using input-output measurements. The user supplies programs simulating the mathematical model of the physical plant whose parameters are to be identified. The user may choose any one of three options. The first option allows for a complete model simulation for fixed input forcing functions. The second option identifies up to 36 parameters of the model from wind tunnel or flight measurements. The third option performs a sensitivity analysis for up to 36 parameters. The use of each option is illustrated with an example using input-output measurements for a helicopter rotor tested in a wind tunnel.
NASA Astrophysics Data System (ADS)
Majumder, Himadri; Maity, Kalipada
2018-03-01
Shape memory alloys have a unique capability to return to their original shape after physical deformation upon the application of heat or a thermo-mechanical or magnetic load. In this experimental investigation, desirability function analysis (DFA), a multi-attribute decision-making method, was utilized to find the optimum input parameter setting during wire electrical discharge machining (WEDM) of Ni-Ti shape memory alloy. Four critical machining parameters, namely pulse on time (TON), pulse off time (TOFF), wire feed (WF) and wire tension (WT), were taken as machining inputs for the experiments to optimize three interconnected responses: cutting speed, kerf width, and surface roughness. The input parameter combination TON = 120 μs, TOFF = 55 μs, WF = 3 m/min and WT = 8 kg-F was found to produce the optimum results. The optimum process parameters for each desired response were also obtained using Taguchi's signal-to-noise ratio. A confirmation test was carried out to validate the optimum machining parameter combination, affirming that DFA is a competent approach for selecting optimum input parameters for the desired response quality in WEDM of Ni-Ti shape memory alloy.
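A small illustration of the DFA composite score (response values and bounds are invented; the paper's actual measurements differ):

import numpy as np

def d_larger(y, lo, hi):   # larger-the-better (e.g. cutting speed)
    return np.clip((y - lo) / (hi - lo), 0, 1)

def d_smaller(y, lo, hi):  # smaller-the-better (e.g. kerf width, roughness)
    return np.clip((hi - y) / (hi - lo), 0, 1)

# candidate settings -> measured (speed mm/min, kerf mm, Ra um), assumed
runs = {"A": (2.1, 0.32, 2.9), "B": (2.6, 0.35, 3.4), "C": (1.8, 0.30, 2.5)}
for name, (speed, kerf, ra) in runs.items():
    d = [d_larger(speed, 1.5, 3.0),
         d_smaller(kerf, 0.28, 0.40),
         d_smaller(ra, 2.0, 4.0)]
    composite = float(np.prod(d) ** (1/3))  # geometric mean of desirabilities
    print(f"setting {name}: composite desirability = {composite:.3f}")

The setting with the highest composite desirability is taken as the optimum parameter combination.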
Numerical simulation of aerobic exercise as a countermeasure in human spaceflight
NASA Astrophysics Data System (ADS)
Perez-Poch, Antoni
The objective of this work is to analyse the efficacy of long-term regular exercise on relevant cardiovascular parameters when the human body is also exposed to microgravity. Computer simulations are an important tool which may be used to predict and analyse these possible effects, and to compare them with in-flight experiments. We based our study on an electrical-like computer model (NELME: Numerical Evaluation of Long-term Microgravity Effects) which was developed in our laboratory and validated with the available data, focusing on the cardiovascular parameters affected by changes in gravity exposure. NELME is based on an electrical-like control system model of the physiological changes that are known to take place when gravity changes are applied. The computer implementation has a modular architecture; hence, different output parameters, potential effects, organs and countermeasures can be easily implemented and evaluated. We added to the previous cardiovascular system module a perturbation module to evaluate the effect of regular exercise on the output parameters previously studied. We therefore simulated a well-known countermeasure with different exercise protocols, applied as a pattern of input electrical-like perturbations to the basic module. Different scenarios were numerically simulated for both men and women, under different patterns of microgravity, reduced gravity and time exposure. EVAs were also simulated as perturbations to the system. Results show slight differences by gender, with greater risk reduction for women than for men after following an aerobic exercise pattern during a simulated mission. The reduction in the risk of cardiovascular malfunction is also evaluated, with a ceiling effect found in all scenarios. Of particular interest, a turning point in vascular resistance was found for long-term exposure to gravity below 0.4 g. In conclusion, we show that computer simulations are a valuable tool to analyse different effects of long-term microgravity exposure on the human body. Potential countermeasures such as physical exercise can also be evaluated as an induced perturbation to the system. The results are compatible with existing data and are of value in assessing the efficacy of aerobic exercise as a countermeasure in future missions to Mars.
Munschy, C; Bodin, N; Potier, M; Héas-Moisan, K; Pollono, C; Degroote, M; West, W; Hollanda, S J; Puech, A; Bourjea, J; Nikolic, N
2016-07-01
The contamination of albacore tuna (Thunnus alalunga) by persistent organic pollutants (POPs), namely polychlorinated biphenyls (PCBs) and dichlorodiphenyltrichloroethane (DDT), was investigated in individuals collected from Reunion Island (RI) and South Africa's (SA) southern coastlines in 2013, in relation to biological parameters and feeding ecology. The results showed lower PCB and DDT concentrations than those previously reported in various tuna species worldwide. A predominance of DDTs over PCBs was revealed, reflecting continuing inputs of DDT. Tuna collected from SA exhibited higher contamination levels than those from RI, related to higher dietary inputs and higher total lipid content. Greater variability in contamination levels and profiles was identified in tuna from RI, explained by a higher diversity of prey and more individualistic foraging behaviour. PCB and DDT contamination levels and profiles varied significantly in tuna from the two investigated areas, probably reflecting exposure to different sources of contamination. Copyright © 2016 Elsevier Inc. All rights reserved.
Meshkat, Nicolette; Anderson, Chris; Distefano, Joseph J
2011-09-01
When examining the structural identifiability properties of dynamic system models, some parameters can take on an infinite number of values and yet yield identical input-output data. These parameters, and the model, are then said to be unidentifiable. Finding identifiable combinations of parameters with which to reparameterize the model provides a means for quantitatively analyzing the model and computing solutions in terms of the combinations. In this paper, we revisit and explore the properties of an algorithm for finding identifiable parameter combinations using Gröbner Bases and prove useful theoretical properties of these parameter combinations. We prove that a set of M algebraically independent identifiable parameter combinations can be found using this algorithm and that there exists a unique rational reparameterization of the input-output equations over these parameter combinations. We also demonstrate application of the procedure to a nonlinear biomodel. Copyright © 2011 Elsevier Inc. All rights reserved.
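A toy illustration of the idea with sympy (the two-coefficient model below is an invented example, not the paper's biomodel): the observable input-output coefficients c1 and c2 fix only the combinations a*b and b + c, so a, b and c are individually unidentifiable while the two combinations are identifiable.

import sympy as sp

a, b, c, c1, c2 = sp.symbols("a b c c1 c2")
# exhaustive summary: observable coefficients minus their parameter forms
eqs = [a*b - c1, b + c - c2]
G = sp.groebner(eqs, a, b, c, order="lex")
print(G.exprs)  # the relations involve a, b, c only through a*b and b + c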
Optimization of a Thermodynamic Model Using a Dakota Toolbox Interface
NASA Astrophysics Data System (ADS)
Cyrus, J.; Jafarov, E. E.; Schaefer, K. M.; Wang, K.; Clow, G. D.; Piper, M.; Overeem, I.
2016-12-01
Scientific modeling of Earth's physical processes is an important driver of modern science. The behavior of these scientific models is governed by a set of input parameters. It is crucial to choose accurate input parameters that also preserve the corresponding physics being simulated in the model. In order to simulate real-world processes effectively, the model's outputs must be close to the observed measurements. To achieve this, input parameters are tuned until the objective function, the error between the simulation model outputs and the observed measurements, is minimized. We developed an auxiliary package which serves as a Python interface between the user and DAKOTA. The package makes it easy for the user to conduct parameter-space explorations, parameter optimization, and sensitivity analysis while tracking and storing results in a database. The ability to perform these analyses via a Python library also allows users to combine analysis techniques, for example finding an approximate equilibrium with optimization and then immediately exploring the space around it. We used the interface to calibrate input parameters for a heat flow model which is commonly used in permafrost science. We performed optimization on the first three layers of the permafrost model, each with two thermal conductivity coefficients as input parameters. Results of the parameter-space explorations indicate that the objective function does not always have a unique minimum. We found that gradient-based optimization works best for objective functions with a single minimum; otherwise, we employ more advanced Dakota methods, such as genetic optimization and mesh-based convergence, to find the optimal input parameters. We were able to recover 6 initially unknown thermal conductivity parameters to within 2% of their known values. Our initial tests indicate that the developed interface for the Dakota toolbox can be used to perform analysis and optimization on a `black box' scientific model more efficiently than using Dakota alone.
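A hedged sketch of the calibration loop such an interface automates (a toy two-layer conduction profile and scipy's optimizer stand in for the heat flow model and Dakota):

import numpy as np
from scipy.optimize import minimize

depths = np.linspace(0, 10, 21)
k_true = (1.2, 2.4)                 # "unknown" conductivities to recover

def model_temps(k):
    k1, k2 = k
    # toy two-layer steady conduction profile (assumed form)
    return -2.0 * np.where(depths < 5, depths/k1, 5/k1 + (depths - 5)/k2)

obs = model_temps(k_true) + np.random.default_rng(5).normal(0, 0.05, depths.size)

def objective(k):                   # misfit between model output and data
    return np.sum((model_temps(k) - obs) ** 2)

result = minimize(objective, x0=[1.0, 1.0], method="Nelder-Mead")
print("recovered conductivities:", result.x)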
Zhang, Z. Fred; White, Signe K.; Bonneville, Alain; ...
2014-12-31
Numerical simulations have been used for estimating CO2 injectivity, CO2 plume extent, pressure distribution, and Area of Review (AoR), and for the design of CO2 injection operations and the monitoring network for the FutureGen project. The simulation results are affected by uncertainties associated with numerous input parameters, the conceptual model, initial and boundary conditions, and factors related to injection operations. Furthermore, the uncertainties in the simulation results also vary in space and time. The key need is to identify those uncertainties that critically impact the simulation results and to quantify their impacts. We introduce an approach to determine the local sensitivity coefficient (LSC), defined as the response of the output in percent, to rank the importance of model inputs on outputs. The uncertainty of an input with higher sensitivity has a larger impact on the output. The LSC is scalable by the error of an input parameter. The composite sensitivity of an output to a subset of inputs can be calculated by summing the individual LSC values. We propose this local sensitivity coefficient method and apply it to the FutureGen 2.0 site in Morgan County, Illinois, USA, to investigate the sensitivity of input parameters and initial conditions. The conceptual model for the site consists of 31 layers, each of which has a unique set of input parameters. The sensitivity of 11 parameters for each layer and 7 inputs as initial conditions is then investigated. For CO2 injectivity and plume size, about half of the uncertainty is due to only 4 or 5 of the 348 inputs, and 3/4 of the uncertainty is due to about 15 of the inputs. The initial conditions and the properties of the injection layer and its neighbour layers contribute most of the sensitivity. Overall, the simulation outputs are very sensitive to only a small fraction of the inputs. However, the parameters that are important for controlling CO2 injectivity are not the same as those controlling the plume size. The three most sensitive inputs for injectivity were the horizontal permeability of Mt Simon 11 (the injection layer), the initial fracture-pressure gradient, and the residual aqueous saturation of Mt Simon 11, while those for the plume area were the initial salt concentration, the initial pressure, and the initial fracture-pressure gradient. The advantages of requiring only a single set of simulation results, scalability to the proper parameter errors, and easy calculation of composite sensitivities make this approach very cost-effective for estimating AoR uncertainty and guiding cost-effective site characterization, injection well design, and monitoring network design for CO2 storage projects.
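The LSC computation itself is compact; a sketch with an invented stand-in for the reservoir simulator:

def plume_area(k_h, p_init, s_res):     # stand-in for a simulator call
    return 1e4 * p_init / (k_h * (1 - s_res))

inputs = {"k_h": 2e-13, "p_init": 1.6e7, "s_res": 0.25}   # assumed values
base = plume_area(**inputs)

lsc = {}
for name, value in inputs.items():
    bumped = dict(inputs, **{name: value * 1.01})   # +1 % perturbation
    lsc[name] = (plume_area(**bumped) - base) / base * 100.0
print("LSC (% output per 1% input):", lsc)
print("composite sensitivity:", sum(abs(v) for v in lsc.values()))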
Weber, Denis; Schaefer, Dieter; Dorgerloh, Michael; Bruns, Eric; Goerlitz, Gerhard; Hammel, Klaus; Preuss, Thomas G; Ratte, Hans Toni
2012-04-01
A flow-through system was developed to investigate the effects of time-variable pesticide exposure on algae. A recently developed algae population model was used for simulations, supported and verified by laboratory experiments. Flow-through studies with Desmodesmus subspicatus and Pseudokirchneriella subcapitata under time-variable exposure to isoproturon were performed, in which the exposure patterns were based on the results of FOrum for Co-ordination of pesticide fate models and their USe (FOCUS) model calculations for typical exposure situations via runoff or drain flow. Different types of pulsed exposure events were realized, including a whole range of repeated pulsed and steep peaks as well as periods of constant exposure. Both species recovered quickly from short-term exposure in terms of growth, in line with substance dissipation from the system. Even at a peak 10 times the maximum predicted environmental concentration of isoproturon, only transient effects on algae populations occurred. No modified sensitivity or reduced growth was observed after repeated exposure. Model predictions of algal growth in the flow-through tests agreed well with the experimental data. The experimental boundary conditions and the physiological properties of the algae were used as the only model input; no calibration or parameter fitting was necessary. The combination of the flow-through experiments with the algae population model proved to be a powerful tool for the assessment of pulsed exposure on algae. It allowed investigation of the growth reduction and recovery potential of algae after complex exposure, which is not possible with standard laboratory experiments alone. The results of the combined approach confirm the beneficial use of population models as supporting tools in higher-tier risk assessments of pesticides. Copyright © 2012 SETAC.
NASA Astrophysics Data System (ADS)
Astroza, Rodrigo; Ebrahimian, Hamed; Li, Yong; Conte, Joel P.
2017-09-01
A methodology is proposed to update mechanics-based nonlinear finite element (FE) models of civil structures subjected to unknown input excitation. The approach allows joint estimation of the unknown time-invariant parameters of a nonlinear FE model of the structure and the unknown time histories of input excitations, using spatially sparse output response measurements recorded during an earthquake event. The unscented Kalman filter, which circumvents the computation of FE response sensitivities with respect to the unknown model parameters and unknown input excitations by using a deterministic sampling approach, is employed as the estimation tool. The use of measurement data obtained from arrays of heterogeneous sensors, including accelerometers, displacement sensors, and strain gauges, is investigated. Based on the estimated FE model parameters and input excitations, the updated nonlinear FE model can be interrogated to detect, localize, classify, and assess damage in the structure. Numerically simulated response data of a three-dimensional 4-story 2-by-1 bay steel frame structure with six unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, and a three-dimensional 5-story 2-by-1 bay reinforced concrete frame structure with nine unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, are used to illustrate and validate the proposed methodology. The results of the validation studies show the excellent performance and robustness of the proposed algorithm in jointly estimating unknown FE model parameters and unknown input excitations.
APPROACHES TO ECOSYSTEM AND HUMAN EXPOSURE TO MERCURY FOR SENSITIVE POPULATIONS
Both human and ecosystem exposure studies evaluate exposure of sensitive and vulnerable populations. We will discuss how ecosystem exposure modeling studies completed for input into the US Clean Air Mercury Rule (CAMR) to evaluate the response of aquatic ecosystems to changes in ...
Impact of AMS-02 Measurements on Reducing GCR Model Uncertainties
NASA Technical Reports Server (NTRS)
Slaba, T. C.; O'Neill, P. M.; Golge, S.; Norbury, J. W.
2015-01-01
For vehicle design, shield optimization, mission planning, and astronaut risk assessment, the exposure from galactic cosmic rays (GCR) poses a significant and complex problem both in low Earth orbit and in deep space. To address this problem, various computational tools have been developed to quantify the exposure and risk in a wide range of scenarios. Generally, the tool used to describe the ambient GCR environment provides the input into subsequent computational tools and is therefore a critical component of end-to-end procedures. Over the past few years, several researchers have independently and very carefully compared some of the widely used GCR models to more rigorously characterize model differences and quantify uncertainties. All of the GCR models studied rely heavily on calibration to available near-Earth measurements of GCR particle energy spectra, typically over restricted energy regions and short time periods. In this work, we first review recent sensitivity studies quantifying the ions and energies in the ambient GCR environment of greatest importance to exposure quantities behind shielding. Currently available measurements used to calibrate and validate GCR models are also summarized within this context. It is shown that the AMS-02 measurements will fill a critically important gap in the measurement database. The emergence of AMS-02 measurements also provides a unique opportunity to validate existing models against measurements that were not used to calibrate free parameters in the empirical descriptions. Discussion is given regarding rigorous approaches to implement the independent validation efforts, followed by recalibration of empirical parameters.
A physiologically based pharmacokinetic (PBPK) model was developed within the Exposure Related Dose Estimating Model (ERDEM) framework to investigate selected exposure inputs related to recognized exposure scenarios of infants and children to N-methyl carbamate pesticides as spec...
Achromatical Optical Correlator
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin; Liu, Hua-Kuang
1989-01-01
Signal-to-noise ratio exceeds that of monochromatic correlator. Achromatical optical correlator uses multiple-pinhole diffraction of dispersed white light to form superposed multiple correlations of input and reference images in output plane. Set of matched spatial filters made by multiple-exposure holographic process, each exposure using suitably-scaled input image and suitable angle of reference beam. Recording-aperture mask translated to appropriate horizontal position for each exposure. Noncoherent illumination suitable for applications involving recognition of color and determination of scale. When fully developed, achromatical correlators will be useful for recognition of patterns; for example, in industrial inspection and search for selected features in aerial photographs.
NASA Technical Reports Server (NTRS)
Glasser, M. E.; Rundel, R. D.
1978-01-01
A method for formulating these changes into the model input parameters using a preprocessor program run on a programmed data processor was implemented. The results indicate that any changes in the input parameters are small enough to be negligible in comparison to meteorological inputs and the limitations of the model, and that such changes will not substantially increase the number of meteorological cases for which the model will predict surface hydrogen chloride concentrations exceeding public safety levels.
Extension of the PC version of VEPFIT with input and output routines running under Windows
NASA Astrophysics Data System (ADS)
Schut, H.; van Veen, A.
1995-01-01
The fitting program VEPFIT has been extended with applications running under the Microsoft Windows environment, facilitating the input and output of the VEPFIT fitting module. We have exploited the Microsoft Windows graphical user interface by making use of dialog windows, scrollbars, command buttons, etc. The user communicates with the program simply by clicking and dragging with the mouse pointing device; keyboard actions are kept to a minimum. Upon changing one or more input parameters, the results of the modeling of the S-parameter and Ps fractions versus positron implantation energy are updated and displayed. This action can be considered the first step in the fitting procedure, upon which the user can decide to further adapt the input parameters or to forward them as initial values to the fitting routine. The modeling step has proven helpful for designing positron beam experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamp, F.; Brueningk, S.C.; Wilkens, J.J.
Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation, and on the dose per fraction. The needed biological parameters, as well as their dependency on ion species and ion energy, are typically subject to large (relative) uncertainties of up to 20–40% or even more. It is therefore necessary to estimate the resulting uncertainties in, e.g., RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10^4 to 10^6 times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, the only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result, and the input parameter for which an uncertainty reduction is the most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad assessment of uncertainties. Supported by DFG grant WI 3745/1-1 and DFG cluster of excellence: Munich-Centre for Advanced Photonics.
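A minimal sketch of the variance-based SA loop (assumed normal distributions for α and β, the linear-quadratic effect as the evaluated function, and a binning estimator for first-order sensitivity; not the authors' implementation):

import numpy as np

rng = np.random.default_rng(6)
n = 100_000
alpha = rng.normal(0.18, 0.18*0.3, n)   # Gy^-1, 30% relative uncertainty
beta = rng.normal(0.028, 0.028*0.3, n)  # Gy^-2, 30% relative uncertainty
d = 2.5                                  # physical dose per fraction, Gy

effect = alpha*d + beta*d**2             # linear-quadratic effect

def first_order_S(x, y, bins=50):
    # Var(E[y|x]) / Var(y), estimated by binning on quantiles of x
    idx = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.array([(idx == b).sum() for b in range(bins)])
    return np.average((cond_means - y.mean())**2, weights=counts) / y.var()

print("S_alpha ~", round(first_order_S(alpha, effect), 3))
print("S_beta  ~", round(first_order_S(beta, effect), 3))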
Probabilistic estimation of residential air exchange rates for ...
Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER measurements. An algorithm for probabilistically estimating AER was developed based on the Lawrence Berkeley National Laboratory infiltration model, utilizing housing characteristics and meteorological data with adjustment for window-opening behavior. The algorithm was evaluated by comparing modeled and measured AERs in four US cities (Los Angeles, CA; Detroit, MI; Elizabeth, NJ; and Houston, TX), inputting study-specific data. The impact on the modeled AER of using publicly available housing data representative of the region for each city was also assessed. Finally, modeled AER based on region-specific inputs was compared with those estimated using literature-based distributions. While modeled AERs were similar in magnitude to the measured AERs, they were consistently lower for all cities except Houston. AERs estimated using region-specific inputs were lower than those using study-specific inputs due to differences in window-opening probabilities. The algorithm produced more spatially and temporally variable AERs compared with literature-based distributions, reflecting within- and between-city differences and helping reduce error in estimates of air pollutant exposure.
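A hedged sketch of an LBL-style infiltration calculation (the stack/wind coefficient values and inputs are illustrative defaults, not the study's calibrated values):

import math

def aer_lbl(A_L, V, dT, U, C_s=0.000145, C_w=0.000104):
    # A_L: effective leakage area, cm^2; V: house volume, m^3
    # dT: indoor-outdoor temperature difference, K; U: wind speed, m/s
    # C_s, C_w: illustrative stack/wind coefficients (assumed)
    q = A_L * math.sqrt(C_s * abs(dT) + C_w * U**2)   # airflow, L/s
    return q * 3.6 / V                                 # air changes per hour

print(f"winter: {aer_lbl(500, 300, dT=20, U=4):.2f} 1/h")
print(f"mild:   {aer_lbl(500, 300, dT=5, U=2):.2f} 1/h")

Sampling the inputs (leakage area, weather, window opening) from distributions turns this point calculation into the probabilistic AER estimator described above.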
NASA Astrophysics Data System (ADS)
Debry, E.; Malherbe, L.; Schillinger, C.; Bessagnet, B.; Rouil, L.
2009-04-01
Evaluation of human exposure to atmospheric pollution usually requires knowledge of pollutant concentrations in ambient air. In the framework of the PAISA project, which studies the influence of socio-economic status on relationships between air pollution and short-term health effects, the concentrations of gas and particle pollutants are computed over Strasbourg with the ADMS-Urban model. As for any modeling result, simulated concentrations come with uncertainties which have to be characterized and quantified. There are several sources of uncertainty: those related to input data and parameters, i.e. fields used to execute the model such as meteorological fields, boundary conditions and emissions; those related to the model formulation, because of incomplete or inaccurate treatment of dynamical and chemical processes; and those inherent to the stochastic behavior of the atmosphere and of human activities [1]. Our aim here is to assess the uncertainties of the simulated concentrations with respect to input data and model parameters. In this scope, the first step consisted in identifying the input data and model parameters that contribute most to the space and time variability of predicted concentrations. Concentrations of several pollutants were simulated for two months in winter 2004 and two months in summer 2004 over five areas of Strasbourg. The sensitivity analysis shows the dominating influence of boundary conditions and emissions. Among model parameters, the roughness and Monin-Obukhov lengths appear to have non-negligible local effects. Dry deposition is also an important dynamic process. The second step of the characterization and quantification of uncertainties consists in attributing a probability distribution to each input datum and model parameter and in propagating the joint distribution of all data and parameters through the model, so as to associate a probability distribution with the modeled concentrations. Several analytical and numerical methods exist to perform an uncertainty analysis. We chose the Monte Carlo method, which has already been applied to atmospheric dispersion models [2, 3, 4]. The main advantage of this method is that it is insensitive to the number of perturbed parameters, but its drawbacks are its computational cost and its slow convergence. To speed up convergence we used the method of antithetic variables, which takes advantage of the symmetry of probability laws. The air quality model simulations were carried out by the Association for the Surveillance and Study of Atmospheric Pollution in Alsace (ASPA). The output concentration distributions can then be updated with a Bayesian method. This work is part of an INERIS research project also aiming at assessing the uncertainty of the CHIMERE dispersion model used in the Prev'Air forecasting platform (www.prevair.org) in order to deliver more accurate predictions. (1) Rao, K.S. Uncertainty Analysis in Atmospheric Dispersion Modeling, Pure and Applied Geophysics, 2005, 162, 1893-1917. (2) Beekmann, M. and Derognat, C. Monte Carlo uncertainty analysis of a regional-scale transport chemistry model constrained by measurements from the Atmospheric Pollution Over the PAris Area (ESQUIF) campaign, Journal of Geophysical Research, 2003, 108, 8559-8576. (3) Hanna, S.R., Lu, Z., Frey, H.C., Wheeler, N., Vukovich, J., Arunachalam, S., Fernau, M. and Hansen, D.A. Uncertainties in predicted ozone concentrations due to input uncertainties for the UAM-V photochemical grid model applied to the July 1995 OTAG domain, Atmospheric Environment, 2001, 35, 891-903. (4) Romanowicz, R., Higson, H. and Teasdale, I. Bayesian uncertainty estimation methodology applied to air pollution modelling, Environmetrics, 2000, 11, 351-371.
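A small sketch of the antithetic-variable device (toy response function; the variance reduction comes from averaging each draw with its mirrored counterpart):

import numpy as np

rng = np.random.default_rng(7)

def model(z):
    return 40.0 * np.exp(0.3 * z)      # toy response to a perturbed input

z = rng.standard_normal(5000)
y_pair = 0.5 * (model(z) + model(-z))  # one estimate per antithetic pair
plain = model(rng.standard_normal(10000))  # same total number of model runs
print(f"antithetic: mean {y_pair.mean():.2f}, "
      f"se {y_pair.std(ddof=1)/np.sqrt(y_pair.size):.3f}")
print(f"plain:      mean {plain.mean():.2f}, "
      f"se {plain.std(ddof=1)/np.sqrt(plain.size):.3f}")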
Aircraft Hydraulic Systems Dynamic Analysis Component Data Handbook
1980-04-01
The handbook's component sections include the Quincke tube (a means to dampen acoustic noise at resonance) and the heat exchanger, with accompanying tables of Quincke tube input parameters and hole locations, prototype Quincke tube data, and HSFR input data for a PULSCO-type acoustic filter.
Stochastic Human Exposure and Dose Simulation Model for Pesticides
SHEDS-Pesticides (Stochastic Human Exposure and Dose Simulation Model for Pesticides) is a physically-based stochastic model developed to quantify exposure and dose of humans to multimedia, multipathway pollutants. Probabilistic inputs are combined in physical/mechanistic algorit...
Effect of Atomic Oxygen Exposure on Surface Resistivity Change of Spacecraft Insulator Material
NASA Astrophysics Data System (ADS)
Mundari, Noor Danish Ahrar; Khan, Arifur Rahman; Chiga, Masaru; Okumura, Teppei; Masui, Hirokazu; Iwata, Minoru; Toyoda, Kazuhiro; Cho, Mengu
Spacecraft surface charging can lead to arcing and a loss of electricity generation capability in solar panels, or even loss of a satellite. The charging problem may be further aggravated by atomic oxygen (AO) exposure in low Earth orbit, which modifies the surfaces of materials such as polyimide, Teflon, anti-reflective coatings and cover glass used on satellite surfaces, affecting material properties such as resistivity, secondary electron emissivity and photoemission, which govern the charging behavior. These properties are crucial input parameters for spacecraft charging analysis. To study the effect of AO exposure on the properties governing charging, an atomic oxygen exposure facility based on laser detonation of oxygen was built. The facility produces AO with a peak velocity around 10-12 km/s and a higher flux than that existing in orbit. After exposing the polyimide test material to the equivalent of 10 years of AO fluence at an altitude of 700-800 km, surface charging properties such as surface resistivity and volume resistivity were measured. The measurement was performed in a vacuum using the charge storage decay method at room temperature, which is considered the most appropriate way of measuring resistivity for space applications. The results show that the surface resistivity increases and the volume resistivity remains almost the same for the AO exposure fluence of 5.4×10^18 atoms cm^-2.
Analysis and selection of optimal function implementations in massively parallel computer
Archer, Charles Jens [Rochester, MN; Peters, Amanda [Rochester, MN; Ratterman, Joseph D [Rochester, MN
2011-05-31
An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.
Albering, H J; Rila, J P; Moonen, E J; Hoogewerff, J A; Kleinjans, J C
1999-01-01
A human health risk assessment has been performed in relation to recreational activities on two artificial freshwater lakes along the river Meuse in The Netherlands. Although the discharges of contaminants into the river Meuse have been reduced in the last decades, which is reflected in decreasing concentrations of pollutants in surface water and suspended matter, the levels in sediments are more persistent. Sediments of the two freshwater lakes appear highly polluted and may pose a health risk in relation to recreational activities. To quantify health risks for carcinogenic (e.g., polycyclic aromatic hydrocarbons) as well as noncarcinogenic compounds (e.g., heavy metals), an exposure assessment model was used. First, we used a standard model that solely uses data on sediment pollution as the input parameter, which is the standard procedure in sediment quality assessments in The Netherlands. The highest intake appeared to be associated with the consumption of contaminated fish and resulted in a health risk for Pb and Zn (hazard index exceeded 1). For the other heavy metals and for benzo(a)pyrene, the total averaged exposure levels were below levels of concern. Secondly, input data for a more location-specific calculation procedure were provided via analyses of samples from sediment, surface water, and suspended matter. When these data (concentrations in surface water) were taken into account, the risk due to consumption of contaminated fish decreased by more than two orders of magnitude and appeared to be negligible. In both exposure assessments, many assumptions were made that contribute to a major degree to the uncertainty of this risk assessment. However, this health risk evaluation is useful as a screening methodology for assessing the urgency of sediment remediation actions.
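A screening-level sketch of the hazard-index arithmetic used in such assessments (all intakes and reference doses below are invented illustrations, not the study's values):

daily_intake = {          # mg/kg-day via fish consumption (assumed)
    "Pb": 0.0046, "Zn": 0.41, "Cd": 0.0004,
}
reference_dose = {        # oral RfD-style values, mg/kg-day (assumed)
    "Pb": 0.0036, "Zn": 0.30, "Cd": 0.0005,
}
hazard_quotients = {m: daily_intake[m] / reference_dose[m] for m in daily_intake}
hazard_index = sum(hazard_quotients.values())
print(hazard_quotients)
print(f"hazard index = {hazard_index:.2f}"
      + ("  -> exceeds 1, potential concern" if hazard_index > 1 else ""))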
DOE Office of Scientific and Technical Information (OSTI.GOV)
M. A. Wasiolek
The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).
Albering, H J; Rila, J P; Moonen, E J; Hoogewerff, J A; Kleinjans, J C
1999-01-01
A human health risk assessment has been performed in relation to recreational activities on two artificial freshwater lakes along the river Meuse in The Netherlands. Although the discharges of contaminants into the river Meuse have been reduced in the last decades, which is reflected in decreasing concentrations of pollutants in surface water and suspended matter, the levels in sediments are more persistent. Sediments of the two freshwater lakes appear highly polluted and may pose a health risk in relation to recreational activities. To quantify health risks for carcinogenic (e.g., polycyclic aromatic hydrocarbons) as well as noncarcinogenic compounds (e.g., heavy metals), an exposure assessment model was used. First, we used a standard model that solely uses data on sediment pollution as the input parameter, which is the standard procedure in sediment quality assessments in The Netherlands. The highest intake appeared to be associated with the consumption of contaminated fish and resulted in a health risk for Pb and Zn (hazard index exceeded 1). For the other heavy metals and for benzo(a)pyrene, the total averaged exposure levels were below levels of concern. Secondly, input data for a more location-specific calculation procedure were provided via analyses of samples from sediment, surface water, and suspended matter. When these data (concentrations in surface water) were taken into account, the risk due to consumption of contaminated fish decreased by more than two orders of magnitude and appeared to be negligible. In both exposure assessments, many assumptions were made that contribute to a major degree to the uncertainty of this risk assessment. However, this health risk evaluation is useful as a screening methodology for assessing the urgency of sediment remediation actions. PMID:9872714
NASA Astrophysics Data System (ADS)
Park, Ji-Hwan; Oh, Seung-Ju; Lee, Hyo-Chang; Kim, Yu-Sin; Kim, Young-Cheol; Kim, June Young; Ha, Chang-Seoung; Kwon, Soon-Ho; Lee, Jung-Joong; Chung, Chin-Wook
2014-10-01
As the critical dimension of nano-devices shrinks, undesired etch profiles occur during the plasma etch process. One of the reasons is the local electric field caused by surface charge accumulation. To demonstrate the surface charge accumulation, an anodic aluminum oxide (AAO) membrane, which has a high aspect ratio, is used. The potential difference between the top electrode and the bottom electrode in an anodic aluminum oxide contact structure is measured during inductively coupled plasma exposure. The voltage difference changes with external discharge conditions, such as gas pressure, input power, and gas species, and the results are analyzed against the measured plasma parameters.
Armstrong, Thomas W; Haas, Charles N
2007-08-01
Evaluation of a quantitative microbial risk assessment (QMRA) model for Legionnaires' disease (LD) required Legionella exposure estimates for several well-documented LD outbreaks. Reports for a whirlpool spa and two natural spring spa outbreaks provided data for the exposure assessment, as well as rates of infection and mortality. Exposure estimates for the whirlpool spa outbreak employed aerosol generation, water composition, exposure duration data, and building ventilation parameters with a two-zone model. Estimates for the natural hot springs outbreaks used bacterial water-to-air partitioning coefficients and exposure duration information. The air concentration and dose calculations used input parameter distributions with Monte Carlo simulations to estimate exposures as probability distributions. The assessment considered two sets of assumptions about the transfer of Legionella from the water phase to the aerosol emitted from the whirlpool spa. The estimated air concentration near the whirlpool spa was 5 to 18 colony forming units per cubic meter (CFU/m(3)) and 50 to 180 CFU/m(3) for the alternate assumptions. The estimated 95th percentile ranges of Legionella dose for workers within 15 m of the whirlpool spa were 0.13-3.4 CFU and 1.3-34.5 CFU, respectively. The modeling for hot springs Spas 1 and 2 resulted in estimated arithmetic mean air concentrations of 360 and 17 CFU/m(3), respectively, and 95th percentile ranges for Legionella dose of 28 to 67 CFU and 1.1 to 3.7 CFU, respectively. The Legionella air concentration estimates fall within the range of limited reports on air concentrations of Legionella (0.33 to 190 CFU/m(3)) near showers, aerated faucets, and baths during filling with Legionella-contaminated water. These measurements may indicate that the estimates are of a reasonable magnitude, but they do not clarify the accuracy of the exposure estimates, since they were not obtained during LD outbreaks. Further research to improve the data used for the Legionella exposure assessment would strengthen the results. The primary additional data needs include improved bacterial water-to-air partitioning coefficients, better accounting of time-activity-distance patterns and exposure potential in outbreak reports, and data on the viability decay of Legionella-containing aerosols rather than loss of capability for growth in culture.
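As a concrete illustration of this kind of Monte Carlo exposure calculation, the sketch below propagates assumed distributions for air concentration, breathing rate, and exposure duration into a dose distribution. All distributions and values are hypothetical placeholders, not the ones fitted in the study.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # Monte Carlo iterations

# Hypothetical input distributions (not the study's fitted values):
c_air = rng.lognormal(mean=np.log(10.0), sigma=0.8, size=n)  # CFU/m^3 near the source
breathing = rng.triangular(0.6, 1.0, 1.5, size=n)            # m^3/h inhalation rate
duration = rng.uniform(0.25, 2.0, size=n)                    # h spent near the source

dose = c_air * breathing * duration  # inhaled dose in CFU per exposure event

print(f"median dose: {np.median(dose):.2f} CFU")
print(f"95th percentile: {np.percentile(dose, 95):.2f} CFU")
```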
Cancer risk of polycyclic aromatic hydrocarbons (PAHs) in the soils from Jiaozhou Bay wetland.
Yang, Wei; Lang, Yinhai; Li, Guoliang
2014-10-01
To estimate the cancer risk from exposure to PAHs in Jiaozhou Bay wetland soils, a probabilistic health risk assessment was conducted based on Monte Carlo simulations. A sensitivity analysis was performed to determine the input variables that contribute most to the cancer risk assessment. Three age groups were selected to estimate the cancer risk via four exposure pathways (soil ingestion, food ingestion, dermal contact and inhalation). The results revealed that the 95th percentile cancer risks for children, teens and adults were 9.11×10(-6), 1.04×10(-5) and 7.08×10(-5), respectively. The cancer risks for the three age groups were within the acceptable range (10(-6)-10(-4)), indicating no significant potential cancer risk. Among the exposure pathways, food ingestion was the major one. Of the 7 carcinogenic PAHs, BaP caused the highest cancer risk. Sensitivity analysis demonstrated that the parameters of exposure duration (ED) and the sum of the converted 7 carcinogenic PAH concentrations in soil based on BaPeq (CSsoil) contribute most to the total uncertainty. This study provides a comprehensive risk assessment of carcinogenic PAHs in Jiaozhou Bay wetland soils and might be useful in suggesting potential strategies for cancer risk prevention and control. Copyright © 2014 Elsevier Ltd. All rights reserved.
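A sensitivity analysis of this kind can be approximated with Spearman rank correlations between sampled inputs and the simulated risk. The sketch below uses a deliberately simplified intake-style risk expression with placeholder distributions; the symbols ED and CSsoil follow the abstract, but their values and the other inputs are assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 50_000

# Placeholder distributions for the risk-equation inputs:
ED = rng.uniform(6, 30, n)                   # exposure duration, years
CSsoil = rng.lognormal(np.log(0.5), 0.6, n)  # BaPeq soil concentration, mg/kg
IR = rng.normal(100, 20, n).clip(min=10)     # soil ingestion rate, mg/day
BW = rng.normal(60, 10, n).clip(min=20)      # body weight, kg

risk = CSsoil * IR * ED / BW * 1e-6          # simplified intake-style metric

# Rank correlation of each input with the output as a global sensitivity measure
for name, x in [("ED", ED), ("CSsoil", CSsoil), ("IR", IR), ("BW", BW)]:
    rho, _ = spearmanr(x, risk)
    print(f"{name}: rho = {rho:+.2f}")
```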
NASA Astrophysics Data System (ADS)
van Ginneken, Meike; Oron, Gideon
2000-09-01
This study assesses health risks to consumers due to the use of agricultural products irrigated with reclaimed wastewater. The analysis is based on the definition of an exposure model which takes into account several parameters: (1) the quality of the applied wastewater, (2) the irrigation method, (3) the elapsed times between irrigation, harvest, and product consumption, and (4) the consumers' habits. The exposure model is used for numerical simulation of risks to human consumers using the Monte Carlo simulation method. The results of the numerical simulation show large deviations, probably caused by uncertainty (imprecision in the quality of input data) and by variability due to diversity among populations. There is a 10-order-of-magnitude difference in the risk of infection between the different exposure scenarios with the same water quality. This variation indicates the need for setting risk-based criteria for wastewater reclamation rather than single water quality guidelines. Additional data are required to decrease uncertainty in the risk assessment. Future research needs include the definition of acceptable risk criteria, more accurate dose-response modeling, information regarding pathogen survival in treated wastewater, additional data on the passage of pathogens onto and into plants during irrigation, and information regarding the behavior patterns of the community of human consumers.
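Risk-of-infection calculations of this kind typically combine a sampled dose with a microbial dose-response model. The sketch below uses the approximate beta-Poisson dose-response form with wholly illustrative parameter values (alpha, N50) and an assumed lognormal dose distribution; it is not the model calibrated in this study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Illustrative beta-Poisson dose-response parameters (hypothetical values)
alpha, N50 = 0.25, 500.0

# Uncertain ingested dose (organisms per serving) from an assumed lognormal
dose = rng.lognormal(mean=np.log(5.0), sigma=1.5, size=n)

# Beta-Poisson approximation: P(inf) = 1 - (1 + dose*(2^(1/alpha)-1)/N50)^-alpha
p_inf = 1.0 - (1.0 + dose * (2.0 ** (1.0 / alpha) - 1.0) / N50) ** (-alpha)

print(f"median risk of infection: {np.median(p_inf):.2e}")
print(f"95th percentile: {np.percentile(p_inf, 95):.2e}")
```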
Evaluation of operational parameters role on the emission of fumes.
Sajedifar, Javad; Kokabi, Amir Hossein; Farhang Dehghan, Somayeh; Mehri, Ahmad; Azam, Kamal; Golbabaei, Farideh
2017-12-12
Electric arc welding is a routine operation in the construction of metallic structures, but the fumes generated during the welding process can threaten the health of welders. Fumes are undesirable products of the majority of welding operations and may have various detrimental effects on health. The purpose of this study was to investigate the effect of operational parameters of the shielded metal arc welding (SMAW) process on the emission of fumes. A dust monitor was used to measure the number and mass concentration of fumes generated by SMAW. Measurements were made at distances of 23 cm (hood inlet) and 41 cm (welder's breathing zone) from the weld point, with different values assigned to three operational parameters, namely current intensity, travel speed, and heat input (HI). Number concentration (NC) decreased with increasing particle size. The highest mass concentrations (MC) were observed for MC1 (0.35 μm-0.5 μm) and MC8 (larger than 6.5 μm). To reduce exposure to fumes, welders are recommended to use the lowest voltage and amperage and the highest travel speed, to the extent that weld quality is not compromised. For assessment of exposure to airborne particles in industrial workplaces, and especially in welding operations, considering mass concentration alone without attention to number concentration cannot provide an accurate assessment of the adverse effects of particles on body organs.
Suggestions for CAP-TSD mesh and time-step input parameters
NASA Technical Reports Server (NTRS)
Bland, Samuel R.
1991-01-01
Suggestions for some of the input parameters used in the CAP-TSD (Computational Aeroelasticity Program-Transonic Small Disturbance) computer code are presented. These parameters include those associated with the mesh design and time step. The guidelines are based principally on experience with a one-dimensional model problem used to study wave propagation in the vertical direction.
Unsteady hovering wake parameters identified from dynamic model tests, part 1
NASA Technical Reports Server (NTRS)
Hohenemser, K. H.; Crews, S. T.
1977-01-01
The development of a 4-bladed model rotor is reported that can be excited with a simple eccentric mechanism in progressing and regressing modes with either harmonic or transient inputs. Parameter identification methods were applied to the problem of extracting parameters for linear perturbation models, including rotor dynamic inflow effects, from the measured blade flapping responses to transient pitch stirring excitations. These perturbation models were then used to predict blade flapping response to other pitch stirring transient inputs, and rotor wake and blade flapping responses to harmonic inputs. The viability and utility of using parameter identification methods for extracting the perturbation models from transients are demonstrated through these combined analytical and experimental studies.
Relationship between Human Pupillary Light Reflex and Circadian System Status
Bonmati-Carrion, Maria Angeles; Hild, Konstanze; Isherwood, Cheryl; Sweeney, Stephen J.; Revell, Victoria L.; Skene, Debra J.; Rol, Maria Angeles; Madrid, Juan Antonio
2016-01-01
Intrinsically photosensitive retinal ganglion cells (ipRGCs), whose photopigment melanopsin has a peak of sensitivity in the short wavelength range of the spectrum, constitute a common light input pathway to the olivary pretectal nucleus (OPN), the pupillary light reflex (PLR) regulatory centre, and to the suprachiasmatic nuclei (SCN), the major pacemaker of the circadian system. Thus, evaluating the PLR under short wavelength light (λmax ≤ 500 nm) and creating an integrated PLR parameter becomes of interest as a possible tool to indirectly assess the status of the circadian system. Nine monochromatic, photon-matched light stimuli (300 s), in 10 nm increments from λmax 420 to 500 nm, were administered to 15 healthy young participants (8 females), analyzing: (i) the PLR; (ii) wrist temperature (WT) and motor activity (WA) rhythms; (iii) the light exposure (L) pattern; and (iv) diurnal preference (Horne-Östberg), sleep quality (Pittsburgh) and daytime sleepiness (Epworth). Linear correlations were calculated between the different PLR parameters and the circadian status index obtained from WT, WA and L recordings and the questionnaire scores. In summary, we found that markers of robust circadian rhythms, namely high stability, reduced fragmentation, high amplitude, phase advance and low internal desynchronization, were correlated with a reduced PLR to 460–490 nm wavelengths. Integrated circadian (CSI) and PLR (cp-PLR) parameters are proposed, which also showed an inverse correlation. These results demonstrate, for the first time, the existence of a close relationship between circadian system robustness and the pupillary reflex response, two non-visual functions primarily under melanopsin-ipRGC input. PMID:27636197
The role of CMEs in the refilling of Mercury's exosphere
NASA Astrophysics Data System (ADS)
Lichtenegger, H. I. M.; Lammer, H.; Kallio, E.; Mura, A.; Wurz, P.; Millio, A.; Torka, K.; Livi, S.; Barabash, S.; Orsini, S.
A better understanding of the connection between the solar plasma environment and surface particle release processes at Mercury is needed for the planned exospheric and remote surface geochemical studies by the Neutral Particle Analyzer Ion Spectrometer sensors ELENA, STROFIO, MIPA and PICAM of the SERENA instrument on board ESA's BepiColombo planetary orbiter MPO. We study the refilling of the exosphere with various elements sputtered from Mercury's surface during CME exposure by applying a quasi-neutral hybrid model and by using a survey of potential surface analogues, which are based on laboratory-studied Lunar surface regolith and hypothetical analogue materials as derived from experimental studies. The formation and refilling of Mercury's exosphere during CME exposure are compared with ordinary solar wind cases by considering various parameters, such as regolith porosity, binding energies and elemental fractionation of the surface minerals. To study the influence of these parameters, we use the derived geochemical surface composition and the exposed surface area as input for a 3-D exospheric model to determine whether measurements of exospheric particles by the particle detectors are feasible along the MPO spacecraft orbit. Finally, we find a denser exosphere distributed over a larger planetary area during collisions of CMEs or magnetic clouds with Mercury.
EPIC-Simulated and MODIS-Derived Leaf Area Index (LAI) ...
Leaf Area Index (LAI) is an important parameter in assessing vegetation structure for characterizing forest canopies over large areas at broad spatial scales using satellite remote sensing data. However, satellite-derived LAI products can be limited by obstructed atmospheric conditions, yielding sub-optimal values or complete non-returns. The United States Environmental Protection Agency's Exposure Methods and Measurements and Computational Exposure Divisions are investigating the viability of supplemental modelled LAI inputs into satellite-derived data streams to support various regional and local scale air quality models for retrospective and future climate assessments. In this study, one year (2002) of plot-level stand characteristics at four study sites located in Virginia and North Carolina is used to calibrate species-specific plant parameters in a semi-empirical biogeochemical model. The Environmental Policy Integrated Climate (EPIC) model was designed primarily for managed agricultural field crop ecosystems, but also includes managed woody species that span both xeric and mesic sites (e.g., mesquite, pine, oak, etc.). LAI was simulated using EPIC on 4 km2 and 12 km2 grids coincident with the regional Community Multiscale Air Quality Model (CMAQ) grid. LAI comparisons were made between model-simulated and MODIS-derived LAI. Field/satellite-upscaled LAI was also compared to the corresponding MODIS LAI value. Preliminary results show field/satel
EnviroNET: On-line information for LDEF
NASA Technical Reports Server (NTRS)
Lauriente, Michael
1993-01-01
EnviroNET is an on-line, free-form database intended to provide a centralized repository for a wide range of technical information on environmentally induced interactions of use to Space Shuttle customers and spacecraft designers. It provides a user-friendly, menu-driven format on networks that are connected globally and is available twenty-four hours a day - every day. The information, updated regularly, includes expository text, tabular numerical data, charts and graphs, and models. The system pools space data collected over the years by NASA, USAF, other government research facilities, industry, universities, and the European Space Agency. The models accept parameter input from the user, then calculate and display the derived values corresponding to that input. In addition to the archive, interactive graphics programs are also available on space debris, the neutral atmosphere, radiation, magnetic fields, and the ionosphere. A user-friendly, informative interface is standard for all the models and includes a pop-up help window with information on inputs, outputs, and caveats. The system will eventually simplify mission analysis with analytical tools and deliver solutions for computationally intense graphical applications to do 'What if...' scenarios. A proposed plan for developing a repository of information from the Long Duration Exposure Facility (LDEF) for a user group is presented.
JWST Associations overview: automated generation of combined products
NASA Astrophysics Data System (ADS)
Alexov, Anastasia; Swade, Daryl; Bushouse, Howard; Diaz, Rosa; Eisenhamer, Jonathan; Hack, Warren; Kyprianou, Mark; Levay, Karen; Rahmani, Christopher; Swam, Mike; Valenti, Jeff
2018-01-01
We present the design of the James Webb Space Telescope (JWST) Data Management System (DMS) automated processing of Associations. An Association captures the relationship between exposures and higher level data products, such as combined mosaics created from dithered and tiled observations. The astronomer's intent is captured within the Proposal Planning System (PPS) and provided to DMS as candidate associations. These candidates are converted into Association Pools and Association Generator Tables that serve as input to automated processing which creates the combined data products. Association Pools are generated to capture a list of exposures that could potentially form associations and to provide relevant information about those exposures. The Association Generator uses grouping definitions to create one or more Association Tables from a single input Association Pool. Each Association Table defines a set of exposures to be combined and the ruleset of the combination to be performed; the calibration software creates Associated data products based on these input tables. The initial design produces automated Associations within a proposal. Additionally, the overall JWST design is conducive to eventually producing Associations for observations from multiple proposals, similar to the Hubble Legacy Archive (HLA).
Going beyond Input Quantity: "Wh"-Questions Matter for Toddlers' Language and Cognitive Development
ERIC Educational Resources Information Center
Rowe, Meredith L.; Leech, Kathryn A.; Cabrera, Natasha
2017-01-01
There are clear associations between the overall quantity of input children are exposed to and their vocabulary acquisition. However, by uncovering specific features of the input that matter, we can better understand the mechanisms involved in vocabulary learning. We examine whether exposure to "wh"-questions, a challenging quality of…
Does Input Enhancement Work for Learning Politeness Strategies?
ERIC Educational Resources Information Center
Khatib, Mohammad; Safari, Mahmood
2013-01-01
The present study investigated the effect of input enhancement on the acquisition of English politeness strategies by intermediate EFL learners. Two groups of freshman English majors were randomly assigned to the experimental (enhanced input) group and the control (mere exposure) group. Initially, a TOEFL test and a discourse completion test (DCT)…
Ex Priori: Exposure-based Prioritization across Chemical Space
EPA's Exposure Prioritization (Ex Priori) is a simplified, quantitative visual dashboard that makes use of data from various inputs to provide a rank-ordered internalized dose metric. This complements other high throughput screening by viewing exposures within all chemical space si...
HUMAN-ECOSYSTEM INTERACTIONS: THE CASE OF MERCURY
Human and ecosystem exposure studies evaluate exposure of sensitive and vulnerable populations. We will discuss how ecosystem exposure modeling studies were completed for input into the US Clean Air Mercury Rule (CAMR) to evaluate the response of aquatic ecosystems to changes in mercu...
Sensitivity analysis and nonlinearity assessment of steam cracking furnace process
NASA Astrophysics Data System (ADS)
Rosli, M. N.; Sudibyo, Aziz, N.
2017-11-01
In this paper, sensitivity analysis and nonlinearity assessment of the steam cracking furnace process are presented. For the sensitivity analysis, the fractional factorial design method is employed to analyze the effect of the input parameters, which consist of four manipulated variables and two disturbance variables, on the output variables, and to identify the interactions between parameters. The result of the factorial design is used as a screen to reduce the number of parameters and, consequently, the complexity of the model. It shows that four of the six input parameters are significant. After the screening is completed, a step test is performed on the significant input parameters to assess the degree of nonlinearity of the system. The result shows that the system is highly nonlinear with respect to changes in the air-to-fuel ratio (AFR) and the feed composition.
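To make the screening idea concrete, the sketch below computes two-level factorial main effects for a toy stand-in response; a full 2^4 design is used instead of a fractional one for brevity, and the furnace function and factor names are invented for illustration.

```python
import itertools
import numpy as np

# Hypothetical furnace response as a stand-in for the real simulator
def furnace(afr, feed, steam, temp):
    return 3.0 * afr + 2.5 * feed + 0.2 * steam + 0.1 * temp + 0.8 * afr * feed

factors = ["AFR", "feed", "steam", "temp"]
design = np.array(list(itertools.product([-1, 1], repeat=4)))  # 2^4 full factorial

y = np.array([furnace(*row) for row in design])

# Main effect of each factor: mean(y at +1 level) - mean(y at -1 level)
for j, name in enumerate(factors):
    effect = y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
    print(f"{name}: main effect = {effect:+.2f}")
```

Factors whose main effects are near zero (here, steam and temp) would be dropped before the step tests.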
2017-05-01
Evaluation of Uncertainty in Constituent Input Parameters (ERDC/EL TR-17-7, May 2017), Environmental Security Technology Certification Program (ESTCP). The ...Environmental Evaluation and Characterization System (TREECS™) was applied to a groundwater site and a surface water site to evaluate the sensitivity
User's Guide for RESRAD-OFFSITE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gnanapragasam, E.; Yu, C.
2015-04-01
The RESRAD-OFFSITE code can be used to model the radiological dose or risk to an offsite receptor. This User's Guide for RESRAD-OFFSITE Version 3.1 is an update of the User's Guide for RESRAD-OFFSITE Version 2 contained in Appendix A of the User's Manual for RESRAD-OFFSITE Version 2 (ANL/EVS/TM/07-1, DOE/HS-0005, NUREG/CR-6937). This user's guide presents the basic information necessary to use Version 3.1 of the code. It also points to the help file and other documents that provide more detailed information about the inputs, the input forms and features/tools in the code; two of the features (overriding the source term and computing area factors) are discussed in the appendices to this guide. Section 2 describes how to download and install the code and then verify the installation of the code. Section 3 shows ways to navigate through the input screens to simulate various exposure scenarios and to view the results in graphics and text reports. Section 4 has screen shots of each input form in the code and provides basic information about each parameter to increase the user's understanding of the code. Section 5 outlines the contents of all the text reports and the graphical output. It also describes the commands in the two output viewers. Section 6 deals with the probabilistic and sensitivity analysis tools available in the code. Section 7 details the various ways of obtaining help in the code.
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1996-01-01
Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for closed loop parameter identification purposes, specifically for longitudinal and lateral linear model parameter estimation at 5, 20, 30, 45, and 60 degrees angle of attack, using the NASA 1A control law. Each maneuver is to be realized by the pilot applying square wave inputs to specific pilot station controls. Maneuver descriptions and complete specifications of the time/amplitude points defining each input are included, along with plots of the input time histories.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenthal, William Steven; Tartakovsky, Alex; Huang, Zhenyu
2017-10-31
State and parameter estimation of power transmission networks is important for monitoring power grid operating conditions and analyzing transient stability. Wind power generation depends on fluctuating input power levels, which are correlated in time and contribute to uncertainty in turbine dynamical models. The ensemble Kalman filter (EnKF), a standard state estimation technique, uses a deterministic forecast and does not explicitly model time-correlated noise in parameters such as mechanical input power. However, this uncertainty affects the probability of fault-induced transient instability and increases prediction bias. Here a novel approach is to model input power noise with time-correlated stochastic fluctuations and integrate them with the network dynamics during the forecast. While the EnKF has been used to calibrate constant parameters in turbine dynamical models, the calibration of a statistical model for a time-correlated parameter has not been investigated. In this study, twin experiments on a standard transmission network test case are used to validate our time-correlated noise model framework for state estimation of unsteady operating conditions and transient stability analysis, and a methodology is proposed for the inference of the mechanical input power time-correlation length parameter using time-series data from PMUs monitoring power dynamics at generator buses.
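One standard way to realize time-correlated stochastic fluctuations in an ensemble forecast is an Ornstein-Uhlenbeck (first-order autoregressive) model for the uncertain parameter. The sketch below is a minimal illustration of that idea, with invented values for the mean power, correlation time, and noise amplitude; it is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(7)

def ou_forecast(p, p_mean, tau, sigma, dt):
    """One forecast step of an input parameter modeled as an
    Ornstein-Uhlenbeck process with correlation time tau (exact
    discretization of the stationary process)."""
    phi = np.exp(-dt / tau)
    noise = rng.standard_normal(p.shape)
    return p_mean + phi * (p - p_mean) + sigma * np.sqrt(1.0 - phi**2) * noise

# Ensemble of mechanical input-power samples propagated between filter updates
ensemble = np.full(100, 0.9)  # per-unit power, hypothetical starting value
for _ in range(50):
    ensemble = ou_forecast(ensemble, p_mean=0.9, tau=10.0, sigma=0.05, dt=0.1)

print(f"ensemble mean {ensemble.mean():.3f}, spread {ensemble.std():.3f}")
```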
Pujol, Laure; Johnson, Nicholas Brian; Magras, Catherine; Albert, Isabelle; Membré, Jeanne-Marie
2015-10-15
In a previous study, a quantitative microbial exposure assessment (QMEA) model applied to an aseptic-UHT food process was developed [Pujol, L., Albert, I., Magras, C., Johnson, N. B., Membré, J. M. Probabilistic exposure assessment model to estimate aseptic UHT product failure rate. 2015. International Journal of Food Microbiology. 192, 124-141]. It quantified the Sterility Failure Rate (SFR) associated with Bacillus cereus and Geobacillus stearothermophilus per process module (nine modules in total, from raw material reception to end-product storage). Previously, the probabilistic model inputs were set by experts (using knowledge and in-house data), but only the variability dimension was taken into account. The model was then improved using expert elicitation in two ways. First, the model was refined by adding the uncertainty dimension to the probabilistic inputs, enabling a second-order Monte Carlo analysis. The following eight inputs, and their impact on SFR, are presented in detail in this study: the D-value of each bacterium of interest (B. cereus and G. stearothermophilus) associated with the inactivation model for the UHT treatment step (two inputs); the log reduction (decimal reduction) number associated with the inactivation model for the packaging sterilization step, for each bacterium and each part of the packaging (product container and sealing component; four inputs); and the bacterial spore air load of the aseptic tank and filler cabinet rooms (two inputs). Second, the model was extended by leveraging expert knowledge: the proportion of product bacteria that settle on the surface of pipes (between the UHT treatment and the aseptic tank, and between the aseptic tank and the filler cabinet), potentially leading to biofilm formation, was better characterized for each bacterium. It was modeled as a function of the hygienic design level of the aseptic-UHT line; the experts provided the model structure and most of the model parameter values. The mean SFR was estimated at 10×10(-8) (95% Confidence Interval=[0×10(-8); 350×10(-8)]) and 570×10(-8) (95% CI=[380×10(-8); 820×10(-8)]) for B. cereus and G. stearothermophilus, respectively. These estimates were more accurate (since the confidence interval was provided) than those given by the variability-only model (15×10(-8) and 580×10(-8) for B. cereus and G. stearothermophilus, respectively). The updated model outputs were also compared with those obtained when inputs were described by a generic distribution, without specific information related to the case study. Results showed that using a generic distribution can lead to unrealistic estimates (e.g., 3,181,000 product units contaminated by G. stearothermophilus among 10(8) product units produced) and emphasized the added value of eliciting information from experts with relevant specialist field knowledge. Copyright © 2015 Elsevier B.V. All rights reserved.
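A second-order Monte Carlo analysis of the kind described here separates uncertainty (outer loop) from variability (inner loop). The sketch below shows the nested structure with a single uncertain parameter and invented numbers; the distributions, hold time, and failure criterion are placeholders, not the elicited values.

```python
import numpy as np

rng = np.random.default_rng(3)
n_unc, n_var = 200, 5_000  # outer (uncertainty) and inner (variability) loops

sfr_samples = []
for _ in range(n_unc):
    # Outer loop: sample one uncertain parameter, e.g. a D-value in seconds,
    # from an assumed uncertainty distribution (illustrative values)
    d_value = rng.normal(4.0, 0.5)
    # Inner loop: unit-to-unit variability in spore load per product unit
    load = rng.lognormal(np.log(100.0), 1.0, n_var)
    log_red = 24.0 / d_value                 # decimal reductions for an assumed 24-s hold
    survivors = load * 10.0 ** (-log_red)    # expected surviving spores per unit
    p_fail = 1.0 - np.exp(-survivors)        # Poisson chance of >= 1 survivor
    sfr_samples.append(p_fail.mean())        # SFR for this uncertainty draw

sfr = np.array(sfr_samples)
print(f"SFR mean {sfr.mean():.2e}, "
      f"95% CI [{np.percentile(sfr, 2.5):.2e}, {np.percentile(sfr, 97.5):.2e}]")
```

The outer loop yields the confidence interval on SFR; the inner loop alone would reproduce the variability-only estimate.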
INDES User's guide multistep input design with nonlinear rotorcraft modeling
NASA Technical Reports Server (NTRS)
1979-01-01
The INDES computer program, a multistep input design program used as part of a data processing technique for rotorcraft systems identification, is described. Flight test inputs based on INDES improve the accuracy of parameter estimates. The input design algorithm, program input, and program output are presented.
Tie, Xiaoxiu; Li, Shuo; Feng, Yilin; Lai, Biqin; Liu, Sheng; Jiang, Bin
2018-06-01
In the visual cortex, sensory deprivation causes global augmentation of the amplitude of AMPA receptor-mediated miniature EPSCs in layer 2/3 pyramidal cells and enhancement of NMDA receptor-dependent long-term potentiation (LTP) in cells activated in layer 4, effects that are both rapidly reversed by light exposure. Layer 2/3 pyramidal cells receive both feedforward input from layer 4 and intra-cortical lateral input from the same layer; LTP is mainly induced by the former input. Whether feedforward excitatory synaptic strength is affected by visual deprivation and light exposure, how this synaptic strength correlates with the magnitude of LTP in this pathway, and the underlying mechanism have not been explored. Here, we showed that in juvenile mice, both dark rearing and dark exposure reduced the feedforward excitatory synaptic strength, and the effects can be completely reversed by 10-12 h and 6-8 h of light exposure, respectively. However, inhibition of NMDA receptors by CPP, or of mGluR5 by MPEP, prevented the effect of light exposure in mice reared in the dark from birth, while only inhibition of NMDAR prevented the effect of light exposure in dark-exposed mice. These results suggest that activation of both NMDAR and mGluR5 is essential for the light-exposure reversal of feedforward excitatory synaptic strength in mice dark-reared from birth, while in dark-exposed mice only activation of NMDAR is required. Copyright © 2018. Published by Elsevier Ltd.
Incorporating uncertainty in RADTRAN 6.0 input files.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dennis, Matthew L.; Weiner, Ruth F.; Heames, Terence John
Uncertainty may be introduced into RADTRAN analyses by distributing input parameters. The MELCOR Uncertainty Engine (Gauntt and Erickson, 2004) has been adapted for use in RADTRAN to determine the parameter shape and minimum and maximum of the distribution, to sample on the distribution, and to create an appropriate RADTRAN batch file. Coupling input parameters is not possible in this initial application. It is recommended that the analyst be very familiar with RADTRAN and able to edit or create a RADTRAN input file using a text editor before implementing the RADTRAN Uncertainty Analysis Module. Installation of the MELCOR Uncertainty Engine is required for incorporation of uncertainty into RADTRAN. Gauntt and Erickson (2004) provides installation instructions as well as a description and user guide for the uncertainty engine.
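The workflow described here (sample a distribution, then emit one batch input file per draw) can be sketched in a few lines. The parameter name, file naming, and keyword format below are purely illustrative and are not actual RADTRAN syntax.

```python
import numpy as np

rng = np.random.default_rng(11)

# Draw ten samples of one uncertain input (a triangular distribution is
# assumed here) and write a batch input file per draw; the keyword format
# is invented for illustration, not real RADTRAN syntax.
for i, rf in enumerate(rng.triangular(1e-4, 1e-3, 1e-2, size=10)):
    with open(f"run_{i:03d}.inp", "w") as f:
        f.write(f"RELEASE_FRACTION {rf:.3e}\n")
```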
Bürgi, Alfred; Scanferla, Damiano; Lehmann, Hugo
2014-01-01
Models for exposure assessment of high frequency electromagnetic fields from mobile phone base stations need the technical data of the base stations as input. One of these parameters, the Equivalent Radiated Power (ERP), is a time-varying quantity that depends on communication traffic. In order to determine temporal averages of the exposure, corresponding averages of the ERP have to be available. These can be determined as duty factors, the ratios of the time-averaged power to the maximum output power according to the transmitter setting. We determine duty factors for UMTS from the data of 37 base stations in the Swisscom network. The UMTS base station sample contains sites from different regions of Switzerland and different site types (rural/suburban/urban/hotspot). Averaged over all regions and site types, a UMTS duty factor F ≈ 0.32 ± 0.08 is obtained for the 24 h average, i.e., the average output power corresponds to about a third of the maximum power. We also give duty factors for GSM based on simple approximations and a lower limit for LTE estimated from the base load on the signalling channels. PMID:25105551
Quintaneiro, C; Ranville, J; Nogueira, A J A
2015-08-01
The input of metals into freshwater ecosystems from natural and anthropogenic sources impairs water quality and can lead to biological alterations in organisms and plants, compromising the structure and function of these ecosystems. Biochemical biomarkers may provide early detection of exposure to contaminants and indicate potential effects at higher levels of biological organisation. The effects of 48-h exposures to copper and zinc on Atyaephyra desmarestii and Echinogammarus meridionalis were evaluated with a battery of oxidative stress biomarkers and the determination of ingestion rates. The results showed different biomarker responses between the species and between the metals. Copper inhibited the enzymatic defence system of both species without signs of oxidative damage. Zinc induced the defence system in E. meridionalis with no evidence of oxidative damage; in A. desmarestii, however, zinc exposure caused oxidative damage. In addition, only zinc significantly reduced the ingestion rate, and only in E. meridionalis. The value of the integrated biomarker response increased with the concentration of both metals, indicating that it might be a valuable tool for interpreting the data as a whole, as different parameters carry different weight according to the type of exposure. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Hameed, M.; Demirel, M. C.; Moradkhani, H.
2015-12-01
The Global Sensitivity Analysis (GSA) approach helps identify the influence of model parameters and inputs and thus provides essential information about model performance. In this study, the effects of the Sacramento Soil Moisture Accounting (SAC-SMA) model parameters, forcing data, and initial conditions are analysed using two GSA methods: Sobol' and the Fourier Amplitude Sensitivity Test (FAST). The simulations are carried out over five sub-basins within the Columbia River Basin (CRB) for three different periods: one year, four years, and seven years. Four factors are considered and evaluated with the two sensitivity analysis methods: the simulation length, the parameter ranges, the model initial conditions, and the reliability of the global sensitivity analysis methods. The reliability of the sensitivity analysis results is compared based on (1) the agreement between the two methods (Sobol' and FAST) in highlighting the same parameters or inputs as the most influential and (2) how coherently the methods rank these sensitive parameters under the same conditions (sub-basins and simulation length). The results show coherence between the Sobol' and FAST sensitivity analysis methods. Additionally, the FAST method is found to be sufficient to evaluate the main effects of the model parameters and inputs. Another conclusion of this study is that the smaller the parameter or initial-condition ranges, the more consistent and coherent the results of the two sensitivity analysis methods.
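Variance-based GSA of the kind compared here is commonly run with the SALib Python package. The sketch below computes first-order Sobol' indices for a toy stand-in function; the three parameter names and bounds are illustrative SAC-SMA-style placeholders, not the study's configuration.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Toy stand-in for a hydrologic model run; SAC-SMA itself is not shown
def model(x):
    uztwm, lztwm, pctim = x
    return 0.7 * uztwm + np.sin(lztwm / 50.0) + 5.0 * pctim

problem = {
    "num_vars": 3,
    "names": ["UZTWM", "LZTWM", "PCTIM"],   # a subset of SAC-SMA parameters
    "bounds": [[10, 150], [10, 500], [0.0, 0.1]],
}

X = saltelli.sample(problem, 1024)            # Sobol' sampling design
Y = np.apply_along_axis(model, 1, X)          # run the model on each sample
Si = sobol.analyze(problem, Y)                # first- and total-order indices
print(dict(zip(problem["names"], Si["S1"])))  # first-order Sobol' indices
```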
Qiao, Yangzi; Zong, Yujin; Yin, Hui; Chang, Nan; Li, Zhaopeng; Wan, Mingxi
2014-09-01
Phase-shift nano-emulsions (PSNEs), with small initial diameters at the nanoscale, have the potential to leak out of blood vessels and accumulate at a target point in tissue. At the desired location, PSNEs can undergo the acoustic droplet vaporization (ADV) process, change into gas bubbles, and enhance the efficiency of focused ultrasound. The threshold of droplet vaporization and the influence of acoustic parameters have been persistent research topics, with the aims of spatially controlling potential bioeffects and optimizing experimental conditions. However, when the pressure is much higher than the PSNEs' vaporization threshold, there have been few reports on their cavitation and thermal effects. In this study, PSNE-induced cavitation and ablation effects during pulsed high-intensity focused ultrasound (HIFU) exposure were investigated, including their spatial and temporal characteristics and the influence of acoustic parameters. Two kinds of tissue-mimicking phantoms with uniformly distributed PSNEs were prepared because of their optical transparency. The sonoluminescence (SL) method was employed to visualize the cavitation activities, and the ablation process was observed directly, as the heat deposition produces a visible white lesion. Precisely controlled HIFU cavitation and ablation can be realized at relatively low input power. When the input power was high, however, PSNEs accelerated cavitation and ablation in the pre-focal region. The cavitation happened layer by layer, advancing toward the transducer, while the lesion appeared to be separated into two parts: one in the pre-focal region stemmed from a single point and grew quickly, and the other, in the focal region, grew much more slowly. The influence of the duty cycle was also examined. A longer pulse-off time caused heat transfer to the surrounding media and generated smaller lesions; on the other hand, it gave the outer-layer bubbles enough time to dissolve, so the inner bubbles could undergo violent collapse and emit bright light. Copyright © 2014 Elsevier B.V. All rights reserved.
Sparse Polynomial Chaos Surrogate for ACME Land Model via Iterative Bayesian Compressive Sensing
NASA Astrophysics Data System (ADS)
Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Debusschere, B.; Najm, H. N.; Thornton, P. E.
2015-12-01
For computationally expensive climate models, Monte Carlo approaches to exploring the input parameter space are often prohibitive due to slow convergence with respect to ensemble size. To alleviate this, we build inexpensive surrogates using uncertainty quantification (UQ) methods employing Polynomial Chaos (PC) expansions that approximate the input-output relationships using as few model evaluations as possible. However, when many uncertain input parameters are present, such UQ studies suffer from the curse of dimensionality. In particular, for 50-100 input parameters, non-adaptive PC representations have infeasible numbers of basis terms. To this end, we develop and employ Weighted Iterative Bayesian Compressive Sensing to learn the most important input parameter relationships for efficient, sparse PC surrogate construction, with the posterior uncertainty due to insufficient data quantified. Besides drastic dimensionality reduction, the uncertain surrogate can efficiently replace the model in computationally intensive studies such as forward uncertainty propagation and variance-based sensitivity analysis, as well as design optimization and parameter estimation using observational data. We applied the surrogate construction and variance-based uncertainty decomposition to the Accelerated Climate Model for Energy (ACME) Land Model for several output QoIs at nearly 100 FLUXNET sites covering multiple plant functional types and climates, varying 65 input parameters over broad ranges of possible values. This work is supported by the U.S. Department of Energy, Office of Science, Biological and Environmental Research, Accelerated Climate Modeling for Energy (ACME) project. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
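The core idea, a sparsity-promoting fit of a polynomial basis to a small ensemble of model runs, can be imitated with off-the-shelf tools. The sketch below substitutes a plain monomial basis and cross-validated Lasso for the Bayesian compressive-sensing solver, so it illustrates the sparse-surrogate concept rather than the paper's method; all sizes and coefficients are invented.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(5)

# Small training ensemble: 80 runs of a hypothetical model with 10 inputs,
# where only a few terms actually matter
X = rng.uniform(-1, 1, size=(80, 10))
y = 2.0 * X[:, 0] + X[:, 1] * X[:, 2] + 0.1 * rng.standard_normal(80)

# Total-degree-2 polynomial basis as a stand-in for a PC basis;
# Lasso plays the role of the sparsity-promoting solver
basis = PolynomialFeatures(degree=2, include_bias=False)
Phi = basis.fit_transform(X)
surrogate = LassoCV(cv=5).fit(Phi, y)

print(f"{(surrogate.coef_ != 0).sum()} of {Phi.shape[1]} basis terms retained")
```

The retained terms identify the dominant input relationships, and the cheap surrogate can then stand in for the model in sensitivity or calibration studies.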
Karmakar, Chandan; Udhayakumar, Radhagayathri K; Li, Peng; Venkatesh, Svetha; Palaniswami, Marimuthu
2017-01-01
Distribution entropy (DistEn) is a recently developed measure of complexity that is used to analyse heart rate variability (HRV) data. Its calculation requires two input parameters: the embedding dimension m, and the number of bins M, which replaces the tolerance parameter r used by the existing approximation entropy (ApEn) and sample entropy (SampEn) measures. The performance of DistEn can also be affected by the data length N. In our previous studies, we analyzed the stability and performance of DistEn with respect to one parameter (m or M) or a combination of two parameters (N and M). However, the impact of varying all three input parameters on DistEn has not yet been studied. Since DistEn is predominantly aimed at analysing short-length heart rate variability (HRV) signals, it is important to comprehensively study the stability, consistency and performance of the measure using multiple case studies. In this study, we examined the impact of changing the input parameters on DistEn for synthetic and physiological signals. We also compared the variations of DistEn, and its performance in distinguishing physiological (Elderly from Young) and pathological (Healthy from Arrhythmia) conditions, with ApEn and SampEn. The results showed that DistEn values are minimally affected by variations of the input parameters compared to ApEn and SampEn. DistEn also showed the most consistent and best performance in differentiating physiological and pathological conditions across input parameter settings among the reported complexity measures. In conclusion, DistEn is found to be the best measure for analysing short-length HRV time series.
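For reference, a compact implementation of the published DistEn definition (embed with dimension m, take all pairwise Chebyshev distances, histogram them into M bins, and normalize the Shannon entropy) might look as follows; this sketch is our own reading of the definition, not the authors' code.

```python
import numpy as np

def dist_en(x, m=2, M=512):
    """Distribution entropy of a 1-D series: embed with dimension m,
    histogram all pairwise Chebyshev distances into M bins, and return
    the normalized Shannon entropy of that histogram."""
    x = np.asarray(x, dtype=float)
    emb = np.lib.stride_tricks.sliding_window_view(x, m)   # (N-m+1, m) vectors
    n = emb.shape[0]
    # Chebyshev distance between every pair of embedding vectors
    d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=-1)
    d = d[np.triu_indices(n, k=1)]                         # unique pairs only
    p, _ = np.histogram(d, bins=M)
    p = p / p.sum()
    p = p[p > 0]                                           # drop empty bins
    return -(p * np.log2(p)).sum() / np.log2(M)            # normalized to [0, 1]

rng = np.random.default_rng(0)
print(dist_en(rng.standard_normal(300)))   # a short "HRV-like" test series
```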
Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER meas...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldstein, Peter
2014-01-24
This report describes the sensitivity of predicted nuclear fallout to a variety of model input parameters, including yield, height of burst, particle and activity size distribution parameters, wind speed, wind direction, topography, and precipitation. We investigate sensitivity over a wide but plausible range of model input parameters. In addition, we investigate a specific example with a relatively narrow range to illustrate the potential for evaluating uncertainties in predictions when there are more precise constraints on model parameters.
Pressley, Joanna; Troyer, Todd W
2011-05-01
The leaky integrate-and-fire (LIF) is the simplest neuron model that captures the essential properties of neuronal signaling. Yet common intuitions are inadequate to explain basic properties of LIF responses to sinusoidal modulations of the input. Here we examine responses to low and moderate frequency modulations of both the mean and variance of the input current and quantify how these responses depend on baseline parameters. Across parameters, responses to modulations in the mean current are low pass, approaching zero in the limit of high frequencies. For very low baseline firing rates, the response cutoff frequency matches that expected from membrane integration. However, the cutoff shows a rapid, supralinear increase with firing rate, with a steeper increase in the case of lower noise. For modulations of the input variance, the gain at high frequency remains finite. Here, we show that the low-frequency responses depend strongly on baseline parameters and derive an analytic condition specifying the parameters at which responses switch from being dominated by low versus high frequencies. Additionally, we show that the resonant responses for variance modulations have properties not expected for common oscillatory resonances: they peak at frequencies higher than the baseline firing rate and persist when oscillatory spiking is disrupted by high noise. Finally, the responses to mean and variance modulations are shown to have a complementary dependence on baseline parameters at higher frequencies, resulting in responses to modulations of Poisson input rates that are independent of baseline input statistics.
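A direct way to probe such response properties is to simulate an LIF neuron under a sinusoidally modulated mean input and read off the firing-rate modulation. The sketch below is a minimal Euler-scheme simulation with illustrative parameters (membrane time constant, threshold, noise level), not those analyzed in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Euler simulation of a leaky integrate-and-fire neuron driven by a
# sinusoidally modulated mean current plus white noise
dt, T = 1e-4, 20.0                  # time step and duration, s
tau, v_th, v_reset = 0.02, 1.0, 0.0 # membrane time constant, threshold, reset
mu0, mu1, f = 1.2, 0.1, 5.0         # baseline drive, modulation depth, Hz
sigma = 0.5                         # noise amplitude

v, spikes = 0.0, []
for ti in np.arange(0.0, T, dt):
    mu = mu0 + mu1 * np.sin(2 * np.pi * f * ti)          # modulated mean input
    v += dt / tau * (mu - v) + sigma * np.sqrt(dt / tau) * rng.standard_normal()
    if v >= v_th:                                        # threshold crossing
        spikes.append(ti)
        v = v_reset

print(f"mean firing rate: {len(spikes) / T:.1f} Hz")
```

Binning the spikes by stimulus phase and fitting a sinusoid to the rate would give the gain and phase of the response at frequency f.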
Generalized compliant motion primitive
NASA Technical Reports Server (NTRS)
Backes, Paul G. (Inventor)
1994-01-01
This invention relates to a general primitive for controlling a telerobot with a set of input parameters. The primitive includes a trajectory generator, a teleoperation sensor, a joint limit generator, a force setpoint generator, and a dither function generator, which together produce telerobot motion inputs in a common coordinate frame for simultaneous combination in sensor summers. Virtual return-spring motion input is provided by a restoration spring subsystem. The novel features of this invention include the use of a single general motion primitive at a remote site to permit shared and supervisory control of the robot manipulator to perform tasks via a remotely transferred input parameter set.
Adaptive control of a quadrotor aerial vehicle with input constraints and uncertain parameters
NASA Astrophysics Data System (ADS)
Tran, Trong-Toan; Ge, Shuzhi Sam; He, Wei
2018-05-01
In this paper, we address the problem of adaptive bounded control for the trajectory tracking of a Quadrotor Aerial Vehicle (QAV) while input saturations and uncertain parameters with known bounds are simultaneously taken into account. First, to deal with the underactuated property of the QAV model, we decouple and construct the QAV model as a cascaded structure consisting of two fully actuated subsystems. Second, to handle the input constraints and uncertain parameters, we use a combination of a smooth saturation function and a smooth projection operator in the control design. Third, to ensure the stability of the overall QAV system, we develop the technique for the cascaded system in the presence of both input constraints and uncertain parameters. Finally, the region of stability of the closed-loop system is constructed explicitly, and our design ensures the asymptotic convergence of the tracking errors to the origin. Simulation results are provided to illustrate the effectiveness of the proposed method.
Translating landfill methane generation parameters among first-order decay models.
Krause, Max J; Chickering, Giles W; Townsend, Timothy G
2016-11-01
Landfill gas (LFG) generation is predicted by a first-order decay (FOD) equation that incorporates two parameters: a methane generation potential (L0) and a methane generation rate (k). Because non-hazardous waste landfills may accept many types of waste streams, multiphase models have been developed in an attempt to more accurately predict methane generation from heterogeneous waste streams. The ability of a single-phase FOD model to predict methane generation using weighted-average methane generation parameters and tonnages translated from multiphase models was assessed in two exercises. In the first exercise, waste composition from four Danish landfills represented by low-biodegradable waste streams was modeled in the Afvalzorg Multiphase Model, and methane generation was compared to the single-phase Intergovernmental Panel on Climate Change (IPCC) Waste Model and LandGEM. In the second exercise, waste composition represented by IPCC waste components was modeled in the multiphase IPCC model and compared to the single-phase LandGEM and Australia's Solid Waste Calculator (SWC). In both cases, weight-averaging of methane generation parameters from waste composition data in single-phase models was effective in predicting cumulative methane generation to within -7% to +6% of the multiphase models. The results underscore the understanding that multiphase models will not necessarily improve LFG generation prediction, because the uncertainty of the method rests largely within the input parameters. A unique method of calculating the methane generation rate constant by mass of anaerobically degradable carbon (kc) was presented and compared to existing methods, providing a better fit in 3 of 8 scenarios. Generally, single-phase models with weighted-average inputs can accurately predict methane generation from multiple waste streams with varied characteristics; weighted averages should therefore be used instead of regional default values when comparing models. Translating multiphase first-order decay model input parameters by weighted average shows that single-phase models can predict cumulative methane generation within the level of uncertainty of many of the input parameters as defined by the Intergovernmental Panel on Climate Change (IPCC), which indicates that decreasing the uncertainty of the input parameters will make the model more accurate, rather than adding multiple phases or input parameters.
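The translation can be sketched directly from the FOD equation Q(t) = k · L0 · M · e^(−k·t). Below, two hypothetical waste streams are collapsed into tonnage-weighted single-phase parameters and the single-phase and multiphase predictions are compared; the stream values are invented, and weighting k by each stream's methane potential is one plausible scheme, not necessarily the one the paper evaluates.

```python
import numpy as np

# Waste streams: (annual tonnage, L0 in m^3 CH4/tonne, k in 1/yr), illustrative
streams = [
    (40_000, 100.0, 0.06),  # e.g. mixed MSW
    (10_000, 20.0, 0.02),   # e.g. low-biodegradable residues
]

# Tonnage-weighted L0; k weighted by each stream's methane potential (m * L0)
M_total = sum(m for m, _, _ in streams)
L0_avg = sum(m * L0 for m, L0, _ in streams) / M_total
k_avg = sum(m * L0 * k for m, L0, k in streams) / sum(m * L0 for m, L0, _ in streams)

years = np.arange(0, 80)

def fod(mass, L0, k, t):
    return k * L0 * mass * np.exp(-k * t)   # m^3 CH4/yr from one year's deposit

single = fod(M_total, L0_avg, k_avg, years)
multi = sum(fod(m, L0, k, years) for m, L0, k in streams)
print(f"cumulative single-phase / multiphase: {single.sum() / multi.sum():.3f}")
```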
Real-Time Ensemble Forecasting of Coronal Mass Ejections Using the Wsa-Enlil+Cone Model
NASA Astrophysics Data System (ADS)
Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; Odstrcil, D.; MacNeice, P. J.; Rastaetter, L.; LaSota, J. A.
2014-12-01
Ensemble forecasting of coronal mass ejections (CMEs) provides significant information in that it gives an estimate of the spread, or uncertainty, in CME arrival time predictions. Real-time ensemble modeling of CME propagation is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL+cone model available at the Community Coordinated Modeling Center (CCMC). To estimate the effect of uncertainties in determining CME input parameters on arrival time predictions, a distribution of n (routinely n=48) CME input parameter sets is generated using the CCMC Stereo CME Analysis Tool (StereoCAT), which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations, yielding an ensemble of solar wind parameters at various locations of interest, including a probability distribution of CME arrival times (for hits) and geomagnetic storm strength (for Earth-directed hits). We present the results of ensemble simulations for a total of 38 CME events in 2013-2014. Of the 28 ensemble runs containing hits, the observed CME arrival was within the range of ensemble arrival-time predictions for 14 runs (half). The average arrival time prediction was computed for each of the 28 ensembles predicting hits; using the actual arrival times, an average absolute error of 10.0 hours (RMSE=11.4 hours) was found for all 28 ensembles, which is comparable to current forecasting errors. Considerations for the accuracy of ensemble CME arrival time predictions include the initial distribution of CME input parameters, particularly its mean and spread. When the observed arrivals are not within the predicted range, this still allows the ruling out of prediction errors caused by the tested CME input parameters. Prediction errors can also arise from ambient model parameters, such as the accuracy of the solar wind background, and other limitations. Additionally, the ensemble modeling system was used to complete a parametric case study of the sensitivity of the CME arrival time prediction to free parameters of the ambient solar wind model and the CME. The parameter sensitivity study suggests future directions for the system, such as running ensembles using various magnetogram inputs to the WSA model.
BEDORE, LISA M.; PEÑA, ELIZABETH D.; GRIFFIN, ZENZI M.; HIXON, J. GREGORY
2018-01-01
This study evaluates the effects of Age of Exposure to English (AoEE) and Current Input/Output on language performance in a cross-sectional sample of Spanish–English bilingual children. First-graders (N = 586) and third-graders (N = 298) who spanned a wide range of bilingual language experience participated. Parents and teachers provided information about English and Spanish language use. Short tests of semantic and morphosyntactic development in Spanish and English were used to quantify children's knowledge of each language. There were significant interactions between AoEE and Current Input/Output for children at third grade in English and in both grades for Spanish. In English, the relationship between AoEE and language scores was linear for first- and third-graders. In Spanish, a nonlinear relationship was observed. We discuss how much of the variance was accounted for by AoEE and Current Input/Output. PMID:26916066
Measurand transient signal suppressor
NASA Technical Reports Server (NTRS)
Bozeman, Richard J., Jr. (Inventor)
1994-01-01
A transient signal suppressor is presented for use in a control system that is adapted to respond to a change in a physical parameter whenever the parameter crosses a predetermined threshold value in a selected direction (increasing or decreasing) and the change is sustained for a selected discrete time interval. The suppressor includes a sensor transducer for sensing the physical parameter and generating an electrical input signal whenever the sensed physical parameter crosses the threshold level in the selected direction. A manually operated switch adapts the suppressor to produce an output drive signal whenever the physical parameter crosses the threshold value in the selected direction of increasing or decreasing values. A time delay circuit, selectively adjustable to a preselected one of a plurality of available discrete suppression times, suppresses the transducer input signal and produces an output signal only if the input signal is sustained for a time greater than the selected suppression time. An electronic gate is coupled to receive the transducer input signal and the timer output signal and to produce an output drive signal for energizing a control relay whenever the transducer input is a non-transient signal sustained beyond the selected time interval.
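The suppression logic (pass a threshold crossing only if it persists beyond a selected interval) is essentially a debounce filter. A minimal software analogue, with invented sample values, might look like this:

```python
def suppress(samples, threshold, hold_samples, rising=True):
    """Emit a drive signal only when the input stays beyond the threshold
    for at least hold_samples consecutive samples; a software analogue of
    the suppressor's selectable delay, not the patent's circuit."""
    run = 0
    out = []
    for s in samples:
        crossed = s > threshold if rising else s < threshold
        run = run + 1 if crossed else 0   # count consecutive crossings
        out.append(run >= hold_samples)   # drive only when sustained
    return out

# A 2-sample spike is suppressed; a sustained crossing passes
print(suppress([0, 5, 5, 0, 5, 5, 5, 5], threshold=3, hold_samples=3))
```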
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jannik, G. Tim; Hartman, Larry; Stagich, Brooke
Operations at the Savannah River Site (SRS) result in releases of small amounts of radioactive materials to the atmosphere and to the Savannah River. For regulatory compliance purposes, potential offsite radiological doses are estimated annually using computer models that follow U.S. Nuclear Regulatory Commission (NRC) regulatory guides. Within the regulatory guides, default values are provided for many of the dose model parameters, but the use of applicant site-specific values is encouraged. Detailed surveys of land-use and water-use parameters were conducted in 1991 and 2010. They are being updated in this report. These parameters include local characteristics of meat, milk and vegetable production; river recreational activities; and meat, milk and vegetable consumption rates, as well as other human usage parameters required in the SRS dosimetry models. In addition, the preferred elemental bioaccumulation factors and transfer factors (to be used in human health exposure calculations at SRS) are documented. The intent of this report is to establish a standardized source for these parameters that is up to date with existing data, and that is maintained via review of future-issued national references (to evaluate the need for changes as new information is released). These reviews will continue to be added to this document by revision.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jannik, T.; Stagich, B.
Operations at the Savannah River Site (SRS) result in releases of relatively small amounts of radioactive materials to the atmosphere and to the Savannah River. For regulatory compliance purposes, potential offsite radiological doses are estimated annually using computer models that follow U.S. Nuclear Regulatory Commission (NRC) regulatory guides. Within the regulatory guides, default values are provided for many of the dose model parameters, but the use of site-specific values is encouraged. Detailed surveys of land-use and water-use parameters were conducted in 1991, 2008, 2010, and 2016 and are being concurred with or updated in this report. These parameters include local characteristics of meat, milk, and vegetable production; river recreational activities; and meat, milk, and vegetable consumption rates, as well as other human usage parameters required in the SRS dosimetry models. In addition, the preferred elemental bioaccumulation factors and transfer factors (to be used in human health exposure calculations at SRS) are documented. The intent of this report is to establish a standardized source for these parameters that is up to date with existing data, and that is maintained via review of future-issued national references (to evaluate the need for changes as new information is released). These reviews will continue to be added to this document by revision.
Obiri, Samuel; Yeboah, Philip O.; Osae, Shiloh; Adu-kumi, Sam; Cobbina, Samuel J.; Armah, Frederick A.; Ason, Benjamin; Antwi, Edward; Quansah, Reginald
2016-01-01
A human health risk assessment of artisanal miners exposed to toxic metals in water bodies and sediments in the Prestea-Huni Valley District of Ghana was carried out in this study, in line with US EPA risk assessment guidelines. A total of 70 water and 30 sediment samples were collected from surface water bodies in areas impacted by the operations of artisanal small-scale gold mines in the study area and analyzed for physico-chemical parameters such as pH, TDS, conductivity and turbidity, as well as metals and metalloids such as As, Cd, Hg and Pb, at CSIR-Water Research Institute using standard methods for the examination of wastewater as outlined by the American Water Works Association (AWWA). The mean concentrations in water samples ranged from 15 μg/L to 325 μg/L (As), 0.17 μg/L to 340 μg/L (Cd), 0.17 μg/L to 122 μg/L (Pb) and 132 μg/L to 866 μg/L (Hg). These measured concentrations of arsenic (As), mercury (Hg), cadmium (Cd) and lead (Pb) were used as input parameters to calculate the cancer and non-cancer health risks from exposure to these metals in surface water bodies and sediments based on an occupational exposure scenario using central tendency exposure (CTE) and reasonable maximum exposure (RME) parameters. The results of the non-cancer human health risk assessment for small-scale miners working around river Anikoko, expressed in terms of hazard quotients based on CTE parameters, are as follows: 0.04 (Cd), 1.45 (Pb), 4.60 (Hg) and 1.98 (As); the cancer health risk faced by ASGM miners in Dumase exposed to As in River Mansi via oral ingestion of water is 3.1 × 10−3. The hazard quotient results obtained from this study were in most cases above the HQ guidance value of 1.0; furthermore, the cancer health risk results were higher than the USEPA guidance range of 1 × 10−4 to 1 × 10−6. These findings call for case-control epidemiological studies to establish the relationship between exposure to the aforementioned toxic chemicals and the diseases associated with them, as identified in studies conducted in other countries, as a basis for developing policy interventions to address ASGM mine worker safety in Ghana. PMID:26797625
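For context, the hazard quotient and cancer risk metrics reported here follow the standard US EPA intake equations. Below is a sketch with illustrative inputs (not the study's actual exposure parameters), using the well-known IRIS toxicity values for arsenic.

```python
def chronic_daily_intake(c_mg_per_L, ir_L_per_day, ef_days_per_yr,
                         ed_years, bw_kg, at_days):
    """CDI (mg/kg-day) for oral ingestion of water, per the standard
    US EPA form: CDI = (C * IR * EF * ED) / (BW * AT)."""
    return (c_mg_per_L * ir_L_per_day * ef_days_per_yr * ed_years) / (bw_kg * at_days)

# Illustrative CTE-style inputs (NOT the study's actual values):
C, IR, EF, ED, BW = 0.325, 2.0, 250, 25, 70   # As at 325 ug/L = 0.325 mg/L
cdi_nc = chronic_daily_intake(C, IR, EF, ED, BW, at_days=ED * 365)  # non-cancer: AT = ED
cdi_ca = chronic_daily_intake(C, IR, EF, ED, BW, at_days=70 * 365)  # cancer: AT = lifetime

RFD_AS = 3.0e-4   # oral reference dose for arsenic, mg/kg-day (US EPA IRIS)
SF_AS = 1.5       # oral cancer slope factor for arsenic, (mg/kg-day)^-1
print("HQ   =", cdi_nc / RFD_AS)   # hazard quotient; > 1.0 flags concern
print("ELCR =", cdi_ca * SF_AS)    # compare against the 1e-6 to 1e-4 range
```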
NASA Astrophysics Data System (ADS)
Capote, R.; Herman, M.; Obložinský, P.; Young, P. G.; Goriely, S.; Belgya, T.; Ignatyuk, A. V.; Koning, A. J.; Hilaire, S.; Plujko, V. A.; Avrigeanu, M.; Bersillon, O.; Chadwick, M. B.; Fukahori, T.; Ge, Zhigang; Han, Yinlu; Kailas, S.; Kopecky, J.; Maslov, V. M.; Reffo, G.; Sin, M.; Soukhovitskii, E. Sh.; Talou, P.
2009-12-01
We describe the physics and data included in the Reference Input Parameter Library, which is devoted to input parameters needed in calculations of nuclear reactions and nuclear data evaluations. Advanced modelling codes require substantial numerical input; therefore, the International Atomic Energy Agency (IAEA) has worked extensively since 1993 on a library of validated nuclear-model input parameters, referred to as the Reference Input Parameter Library (RIPL). A final RIPL coordinated research project (RIPL-3) was brought to a successful conclusion in December 2008, after 15 years of challenging work carried out through three consecutive IAEA projects. The RIPL-3 library was released in January 2009, and is available on the Web through http://www-nds.iaea.org/RIPL-3/. This work and the resulting database are extremely important to theoreticians involved in the development and use of nuclear reaction modelling (ALICE, EMPIRE, GNASH, UNF, TALYS) both for theoretical research and nuclear data evaluations. The numerical data and computer codes included in RIPL-3 are arranged in seven segments: MASSES contains ground-state properties for about 9000 nuclei, including three theoretical predictions of masses and the evaluated experimental masses of Audi et al. (2003). DISCRETE LEVELS contains 117 datasets (one for each element) with all known level schemes, electromagnetic and γ-ray decay probabilities available from ENSDF in October 2007. NEUTRON RESONANCES contains average resonance parameters prepared on the basis of the evaluations performed by Ignatyuk and Mughabghab. OPTICAL MODEL contains 495 sets of phenomenological optical model parameters defined in a wide energy range. When there are insufficient experimental data, the evaluator has to resort to either global parameterizations or microscopic approaches. Radial density distributions to be used as input for microscopic calculations are stored in the MASSES segment. LEVEL DENSITIES contains phenomenological parameterizations based on the modified Fermi gas and superfluid models, and microscopic calculations based on a realistic microscopic single-particle level scheme. Partial level density formulae are also recommended. All tabulated total level densities are consistent with both the recommended average neutron resonance parameters and discrete levels. GAMMA contains parameters that quantify giant resonances, experimental gamma-ray strength functions and methods for calculating gamma emission in statistical model codes. The experimental GDR parameters are represented by Lorentzian fits to the photo-absorption cross sections for 102 nuclides ranging from 51V to 239Pu. FISSION includes global prescriptions for fission barriers and nuclear level densities at fission saddle points based on microscopic HFB calculations constrained by experimental fission cross sections.
Advances in EPA’s Rapid Exposure and Dosimetry Project (Interagency Alternatives Assessment Webinar)
Estimates of human and ecological exposures are required as critical input to risk-based prioritization and screening of chemicals. The CSS Rapid Exposure and Dosimetry project seeks to develop the data, tools, and evaluation approaches required to generate rapid and scientifical...
Evaluation of Uncertainty in Constituent Input Parameters for Modeling the Fate of RDX
2015-07-01
exercise was to evaluate the importance of chemical-specific model input parameters, the impacts of their uncertainty, and the potential benefits of... chemical-specific inputs for RDX that were determined to be sensitive with relatively high uncertainty: these included the soil-water linear... Koc for organic chemicals. The EFS values provided for log Koc of RDX were 1.72 and 1.95. OBJECTIVE: TREECS™ (http://el.erdc.usace.army.mil/treecs
NASA Technical Reports Server (NTRS)
Wallace, Terryl A.; Bey, Kim S.; Taminger, Karen M. B.; Hafley, Robert A.
2004-01-01
A study was conducted to evaluate the relative significance of input parameters on Ti-6Al-4V deposits produced by an electron beam free form fabrication process under development at the NASA Langley Research Center. Five input parameters were chosen (beam voltage, beam current, translation speed, wire feed rate, and beam focus), and a design of experiments (DOE) approach was used to develop a set of 16 experiments to evaluate the relative importance of these parameters on the resulting deposits. Both single-bead and multi-bead stacks were fabricated using the 16 combinations, and the resulting heights and widths of the stack deposits were measured. The resulting microstructures were also characterized to determine the impact of these parameters on the size of the melt pool and heat affected zone. The relative importance of each input parameter on the height and width of the multi-bead stacks will be discussed.
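A 16-run design for five two-level factors is consistent with a 2^(5-1) half-fraction; the sketch below generates such a design and estimates main effects from a synthetic response. The actual DOE matrix, factor levels, and measurements are not given in the abstract, so everything numeric here is hypothetical.

```python
import itertools
import numpy as np

# 2^(5-1) half-fraction: 16 runs for 5 two-level factors, with the fifth
# factor aliased to the four-way interaction (E = A*B*C*D).
base = np.array(list(itertools.product([-1, 1], repeat=4)))   # 16 x 4
design = np.column_stack([base, base.prod(axis=1)])           # add E column
factors = ["voltage", "current", "speed", "wire_feed", "focus"]

# Hypothetical response (e.g., measured bead height, mm) for each run:
rng = np.random.default_rng(1)
height = 2.0 + 0.5 * design[:, 1] - 0.3 * design[:, 2] + rng.normal(0, 0.05, 16)

# Main effect of each factor = mean(response at high) - mean(response at low).
for name, col in zip(factors, design.T):
    effect = height[col == 1].mean() - height[col == -1].mean()
    print(f"{name:9s} effect = {effect:+.3f}")
```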
Age and input effects in the acquisition of mood in Heritage Portuguese.
Flores, Cristina; Santos, Ana Lúcia; Jesus, Alice; Marques, Rui
2017-07-01
The present study analyzes the effect of age and amount of input in the acquisition of European Portuguese as a heritage language. An elicited production task centred on mood choice in complement clauses was applied to a group of fifty bilingual children (six- to sixteen-year-olds) who are acquiring Portuguese as a minority language in a German dominant environment. The results show a significant effect of the age at testing and the amount of input in the acquisition of the subjunctive. In general, acquisition is delayed with respect to monolinguals, even though higher convergence with the monolingual grammar is observed after twelve years of age. Results also reveal that children with more exposure to the heritage language at home show faster acquisition than children from mixed households: the eight- to nine-year-old age boundary seems relevant for those speakers with more exposure, and the twelve- to thirteen-year-old age boundary for those with less exposure.
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1995-01-01
Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for open loop parameter identification purposes, specifically for optimal input design validation at 5 degrees angle of attack, identification of individual strake effectiveness at 40 and 50 degrees angle of attack, and study of lateral dynamics and lateral control effectiveness at 40 and 50 degrees angle of attack. Each maneuver is to be realized by applying square wave inputs to specific control effectors using the On-Board Excitation System (OBES). Maneuver descriptions and complete specifications of the time/amplitude points defining each input are included, along with plots of the input time histories.
Impacts of climate change on indirect human exposure to pathogens and chemicals from agriculture.
Boxall, Alistair B A; Hardy, Anthony; Beulke, Sabine; Boucard, Tatiana; Burgin, Laura; Falloon, Peter D; Haygarth, Philip M; Hutchinson, Thomas; Kovats, R Sari; Leonardi, Giovanni; Levy, Leonard S; Nichols, Gordon; Parsons, Simon A; Potts, Laura; Stone, David; Topp, Edward; Turley, David B; Walsh, Kerry; Wellington, Elizabeth M H; Williams, Richard J
2009-04-01
Climate change is likely to affect the nature of pathogens and chemicals in the environment and their fate and transport. Future risks of pathogens and chemicals could therefore be very different from those of today. In this review, we assess the implications of climate change for changes in human exposures to pathogens and chemicals in agricultural systems in the United Kingdom and discuss the subsequent effects on health impacts. We used expert input and considered literature on climate change; health effects resulting from exposure to pathogens and chemicals arising from agriculture; inputs of chemicals and pathogens to agricultural systems; and human exposure pathways for pathogens and chemicals in agricultural systems. We established the current evidence base for health effects of chemicals and pathogens in the agricultural environment; determined the potential implications of climate change on chemical and pathogen inputs in agricultural systems; and explored the effects of climate change on environmental transport and fate of different contaminant types. We combined these data to assess the implications of climate change in terms of indirect human exposure to pathogens and chemicals in agricultural systems. We then developed recommendations on future research and policy changes to manage any adverse increases in risks. Overall, climate change is likely to increase human exposures to agricultural contaminants. The magnitude of the increases will be highly dependent on the contaminant type. Risks from many pathogens and particulate and particle-associated contaminants could increase significantly. These increases in exposure can, however, be managed for the most part through targeted research and policy changes.
Characterization of airborne particles generated from metal active gas welding process.
Guerreiro, C; Gomes, J F; Carvalho, P; Santos, T J G; Miranda, R M; Albuquerque, P
2014-05-01
This study focuses on the characterization of particles emitted in the metal active gas welding of carbon steel using an Ar + CO2 mixture, and analyzes which main process parameters influence the emission itself. It was found that the amount of emitted particles (measured by particle number and alveolar deposited surface area) is clearly dependent on the distance to the welding front and also on the main welding parameters, namely the current intensity and heat input in the welding process. The emission of airborne fine particles seems to increase with the current intensity, as the fume-formation rate does. When comparing the tested gas mixtures, higher emissions are observed for more oxidant mixtures, that is, mixtures with higher CO2 content, which result in higher arc stability. These mixtures originate higher concentrations of fine particles (as measured by number of particles per cm³ of air) and higher values of alveolar deposited surface area of particles, thus resulting in more severe worker exposure.
Quantifying uncertainty and sensitivity in sea ice models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urrego Blanco, Jorge Rolando; Hunke, Elizabeth Clare; Urban, Nathan Mark
The Los Alamos Sea Ice model has a number of input parameters for which accurate values are not always well established. We conduct a variance-based sensitivity analysis of hemispheric sea ice properties to 39 input parameters. The method accounts for non-linear and non-additive effects in the model.
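To make the variance-based method concrete, here is a minimal NumPy sketch of a first-order Sobol' index estimator (Saltelli-style pick-and-freeze) applied to a toy stand-in function; the sea ice model itself, its 39 parameters, and their ranges are not reproduced here.

```python
import numpy as np

def sobol_first_order(f, n_params, n_samples=2**12, rng=None):
    """Monte Carlo estimate of first-order Sobol' indices for f on [0,1]^k,
    using the Saltelli (2010) estimator S_i = E[Y_B (Y_ABi - Y_A)] / Var(Y)."""
    rng = rng or np.random.default_rng(0)
    A = rng.random((n_samples, n_params))
    B = rng.random((n_samples, n_params))
    yA, yB = f(A), f(B)
    var = np.var(np.concatenate([yA, yB]))
    S = np.empty(n_params)
    for i in range(n_params):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # "freeze" all columns of A except i
        S[i] = np.mean(yB * (f(ABi) - yA)) / var
    return S

# Toy stand-in for a sea ice output metric: additive + interaction terms.
def model(X):
    return 4 * X[:, 0] + 2 * X[:, 1] + X[:, 0] * X[:, 2]

print(sobol_first_order(model, n_params=3))  # parameter 0 dominates
```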
Composite Flood Risk for Virgin Islands
The Composite Flood Risk layer combines flood hazard datasets from Federal Emergency Management Agency (FEMA) flood zones, NOAA's Shallow Coastal Flooding, and the National Hurricane Center SLOSH model for storm surge inundation for category 1, 2, and 3 hurricanes. Geographic areas are represented by a grid of 10 by 10 meter cells, and each cell has a ranking based on variation in exposure to flooding hazards: Moderate, High and Extreme exposure. Geographic areas in each input layer are ranked based on their probability of flood risk exposure. The logic was such that areas exposed to flooding on a more frequent basis were given a higher ranking; thus the ranking incorporates the probability of the area being flooded. For example, even though a Category 3 storm surge has higher flooding elevations, the likelihood of the occurrence is lower than a Category 1 storm surge, and therefore the Category 3 flood area is given a lower exposure ranking. Extreme exposure areas are those areas that are exposed to relatively frequent flooding. The ranked input layers are then converted to a raster for the creation of the composite risk layer by using cell statistics in spatial analysis. The highest exposure ranking for a given cell in any of the three input layers is assigned to the corresponding cell in the composite layer. For example, if a cell is ranked as moderate in the FEMA layer, moderate in the SLOSH layer, but extreme in the SCF layer, the cell will be considered extreme in the composite layer.
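A sketch of the cell-statistics overlay described above, using small NumPy arrays as stand-ins for the ranked rasters (the actual layers are GIS rasters; the values here are illustrative):

```python
import numpy as np

# Ranked exposure grids (0 = none, 1 = moderate, 2 = high, 3 = extreme),
# stand-ins for the FEMA, SLOSH, and shallow coastal flooding layers.
fema  = np.array([[1, 2], [0, 3]])
slosh = np.array([[2, 1], [1, 1]])
scf   = np.array([[0, 3], [2, 0]])

# Composite: each 10 m cell takes the highest ranking among the input
# layers, mirroring the cell-statistics (maximum) overlay described above.
composite = np.maximum.reduce([fema, slosh, scf])
print(composite)   # [[2 3], [2 3]]
```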
Population-based human exposure models predict the distribution of personal exposures to pollutants of outdoor origin using a variety of inputs, including: air pollution concentrations; human activity patterns, such as the amount of time spent outdoors vs. indoors, commuting, wal...
Estimates of human and ecological exposures are required as critical input to risk-based prioritization and screening of chemicals. This project seeks to develop the data, tools, and evaluation approaches required to generate rapid and scientifically-defensible exposure predictio...
Liu, Xin; Zhao, Longyu; Yu, Duo; Ma, Shumei; Liu, Xiaodong
2013-12-01
To observe the effects of extremely low frequency electromagnetic fields (ELF-EMFs) in the automotive industry on occupational workers, a total of 704 workers were investigated, and 374 workers were chosen and divided into two groups (control group and exposure group) according to the inclusion criteria, namely males aged 20-40 years with ≥ 2 years of exposure. The intensities of ELF-EMFs and noise were measured with an EFA-300 Field Analyzer (Narda, Pfullingen, Germany) and an AWA5610D integrating sound level meter (Hangzhou Aihua Instruments Co., Ltd, Hangzhou, China), respectively. Survey data were collected by questionnaire, and physical check-ups were done in hospital. All data were input into SPSS 17.0 software (SPSS Inc, Chicago, USA), and the appropriate statistical analyses were carried out. The intensity of EMFs in the exposure group was significantly higher than that in the control group (p < 0.05), while the noise in the two workplaces showed no difference (p > 0.05). The questionnaire data showed that hair loss symptoms in the exposure group differed significantly from those in the control group (p < 0.05). The check-up parameters of the cardiovascular, liver and hematology systems showed significant differences between the two groups (p < 0.05). The survey and check-up data suggest that exposure to ELF-EMFs might have effects on the nervous, cardiovascular, liver, and hematology systems of workers.
Giménez, Marina C; Beersma, Domien G M; Bollen, Pauline; van der Linden, Matthijs L; Gordijn, Marijke C M
2014-06-01
Light is an important environmental stimulus for the entrainment of the circadian clock and for increasing alertness. The intrinsically photosensitive ganglion cells in the retina play an important role in transferring this light information to the circadian system and they are activated in particular by short-wavelength light. Exposure to short wavelengths is reduced, for instance, in elderly people due to yellowing of the ocular lenses. This reduction may be involved in the disrupted circadian rhythms observed in aged subjects. Here, we tested the effects of reduced blue light exposure in young healthy subjects (n = 15) by using soft orange contact lenses (SOCL). We showed (as expected) that a reduction in the melatonin suppressing effect of light is observed when subjects wear the SOCL. However, after chronic exposure to reduced (short wavelength) light for two consecutive weeks we observed an increase in sensitivity of the melatonin suppression response. The response normalized as if it took place under a polychromatic light pulse. No differences were found in the dim light melatonin onset or in the amplitude of the melatonin rhythms after chronic reduced blue light exposure. The effects on sleep parameters were limited. Our results demonstrate that the non-visual light system of healthy young subjects is capable of adapting to changes in the spectral composition of environmental light exposure. The present results emphasize the importance of considering not only the short-term effects of changes in environmental light characteristics.
Karmakar, Chandan; Udhayakumar, Radhagayathri K.; Li, Peng; Venkatesh, Svetha; Palaniswami, Marimuthu
2017-01-01
Distribution entropy (DistEn) is a recently developed measure of complexity that is used to analyse heart rate variability (HRV) data. Its calculation requires two input parameters: the embedding dimension m, and the number of bins M, which replaces the tolerance parameter r used by the existing approximation entropy (ApEn) and sample entropy (SampEn) measures. The performance of DistEn can also be affected by the data length N. In our previous studies, we analyzed the stability and performance of DistEn with respect to one parameter (m or M) or a combination of two parameters (N and M). However, the impact of varying all three input parameters on DistEn has not yet been studied. Since DistEn is predominantly aimed at analysing short heart rate variability (HRV) signals, it is important to comprehensively study the stability, consistency and performance of the measure using multiple case studies. In this study, we examined the impact of changing input parameters on DistEn for synthetic and physiological signals. We also compared the variations of DistEn, and its performance in distinguishing physiological (Elderly from Young) and pathological (Healthy from Arrhythmia) conditions, with ApEn and SampEn. The results showed that DistEn values are minimally affected by variations of the input parameters compared to ApEn and SampEn. DistEn also showed the most consistent and the best performance in differentiating physiological and pathological conditions under varying input parameters among the reported complexity measures. In conclusion, DistEn is found to be the best measure for analysing short HRV time series. PMID:28979215
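For illustration, a compact sketch of the DistEn computation as defined in the literature (Li et al. 2015): embed the series in m dimensions, histogram all pairwise Chebyshev distances into M bins, and take the normalized Shannon entropy. The bin count and test signals below are arbitrary.

```python
import numpy as np

def dist_en(x, m=2, M=512):
    """Distribution entropy: Shannon entropy of the histogram of all
    pairwise Chebyshev distances between m-dimensional embedding
    vectors, normalized by log2(M)."""
    x = np.asarray(x, dtype=float)
    n = x.size - m + 1
    X = np.lib.stride_tricks.sliding_window_view(x, m)      # n vectors of length m
    d = np.abs(X[:, None, :] - X[None, :, :]).max(axis=2)   # Chebyshev distances
    d = d[np.triu_indices(n, k=1)]                          # unique pairs only
    p, _ = np.histogram(d, bins=M)
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum() / np.log2(M)

rng = np.random.default_rng(0)
print(dist_en(rng.normal(size=300)))              # white noise: typically higher
print(dist_en(np.sin(np.linspace(0, 20, 300))))   # regular signal: typically lower
```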
Application of artificial neural networks to assess pesticide contamination in shallow groundwater
Sahoo, G.B.; Ray, C.; Mehnert, E.; Keefer, D.A.
2006-01-01
In this study, a feed-forward back-propagation neural network (BPNN) was developed and applied to predict pesticide concentrations in groundwater monitoring wells. Pesticide concentration data are challenging to analyze because they tend to be highly censored. Input data to the neural network included the categorical indices of depth to aquifer material, pesticide leaching class, aquifer sensitivity to pesticide contamination, time (month) of sample collection, well depth, depth to water from land surface, and additional travel distance in the saturated zone (i.e., distance from land surface to midpoint of well screen). The output of the neural network was the total pesticide concentration detected in the well. The model predictions produced good agreement with observed data in terms of correlation coefficient (R = 0.87) and pesticide detection efficiency (E = 89%), as well as a good match between the observed and predicted "class" groups. The relative importance of input parameters to pesticide occurrence in groundwater was examined in terms of R, E, mean error (ME), root mean square error (RMSE), and pesticide occurrence "class" groups by eliminating some key input parameters to the model. Well depth and time of sample collection were the most sensitive input parameters for predicting the pesticide contamination potential of a well. This infers that wells tapping shallow aquifers are more vulnerable to pesticide contamination than wells tapping deeper aquifers. Pesticide occurrences during post-application months (June through October) were found to be 2.5 to 3 times higher than pesticide occurrences during other months (November through April). The BPNN was used to rank the input parameters with the highest potential to contaminate groundwater, including two original and five ancillary parameters. The two original parameters are depth to aquifer material and pesticide leaching class. When these two parameters were the only input parameters for the BPNN, they were not able to predict contamination potential. However, when they were used with other parameters, the predictive performance efficiency of the BPNN in terms of R, E, ME, RMSE, and pesticide occurrence "class" groups increased. Ancillary data include data collected during the study, such as well depth and time of sample collection. The BPNN indicated that the ancillary data had more predictive power than the original data. The BPNN results will help researchers identify parameters to improve maps of aquifer sensitivity to pesticide contamination. © 2006 Elsevier B.V. All rights reserved.
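The study's BPNN is not available, but a minimal stand-in using scikit-learn's MLPRegressor illustrates the general workflow; the feature columns, synthetic response, and network size here are all hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical training table: categorical indices and well descriptors
# of the kind listed above (leaching class, month, well depth, ...).
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.integers(1, 6, 500),       # pesticide leaching class
    rng.integers(1, 13, 500),      # month of sample collection
    rng.uniform(5, 100, 500),      # well depth (m)
    rng.uniform(1, 30, 500),       # depth to water (m)
])
# Synthetic "total pesticide concentration": shallow wells and
# post-application months score higher, plus noise.
y = 2.0 / X[:, 2] + 0.05 * np.isin(X[:, 1], np.arange(6, 11)) \
    + rng.normal(0, 0.005, 500)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                   random_state=0))
model.fit(X, y)
print("R =", np.corrcoef(model.predict(X), y)[0, 1])
```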
Fuzzy/Neural Software Estimates Costs of Rocket-Engine Tests
NASA Technical Reports Server (NTRS)
Douglas, Freddie; Bourgeois, Edit Kaminsky
2005-01-01
The Highly Accurate Cost Estimating Model (HACEM) is a software system for estimating the costs of testing rocket engines and components at Stennis Space Center. HACEM is built on a foundation of adaptive-network-based fuzzy inference systems (ANFIS), a hybrid software concept that combines the adaptive capabilities of neural networks with the ease of development and additional benefits of fuzzy-logic-based systems. In ANFIS, fuzzy inference systems are trained by use of neural networks. HACEM includes selectable subsystems that utilize various numbers and types of inputs, various numbers of fuzzy membership functions, and various input-preprocessing techniques. The inputs to HACEM are parameters of specific tests or series of tests. These parameters include test type (component or engine test), number and duration of tests, and thrust level(s) (in the case of engine tests). The ANFIS in HACEM are trained by use of sets of these parameters, along with costs of past tests. Thereafter, the user feeds HACEM a simple input text file that contains the parameters of a planned test or series of tests, the user selects the desired HACEM subsystem, and the subsystem processes the parameters into an estimate of cost(s).
Comparisons of Solar Wind Coupling Parameters with Auroral Energy Deposition Rates
NASA Technical Reports Server (NTRS)
Elsen, R.; Brittnacher, M. J.; Fillingim, M. O.; Parks, G. K.; Germany, G. A.; Spann, J. F., Jr.
1997-01-01
Measurement of the global rate of energy deposition in the ionosphere via auroral particle precipitation is one of the primary goals of the Polar UVI program and is an important component of the ISTP program. The instantaneous rate of energy deposition for the entire month of January 1997 has been calculated by applying models to the UVI images and is presented by Fillingim et al. in this session. A number of parameters that predict the rate of coupling of solar wind energy into the magnetosphere have been proposed in the last few decades. Some of these parameters, such as the epsilon parameter of Perreault and Akasofu, depend on the instantaneous values in the solar wind. Other parameters depend on the integrated values of solar wind parameters, especially IMF Bz, e.g. the applied flux, which predicts the net transfer of magnetic flux to the tail. While these parameters have often been used successfully in substorm studies, their validity in terms of global energy input has not yet been ascertained, largely because data such as those supplied by the ISTP program were lacking. We have calculated these and other energy coupling parameters for January 1997 using solar wind data provided by WIND and other solar wind monitors. The rates of energy input predicted by these parameters are compared to those measured through UVI data, and correlations are sought. Whether these parameters are better at providing an instantaneous rate of energy input or an average input over some time period is addressed. We also study whether either type of parameter may provide better correlations if a time delay is introduced; if so, this time delay may provide a characteristic time for energy transport in the coupled solar wind-magnetosphere-ionosphere system.
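For reference, the epsilon parameter mentioned above has a standard closed form, ε = (4π/μ0)·v·B²·sin⁴(θ/2)·l0², with θ the IMF clock angle and l0 an empirical scale of about 7 Earth radii. The sketch below evaluates it for hypothetical solar wind values (not the January 1997 data).

```python
import numpy as np

MU0 = 4e-7 * np.pi          # vacuum permeability (H/m)
L0 = 7 * 6.371e6            # empirical scale length, 7 Earth radii (m)

def epsilon_watts(v_kms, by_nT, bz_nT, bx_nT=0.0):
    """Perreault-Akasofu coupling parameter
    epsilon = (4*pi/mu0) * v * B^2 * sin^4(theta/2) * l0^2,
    with theta the IMF clock angle atan2(By, Bz). Returns watts."""
    v = v_kms * 1e3                                        # m/s
    B = np.sqrt(bx_nT**2 + by_nT**2 + bz_nT**2) * 1e-9     # tesla
    theta = np.arctan2(by_nT, bz_nT)
    return (4 * np.pi / MU0) * v * B**2 * np.sin(theta / 2) ** 4 * L0**2

# A typical southward-IMF interval yields ~10^11-10^12 W of input:
print(f"{epsilon_watts(v_kms=450, by_nT=3, bz_nT=-8):.2e} W")
```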
A Bayesian approach to model structural error and input variability in groundwater modeling
NASA Astrophysics Data System (ADS)
Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.
2015-12-01
Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative prior for error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface-ground water interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sweetser, John David
2013-10-01
This report details Sculpt's implementation from a user's perspective. Sculpt is an automatic hexahedral mesh generation tool developed at Sandia National Labs by Steve Owen. 54 predetermined test cases are studied while varying the input parameters (Laplace iterations, optimization iterations, optimization threshold, number of processors) and measuring the quality of the resultant mesh. This information is used to determine the optimal input parameters to use for an unknown input geometry. The overall characteristics are covered in Chapter 1. The specific details of every case are then given in Appendix A. Finally, example Sculpt inputs are given in Appendices B.1 and B.2.
Knowledge system and method for simulating chemical controlled release device performance
Cowan, Christina E.; Van Voris, Peter; Streile, Gary P.; Cataldo, Dominic A.; Burton, Frederick G.
1991-01-01
A knowledge system for simulating the performance of a controlled release device is provided. The system includes an input device through which the user selectively inputs one or more data parameters. The data parameters comprise first parameters including device parameters, media parameters, active chemical parameters and device release rate; and second parameters including the minimum effective inhibition zone of the device and the effective lifetime of the device. The system also includes a judgemental knowledge base which includes logic for 1) determining at least one of the second parameters from the release rate and the first parameters and 2) determining at least one of the first parameters from the other of the first parameters and the second parameters. The system further includes a device for displaying the results of the determinations to the user.
Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.
1998-01-01
A method for generating a validated measurement of a process parameter at a point in time by using a plurality of individual sensor inputs from a scan of said sensors at said point in time. The sensor inputs from said scan are stored, and a first validation pass is initiated by computing an initial average of all stored sensor inputs. Each sensor input is deviation checked by comparing each input, including a preset tolerance, against the initial average input. If the first deviation check is unsatisfactory, the sensor which produced the unsatisfactory input is flagged as suspect. It is then determined whether at least two of the inputs have not been flagged as suspect and are therefore considered good inputs. If two or more inputs are good, a second validation pass is initiated by computing a second average of all the good sensor inputs, and deviation checking the good inputs by comparing each good input, including a preset tolerance, against the second average. If the second deviation check is satisfactory, the second average is displayed as the validated measurement and the suspect sensor is flagged as bad. A validation fault occurs if at least two inputs are not considered good, or if the second deviation check is not satisfactory. In the latter situation the inputs from each of all the sensors are compared against the last validated measurement, and the value from the sensor input that deviates the least from the last valid measurement is displayed.
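The two-pass logic lends itself to a direct software sketch. This is a simplified reading of the claim with an illustrative tolerance; the patented system operates on live sensor scans and relay outputs.

```python
def validate_scan(inputs, tol, last_valid=None):
    """Two-pass sensor validation as described above: average all inputs,
    flag any that deviate from that average by more than `tol`, then
    re-average and re-check the remaining 'good' inputs. Returns
    (validated_value, status)."""
    first_avg = sum(inputs) / len(inputs)
    good = [x for x in inputs if abs(x - first_avg) <= tol]
    if len(good) >= 2:
        second_avg = sum(good) / len(good)
        if all(abs(x - second_avg) <= tol for x in good):
            return second_avg, "validated"
    # Validation fault: fall back to the input closest to the last
    # validated measurement, if one exists.
    if last_valid is not None:
        return min(inputs, key=lambda x: abs(x - last_valid)), "fault/fallback"
    return None, "fault"

# Drops the 112.0 outlier and validates the average of the other three.
print(validate_scan([100.1, 99.8, 100.3, 112.0], tol=4.0))
```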
Subsea release of oil from a riser: an ecological risk assessment.
Nazir, Muddassir; Khan, Faisal; Amyotte, Paul; Sadiq, Rehan
2008-10-01
This study illustrates a newly developed methodology, as a part of the U.S. EPA ecological risk assessment (ERA) framework, to predict exposure concentrations in a marine environment due to underwater release of oil and gas. It combines the hydrodynamics of underwater blowout, weathering algorithms, and multimedia fate and transport to measure the exposure concentration. Naphthalene and methane are used as surrogate compounds for oil and gas, respectively. Uncertainties are accounted for in multimedia input parameters in the analysis. The 95th percentile of the exposure concentration (EC(95%)) is taken as the representative exposure concentration for the risk estimation. A bootstrapping method is utilized to characterize EC(95%) and associated uncertainty. The toxicity data of 19 species available in the literature are used to calculate the 5th percentile of the predicted no observed effect concentration (PNEC(5%)) by employing the bootstrapping method. The risk is characterized by transforming the risk quotient (RQ), which is the ratio of EC(95%) to PNEC(5%), into a cumulative risk distribution. This article describes a probabilistic basis for the ERA, which is essential from risk management and decision-making viewpoints. Two case studies of underwater oil and gas mixture release, and oil release with no gaseous mixture are used to show the systematic implementation of the methodology, elements of ERA, and the probabilistic method in assessing and characterizing the risk.
Statistics of optimal information flow in ensembles of regulatory motifs
NASA Astrophysics Data System (ADS)
Crisanti, Andrea; De Martino, Andrea; Fiorentino, Jonathan
2018-02-01
Genetic regulatory circuits universally cope with different sources of noise that limit their ability to coordinate input and output signals. In many cases, optimal regulatory performance can be thought to correspond to configurations of variables and parameters that maximize the mutual information between inputs and outputs. Since the mid-2000s, such optima have been well characterized in several biologically relevant cases. Here we use methods of statistical field theory to calculate the statistics of the maximal mutual information (the "capacity") achievable by tuning the input variable only in an ensemble of regulatory motifs, such that a single controller regulates N targets. Assuming (i) sufficiently large N , (ii) quenched random kinetic parameters, and (iii) small noise affecting the input-output channels, we can accurately reproduce numerical simulations both for the mean capacity and for the whole distribution. Our results provide insight into the inherent variability in effectiveness occurring in regulatory systems with heterogeneous kinetic parameters.
NASA Technical Reports Server (NTRS)
Batterson, James G. (Technical Monitor); Morelli, E. A.
1996-01-01
Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for closed loop parameter identification purposes, specifically for longitudinal and lateral linear model parameter estimation at 5, 20, 30, 45, and 60 degrees angle of attack, using the Actuated Nose Strakes for Enhanced Rolling (ANSER) control law in Thrust Vectoring (TV) mode. Each maneuver is to be realized by applying square wave inputs to specific pilot station controls using the On-Board Excitation System (OBES). Maneuver descriptions and complete specifications of the time/amplitude points defining each input are included, along with plots of the input time histories.
Fallon, Nevada FORGE Thermal-Hydrological-Mechanical Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blankenship, Doug; Sonnenthal, Eric
Archive contains thermal-mechanical simulation input/output files. Included are files which fall into the following categories: (1) spreadsheets with various input parameter calculations; (2) final simulation inputs; (3) native-state thermal-hydrological model input file folders; (4) native-state thermal-hydrological-mechanical model input files; (5) THM model stimulation cases. See the 'File Descriptions.xlsx' resource below for additional information on individual files.
The Impact of Input Quality on Early Sign Development in Native and Non-Native Language Learners
ERIC Educational Resources Information Center
Lu, Jenny; Jones, Anna; Morgan, Gary
2016-01-01
There is debate about how input variation influences child language. Most deaf children are exposed to a sign language from their non-fluent hearing parents and experience a delay in exposure to accessible language. A small number of children receive language input from their deaf parents who are fluent signers. Thus it is possible to document the…
Troutman, Brent M.
1982-01-01
Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas, illustrates the problems of model input errors.
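The bias mechanism for the simple linear case can be demonstrated in a few lines: regressing runoff on error-corrupted rainfall attenuates the fitted slope toward zero. This is a toy simulation, not the Turtle Creek data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
rain = rng.gamma(shape=2.0, scale=10.0, size=n)        # "true" precipitation
runoff = 0.6 * rain + rng.normal(0, 2.0, size=n)        # true relation, slope 0.6

for sigma in (0.0, 5.0, 10.0):                          # input measurement error
    rain_obs = rain + rng.normal(0, sigma, size=n)      # erroneous input data
    slope = np.cov(rain_obs, runoff)[0, 1] / np.var(rain_obs, ddof=1)
    print(f"input error sd={sigma:5.1f}  fitted slope={slope:.3f}")
# The slope shrinks as input error grows (attenuation), so runoff predicted
# from observed rainfall is biased, as the analysis above describes.
```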
Mani, Venkatesh; Wong, Stephanie K; Sawit, Simonette T; Calcagno, Claudia; Maceda, Cynara; Ramachandran, Sarayu; Fayad, Zahi A; Moline, Jacqueline; McLaughlin, Mary Ann
2013-04-01
In this pilot study, we hypothesize that dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) has the potential to evaluate differences in atherosclerosis profiles in patients subjected to high (initial dust cloud) and low (after 13 September 2001) particulate matter (PM) exposure. Exposure to PM may be associated with adverse health effects leading to increased morbidity. Law enforcement workers were exposed to high levels of particulate pollution after working at "Ground Zero" and may exhibit accelerated atherosclerosis. 31 subjects (28 male) with high (n = 19) or low (n = 12) exposure to PM underwent DCE-MRI. Demographics (age, gender, family history, hypertension, diabetes, BMI, and smoking status), biomarkers (lipid profiles, hs-CRP, BP) and ankle-brachial index (ABI) measures (left and right) were obtained from all subjects. Differences between the high and low exposure groups were compared using an independent samples t test. Using forward stepwise linear regression with an information-criterion model, independent predictors of increased area under the curve (AUC) from DCE-MRI were determined using all variables as input. A 95 % confidence interval was used, and variables with p > 0.1 were eliminated; p < 0.05 was considered significant. Subjects with high exposure (HE) had significantly higher DCE-MRI AUC uptake (increased neovascularization) compared to subjects with lower exposure (LE) (AUC: 2.65 ± 0.63 HE vs. 1.88 ± 0.69 LE, p = 0.016). Except for right leg ABI, none of the other parameters was significantly different between the two groups. The regression model indicated that only HE to PM, CRP > 3.0 and total cholesterol were independently associated with increased neovascularization (in decreasing order of importance, all p < 0.026). HE to PM may increase plaque neovascularization, and thereby potentially indicate a worsening atherogenic profile of "Ground Zero" workers.
Bordage, Simon; Sullivan, Stuart; Laird, Janet; Millar, Andrew J; Nimmo, Hugh G
2016-10-01
Circadian clocks allow the temporal compartmentalization of biological processes. In Arabidopsis, circadian rhythms display organ specificity but the underlying molecular causes have not been identified. We investigated the mechanisms responsible for the similarities and differences between the clocks of mature shoots and roots in constant conditions and in light : dark cycles. We developed an imaging system to monitor clock gene expression in shoots and light- or dark-grown roots, modified a recent mathematical model of the Arabidopsis clock and used this to simulate our new data. We showed that the shoot and root circadian clocks have different rhythmic properties (period and amplitude) and respond differently to light quality. The root clock was entrained by direct exposure to low-intensity light, even in antiphase to the illumination of shoots. Differences between the clocks were more pronounced in conditions where light was present than in constant darkness, and persisted in the presence of sucrose. We simulated the data successfully by modifying those parameters of a clock model that are related to light inputs. We conclude that differences and similarities between the shoot and root clocks can largely be explained by organ-specific light inputs. This provides mechanistic insight into the developing field of organ-specific clocks. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.
Use of partial AUC to demonstrate bioequivalence of Zolpidem Tartrate Extended Release formulations.
Lionberger, Robert A; Raw, Andre S; Kim, Stephanie H; Zhang, Xinyuan; Yu, Lawrence X
2012-04-01
FDA's bioequivalence recommendation for Zolpidem Tartrate Extended Release Tablets is the first to use partial AUC (pAUC) metrics for determining bioequivalence of modified-release dosage forms. Modeling and simulation studies were performed to aid in understanding the need for pAUC measures and also the proper pAUC truncation times. Deconvolution techniques, In Vitro/In Vivo Correlations, and the CAT (Compartmental Absorption and Transit) model were used to predict the PK profiles for zolpidem. Models were validated using in-house data submitted to the FDA. Using dissolution profiles expressed by the Weibull model as input for the CAT model, dissolution spaces were derived for simulated test formulations. The AUC(0-1.5) parameter was indicative of IR characteristics of early exposure and effectively distinguished among formulations that produced different pharmacodynamic effects. The AUC(1.5-t) parameter ensured equivalence with respect to the sustained release phase of Ambien CR. The variability of AUC(0-1.5) is higher than other PK parameters, but is reasonable for use in an equivalence test. In addition to the traditional PK parameters of AUCinf and Cmax, AUC(0-1.5) and AUC(1.5-t) are recommended to provide bioequivalence measures with respect to label indications for Ambien CR: onset of sleep and sleep maintenance.
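A minimal sketch of the pAUC metric itself (trapezoidal rule over a truncation window); the concentration-time profile below is hypothetical, not Ambien CR data.

```python
import numpy as np

def partial_auc(t, c, t_start, t_end):
    """Trapezoidal partial AUC of a concentration-time profile between
    t_start and t_end, interpolating the window endpoints linearly."""
    grid = np.union1d(t, [t_start, t_end])
    grid = grid[(grid >= t_start) & (grid <= t_end)]
    conc = np.interp(grid, t, c)
    return np.trapz(conc, grid)

# Hypothetical extended-release profile (hours, ng/mL):
t = np.array([0, 0.5, 1, 1.5, 2, 4, 8, 12])
c = np.array([0, 40, 80, 95, 100, 90, 50, 15])
print("AUC(0-1.5)  =", partial_auc(t, c, 0, 1.5))    # early-exposure metric
print("AUC(1.5-12) =", partial_auc(t, c, 1.5, 12))   # sustained-release metric
```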
Reliability of system for precise cold forging
NASA Astrophysics Data System (ADS)
Krušič, Vid; Rodič, Tomaž
2017-07-01
The influence of scatter in the principal input parameters of the forging system on the dimensional accuracy of the product and on the tool life for a closed-die forging process is presented in this paper. Scatter of the essential input parameters for the closed-die upsetting process was adjusted to the maximal values that enabled the reliable production of a dimensionally accurate product at optimal tool life. An operating window was created within which the maximal scatter of the principal input parameters for the closed-die upsetting process still ensures the desired dimensional accuracy of the product and the optimal tool life. Application of this adjustment of the process input parameters is illustrated by the example of producing the inner race of a homokinetic joint in mass production. High productivity in the manufacture of elements by cold bulk extrusion is often achieved by multiple forming operations performed simultaneously on the same press. By redesigning the time sequence of forming operations during the working stroke of the multistage forming process for the starter barrel, the course of the resultant force is optimized.
Influence of speckle image reconstruction on photometric precision for large solar telescopes
NASA Astrophysics Data System (ADS)
Peck, C. L.; Wöger, F.; Marino, J.
2017-11-01
Context. High-resolution observations from large solar telescopes require adaptive optics (AO) systems to overcome image degradation caused by Earth's turbulent atmosphere. AO corrections are, however, only partial. Achieving near-diffraction limited resolution over a large field of view typically requires post-facto image reconstruction techniques to reconstruct the source image. Aims: This study aims to examine the expected photometric precision of amplitude reconstructed solar images calibrated using models for the on-axis speckle transfer functions and input parameters derived from AO control data. We perform a sensitivity analysis of the photometric precision under variations in the model input parameters for high-resolution solar images consistent with four-meter class solar telescopes. Methods: Using simulations of both atmospheric turbulence and partial compensation by an AO system, we computed the speckle transfer function under variations in the input parameters. We then convolved high-resolution numerical simulations of the solar photosphere with the simulated atmospheric transfer function, and subsequently deconvolved them with the model speckle transfer function to obtain a reconstructed image. To compute the resulting photometric precision, we compared the intensity of the original image with the reconstructed image. Results: The analysis demonstrates that high photometric precision can be obtained for speckle amplitude reconstruction using speckle transfer function models combined with AO-derived input parameters. Additionally, it shows that the reconstruction is most sensitive to the input parameter that characterizes the atmospheric distortion, and sub-2% photometric precision is readily obtained when it is well estimated.
SHEDS-HT: An Integrated Probabilistic Exposure Model for ...
United States Environmental Protection Agency (USEPA) researchers are developing a strategy for highthroughput (HT) exposure-based prioritization of chemicals under the ExpoCast program. These novel modeling approaches for evaluating chemicals based on their potential for biologically relevant human exposures will inform toxicity testing and prioritization for chemical risk assessment. Based on probabilistic methods and algorithms developed for The Stochastic Human Exposure and Dose Simulation Model for Multimedia, Multipathway Chemicals (SHEDS-MM), a new mechanistic modeling approach has been developed to accommodate high-throughput (HT) assessment of exposure potential. In this SHEDS-HT model, the residential and dietary modules of SHEDS-MM have been operationally modified to reduce the user burden, input data demands, and run times of the higher-tier model, while maintaining critical features and inputs that influence exposure. The model has been implemented in R; the modeling framework links chemicals to consumer product categories or food groups (and thus exposure scenarios) to predict HT exposures and intake doses. Initially, SHEDS-HT has been applied to 2507 organic chemicals associated with consumer products and agricultural pesticides. These evaluations employ data from recent USEPA efforts to characterize usage (prevalence, frequency, and magnitude), chemical composition, and exposure scenarios for a wide range of consumer products. In modeling indirec...
Peak Dose Assessment for Proposed DOE-PPPO Authorized Limits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maldonado, Delis
2012-06-01
The Oak Ridge Institute for Science and Education (ORISE), a U.S. Department of Energy (DOE) prime contractor, was contracted by the DOE Portsmouth/Paducah Project Office (DOE-PPPO) to conduct a peak dose assessment in support of the Authorized Limits Request for Solid Waste Disposal at Landfill C-746-U at the Paducah Gaseous Diffusion Plant (DOE-PPPO 2011a). The peak doses were calculated based on the DOE-PPPO Proposed Single Radionuclides Soil Guidelines and the DOE-PPPO Proposed Authorized Limits (AL) Volumetric Concentrations available in DOE-PPPO 2011a. This work is provided as an appendix to the Dose Modeling Evaluations and Technical Support Document for the Authorized Limits Request for the C-746-U Landfill at the Paducah Gaseous Diffusion Plant, Paducah, Kentucky (ORISE 2012). The receptors evaluated in ORISE 2012 were selected by the DOE-PPPO for the additional peak dose evaluations. These receptors included a Landfill Worker, Trespasser, Resident Farmer (onsite), Resident Gardener, Recreational User, Outdoor Worker and an Offsite Resident Farmer. The RESRAD (Version 6.5) and RESRAD-OFFSITE (Version 2.5) computer codes were used for the peak dose assessments. Deterministic peak dose assessments were performed for all the receptors and a probabilistic dose assessment was performed only for the Offsite Resident Farmer at the request of the DOE-PPPO. In a deterministic analysis, a single input value results in a single output value. In other words, a deterministic analysis uses single parameter values for every variable in the code. By contrast, a probabilistic approach assigns parameter ranges to certain variables, and the code randomly selects the values for each variable from the parameter range each time it calculates the dose (NRC 2006). The receptor scenarios, computer codes and parameter input files were previously used in ORISE 2012. A few modifications were made to the parameter input files as appropriate for this effort. Some of these changes included increasing the time horizon beyond 1,050 years (yr), and using the radionuclide concentrations provided by the DOE-PPPO as inputs into the codes. The deterministic peak doses were evaluated within time horizons of 70 yr (for the Landfill Worker and Trespasser), 1,050 yr, 10,000 yr and 100,000 yr (for the Resident Farmer [onsite], Resident Gardener, Recreational User, Outdoor Worker and Offsite Resident Farmer) at the request of the DOE-PPPO. The time horizons of 10,000 yr and 100,000 yr were used at the request of the DOE-PPPO for informational purposes only. The probabilistic peak of the mean dose assessment was performed for the Offsite Resident Farmer using Technetium-99 (Tc-99) and a time horizon of 1,050 yr. The results of the deterministic analyses indicate that among all receptors and time horizons evaluated, the highest projected dose, 2,700 mrem/yr, occurred for the Resident Farmer (onsite) at 12,773 yr. The exposure pathways contributing to the peak dose are ingestion of plants, external gamma, and ingestion of milk, meat and soil. However, this receptor is considered an implausible receptor. The only receptors considered plausible are the Landfill Worker, Recreational User, Outdoor Worker and the Offsite Resident Farmer. The maximum projected dose among the plausible receptors is 220 mrem/yr for the Outdoor Worker and it occurs at 19,045 yr. The exposure pathways contributing to the dose for this receptor are external gamma and soil ingestion.
The results of the probabilistic peak of the mean dose analysis for the Offsite Resident Farmer indicate that the average (arithmetic mean) of the peak of the mean doses for this receptor is 0.98 mrem/yr and it occurs at 1,050 yr. This dose corresponds to Tc-99 within the time horizon of 1,050 yr.
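The deterministic-versus-probabilistic distinction drawn above can be illustrated with a minimal sketch; the dose expression and the parameter range below are invented placeholders, not RESRAD inputs:

```python
import numpy as np

def dose(conc_pci_g, intake_g_yr, dcf_mrem_per_pci):
    """Toy soil-ingestion dose model (not RESRAD): dose = C * IR * DCF."""
    return conc_pci_g * intake_g_yr * dcf_mrem_per_pci

# Deterministic: a single input value results in a single output value.
print(dose(5.0, 36.5, 0.001))

# Probabilistic: sample a parameter range, report the output distribution.
rng = np.random.default_rng(1)
intake = rng.triangular(18.0, 36.5, 73.0, size=10_000)  # assumed range
doses = dose(5.0, intake, 0.001)
print(doses.mean(), np.percentile(doses, 95))
```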
Mackay, Donald; Hughes, Lauren; Powell, David E; Kim, Jaeshin
2014-09-01
The QWASI fugacity mass balance model has been widely used since 1983 for both scientific and regulatory purposes to estimate the concentrations of organic chemicals in water and sediment, given an assumed rate of chemical emission, advective inflow in water or deposition from the atmosphere. It has become apparent that an updated version is required, especially to incorporate improved methods of obtaining input parameters such as partition coefficients. Accordingly, the model has been revised and it is now available in spreadsheet format. Changes to the model are described and the new version is applied to two chemicals, D5 (decamethylcyclopentasiloxane) and PCB-180, in two lakes, Lake Pepin (MN, USA) and Lake Ontario, showing the model's capability of illustrating both the chemical to chemical differences and lake to lake differences. Since there are now increased regulatory demands for rigorous sensitivity and uncertainty analyses, these aspects are discussed and two approaches are illustrated. It is concluded that the new QWASI water quality model can be of value for both evaluative and simulation purposes, thus providing a tool for obtaining an improved understanding of chemical mass balances in lakes, as a contribution to the assessment of fate and exposure and as a step towards the assessment of risk. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
Rosen, I G; Luczak, Susan E; Weiss, Jordan
2014-03-15
We develop a blind deconvolution scheme for input-output systems described by distributed parameter systems with boundary input and output. An abstract functional analytic theory based on results for the linear quadratic control of infinite dimensional systems with unbounded input and output operators is presented. The blind deconvolution problem is then reformulated as a series of constrained linear and nonlinear optimization problems involving infinite dimensional dynamical systems. A finite dimensional approximation and convergence theory is developed. The theory is applied to the problem of estimating blood or breath alcohol concentration (respectively, BAC or BrAC) from biosensor-measured transdermal alcohol concentration (TAC) in the field. A distributed parameter model with boundary input and output is proposed for the transdermal transport of ethanol from the blood through the skin to the sensor. The problem of estimating BAC or BrAC from the TAC data is formulated as a blind deconvolution problem. A scheme to identify distinct drinking episodes in TAC data based on a Hodrick-Prescott filter is discussed. Numerical results involving actual patient data are presented.
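A minimal sketch of the episode-identification step, assuming a synthetic TAC trace and using the Hodrick-Prescott filter as implemented in statsmodels (the smoothing parameter and threshold below are assumptions, not the paper's values):

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(0)
t = np.arange(600)  # minutes
# Hypothetical TAC trace: two drinking episodes plus sensor noise.
tac = (np.exp(-0.5 * ((t - 150) / 40) ** 2)
       + 0.7 * np.exp(-0.5 * ((t - 420) / 50) ** 2)
       + 0.03 * rng.standard_normal(t.size))

cycle, trend = hpfilter(tac, lamb=1600)  # smoothing parameter is an assumption
episodes = trend > 0.1                   # threshold choice is also an assumption
starts = np.flatnonzero(np.diff(episodes.astype(int)) == 1)
print(f"{starts.size} episode onsets detected")
```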
Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models
NASA Astrophysics Data System (ADS)
Ardani, S.; Kaihatu, J. M.
2012-12-01
Numerical models represent deterministic approaches used for the relevant physical processes in the nearshore. The complexity of the physics of the model and the uncertainty involved in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effects of input data involving lateral (Neumann) boundary conditions, bathymetry and off-shore wave conditions on nearshore numerical models are considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of the outputs is performed by random sampling from the input probability distribution functions and running the model as required until convergence to consistent results is achieved. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than by using prior information for the input data: the variation of the uncertain parameters decreases and the probability of the observed data improves as well. Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques, MCMC
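A minimal sketch of the Bayesian parameter-estimation step, with a cheap surrogate standing in for a Delft3D run and a random-walk Metropolis sampler (a simple MCMC variant; the prior, noise level and step size are all assumptions):

```python
import numpy as np

def surrogate(theta):
    """Cheap stand-in for an expensive model run (e.g., Delft3D)."""
    return theta[0] * np.sin(0.5 * x) + theta[1]

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 50)
obs = surrogate([1.2, 0.3]) + 0.1 * rng.standard_normal(x.size)

def log_post(theta):
    resid = obs - surrogate(theta)
    return -0.5 * np.sum(resid**2) / 0.1**2   # Gaussian likelihood, flat prior

theta, samples = np.array([1.0, 0.0]), []
lp = log_post(theta)
for _ in range(20_000):                        # random-walk Metropolis
    prop = theta + 0.05 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
post = np.array(samples[5_000:])               # discard burn-in
print(post.mean(axis=0), post.std(axis=0))
```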
Population-based human exposure models predict the distribution of personal exposures to pollutants of outdoor origin using a variety of inputs, including air pollution concentrations; human activity patterns, such as the amount of time spent outdoors versus indoors, commuting, w...
Impacts of Climate Change on Indirect Human Exposure to Pathogens and Chemicals from Agriculture
Boxall, Alistair B.A.; Hardy, Anthony; Beulke, Sabine; Boucard, Tatiana; Burgin, Laura; Falloon, Peter D.; Haygarth, Philip M.; Hutchinson, Thomas; Kovats, R. Sari; Leonardi, Giovanni; Levy, Leonard S.; Nichols, Gordon; Parsons, Simon A.; Potts, Laura; Stone, David; Topp, Edward; Turley, David B.; Walsh, Kerry; Wellington, Elizabeth M.H.; Williams, Richard J.
2009-01-01
Objective Climate change is likely to affect the nature of pathogens and chemicals in the environment and their fate and transport. Future risks of pathogens and chemicals could therefore be very different from those of today. In this review, we assess the implications of climate change for changes in human exposures to pathogens and chemicals in agricultural systems in the United Kingdom and discuss the subsequent effects on health impacts. Data sources In this review, we used expert input and considered literature on climate change; health effects resulting from exposure to pathogens and chemicals arising from agriculture; inputs of chemicals and pathogens to agricultural systems; and human exposure pathways for pathogens and chemicals in agricultural systems. Data synthesis We established the current evidence base for health effects of chemicals and pathogens in the agricultural environment; determined the potential implications of climate change on chemical and pathogen inputs in agricultural systems; and explored the effects of climate change on environmental transport and fate of different contaminant types. We combined these data to assess the implications of climate change in terms of indirect human exposure to pathogens and chemicals in agricultural systems. We then developed recommendations on future research and policy changes to manage any adverse increases in risks. Conclusions Overall, climate change is likely to increase human exposures to agricultural contaminants. The magnitude of the increases will be highly dependent on the contaminant type. Risks from many pathogens and particulate and particle-associated contaminants could increase significantly. These increases in exposure can, however, be managed for the most part through targeted research and policy changes. PMID:19440487
NASA Astrophysics Data System (ADS)
Nossent, Jiri; Pereira, Fernando; Bauwens, Willy
2015-04-01
Precipitation is one of the key inputs for hydrological models. As long as the values of the hydrological model parameters are fixed, a variation of the rainfall input is expected to induce a change in the model output. Given the increased awareness of uncertainty in rainfall records, it becomes more important to understand the impact of this input-output dynamic. Yet, modellers often still intend to mimic the observed flow, whatever the deviation of the employed records from the actual rainfall might be, by recklessly adapting the model parameter values. But is it actually possible to vary the model parameter values in such a way that a certain (observed) model output can be generated based on inaccurate rainfall inputs? Thus, how important is the rainfall uncertainty for the model output with respect to the model parameter importance? To address this question, we apply the Sobol' sensitivity analysis method to assess and compare the importance of the rainfall uncertainty and the model parameters for the output of the hydrological model. In order to treat the regular model parameters and the input uncertainty in the same way, and to allow a comparison of their influence, a possible approach is to represent the rainfall uncertainty by a parameter. To this end, we apply so-called rainfall multipliers on hydrologically independent storm events, as a probabilistic parameter representation of the possible rainfall variation. As available rainfall records are very often point measurements at a discrete time step (hourly, daily, monthly, ...), they contain uncertainty due to a latent lack of spatial and temporal variability. The influence of the latter variability can also differ for hydrological models with different spatial and temporal scales. Therefore, we perform the sensitivity analyses on a semi-distributed model (SWAT) and a lumped model (NAM). The assessment and comparison of the importance of the rainfall uncertainty and the model parameters is achieved by considering different scenarios for the included parameters and the state of the models.
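The idea of treating the rainfall multiplier as one more parameter in the Sobol' analysis can be sketched as follows, using the SALib package and a toy storage model in place of SWAT or NAM (the bounds and the model itself are assumptions):

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Toy lumped rainfall-runoff response (not SWAT/NAM): the rainfall multiplier
# is treated as just another parameter alongside two model parameters.
problem = {
    "num_vars": 3,
    "names": ["rain_multiplier", "runoff_coeff", "recession_k"],
    "bounds": [[0.7, 1.3], [0.1, 0.6], [0.01, 0.2]],
}

rain = np.array([0, 12, 30, 8, 0, 0, 5, 0])  # hypothetical storm event (mm)

def peak_flow(p):
    m, c, k = p
    store, peaks = 0.0, []
    for r in m * rain:
        store = store * (1 - k) + c * r
        peaks.append(store)
    return max(peaks)

X = saltelli.sample(problem, 1024)
Y = np.array([peak_flow(row) for row in X])
Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], Si["S1"])))  # first-order indices
```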
Update on ɛK with lattice QCD inputs
NASA Astrophysics Data System (ADS)
Jang, Yong-Chull; Lee, Weonjong; Lee, Sunkyu; Leem, Jaehoon
2018-03-01
We report updated results for ε_K, the indirect CP violation parameter in neutral kaons, which is evaluated directly from the standard model with lattice QCD inputs. We use lattice QCD inputs to fix B̂_K, |V_cb|, ξ_0, ξ_2, |V_us|, and m_c(m_c). Since Lattice 2016, the UTfit group has updated the Wolfenstein parameters in the angle-only-fit method, and the HFLAV group has also updated |V_cb|. Our results show that the evaluation of ε_K with exclusive |V_cb| (lattice QCD inputs) has 4.0σ tension with the experimental value, while that with inclusive |V_cb| (heavy quark expansion based on OPE and QCD sum rules) shows no tension.
NASA Technical Reports Server (NTRS)
Duong, N.; Winn, C. B.; Johnson, G. R.
1975-01-01
Two approaches to an identification problem in hydrology are presented, based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part in the inputs, and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time-invariant or time-dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and confirm the results from two previous studies; the first using numerical integration of the model equation along with a trial-and-error procedure, and the second using a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are embedded in noise.
CLASSIFYING MEDICAL IMAGES USING MORPHOLOGICAL APPEARANCE MANIFOLDS.
Varol, Erdem; Gaonkar, Bilwaj; Davatzikos, Christos
2013-12-31
Input features for medical image classification algorithms are extracted from raw images using a series of preprocessing steps. One common preprocessing step in computational neuroanatomy and functional brain mapping is the nonlinear registration of raw images to a common template space. Typically, the registration methods used are parametric and their output varies greatly with changes in parameters. Most results reported previously perform registration using a fixed parameter setting and use the results as input to the subsequent classification step. The variation in registration results due to choice of parameters thus translates to variation of performance of the classifiers that depend on the registration step for input. Analogous issues have been investigated in the computer vision literature, where image appearance varies with pose and illumination, thereby making classification vulnerable to these confounding parameters. The proposed methodology addresses this issue by sampling image appearances as registration parameters vary, and shows that better classification accuracies can be obtained this way, compared to the conventional approach.
Uncertainty analysis in geospatial merit matrix–based hydropower resource assessment
Pasha, M. Fayzul K.; Yeasmin, Dilruba; Saetern, Sen; ...
2016-03-30
Hydraulic head and mean annual streamflow, two main input parameters in hydropower resource assessment, are not measured at every point along the stream. Translation and interpolation are used to derive these parameters, resulting in uncertainties. This study estimates the uncertainties and their effects on the model output parameters: the total potential power and the number of potential locations (stream-reaches). These parameters are quantified through Monte Carlo Simulation (MCS) linked with a geospatial merit matrix based hydropower resource assessment (GMM-HRA) model. The methodology is applied to flat, mild, and steep terrains. Results show that the uncertainty associated with the hydraulic head is within 20% for mild and steep terrains, and the uncertainty associated with streamflow is around 16% for all three terrains. Output uncertainty increases as input uncertainty increases. However, output uncertainty is around 10% to 20% of the input uncertainty, demonstrating the robustness of the GMM-HRA model. The output parameters are more sensitive to hydraulic head in steep terrain than in flat and mild terrains. Furthermore, they are more sensitive to mean annual streamflow in flat terrain.
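A minimal sketch of the Monte Carlo propagation described above, assuming the usual hydropower relation P = ρgQH and normal input uncertainties scaled to the quoted percentages (the merit-matrix logic of GMM-HRA is omitted):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Potential power P = rho * g * Q * H (illustrative; no efficiency term).
rho, g = 1000.0, 9.81
H = rng.normal(10.0, 10.0 * 0.20 / 1.96, n)   # head (m), ~20% 95% band, assumed
Q = rng.normal(4.0, 4.0 * 0.16 / 1.96, n)     # flow (m^3/s), ~16% band, assumed

P = rho * g * Q * H / 1e6                     # MW
print(f"mean {P.mean():.2f} MW, CV {100 * P.std() / P.mean():.1f}%")
```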
Dynamic modal estimation using instrumental variables
NASA Technical Reports Server (NTRS)
Salzwedel, H.
1980-01-01
A method to determine the modes of dynamical systems is described. The inputs and outputs of a system are Fourier transformed and averaged to reduce the error level. An instrumental variable method that estimates modal parameters from multiple correlations between responses of single input, multiple output systems is applied to estimate aircraft, spacecraft, and off-shore platform modal parameters.
Econometric analysis of fire suppression production functions for large wildland fires
Thomas P. Holmes; David E. Calkin
2013-01-01
In this paper, we use operational data collected for large wildland fires to estimate the parameters of economic production functions that relate the rate of fireline construction with the level of fire suppression inputs (handcrews, dozers, engines and helicopters). These parameter estimates are then used to evaluate whether the productivity of fire suppression inputs...
A mathematical model for predicting fire spread in wildland fuels
Richard C. Rothermel
1972-01-01
A mathematical fire model for predicting rate of spread and intensity, applicable to a wide range of wildland fuels and environments, is presented. Methods of incorporating mixtures of fuel sizes are introduced by weighting input parameters by surface area. The input parameters do not require prior knowledge of the burning characteristics of the fuel.
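The surface-area weighting of fuel-size classes can be sketched as below; the weighting scheme follows the spirit of the model, but the fuel values are illustrative rather than a published fuel model:

```python
# Surface-area weighting of fuel-size classes, in the spirit of Rothermel (1972).
sav = [3500.0, 109.0, 30.0]     # surface-area-to-volume ratio (1/ft) per class
load = [0.034, 0.069, 0.103]    # fuel loading (lb/ft^2) per class
rho_p = 32.0                    # particle density (lb/ft^3), same for all classes

area = [s * w / rho_p for s, w in zip(sav, load)]   # relative surface area
f = [a / sum(area) for a in area]                   # weighting factors
sigma_char = sum(fi * s for fi, s in zip(f, sav))   # characteristic SAV ratio
print(f, sigma_char)
```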
The application of remote sensing to the development and formulation of hydrologic planning models
NASA Technical Reports Server (NTRS)
Castruccio, P. A.; Loats, H. L., Jr.; Fowler, T. R.
1976-01-01
A hydrologic planning model is developed based on remotely sensed inputs. Data from LANDSAT 1 are used to supply the model's quantitative parameters and coefficients. The use of LANDSAT data as information input to all categories of hydrologic models requiring quantitative surface parameters for their effective functioning is also investigated.
Harbaugh, Arien W.
2011-01-01
The MFI2005 data-input (entry) program was developed for use with the U.S. Geological Survey modular three-dimensional finite-difference groundwater model, MODFLOW-2005. MFI2005 runs on personal computers and is designed to be easy to use; data are entered interactively through a series of display screens. MFI2005 supports parameter estimation using the UCODE_2005 program. Data for MODPATH, a particle-tracking program for use with MODFLOW-2005, also can be entered using MFI2005. MFI2005 can be used in conjunction with other data-input programs so that the different parts of a model dataset can be entered by using the most suitable program.
Su, Fei; Wang, Jiang; Deng, Bin; Wei, Xi-Le; Chen, Ying-Yuan; Liu, Chen; Li, Hui-Yan
2015-02-01
The objective here is to explore the use of adaptive input-output feedback linearization method to achieve an improved deep brain stimulation (DBS) algorithm for closed-loop control of Parkinson's state. The control law is based on a highly nonlinear computational model of Parkinson's disease (PD) with unknown parameters. The restoration of thalamic relay reliability is formulated as the desired outcome of the adaptive control methodology, and the DBS waveform is the control input. The control input is adjusted in real time according to estimates of unknown parameters as well as the feedback signal. Simulation results show that the proposed adaptive control algorithm succeeds in restoring the relay reliability of the thalamus, and at the same time achieves accurate estimation of unknown parameters. Our findings point to the potential value of adaptive control approach that could be used to regulate DBS waveform in more effective treatment of PD.
Theoretic aspects of the identification of the parameters in the optimal control model
NASA Technical Reports Server (NTRS)
Vanwijk, R. A.; Kok, J. J.
1977-01-01
The identification of the parameters of the optimal control model from input-output data of the human operator is considered. Accepting the basic structure of the model as a cascade of a full-order observer and a feedback law, and suppressing the inherent optimality of the human controller, the parameters to be identified are the feedback matrix, the observer gain matrix, and the intensity matrices of the observation noise and the motor noise. The identification of the parameters is a statistical problem, because the system and output are corrupted by noise, and therefore the solution must be based on the statistics (probability density function) of the input and output data of the human operator. However, based on the statistics of the input-output data of the human operator, no distinction can be made between the observation and the motor noise, which shows that the model suffers from overparameterization.
Kaklamanos, James; Baise, Laurie G.; Boore, David M.
2011-01-01
The ground-motion prediction equations (GMPEs) developed as part of the Next Generation Attenuation of Ground Motions (NGA-West) project in 2008 are becoming widely used in seismic hazard analyses. However, these new models are considerably more complicated than previous GMPEs, and they require several more input parameters. When employing the NGA models, users routinely face situations in which some of the required input parameters are unknown. In this paper, we present a framework for estimating the unknown source, path, and site parameters when implementing the NGA models in engineering practice, and we derive geometrically-based equations relating the three distance measures found in the NGA models. Our intent is for the content of this paper not only to make the NGA models more accessible, but also to help with the implementation of other present or future GMPEs.
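One of the geometrically based relations of this kind, for the simplest special case of a vertical fault, can be written directly (the dipping-fault expressions in the paper are more involved; this special case is the only one reproduced here):

```python
import math

def r_rup_vertical_fault(r_jb, z_tor):
    """Rupture distance from Joyner-Boore distance for a vertical fault:
    R_rup = sqrt(R_JB^2 + Z_TOR^2), where Z_TOR is the depth to the top
    of rupture. Valid only for the vertical-fault special case."""
    return math.hypot(r_jb, z_tor)

print(r_rup_vertical_fault(r_jb=10.0, z_tor=3.0))  # km
```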
Fiedler, Thomas M; Ladd, Mark E; Bitz, Andreas K
2017-01-01
The purpose of this work was to perform an RF safety evaluation for a bilateral four-channel transmit/receive breast coil and to determine the maximum permissible input power for which RF exposure of the subject stays within recommended limits. The safety evaluation was done based on SAR as well as on temperature simulations. In comparison to SAR, temperature is more directly correlated with tissue damage, which allows a more precise safety assessment. The temperature simulations were performed by applying three different blood perfusion models as well as two different ambient temperatures. The goal was to evaluate whether the SAR and temperature distributions correlate inside the human body and whether SAR or temperature is more conservative with respect to the limits specified by the IEC. A simulation model was constructed including coil housing and MR environment. Lumped elements and feed networks were modeled by a network co-simulation. The model was validated by comparison of S-parameters and B1+ maps obtained in an anatomical phantom. Three numerical body models were generated based on 3 Tesla MRI images to conform to the coil housing. SAR calculations were performed and the maximal permissible input power was calculated based on IEC guidelines. Temperature simulations were performed based on the Pennes bioheat equation with the power absorption from the RF simulations as heat source. The blood perfusion was modeled as constant to reflect impaired patients as well as with a linear and exponential temperature-dependent increase to reflect two possible models for healthy subjects. Two ambient temperatures were considered to account for cooling effects from the environment. The simulation model was validated with a mean deviation of 3% between measurement and simulation results. The highest 10 g-averaged SAR was found in lung and muscle tissue on the right side of the upper torso. The maximum permissible input power was calculated to be 17 W. The temperature simulations showed that temperature maximums do not correlate well with the position of the SAR maximums in all considered cases. The body models with an exponential blood perfusion increase did not exceed the temperature limit when an RF power according to the SAR limit was applied; in this case, a higher input power level by up to 73% would be allowed. The models with a constant or linear perfusion exceeded the limit for the local temperature when the local SAR limit was adhered to and would require a decrease in the input power level by up to 62%. The maximum permissible input power was determined based on SAR simulations with three newly generated body models and compared with results from temperature simulations. While SAR calculations are state-of-the-art and well defined as they are based on more or less well-known material parameters, temperature simulations depend strongly on additional material, environmental and physiological parameters. The simulations demonstrated that more consideration needs to be made by the MR community in defining the parameters for temperature simulations in order to apply temperature limits instead of SAR limits in the context of MR RF safety evaluations. © 2016 American Association of Physicists in Medicine.
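A minimal 1-D sketch of the temperature-simulation ingredient, the Pennes bioheat equation with a constant-perfusion model, is given below; all tissue properties, the heating term and the boundary treatment are generic assumptions, not the paper's coil-specific setup:

```python
import numpy as np

# 1-D explicit finite-difference solution of the Pennes bioheat equation:
#   rho*c*dT/dt = k*d2T/dx2 + w_b*rho_b*c_b*(T_a - T) + Q_rf
k, rho, c = 0.5, 1050.0, 3600.0         # W/(m K), kg/m^3, J/(kg K), assumed
rho_b, c_b = 1050.0, 3600.0             # blood properties, assumed
w_b = 0.5e-3                            # perfusion rate (1/s), constant model
T_a = 37.0                              # arterial temperature (C)

nx, dx, dt = 100, 1e-3, 1.0             # 10 cm of tissue, 1 mm grid, 1 s step
T = np.full(nx, 37.0)
Q = np.zeros(nx)
Q[40:60] = 5e3                          # localized RF heating (W/m^3), assumed

for _ in range(1800):                   # 30 minutes of exposure
    lap = (np.roll(T, 1) - 2 * T + np.roll(T, -1)) / dx**2
    lap[0] = lap[-1] = 0.0              # insulated boundaries (simplification)
    T += dt / (rho * c) * (k * lap + w_b * rho_b * c_b * (T_a - T) + Q)

print(f"peak temperature: {T.max():.2f} C")
```

The explicit step size satisfies the usual stability bound dt < dx²·ρc/(2k) for these values; a perfusion model that grows with temperature would simply make w_b a function of T inside the loop.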
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lekadir, Karim, E-mail: karim.lekadir@upf.edu; Hoogendoorn, Corné; Armitage, Paul
Purpose: This paper presents a statistical approach for the prediction of trabecular bone parameters from low-resolution multisequence magnetic resonance imaging (MRI) in children, thus addressing the limitations of high-resolution modalities such as HR-pQCT, including the significant exposure of young patients to radiation and the limited applicability of such modalities to peripheral bones in vivo. Methods: A statistical predictive model is constructed from a database of MRI and HR-pQCT datasets, to relate the low-resolution MRI appearance in the cancellous bone to the trabecular parameters extracted from the high-resolution images. The description of the MRI appearance is achieved between subjects by using a collection of feature descriptors, which describe the texture properties inside the cancellous bone, and which are invariant to the geometry and size of the trabecular areas. The predictive model is built by fitting to the training data a nonlinear partial least square regression between the input MRI features and the output trabecular parameters. Results: Detailed validation based on a sample of 96 datasets shows correlations >0.7 between the trabecular parameters predicted from low-resolution multisequence MRI based on the proposed statistical model and the values extracted from high-resolution HR-pQCT. Conclusions: The obtained results indicate the promise of the proposed predictive technique for the estimation of trabecular parameters in children from multisequence MRI, thus reducing the need for high-resolution radiation-based scans for a fragile population that is under development and growth.
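A linear stand-in for the predictive step (scikit-learn's PLS regression on synthetic features; the paper's version is a nonlinear PLS fitted to real MRI texture descriptors) might look like:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(4)
n = 96                                    # sample size matching the abstract
X = rng.standard_normal((n, 40))          # stand-in MRI texture features
W = rng.standard_normal((40, 3))
Y = X @ W + 0.5 * rng.standard_normal((n, 3))   # stand-in trabecular parameters

pls = PLSRegression(n_components=5)
Y_hat = cross_val_predict(pls, X, Y, cv=8)      # cross-validated predictions
for j in range(Y.shape[1]):
    r = np.corrcoef(Y[:, j], Y_hat[:, j])[0, 1]
    print(f"parameter {j}: r = {r:.2f}")
```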
Duret, Steven; Guillier, Laurent; Hoang, Hong-Minh; Flick, Denis; Laguerre, Onrawee
2014-06-16
Deterministic models describing heat transfer and microbial growth in the cold chain are widely studied. However, it is difficult to apply them in practice because of several variable parameters in the logistic supply chain (e.g., ambient temperature varying with season and product residence time in refrigeration equipment), the product's characteristics (e.g., pH and water activity) and the microbial characteristics (e.g., initial microbial load and lag time). This variability can lead to different bacterial growth rates in food products and has to be considered to properly predict the consumer's exposure and identify the key parameters of the cold chain. This study proposes a new approach that combines deterministic (heat transfer) and stochastic (Monte Carlo) modeling to account for the variability in the logistic supply chain and the product's characteristics. Contrary to existing approaches that directly use a time-temperature profile, the proposed model generates a realistic product time-temperature history by predicting the product temperature evolution from the thermostat setting and the ambient temperature. The developed methodology was applied to the cold chain of cooked ham, including the display cabinet, transport by the consumer and the domestic refrigerator, to predict the evolution of state variables, such as the temperature and the growth of Listeria monocytogenes. The impacts of the input factors were calculated and ranked. It was found that the product's time-temperature history and the initial contamination level are the main causes of consumers' exposure. A refined analysis was then applied, revealing the importance of consumer behaviors on Listeria monocytogenes exposure. Copyright © 2014. Published by Elsevier B.V.
Dual side control for inductive power transfer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Hunter; Sealy, Kylee; Gilchrist, Aaron
An apparatus for dual side control includes a measurement module that measures a voltage and a current of an IPT system. The voltage includes an output voltage and/or an input voltage and the current includes an output current and/or an input current. The output voltage and the output current are measured at an output of the IPT system and the input voltage and the input current measured at an input of the IPT system. The apparatus includes a max efficiency module that determines a maximum efficiency for the IPT system. The max efficiency module uses parameters of the IPT system to iterate to a maximum efficiency. The apparatus includes an adjustment module that adjusts one or more parameters in the IPT system consistent with the maximum efficiency calculated by the max efficiency module.
Ruiz-Felter, Roxanna; Cooperson, Solaman J; Bedore, Lisa M; Peña, Elizabeth D
2016-07-01
Although some investigations of phonological development have found that segmental accuracy is comparable in monolingual children and their bilingual peers, there is evidence that language use affects segmental accuracy in both languages. To investigate the influence of age of first exposure to English and the amount of current input-output on phonological accuracy in English and Spanish in early bilingual Spanish-English kindergarteners. Also whether parent and teacher ratings of the children's intelligibility are correlated with phonological accuracy and the amount of experience with each language. Data for 91 kindergarteners (mean age = 5;6 years) were selected from a larger dataset focusing on Spanish-English bilingual language development. All children were from Central Texas, spoke a Mexican Spanish dialect and were learning American English. Children completed a single-word phonological assessment with separate forms for English and Spanish. The assessment was analyzed for segmental accuracy: percentage of consonants and vowels correct and percentage of early-, middle- and late-developing (EML) sounds correct were calculated. Children were more accurate on vowel production than consonant production and showed a decrease in accuracy from early to middle to late sounds. The amount of current input-output explained more of the variance in phonological accuracy than age of first English exposure. Although greater current input-output of a language was associated with greater accuracy in that language, English-dominant children were only significantly more accurate in English than Spanish on late sounds, whereas Spanish-dominant children were only significantly more accurate in Spanish than English on early sounds. Higher parent and teacher ratings of intelligibility in Spanish were correlated with greater consonant accuracy in Spanish, but the same did not hold for English. Higher intelligibility ratings in English were correlated with greater current English input-output, and the same held for Spanish. Current input-output appears to be a better predictor of phonological accuracy than age of first English exposure for early bilinguals, consistent with findings on the effect of language experience on performance in other language domains in bilingual children. Although greater current input-output in a language predicts higher accuracy in that language, this interacts with sound complexity. The results highlight the utility of the EML classification in assessing bilingual children's phonology. The relationships of intelligibility ratings with current input-output and sound accuracy can shed light on the process of referral of bilingual children for speech and language services. © 2016 Royal College of Speech and Language Therapists.
ERIC Educational Resources Information Center
Bisson, Marie-Josée; van Heuven, Walter J. B.; Conklin, Kathy; Tunney, Richard J.
2014-01-01
Prior research has reported incidental vocabulary acquisition with complete beginners in a foreign language (FL), within 8 exposures to auditory and written FL word forms presented with a picture depicting their meaning. However, important questions remain about whether acquisition occurs with fewer exposures to FL words in a multimodal situation…
Input design for identification of aircraft stability and control derivatives
NASA Technical Reports Server (NTRS)
Gupta, N. K.; Hall, W. E., Jr.
1975-01-01
An approach for designing inputs to identify stability and control derivatives from flight test data is presented. This approach is based on finding inputs which provide the maximum possible accuracy of derivative estimates. Two techniques of input specification are implemented for this objective - a time domain technique and a frequency domain technique. The time domain technique gives the control input time history and can be used for any allowable duration of test maneuver, including those where data lengths can only be of short duration. The frequency domain technique specifies the input frequency spectrum, and is best applied for tests where extended data lengths, much longer than the time constants of the modes of interest, are possible. These techniques are used to design inputs to identify parameters in longitudinal and lateral linear models of conventional aircraft. The constraints of aircraft response limits, such as on structural loads, are realized indirectly through a total energy constraint on the input. Tests with simulated data and theoretical predictions show that the new approaches give input signals which can provide more accurate parameter estimates than can conventional inputs of the same total energy. Results obtained indicate that the approach has been brought to the point where it should be used on flight tests for further evaluation.
Respiratory effects of cigarette smoke, dust, and histamine in newborn rabbits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trippenbach, T.; Kelly, G.
1988-02-01
We studied the respiratory effects of cigarette smoke, 5% histamine aerosol, and dust in unanesthetized 1- to 7-day-old rabbits in a body plethysmograph. Cigarette smoke immediately provoked the animal's arousal and irregular breathing. Histamine and dust had no effect in some of the youngest animals. In others, 5-15 s from the onset of the exposure to either of the two stimuli, respiratory rate increased and the depth of breathing decreased. These changes were more pronounced with age. The fact that effects of dust and aerosol lessened with time of exposure showed adaptation to the stimuli. The age dependence of the reflex response was also observed after injection of 50 micrograms of histamine per kilogram into the external jugular vein in anesthetized (50 mg ketamine + 3 mg acepromazine per kg) and tracheostomized rabbits during the 1st wk of life. In 1-day-old animals, a short-lasting excitation was followed by apnea or a prolongation of expiratory phase. Peak amplitude of the diaphragmatic EMG (EMGdi) increased in all animals, but only in the youngest was the EMGdi increase paralleled by an increase in tidal volume. In vagotomized animals or animals pretreated with H1-blocker, histamine never affected timing parameters in animals greater than 1 day old. In the youngest animals, respiratory depression due to histamine was not abolished after vagotomy or promethazine. The results imply that inputs from the upper airways and the rapidly adapting pulmonary mechanoreceptors exert their effects on the pattern of breathing immediately after birth in rabbits. The importance of those inputs increases with maturation.
NASA Astrophysics Data System (ADS)
Mishra, H.; Karmakar, S.; Kumar, R.
2016-12-01
Risk assessment does not remain simple when it involves multiple uncertain variables. Uncertainty in risk assessment results mainly from (1) the lack of knowledge of input variables (mostly random), and (2) data obtained from expert judgment or subjective interpretation of available information (non-random). An integrated probabilistic-fuzzy health risk approach has been proposed for the simultaneous treatment of the random and non-random uncertainties associated with the input parameters of a health risk model. LandSim 2.5, a landfill simulator, has been used to simulate the activities of the Turbhe landfill (Navi Mumbai, India) over various time horizons. The LandSim-simulated concentrations of six heavy metals in ground water have then been used in the health risk model. Water intake, exposure duration, exposure frequency, bioavailability and averaging time are treated as fuzzy variables, while the heavy metal concentrations and body weight are considered probabilistic variables. Identical alpha-cut and reliability levels are considered for the fuzzy and probabilistic variables, respectively, and the uncertainty in non-carcinogenic human health risk is estimated using ten thousand Monte Carlo simulations (MCS). This is the first effort in which all the health risk variables have been considered as non-deterministic for the estimation of uncertainty in the risk output. The non-exceedance probability of the Hazard Index (HI), the summation of hazard quotients, of the heavy metals Co, Cu, Mn, Ni, Zn and Fe for the male and female populations has been quantified and found to be high (HI>1) for all the considered time horizons, which clearly indicates the possibility of adverse health effects on the population residing near the Turbhe landfill.
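One way to combine the two kinds of uncertainty, sampling the probabilistic inputs while reducing the fuzzy ones to intervals at a chosen alpha-cut, is sketched below; the distributions, the triangular fuzzy number and the reference dose are all assumed values, and the exposure equation is simplified:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10_000

# Probabilistic inputs (assumed lognormal concentration, normal body weight).
conc = rng.lognormal(mean=np.log(0.05), sigma=0.4, size=n)   # mg/L
bw = rng.normal(60.0, 8.0, size=n)                           # kg

# Fuzzy input at a chosen alpha-cut: a triangular number reduces to an interval.
def alpha_cut(low, mode, high, alpha):
    return low + alpha * (mode - low), high - alpha * (high - mode)

ir_lo, ir_hi = alpha_cut(1.0, 2.0, 3.0, alpha=0.5)   # water intake (L/day)
rfd = 0.02   # reference dose (mg/kg/day), assumed

# Propagate: each Monte Carlo draw yields an interval of hazard quotients.
hq_lo = conc * ir_lo / (bw * rfd)
hq_hi = conc * ir_hi / (bw * rfd)
print("P(HQ upper bound > 1):", np.mean(hq_hi > 1.0))
```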
Hristozov, Danail; Zabeo, Alex; Alstrup Jensen, Keld; Gottardo, Stefania; Isigonis, Panagiotis; Maccalman, Laura; Critto, Andrea; Marcomini, Antonio
2016-11-01
Several tools to facilitate the risk assessment and management of manufactured nanomaterials (MN) have been developed. Most of them require input data on physicochemical properties, toxicity and scenario-specific exposure information. However, such data are yet not readily available, and tools that can handle data gaps in a structured way to ensure transparent risk analysis for industrial and regulatory decision making are needed. This paper proposes such a quantitative risk prioritisation tool, based on a multi-criteria decision analysis algorithm, which combines advanced exposure and dose-response modelling to calculate margins of exposure (MoE) for a number of MN in order to rank their occupational risks. We demonstrated the tool in a number of workplace exposure scenarios (ES) involving the production and handling of nanoscale titanium dioxide, zinc oxide (ZnO), silver and multi-walled carbon nanotubes. The results of this application demonstrated that bag/bin filling, manual un/loading and dumping of large amounts of dry powders led to high emissions, which resulted in high risk associated with these ES. The ZnO MN revealed considerable hazard potential in vivo, which significantly influenced the risk prioritisation results. In order to study how variations in the input data affect our results, we performed probabilistic Monte Carlo sensitivity/uncertainty analysis, which demonstrated that the performance of the proposed model is stable against changes in the exposure and hazard input variables.
MIRACAL: A mission radiation calculation program for analysis of lunar and interplanetary missions
NASA Technical Reports Server (NTRS)
Nealy, John E.; Striepe, Scott A.; Simonsen, Lisa C.
1992-01-01
A computational procedure and data base are developed for manned space exploration missions for which estimates are made for the energetic particle fluences encountered and the resulting dose equivalent incurred. The data base includes the following options: statistical or continuum model for ordinary solar proton events, selection of up to six large proton flare spectra, and galactic cosmic ray fluxes for elemental nuclei of charge numbers 1 through 92. The program requires an input trajectory definition information and specifications of optional parameters, which include desired spectral data and nominal shield thickness. The procedure may be implemented as an independent program or as a subroutine in trajectory codes. This code should be most useful in mission optimization and selection studies for which radiation exposure is of special importance.
Ban, Nobuhiko; Takahashi, Fumiaki; Ono, Koji; Hasegawa, Takayuki; Yoshitake, Takayasu; Katsunuma, Yasushi; Sato, Kaoru; Endo, Akira; Kai, Michiaki
2011-07-01
A web-based dose computation system, WAZA-ARI, is being developed for patients undergoing X-ray CT examinations. The system is implemented in Java on a Linux server running Apache Tomcat. Users choose scanning options and input parameters via a web browser over the Internet. Dose coefficients, which were calculated in a Japanese adult male phantom (JM phantom), are called upon user request and are summed over the scan range specified by the user to estimate a normalised dose. Tissue doses are finally computed based on the radiographic exposure (mA s) and the pitch factor. While dose coefficients are currently available only for limited CT scanner models, the system has achieved a high degree of flexibility and scalability without the use of commercial software.
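The dose arithmetic described above reduces to a short calculation; the coefficient values, scan range and the inverse-pitch scaling below are placeholders rather than WAZA-ARI data:

```python
# Normalized dose coefficients are summed over the scanned slices, then scaled
# by the tube current-time product and the pitch. All values are hypothetical.
organ_coeff_per_slice = [0.002, 0.015, 0.020, 0.012, 0.003]  # mGy/mAs, assumed
scan_range = slice(1, 4)            # slices selected by the user

mas = 150.0                         # tube current-time product (mAs)
pitch = 1.2

normalized = sum(organ_coeff_per_slice[scan_range])
tissue_dose = normalized * mas / pitch   # inverse-pitch scaling is an assumption
print(f"{tissue_dose:.2f} mGy")
```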
Papanastasiou, Giorgos; Williams, Michelle C; Kershaw, Lucy E; Dweck, Marc R; Alam, Shirjel; Mirsadraee, Saeed; Connell, Martin; Gray, Calum; MacGillivray, Tom; Newby, David E; Semple, Scott Ik
2015-02-17
Mathematical modeling of cardiovascular magnetic resonance perfusion data allows absolute quantification of myocardial blood flow. Saturation of the left ventricle signal during standard contrast administration can compromise the input function used when applying these models. This saturation effect is evident during application of standard Fermi models in single bolus perfusion data. Dual bolus injection protocols have been suggested to eliminate saturation but are much less practical in the clinical setting. The distributed parameter model can also be used for absolute quantification but has not been applied in patients with coronary artery disease. We assessed whether distributed parameter modeling might be less dependent on arterial input function saturation than Fermi modeling in healthy volunteers. We validated the accuracy of each model in detecting reduced myocardial blood flow in stenotic vessels against gold-standard invasive methods. Eight healthy subjects were scanned using a dual bolus cardiac perfusion protocol at 3T. We performed both single and dual bolus analysis of these data using the distributed parameter and Fermi models. For the dual bolus analysis, a scaled pre-bolus arterial input function was used. In single bolus analysis, the arterial input function was extracted from the main bolus. We also performed analysis using both models of single bolus data obtained from five patients with coronary artery disease, and findings were compared against independent invasive coronary angiography and fractional flow reserve. Statistical significance was defined as a two-sided P value < 0.05. Fermi models overestimated myocardial blood flow in healthy volunteers due to arterial input function saturation in single bolus analysis compared to dual bolus analysis (P < 0.05). No difference in distributed parameter myocardial blood flow was observed in these volunteers between single and dual bolus analyses. In patients, distributed parameter modeling was able to detect reduced myocardial blood flow at stress (<2.5 mL/min/mL of tissue) in all 12 stenotic vessels, compared to only 9 for Fermi modeling. Comparison of single bolus versus dual bolus values suggests that distributed parameter modeling is less dependent on arterial input function saturation than Fermi modeling. Distributed parameter modeling showed excellent accuracy in detecting reduced myocardial blood flow in all stenotic vessels.
The Detroit Exposure and Aerosol Research Study
The Detroit Exposure and Aerosol Research Study (DEARS) represents an intensive examination of personal, residential and community-based particulate matter and related co-pollutant measurements in Detroit, Michigan. Data from the DEARS will be used as inputs into air quality, la...
Noise Exposure Model MOD-5 : Volume 1
DOT National Transportation Integrated Search
1971-06-01
The report contains three sections. The first two sections are contained in Volume 1. It contains an airport analysis which describes the noise exposure model MOD-5 from the perspective of analysing an airport in order to develop the program input mo...
Impacts of Lateral Boundary Conditions on US Ozone ...
Chemical boundary conditions are a key input to regional-scale photochemical models. In this study, we perform annual simulations over North America with chemical boundary conditions prepared from two global models (GEOS-CHEM and Hemispheric CMAQ). Results indicate that the impacts of different boundary conditions on ozone can be significant throughout the year. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas. CED uses modeling-based approaches to characterize exposures, evaluate fate and transport, and support environmental diagnostics/forensics with input from multiple data sources. It also develops media- and receptor-specific models, process models, and decision support tools for use both within and outside of EPA.
Lv, Yueyong; Hu, Qinglei; Ma, Guangfu; Zhou, Jiakang
2011-10-01
This paper treats the problem of synchronized control of spacecraft formation flying (SFF) in the presence of input constraints and parameter uncertainties. More specifically, backstepping-based robust control is first developed for the full 6 DOF dynamic model of SFF with parameter uncertainties, in which the model consists of relative translation and attitude rotation. This controller is then redesigned to deal with the input constraint problem by incorporating a command filter, such that the generated control remains implementable even under physical or operating constraints on the control input. The convergence of the proposed control algorithms is proved by the Lyapunov stability theorem. Compared with conventional methods, illustrative simulations of spacecraft formation flying are conducted to verify the effectiveness of the proposed approach in enabling the spacecraft to track the desired attitude and position trajectories in a synchronized fashion, even in the presence of uncertainties, external disturbances and control saturation constraints. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
Ming, Y; Peiwen, Q
2001-03-01
Understanding ultrasonic motor performance as a function of input parameters, such as voltage amplitude, driving frequency and the preload on the rotor, is key to many applications and to the control of ultrasonic motors. This paper presents performance estimation of a piezoelectric rotary traveling wave ultrasonic motor as a function of input voltage amplitude, driving frequency and preload. The Love equation is used to derive the traveling wave amplitude on the stator surface. With a contact model of distributed springs and a rigid body between the stator and rotor, a two-dimensional analytical model of the rotary traveling wave ultrasonic motor is constructed. The steady rotation speed and stall torque performance are then deduced. Using MATLAB and an iterative algorithm, we estimate the rotation speed and stall torque versus the respective input parameters. The same experiments were completed with the optoelectronic tachometer and stand weight. Both the estimated and experimental results reveal the pattern of performance variation as a function of the input parameters.
Functional Differences between Statistical Learning with and without Explicit Training
ERIC Educational Resources Information Center
Batterink, Laura J.; Reber, Paul J.; Paller, Ken A.
2015-01-01
Humans are capable of rapidly extracting regularities from environmental input, a process known as statistical learning. This type of learning typically occurs automatically, through passive exposure to environmental input. The presumed function of statistical learning is to optimize processing, allowing the brain to more accurately predict and…
Desktop Application Program to Simulate Cargo-Air-Drop Tests
NASA Technical Reports Server (NTRS)
Cuthbert, Peter
2009-01-01
The DSS Application is a computer program comprising a Windows version of the UNIX-based Decelerator System Simulation (DSS) coupled with an Excel front end. The DSS is an executable code that simulates the dynamics of airdropped cargo from first motion in an aircraft through landing. The bare DSS is difficult to use; the front end makes it easy to use. All inputs to the DSS, control of execution of the DSS, and postprocessing and plotting of outputs are handled in the front end. The front end is graphics-intensive. The Excel software provides the graphical elements without need for additional programming. Categories of input parameters are divided into separate tabbed windows. Pop-up comments describe each parameter. An error-checking software component evaluates combinations of parameters and alerts the user if an error results. Case files can be created from inputs, making it possible to build cases from previous ones. Simulation output is plotted in 16 charts displayed on a separate worksheet, enabling plotting of multiple DSS cases with flight-test data. Variables assigned to each plot can be changed. Selected input parameters can be edited from the plot sheet for quick sensitivity studies.
Automated method for the systematic interpretation of resonance peaks in spectrum data
Damiano, B.; Wood, R.T.
1997-04-22
A method is described for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model. The neural network training set is developed by using the mathematical model to generate measurable phenomena of the system or process based upon model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e. measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process, and, consequently, correspond to the physical condition of the process or system. 1 fig.
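A minimal sketch of the training-set idea, with a two-parameter toy model standing in for the physics model and an MLP learning the inverse map from spectral features to the model input parameters (all names and values are hypothetical):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)

# Stand-in for the physics model: maps condition parameters to resonance-peak
# features (frequency, amplitude). A real application would use the actual model.
def simulate_spectrum_features(params):
    stiffness, damping = params
    freq = np.sqrt(stiffness)     # toy resonance frequency
    amp = 1.0 / damping           # toy peak amplitude
    return np.array([freq, amp])

# Build the training set from the model, as the patent describes.
P = np.column_stack([rng.uniform(100, 400, 2000), rng.uniform(0.5, 5.0, 2000)])
F = np.array([simulate_spectrum_features(p) for p in P])

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(F, P)                                   # spectral features -> condition

measured = simulate_spectrum_features([250.0, 2.0])
print(net.predict(measured.reshape(1, -1)))     # estimated physical condition
```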
Meter circuit for tuning RF amplifiers
NASA Technical Reports Server (NTRS)
Longthorne, J. E.
1973-01-01
Circuit computes and indicates efficiency of RF amplifier as inputs and other parameters are varied. Voltage drop across internal resistance of ammeter is amplified by operational amplifier and applied to one multiplier input. Other input is obtained through two resistors from positive terminal of power supply.
Ring rolling process simulation for microstructure optimization
NASA Astrophysics Data System (ADS)
Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio
2017-10-01
Metal undergoes complicated microstructural evolution during Hot Ring Rolling (HRR), which determines the quality, mechanical properties and life of the ring formed. One of the principal microstructural properties that most strongly influences the structural performance of forged components is the average grain size. In the present paper a ring rolling process has been studied and optimized in order to obtain annular components for aerospace applications. In particular, the influence of the process input parameters (feed rate of the mandrel and angular velocity of the driver roll) on the microstructural and geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model for HRR has been developed in SFTC DEFORM V11, also taking into account the microstructural development of the material used (the nickel superalloy Waspaloy). The Finite Element (FE) model has been used to formulate a proper optimization problem. The optimization procedure has been developed to find the combination of process parameters that minimizes the average grain size. The Response Surface Methodology (RSM) has been used to find the relationship between input and output parameters, using the exact values of the output parameters at the control points of a design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for each combination of the input parameters. An optimization procedure based on Genetic Algorithms has then been applied. Finally, the input parameters yielding the minimum average grain size have been identified.
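The RSM-plus-GA loop described above is straightforward to prototype. The Python sketch below fits a quadratic response surface to invented "control point" results (standing in for the DEFORM FEM runs, which are not reproduced here) and then minimizes it with a minimal genetic algorithm; the parameter ranges, grain-size values and GA settings are all illustrative assumptions.

```python
import numpy as np

# Hypothetical FEM control points: mandrel feed rate f (mm/s) and
# driver-roll angular velocity w (rad/s) versus average grain size d (um).
X = np.array([[0.5, 2.0], [0.5, 3.0], [0.5, 4.0],
              [1.0, 2.0], [1.0, 3.0], [1.0, 4.0],
              [1.5, 2.0], [1.5, 3.0], [1.5, 4.0]])
d = np.array([42.0, 38.5, 40.1, 39.2, 35.8, 37.0, 41.5, 37.9, 39.8])

# Quadratic response surface: d ~ b0 + b1 f + b2 w + b3 f^2 + b4 w^2 + b5 f w
def design(X):
    f, w = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(f), f, w, f**2, w**2, f * w])

beta, *_ = np.linalg.lstsq(design(X), d, rcond=None)

def rsm(X):
    return design(np.atleast_2d(X)) @ beta

# Minimal genetic algorithm searching the explored design space.
rng = np.random.default_rng(0)
lo, hi = np.array([0.5, 2.0]), np.array([1.5, 4.0])
pop = rng.uniform(lo, hi, size=(40, 2))
for _ in range(60):
    parents = pop[np.argsort(rsm(pop))[:20]]           # selection: keep best half
    children = parents[rng.integers(0, 20, 20)]        # cloning
    children += rng.normal(0.0, 0.05, children.shape)  # mutation
    pop = np.clip(np.vstack([parents, children]), lo, hi)

best = pop[np.argmin(rsm(pop))]
print("feed rate and roll speed minimizing predicted grain size:", best)
```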
VizieR Online Data Catalog: Planetary atmosphere radiative transport code (Garcia Munoz+ 2015)
NASA Astrophysics Data System (ADS)
Garcia Munoz, A.; Mills, F. P.
2014-08-01
Files are: * readme.txt * Input files: INPUT_hazeL.txt, INPUT_L13.txt, INPUT_L60.txt; they contain explanations of the input parameters. Copy INPUT_XXXX.txt into INPUT.dat to execute some of the examples described in the reference. * Files with scattering matrix properties: phF_hazeL.txt, phF_L13.txt, phF_L60.txt * Script for compilation in GFortran (myscript) (10 data files).
Robust Blind Learning Algorithm for Nonlinear Equalization Using Input Decision Information.
Xu, Lu; Huang, Defeng David; Guo, Yingjie Jay
2015-12-01
In this paper, we propose a new blind learning algorithm, namely, the Benveniste-Goursat input-output decision (BG-IOD), to enhance the convergence performance of neural network-based equalizers for nonlinear channel equalization. In contrast to conventional blind learning algorithms, where only the output of the equalizer is employed for updating system parameters, the BG-IOD exploits a new type of extra information, the input decision information obtained from the input of the equalizer, to mitigate the influence of the nonlinear equalizer structure on parameter learning, thereby leading to improved convergence performance. We prove that, with the input decision information, a desirable convergence property can be achieved: the output symbol error rate (SER) is always less than the input SER whenever the input SER is below a threshold. The BG soft-switching technique is then employed to combine the merits of both input and output decision information, where the former is used to guarantee SER convergence and the latter to improve SER performance. Simulation results show that the proposed algorithm outperforms conventional blind learning algorithms, such as the stochastic quadratic distance and dual mode constant modulus algorithms, in terms of both convergence performance and SER performance for nonlinear equalization.
An Adaptive Resonance Theory account of the implicit learning of orthographic word forms.
Glotin, H; Warnier, P; Dandurand, F; Dufau, S; Lété, B; Touzet, C; Ziegler, J C; Grainger, J
2010-01-01
An Adaptive Resonance Theory (ART) network was trained to identify unique orthographic word forms. Each word input to the model was represented as an unordered set of ordered letter pairs (open bigrams) that implement a flexible prelexical orthographic code. The network learned to map this prelexical orthographic code onto unique word representations (orthographic word forms). The network was trained on a realistic corpus of reading textbooks used in French primary schools. The amount of training was strictly identical to children's exposure to reading material from grade 1 to grade 5. Network performance was examined at each grade level. Adjustment of the learning and vigilance parameters of the network allowed us to reproduce the developmental growth of word identification performance seen in children. The network exhibited a word frequency effect and was found to be sensitive to the order of presentation of word inputs, particularly with low frequency words. These words were better learned with a randomized presentation order compared with the order of presentation in the school books. These results open up interesting perspectives for the application of ART networks in the study of the dynamics of learning to read. 2009 Elsevier Ltd. All rights reserved.
COSP for Windows: Strategies for Rapid Analyses of Cyclic Oxidation Behavior
NASA Technical Reports Server (NTRS)
Smialek, James L.; Auping, Judith V.
2002-01-01
COSP is a publicly available computer program that models the cyclic oxidation weight gain and spallation process. Inputs to the model include the selection of an oxidation growth law and a spalling geometry, plus oxide phase, growth rate, spall constant, and cycle duration parameters. Output includes weight change, the amounts of retained and spalled oxide, the total oxygen and metal consumed, and the terminal rates of weight loss and metal consumption. The present version is Windows-based and can accordingly be operated conveniently while other applications remain open for importing experimental weight change data, storing model output data, or plotting model curves. Point-and-click operating features include multiple drop-down menus for input parameters, data importing, and quick, on-screen plots showing one selection of the six output parameters for up to 10 models. A run summary text lists various characteristic parameters that are helpful in describing cyclic behavior, such as the maximum weight change, the number of cycles to reach the maximum weight gain or zero weight change, the ratio of these, and the final rate of weight loss. The program includes save and print options as well as a help file. Families of model curves readily show the sensitivity to various input parameters. The cyclic behaviors of nickel aluminide (NiAl) and a complex superalloy are shown to be properly fitted by model curves. However, caution is always advised regarding the uniqueness claimed for any specific set of input parameters.
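COSP itself is a NASA-distributed program; the fragment below is only a minimal Python sketch of the kind of bookkeeping such a model performs, assuming parabolic scale growth and a spall fraction proportional to the retained oxide. The rate constant kp, spall constant q0 and cycle count are invented; sr is the oxygen mass fraction of the oxide.

```python
import numpy as np

def cyclic_oxidation(kp=0.01, q0=0.002, sr=0.47, cycles=500, dt=1.0):
    """Net specimen weight change per cycle (mg/cm^2).

    kp : parabolic oxide growth constant (assumed value)
    q0 : spall constant; fraction of retained scale spalled per cooldown
    sr : oxygen mass fraction of the oxide (~0.47 for alumina)
    """
    Wr, Ws, net = 0.0, 0.0, []   # retained and cumulative spalled oxide
    for _ in range(cycles):
        # parabolic growth during one hot dwell, resuming from retained scale
        Wr = np.sqrt(Wr**2 + kp * dt)
        # a fraction of the retained scale spalls on cooldown
        spall = min(1.0, q0 * Wr) * Wr
        Wr -= spall
        Ws += spall
        # weight change = oxygen gained by all oxide formed - spalled oxide lost
        net.append(sr * (Wr + Ws) - Ws)
    return np.array(net)

curve = cyclic_oxidation()
print("max weight change:", curve.max().round(4),
      "final weight change:", curve[-1].round(4))
```

The curve rises to a maximum and then turns to net weight loss as cumulative spallation overtakes oxygen uptake, which is the qualitative behavior the program is designed to fit.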
Development of advanced techniques for rotorcraft state estimation and parameter identification
NASA Technical Reports Server (NTRS)
Hall, W. E., Jr.; Bohn, J. G.; Vincent, J. H.
1980-01-01
An integrated methodology for rotorcraft system identification comprises rotorcraft mathematical modeling, three distinct data-processing steps, and a technique for designing inputs to improve the identifiability of the data. These elements are as follows: (1) a Kalman filter smoother algorithm, which estimates states and sensor errors from error-corrupted data; gust time histories and statistics may also be estimated; (2) a model structure estimation algorithm for isolating a model which adequately explains the data; (3) a maximum likelihood algorithm for estimating the parameters and the variances of those estimates; and (4) an input design algorithm, based on a maximum likelihood approach, which provides inputs to improve the accuracy of parameter estimates. Each step is discussed, with examples applied to both flight and simulated data.
Emissions-critical charge cooling using an organic rankine cycle
Ernst, Timothy C.; Nelson, Christopher R.
2014-07-15
The disclosure provides a system including a Rankine power cycle cooling subsystem providing emissions-critical charge cooling of an input charge flow. The system includes a boiler fluidly coupled to the input charge flow, an energy conversion device fluidly coupled to the boiler, a condenser fluidly coupled to the energy conversion device, a pump fluidly coupled to the condenser and the boiler, an adjuster that adjusts at least one parameter of the Rankine power cycle subsystem to change a temperature of the input charge exiting the boiler, and a sensor adapted to sense a temperature characteristic of the vaporized input charge. The system includes a controller that can determine a target temperature of the input charge sufficient to meet or exceed predetermined target emissions and cause the adjuster to adjust at least one parameter of the Rankine power cycle to achieve the predetermined target emissions.
IMPROVING EXPOSURE DATA INPUTS NEEDED TO ASSESS ENVIRONMENTAL RISKS OF OLDER ADULTS
An abstract has been prepared for the 2006 EPA Science forum that describes work to develop a scientific understanding of differential exposures, activities, and dose in aging populations and potentially susceptible subpopulations. When combined with information being developed ...
Master control data handling program uses automatic data input
NASA Technical Reports Server (NTRS)
Alliston, W.; Daniel, J.
1967-01-01
General purpose digital computer program is applicable for use with analysis programs that require basic data and calculated parameters as input. It is designed to automate input data preparation for flight control computer programs, but it is general enough to permit application in other areas.
1998-01-01
Report fragment; only table-of-contents and figure-caption text is recoverable: C-arms and digital fluoroscopy; image intensifier (II) input exposure rates and exposures; setup for ESE measurements in conventional radiography; setup for II input exposure rate measurements; exposure from dental procedures, noting that the dose received during panoramic radiography has not been well addressed.
NASA Astrophysics Data System (ADS)
Zuclich, Joseph A.
1980-10-01
Ocular effects of ultraviolet radiation, 200-400 nm, are reviewed. Depending upon the exposure parameters involved, UV radiation may be harmful to the cornea, lens and/or retina. Ranges of exposure parameters (wavelength, exposure duration, etc.) for which each of these tissues is susceptible are specified, and the nature of the resulting damage is described. Present understanding of the thermal and photochemical damage mechanisms operative under various conditions of exposure is discussed. Ocular damage thresholds for wide ranges of exposure parameters are summarized and compared to existing safety standards.
NASA Technical Reports Server (NTRS)
Myers, Jerry G.; Young, M.; Goodenow, Debra A.; Keenan, A.; Walton, M.; Boley, L.
2015-01-01
Model and simulation (MS) credibility is defined as the quality to elicit belief or trust in MS results. NASA-STD-7009 [1] delineates eight components (Verification, Validation, Input Pedigree, Results Uncertainty, Results Robustness, Use History, MS Management, People Qualifications) that address quantifying model credibility, and provides guidance to model developers, analysts, and end users for assessing MS credibility. Of these eight components, input pedigree, or the quality of the data used to develop model input parameters, governing functions, or initial conditions, can vary significantly. These data quality differences have varying consequences across the range of MS application. NASA-STD-7009 requires that the lowest input data quality be used to represent the entire set of input data when scoring the input pedigree credibility of the model. This requirement provides a conservative assessment of model inputs and maximizes communication of the potential level of risk in using model outputs. Unfortunately, in practice, this may result in overly pessimistic communication of the MS output, undermining the credibility of simulation predictions to decision makers. This presentation proposes an alternative assessment mechanism, utilizing results parameter robustness, also known as model input sensitivity, to improve the credibility scoring process for specific simulations.
Chen, Ting Y; Zhang, Die; Dragomir, Andrei; Akay, Yasemin; Akay, Metin
2011-05-01
We investigated the influence of nicotine exposure and prefrontal cortex (PFC) transections on the firing activities of ventral tegmental area (VTA) dopamine (DA) neurons using a time-frequency method based on the continuous wavelet transform (CWT). Extracellular single-unit neural activity was recorded from DA neurons in the VTA of rats. One group had their PFC inputs to the VTA intact, while the other group had the inputs to the VTA bilaterally transected immediately caudal to the PFC. We hypothesized that systemic nicotine exposure would significantly change the energy distribution in the recorded neural activity. Additionally, we investigated whether the loss of inputs to the VTA caused by the PFC transection resulted in the cancellation of nicotine's effect on the neurons' firing patterns. The time-frequency representations of VTA DA neuron firing activity were estimated from the reconstructed firing rate histogram. The energy contents were estimated for three frequency bands, which are known to encompass the significant modes of operation of DA neurons. Our results show that systemic nicotine exposure disrupts the energy distribution in PFC-intact rats. In particular, there is a significant increase in the energy content of the 1-1.5 Hz frequency band. This corresponds to an observed increase in the firing rate of VTA DA neurons following nicotine exposure. Additionally, our results from PFC-transected rats show that there is no change in the energy distribution of the recordings after systemic nicotine exposure. These results indicate that the PFC plays an important role in affecting the activities of VTA DA neurons and that the CWT is a useful method for monitoring changes in neural activity patterns in both the time and frequency domains.
Bach, Martin; Diesner, Mirjam; Großmann, Dietlinde; Guerniche, Djamal; Hommen, Udo; Klein, Michael; Kubiak, Roland; Müller, Alexandra; Priegnitz, Jan; Reichenberger, Stefan; Thomas, Kai; Trapp, Matthias
2016-07-01
In 2001, the European Commission introduced a risk assessment project known as FOCUS (FOrum for the Coordination of pesticide fate models and their USe) for the surface water risk assessment of active substances in the European Union. Even for the national authorisation of plant protection products (PPPs), the vast majority of EU member states still refer to the four runoff and six drainage scenarios selected by the FOCUS Surface Water Workgroup. However, our study, as well as the European Food Safety Authority (EFSA), has stated the need for various improvements. Current developments in pesticide exposure assessment mainly relate to two processes. Firstly, predicted environmental concentrations (PECs) of pesticides are calculated by introducing model input variables such as weather conditions, soil properties and substance fate parameters that have a probabilistic nature. Secondly, spatially distributed PECs for soil-climate scenarios are derived on the basis of an analysis of geodata. Such approaches facilitate the calculation of a spatiotemporal cumulative distribution function (CDF) of PECs for a given area of interest and are subsequently used to determine an exposure concentration endpoint as a given percentile of the CDF. For national PPP authorisation, we propose that, in the future, exposure endpoints should be determined from the overall known statistical PEC population for an area of interest, and derived for soil and climate conditions specific to the particular member state. © 2016 Society of Chemical Industry.
Lee, Eleanor S.; Geisler-Moroder, David; Ward, Gregory
2017-12-23
Simulation tools that enable annual energy performance analysis of optically-complex fenestration systems have been widely adopted by the building industry for use in building design, code development, and the development of rating and certification programs for commercially-available shading and daylighting products. The tools rely on a three-phase matrix operation to compute solar heat gains, using as input low-resolution bidirectional scattering distribution function (BSDF) data (10–15° angular resolution; BSDF data define the angle-dependent behavior of light-scattering materials and systems). Measurement standards and product libraries for BSDF data are undergoing development to support solar heat gain calculations. Simulation of other metrics such as discomfort glare, annual solar exposure, and potentially thermal discomfort, however, require algorithms and BSDF input data that more accurately model the spatial distribution of transmitted and reflected irradiance or illuminance from the sun (0.5° resolution). This study describes such algorithms and input data, then validates the tools (i.e., an interpolation tool for measured BSDF data and the five-phase method) through comparisons with ray-tracing simulations and field monitored data from a full-scale testbed. Simulations of daylight-redirecting films, a micro-louvered screen, and venetian blinds using variable resolution, tensor tree BSDF input data derived from interpolated scanning goniophotometer measurements were shown to agree with field monitored data to within 20% for greater than 75% of the measurement period for illuminance-based performance parameters. The three-phase method delivered significantly less accurate results. We discuss the ramifications of these findings on industry and provide recommendations to increase end user awareness of the current limitations of existing software tools and BSDF product libraries.
NASA Astrophysics Data System (ADS)
Chowdhury, S.; Sharma, A.
2005-12-01
Hydrological model inputs are often derived from measurements at point locations taken at discrete time steps. The nature of uncertainty associated with such inputs is thus a function of the quality and number of measurements available in time. A change in these characteristics (such as a change in the number of rain-gauge inputs used to derive spatially averaged rainfall) results in inhomogeneity in the associated distributional profile. Ignoring such uncertainty can lead to models that aim to simulate the observed input variable instead of the true measurement, resulting in a biased representation of the underlying system dynamics as well as an increase in both bias and predictive uncertainty in simulations. This is especially true of cases where the nature of uncertainty likely in the future is significantly different to that in the past. Possible examples include situations where the accuracy of the catchment-averaged rainfall has increased substantially due to an increase in rain-gauge density, or where the accuracy of climatic observations (such as sea surface temperatures) has increased due to the use of more accurate remote sensing technologies. We introduce here a method to ascertain the true value of parameters in the presence of additive uncertainty in model inputs. This method, known as SIMulation EXtrapolation (SIMEX, [Cook, 1994]), operates on the basis of an empirical relationship between parameters and the level of additive input noise (or uncertainty). The method starts by generating a series of alternate realisations of model inputs by artificially adding white noise in increasing multiples of the known error variance. The alternate realisations lead to alternate sets of parameters that are increasingly biased with respect to the truth due to the increased variability in the inputs. Once several such realisations have been drawn, one is able to formulate an empirical relationship between the parameter values and the level of additive noise present. SIMEX is based on the theory that the trend in alternate parameters can be extrapolated back to the notional error-free zone. We illustrate the utility of SIMEX in a synthetic rainfall-runoff modelling scenario and in an application studying the dependence of uncertain, spatially distributed sea surface temperature anomalies on an indicator of the El Niño Southern Oscillation, the Southern Oscillation Index (SOI). The errors in rainfall data and their effect are explored using the Sacramento rainfall-runoff model. The rainfall uncertainty is assumed to be multiplicative and temporally invariant. The model used to relate the sea surface temperature anomalies (SSTA) to the SOI is assumed to be of a linear form. The nature of uncertainty in the SSTA is additive and varies with time. The SIMEX framework allows assessment of the relationship between the error-free inputs and the response. Cook, J.R., Stefanski, L.A., Simulation-Extrapolation Estimation in Parametric Measurement Error Models, Journal of the American Statistical Association, 89 (428), 1314-1328, 1994.
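The SIMEX recipe described in the abstract is easy to demonstrate on a toy errors-in-variables regression. In the Python sketch below, the synthetic data, one-parameter model and quadratic extrapolant are all assumptions standing in for the authors' hydrological setup: the naive slope estimate is attenuated by input noise, and the extrapolation back to lambda = -1 largely removes the bias.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic truth: y = a * x, but x is observed with known additive error.
a_true, sigma_u, n = 2.0, 0.5, 500
x_true = rng.normal(0.0, 1.0, n)
x_obs = x_true + rng.normal(0.0, sigma_u, n)     # error-corrupted input
y = a_true * x_true + rng.normal(0.0, 0.1, n)

def slope(x):                                    # regression through origin
    return np.sum(x * y) / np.sum(x * x)

naive = slope(x_obs)                             # attenuated toward zero

# SIMEX: add extra white noise in increasing multiples lam of the known
# error variance, re-estimate, then extrapolate the trend back to lam = -1,
# the notional error-free point.
lams = np.array([0.5, 1.0, 1.5, 2.0])
est = [np.mean([slope(x_obs + rng.normal(0.0, np.sqrt(lam) * sigma_u, n))
                for _ in range(200)])
       for lam in lams]

coef = np.polyfit(np.r_[0.0, lams], np.r_[naive, est], 2)
print("naive:", round(naive, 3),
      "SIMEX:", round(float(np.polyval(coef, -1.0)), 3),
      "truth:", a_true)
```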
Program for User-Friendly Management of Input and Output Data Sets
NASA Technical Reports Server (NTRS)
Klimeck, Gerhard
2003-01-01
A computer program manages large, hierarchical sets of input and output (I/O) parameters (typically, sequences of alphanumeric data) involved in computational simulations in a variety of technological disciplines. This program represents sets of parameters as structures coded in object-oriented but otherwise standard American National Standards Institute C language. Each structure contains a group of I/O parameters that make sense as a unit in the simulation program with which this program is used. The addition of options and/or elements to sets of parameters amounts to the addition of new elements to data structures. By association of child data generated in response to a particular user input, a hierarchical ordering of input parameters can be achieved. Associated with child data structures are the creation and description mechanisms within the parent data structures. Child data structures can spawn further child data structures. In this program, the creation and representation of a sequence of data structures is effected by one line of code that looks for children of a sequence of structures until there are no more children to be found. A linked list of structures is created dynamically and is completely represented in the data structures themselves. Such hierarchical data presentation can guide users through otherwise complex setup procedures and it can be integrated within a variety of graphical representations.
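A minimal Python analogue of the hierarchical parameter structures described above (the original C implementation is not reproduced; the names and fields here are invented) shows how child groups spawn further children and how a single traversal walks the whole hierarchy:

```python
from dataclasses import dataclass, field

@dataclass
class Param:
    """One I/O parameter or parameter group; children spawn further groups."""
    name: str
    value: object = None
    children: list = field(default_factory=list)

    def walk(self, depth=0):
        # One traversal visits the whole hierarchy, mirroring the
        # "look for children until there are no more" idea in the abstract.
        yield depth, self
        for child in self.children:
            yield from child.walk(depth + 1)

root = Param("simulation", children=[
    Param("mesh", children=[Param("nx", 64), Param("ny", 64)]),
    Param("solver", children=[Param("tolerance", 1e-8)]),
])

for depth, p in root.walk():
    print("  " * depth + f"{p.name}: {p.value}")
```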
Computing the structural influence matrix for biological systems.
Giordano, Giulia; Cuba Samaniego, Christian; Franco, Elisa; Blanchini, Franco
2016-06-01
We consider the problem of identifying structural influences of external inputs on steady-state outputs in a biological network model. We speak of a structural influence if, upon a perturbation due to a constant input, the ensuing variation of the steady-state output value has the same sign as the input (positive influence), the opposite sign (negative influence), or is zero (perfect adaptation), for any feasible choice of the model parameters. All these signs and zeros can constitute a structural influence matrix, whose (i, j) entry indicates the sign of steady-state influence of the jth system variable on the ith variable (the output caused by an external persistent input applied to the jth variable). Each entry is structurally determinate if the sign does not depend on the choice of the parameters, but is indeterminate otherwise. In principle, determining the influence matrix requires exhaustive testing of the system steady-state behaviour in the widest range of parameter values. Here we show that, in a broad class of biological networks, the influence matrix can be evaluated with an algorithm that tests the system steady-state behaviour only at a finite number of points. This algorithm also allows us to assess the structural effect of any perturbation, such as variations of relevant parameters. Our method is applied to nontrivial models of biochemical reaction networks and population dynamics drawn from the literature, providing a parameter-free insight into the system dynamics.
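A brute-force numerical counterpart of the influence matrix is easy to sketch for a linearised network. The algorithm in the paper avoids exactly this kind of exhaustive sampling, so the fragment below (a random 3-node network with an assumed sign pattern and parameter ranges, stability unchecked) serves only to make the definition concrete.

```python
import numpy as np

rng = np.random.default_rng(2)

def influence_matrix(S, n_draws=2000):
    """Empirical stand-in for the structural influence matrix of a
    linearised network x' = A x + e_j u: the steady state is
    x* = (-A)^(-1) e_j u, so entry (i, j) is the sign of [(-A)^(-1)]_ij.
    '+', '-' or '0' if the sign agrees across all sampled parameter sets,
    '?' (indeterminate) otherwise."""
    n = S.shape[0]
    signs = np.zeros((n_draws, n, n))
    for k in range(n_draws):
        A = S * rng.uniform(0.1, 10.0, (n, n))           # random magnitudes,
        np.fill_diagonal(A, -rng.uniform(0.1, 10.0, n))  # fixed sign pattern
        signs[k] = np.sign(np.linalg.solve(-A, np.eye(n)))
    out = np.full((n, n), "?", dtype=object)
    for i in range(n):
        for j in range(n):
            s = set(signs[:, i, j])
            if len(s) == 1:
                out[i, j] = {1.0: "+", -1.0: "-", 0.0: "0"}[s.pop()]
    return out

# Hypothetical 3-node sign pattern (activation above, inhibition below).
S = np.array([[0.0, 1.0, 0.0],
              [-1.0, 0.0, 1.0],
              [0.0, -1.0, 0.0]])
print(influence_matrix(S))
```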
A European model and case studies for aggregate exposure assessment of pesticides.
Kennedy, Marc C; Glass, C Richard; Bokkers, Bas; Hart, Andy D M; Hamey, Paul Y; Kruisselbrink, Johannes W; de Boer, Waldo J; van der Voet, Hilko; Garthwaite, David G; van Klaveren, Jacob D
2015-05-01
Exposures to plant protection products (PPPs) are assessed using risk analysis methods to protect public health. Traditionally, single sources, such as food or individual occupational sources, have been addressed. In reality, individuals can be exposed simultaneously to multiple sources. Improved regulation therefore requires the development of new tools for estimating the population distribution of exposures aggregated within an individual. A new aggregate model is described, which allows individual users to include as much, or as little, information as is available or relevant for their particular scenario. Depending on the inputs provided by the user, the outputs can range from simple deterministic values through to probabilistic analyses including characterisations of variability and uncertainty. Exposures can be calculated for multiple compounds, routes and sources of exposure. The aggregate model links to the cumulative dietary exposure model developed in parallel and is implemented in the web-based software tool MCRA. Case studies are presented to illustrate the potential of this model, with inputs drawn from existing European data sources and models. These cover exposures to UK arable spray operators, Italian vineyard spray operators, Netherlands users of a consumer spray and UK bystanders/residents. The model could also be adapted to handle non-PPP compounds. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Srinivas, Kadivendi; Vundavilli, Pandu R.; Manzoor Hussain, M.; Saiteja, M.
2016-09-01
Welding input parameters such as current, gas flow rate and torch angle play a significant role in determining the qualitative mechanical properties of a weld joint. Traditionally, the weld input parameters must be determined for every new welded product to obtain a quality weld joint, which is time consuming. In the present work, the effect of plasma arc welding parameters on mild steel was studied using a neural network approach. To obtain a response equation that governs the input-output relationships, conventional regression analysis was also performed. The experimental data were constructed based on a Taguchi design, and the training data required for the neural networks were randomly generated by varying the input variables within their respective ranges. The responses were calculated for each combination of input variables by using the response equations obtained through the conventional regression analysis. The performances of a Levenberg-Marquardt back-propagation neural network and a radial basis neural network (RBNN) were compared on various randomly generated test cases, which are different from the training cases. From the results, it is interesting to note that for these test cases the RBNN analysis gave improved results compared to the feed-forward back-propagation neural network analysis. Also, the RBNN analysis showed a pattern of increasing performance as the data points moved away from the initial input values.
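As a rough illustration of the RBNN half of this comparison, the following sketch fits a Gaussian radial basis network by linear least squares to invented weld data; the response equation, input ranges and centre count are assumptions, not the paper's Taguchi results.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical weld data: inputs (current A, gas flow L/min, torch angle deg)
# -> one response (say, a strength measure), generated from an assumed
# regression equation plus noise, standing in for the paper's data.
X = rng.uniform([80.0, 8.0, 60.0], [160.0, 16.0, 90.0], size=(60, 3))
y = 300 + 1.2 * X[:, 0] + 5.0 * X[:, 1] - 0.8 * X[:, 2] + rng.normal(0, 5, 60)

# Radial basis network: Gaussian units at sampled centres, linear output
# weights fitted by least squares.
centres = X[rng.choice(60, size=15, replace=False)]
scale = X.std(axis=0)

def hidden(Z):
    d2 = (((Z[:, None, :] - centres[None]) / scale) ** 2).sum(axis=-1)
    return np.exp(-0.5 * d2)

H = np.column_stack([hidden(X), np.ones(len(X))])     # bias unit appended
w, *_ = np.linalg.lstsq(H, y, rcond=None)

def predict(Znew):
    Znew = np.atleast_2d(np.asarray(Znew, dtype=float))
    return np.column_stack([hidden(Znew), np.ones(len(Znew))]) @ w

print("RBNN prediction at 120 A, 12 L/min, 75 deg:", predict([120, 12, 75]))
```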
NASA Astrophysics Data System (ADS)
vellaichamy, Lakshmanan; Paulraj, Sathiya
2018-02-01
Dissimilar welding of Incoloy 800HT to P91 steel was carried out using the gas tungsten arc welding (GTAW) process. This material combination is used in nuclear power plant and aerospace applications because Incoloy 800HT possesses good corrosion and oxidation resistance while P91 possesses high-temperature strength and creep resistance. This work discusses multi-objective optimization by grey relational analysis (GRA) using a 9CrMoV-N filler material. The experiments were conducted on an L9 orthogonal array. The input parameters are current, voltage and speed; the output responses are tensile strength, hardness and toughness. GRA was used to optimize the input parameters against the multiple output responses. The optimal combination was determined as A2B1C1, corresponding to a welding current of 120 A, a voltage of 16 V and a welding speed of 0.94 mm/s. The mechanical properties obtained for the best and worst grey relational grades were validated against the metallurgical characteristics.
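The GRA computation itself is short. The sketch below walks the standard steps (larger-is-better normalisation, deviation sequence, grey relational coefficient with distinguishing coefficient 0.5, grade as the mean across responses) on invented L9 response values; only the procedure, not the data, reflects the study.

```python
import numpy as np

# Hypothetical L9 responses (tensile strength MPa, hardness HV, toughness J);
# all treated as larger-is-better.
R = np.array([[520, 210, 48], [545, 222, 52], [530, 215, 50],
              [560, 230, 55], [540, 220, 51], [525, 212, 49],
              [550, 226, 53], [535, 218, 50], [528, 214, 49]], float)

# Step 1: larger-is-better normalisation to [0, 1] per response column.
Z = (R - R.min(0)) / (R.max(0) - R.min(0))

# Step 2: deviation from the ideal sequence (all ones).
delta = 1.0 - Z

# Step 3: grey relational coefficient, distinguishing coefficient zeta = 0.5.
zeta = 0.5
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# Step 4: grey relational grade = mean coefficient across responses.
grade = grc.mean(axis=1)
print("grades:", np.round(grade, 3))
print("best L9 run:", grade.argmax() + 1, "worst:", grade.argmin() + 1)
```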
Calibration of discrete element model parameters: soybeans
NASA Astrophysics Data System (ADS)
Ghodki, Bhupendra M.; Patel, Manish; Namdeo, Rohit; Carpenter, Gopal
2018-05-01
Discrete element method (DEM) simulations are broadly used to gain insight into the flow characteristics of granular materials in complex particulate systems. The DEM input parameters for a model are a critical prerequisite for an efficient simulation. Thus, the present investigation aims to determine DEM input parameters for the Hertz-Mindlin model using soybeans as the granular material. To achieve this aim, a widely accepted calibration approach was used with a standard box-type apparatus. Further, qualitative and quantitative findings such as particle profile, height of kernels retained against the acrylic wall, and angle of repose from experiments and numerical simulations were compared to obtain the parameters. The calibrated set of DEM input parameters includes (a) material properties: particle geometric mean diameter (6.24 mm); spherical shape; particle density (1220 kg m^{-3}), and (b) interaction parameters, particle-particle: coefficient of restitution (0.17); coefficient of static friction (0.26); coefficient of rolling friction (0.08), and particle-wall: coefficient of restitution (0.35); coefficient of static friction (0.30); coefficient of rolling friction (0.08). The results may adequately be used to simulate particle-scale mechanics (grain commingling, flow/motion, forces, etc.) of soybeans in post-harvest machinery and devices.
Evaluating the risk of pathogen transmission from wild animals to domestic pigs in Australia.
Pearson, Hayley E; Toribio, Jenny-Ann L M L; Lapidge, Steven J; Hernández-Jover, Marta
2016-01-01
Wild animals contribute to endemic infection in livestock as well as the introduction, reintroduction and maintenance of pathogens. The source of introduction of endemic diseases to a piggery is often unknown and the extent of wildlife contribution to such local spread is largely unexplored. The aim of the current study was to quantitatively assess the probability of domestic pigs being exposed to different pathogens from wild animals commonly found around commercial piggeries in Australia. Specifically, this study aims to quantify the probability of exposure to the pathogens Escherichia coli, Salmonella spp. and Campylobacter spp. from European starlings (Sturnus vulgarus); Brachyspira hyodysenteriae, Lawsonia intracellularis and Salmonella spp. from rats (Rattus rattus and Rattus norvegicus); and Mycoplasma hyopneumoniae, Leptospira spp., Brucella suis and L. intracellularis from feral pigs (Sus scrofa). Exposure assessments, using scenario trees and Monte Carlo stochastic simulation modelling, were conducted to identify potential pathways of introduction and calculate the probabilities of these pathways occurring. Input parameters were estimated from a national postal survey of commercial pork producers and from disease detection studies conducted for European starlings, rats and feral pigs in close proximity to commercial piggeries in Australia. Based on the results of the exposure assessments, rats presented the highest probability of exposure of pathogens to domestic pigs at any point in time, and L. intracellularis (median 0.13, 5% and 95%, 0.05-0.23) and B. hyodysenteriae (median 0.10, 0.05-0.19) were the most likely pathogens to be transmitted. Regarding European starlings, the median probability of exposure of domestic pigs to pathogenic E. coli at any point in time was estimated to be 0.03 (0.02-0.04). The highest probability of domestic pig exposure to feral pig pathogens at any point in time was found to be for M. hyopneumoniae (median 0.013, 0.007-0.022) and L. intracellularis (median 0.006, 0.003-0.011) for pigs in free-range piggeries. The sensitivity analysis indicates that the presence and number of wild animals around piggeries, their access to piggeries and pig food and water, and, in the case of feral pigs, their proximity to piggeries, are the most influential parameters on the probability of exposure. Findings from this study support identification of mitigation strategies that could be implemented at on-farm and industry level to minimize the exposure risk from European starlings, rats and feral pigs. Copyright © 2015 Elsevier B.V. All rights reserved.
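A stripped-down version of such an exposure assessment can be written in a few lines: chain the uncertain branch probabilities of one scenario-tree pathway and summarise the Monte Carlo distribution with a median and 5%-95% interval, as the study reports. All beta parameters below are invented placeholders, not the study's survey-derived inputs.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Hypothetical branch probabilities along one scenario-tree pathway, each
# uncertain and drawn from a beta distribution (all parameters assumed).
p_present  = rng.beta(8, 2, n)   # wild animals present around the piggery
p_access   = rng.beta(4, 6, n)   # they access pig feed or water
p_infected = rng.beta(3, 7, n)   # they carry the pathogen
p_transfer = rng.beta(2, 8, n)   # contaminated material reaches the pigs

# Probability of exposure at any point in time = product along the pathway.
p_exposure = p_present * p_access * p_infected * p_transfer

q5, q50, q95 = np.percentile(p_exposure, [5, 50, 95])
print(f"median {q50:.3f} (5%-95%: {q5:.3f}-{q95:.3f})")
```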
AIRCRAFT REACTOR CONTROL SYSTEM APPLICABLE TO TURBOJET AND TURBOPROP POWER PLANTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorker, G.E.
1955-07-19
Control systems proposed for direct cycle nuclear powered aircraft commonly involve control of engine speed, nuclear energy input, and chemical energy input. A system in which these parameters are controlled by controlling the total energy input, the ratio of nuclear to chemical energy input, and the engine speed is proposed. The system is equally applicable to turbojet or turboprop applications. (auth)
NASA Technical Reports Server (NTRS)
Briggs, Maxwell; Schifer, Nicholas
2011-01-01
Test hardware was used to validate net heat input prediction models. Problem: net heat input cannot be measured directly during operation, yet it is a key parameter needed to predict convertor efficiency. Efficiency = electrical power output (measured) divided by net heat input (calculated). Efficiency is used to compare convertor designs and trade technology advantages for mission planning.
Songs as Ambient Language Input in Phonology Acquisition
ERIC Educational Resources Information Center
Au, Terry Kit-fong
2013-01-01
Children cannot learn to speak a language simply from occasional noninteractive exposure to native speakers' input (e.g., by hearing television dialogues), but can they learn something about its phonology? To answer this question, the present study varied ambient hearing experience for 126 5- to 7-year-old native Cantonese-Chinese speakers…
ERIC Educational Resources Information Center
Deutsch, Werner
1979-01-01
The purpose of this study was to determine what effect exposure to linguistic input pertinent to kinship terms and kinship relations has on the acquisition of the meaning of such terms. The subjects were 84 German children living in families, and 84 orphans. (Author/CFM)
NASA Astrophysics Data System (ADS)
McGrath, H.; Stefanakis, E.; Nastev, M.
2016-06-01
Conventional knowledge of the flood hazard alone (extent and frequency) is not sufficient for informed decision-making. The public safety community needs tools and guidance to adequately undertake flood hazard risk assessment in order to estimate damages and social and economic losses. While many complex computer models have been developed for flood risk assessment, they require highly trained personnel to prepare the necessary input (hazard, inventory of the built environment, and vulnerabilities) and analyze model outputs. As such, tools that utilize open-source software or are built within popular desktop software programs are appealing alternatives. The recently developed Rapid Risk Evaluation (ER2) application runs scenario-based loss assessment analyses in a Microsoft Excel spreadsheet. User input is limited to a handful of intuitive drop-down menus used to describe the building type, age, occupancy and the expected water level. Pending the availability of local depth-damage curves and other needed vulnerability parameters, those from the U.S. FEMA's Hazus-Flood software have been imported and are temporarily accessed in conjunction with user input to display exposure and estimated economic losses related to the structure and contents of the building. Building types and occupancies representative of those most exposed to flooding in Fredericton (New Brunswick) were introduced, and test flood scenarios were run. The algorithm was successfully validated against results from the Hazus-Flood model for the same building types and flood depths.
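At its core, an ER2-style calculation is a depth-damage lookup scaled by the exposed value. The sketch below uses an invented curve (not actual Hazus-Flood values) to illustrate the mechanics:

```python
import numpy as np

# Hypothetical depth-damage curve in the spirit of Hazus-Flood inputs:
# water depth above first floor (m) -> % damage to structure value.
depth_m   = np.array([0.0, 0.3, 0.6, 1.0, 1.5, 2.0, 3.0])
damage_pc = np.array([0.0, 12.0, 20.0, 28.0, 35.0, 41.0, 50.0])

def loss_estimate(water_depth_m, building_value):
    """Interpolate the curve and scale by exposure, as ER2-style tools do."""
    pct = np.interp(water_depth_m, depth_m, damage_pc)
    return pct / 100.0 * building_value

print("loss for 1.2 m of water in a $250k home:",
      round(loss_estimate(1.2, 250_000)))
```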
NASA Technical Reports Server (NTRS)
Nesbitt, J. A.
1983-01-01
Degradation of NiCrAlZr overlay coatings on various NiCrAl substrates was examined after cyclic oxidation. Concentration/distance profiles were measured in the coating and substrate after various oxidation exposures at 1150 C. For each substrate, the Al content in the coating decreased rapidly. The concentration/distance profiles, and particularly that for Al, reflected the oxide spalling resistance of each coated substrate. A numerical model was developed to simulate diffusion associated with overlay-coating degradation by oxidation and coating/substrate interdiffusion. Input to the numerical model consisted of the Cr and Al content of the coating and substrate, ternary diffusivities, and various oxide spalling parameters. The model predicts the Cr and Al concentrations in the coating and substrate after any number of oxidation/thermal cycles. The numerical model also predicts coating failure based on the ability of the coating to supply sufficient Al to the oxide scale. The validity of the model was confirmed by comparison of the predicted and measured concentration/distance profiles. The model was subsequently used to identify the most critical system parameters affecting coating life.
Beyond allostatic load: rethinking the role of stress in regulating human development.
Ellis, Bruce J; Del Giudice, Marco
2014-02-01
How do exposures to stress affect biobehavioral development and, through it, psychiatric and biomedical disorder? In the health sciences, the allostatic load model provides a widely accepted answer to this question: stress responses, while essential for survival, have negative long-term effects that promote illness. Thus, the benefits of mounting repeated biological responses to threat are traded off against costs to mental and physical health. The adaptive calibration model, an evolutionary-developmental theory of stress-health relations, extends this logic by conceptualizing these trade-offs as decision nodes in allocation of resources. Each decision node influences the next in a chain of resource allocations that become instantiated in the regulatory parameters of stress response systems. Over development, these parameters filter and embed information about key dimensions of environmental stress and support, mediating the organism's openness to environmental inputs, and function to regulate life history strategies to match those dimensions. Drawing on the adaptive calibration model, we propose that consideration of biological fitness trade-offs, as delineated by life history theory, is needed to more fully explain the complex relations between developmental exposures to stress, stress responsivity, behavioral strategies, and health. We conclude that the adaptive calibration model and allostatic load model are only partially complementary and, in some cases, support different approaches to intervention. In the long run, the field may be better served by a model informed by life history theory that addresses the adaptive role of stress response systems in regulating alternative developmental pathways.
TREXMO: A Translation Tool to Support the Use of Regulatory Occupational Exposure Models.
Savic, Nenad; Racordon, Dimitri; Buchs, Didier; Gasic, Bojan; Vernez, David
2016-10-01
Occupational exposure models vary significantly in their complexity, purpose, and the level of expertise required from the user. Different parameters in the same model may lead to different exposure estimates for the same exposure situation. This paper presents a tool developed to deal with this concern-TREXMO or TRanslation of EXposure MOdels. TREXMO integrates six commonly used occupational exposure models, namely, ART v.1.5, STOFFENMANAGER(®) v.5.1, ECETOC TRA v.3, MEASE v.1.02.01, EMKG-EXPO-TOOL, and EASE v.2.0. By enabling a semi-automatic translation between the parameters of these six models, TREXMO facilitates their simultaneous use. For a given exposure situation, defined by a set of parameters in one of the models, TREXMO provides the user with the most appropriate parameters to use in the other exposure models. Results showed that, once an exposure situation and parameters were set in ART, TREXMO reduced the number of possible outcomes in the other models by 1-4 orders of magnitude. The tool should manage to reduce the uncertain entry or selection of parameters in the six models, improve between-user reliability, and reduce the time required for running several models for a given exposure situation. In addition to these advantages, registrants of chemicals and authorities should benefit from more reliable exposure estimates for the risk characterization of dangerous chemicals under Regulation, Evaluation, Authorisation and restriction of CHemicals (REACH). © The Author 2016. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
Effect of Heat Input on Geometry of Austenitic Stainless Steel Weld Bead on Low Carbon Steel
NASA Astrophysics Data System (ADS)
Saha, Manas Kumar; Hazra, Ritesh; Mondal, Ajit; Das, Santanu
2018-05-01
Among different weld cladding processes, gas metal arc welding (GMAW) cladding has become a cost-effective, user-friendly and versatile method for protecting the surface of relatively lower grade structural steels from corrosion and/or erosion wear by depositing high grade stainless steels onto them. The quality of cladding largely depends upon the bead geometry of the weldment deposited. Weld bead geometry parameters, like bead width, reinforcement height, depth of penetration, and ratios like reinforcement form factor (RFF) and penetration shape factor (PSF), determine the quality of the weld bead geometry. Various process parameters of gas metal arc welding, like heat input, current, voltage, arc travel speed and mode of metal transfer, influence the formation of bead geometry. In the current experimental investigation, austenitic stainless steel (316) weld beads were formed on low alloy structural steel (E350) by GMAW using 100% CO2 as the shielding gas. Different combinations of current, voltage and arc travel speed were chosen so that heat input increased from 0.35 to 0.75 kJ/mm. Nine weld beads were deposited, and each was replicated twice. The observations show that weld bead width increases linearly with increase in heat input, whereas reinforcement height and depth of penetration do not. Regression analysis was done to establish the relationship between heat input and the different geometrical parameters of the weld bead. The regression models developed agree well with the experimental data. Within the domain of the present experiment, it is observed that at higher heat input the weld bead gets wider with little change in penetration and reinforcement; therefore, higher heat input may be recommended for austenitic stainless steel cladding on low alloy steel.
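The quantities involved are simple to compute: heat input per unit length follows from voltage, current and travel speed, and the bead-geometry relationships can be fitted by ordinary regression. In the sketch below, the 0.8 arc efficiency and all bead-width measurements are invented placeholders, not the experimental data.

```python
import numpy as np

def heat_input_kj_per_mm(volts, amps, travel_mm_s, efficiency=0.8):
    """Arc heat input per unit length; the 0.8 efficiency is an assumption."""
    return efficiency * volts * amps / (travel_mm_s * 1000.0)

# Hypothetical bead-width measurements across the 0.35-0.75 kJ/mm window.
HI = np.array([0.35, 0.40, 0.45, 0.50, 0.55, 0.60, 0.65, 0.70, 0.75])
width = np.array([6.1, 6.6, 7.0, 7.6, 8.1, 8.5, 9.0, 9.4, 10.0])  # mm

# Linear regression: width = b0 + b1 * heat input
b1, b0 = np.polyfit(HI, width, 1)
print(f"width ~= {b0:.2f} + {b1:.2f} * heat_input (mm)")
print("example: 17 V, 180 A, 4 mm/s ->",
      round(heat_input_kj_per_mm(17, 180, 4.0), 2), "kJ/mm")
```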
NASA Astrophysics Data System (ADS)
Engeland, Kolbjørn; Steinsland, Ingelin; Johansen, Stian Solvang; Petersen-Øverleir, Asgeir; Kolberg, Sjur
2016-05-01
In this study, we explore the effect of uncertainty and poor observation quality on hydrological model calibration and predictions. The Osali catchment in Western Norway was selected as case study and an elevation distributed HBV-model was used. We systematically evaluated the effect of accounting for uncertainty in parameters, precipitation input, temperature input and streamflow observations. For precipitation and temperature we accounted for the interpolation uncertainty, and for streamflow we accounted for rating curve uncertainty. Further, the effects of poorer quality of precipitation input and streamflow observations were explored. Less information about precipitation was obtained by excluding the nearest precipitation station from the analysis, while reduced information about the streamflow was obtained by omitting the highest and lowest streamflow observations when estimating the rating curve. The results showed that including uncertainty in the precipitation and temperature inputs has a negligible effect on the posterior distribution of parameters and for the Nash-Sutcliffe (NS) efficiency for the predicted flows, while the reliability and the continuous rank probability score (CRPS) improves. Less information in precipitation input resulted in a shift in the water balance parameter Pcorr, a model producing smoother streamflow predictions, giving poorer NS and CRPS, but higher reliability. The effect of calibrating the hydrological model using streamflow observations based on different rating curves is mainly seen as variability in the water balance parameter Pcorr. When evaluating predictions, the best evaluation scores were not achieved for the rating curve used for calibration, but for rating curves giving smoother streamflow observations. Less information in streamflow influenced the water balance parameter Pcorr, and increased the spread in evaluation scores by giving both better and worse scores.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Groen, E.A., E-mail: Evelyne.Groen@gmail.com; Heijungs, R.; Leiden University, Einsteinweg 2, Leiden 2333 CC
Life cycle assessment (LCA) is an established tool to quantify the environmental impact of a product. A good assessment of uncertainty is important for making well-informed decisions in comparative LCA, as well as for correctly prioritising data collection efforts. Under- or overestimation of output uncertainty (e.g. output variance) will lead to incorrect decisions in such matters. The presence of correlations between input parameters during uncertainty propagation can increase or decrease the output variance. However, most LCA studies that include uncertainty analysis ignore correlations between input parameters during uncertainty propagation, which may lead to incorrect conclusions. Two approaches to include correlations between input parameters during uncertainty propagation and global sensitivity analysis were studied: an analytical approach and a sampling approach. The use of both approaches is illustrated for an artificial case study of electricity production. Results demonstrate that both approaches yield approximately the same output variance and sensitivity indices for this specific case study. Furthermore, we demonstrate that the analytical approach can be used to quantify the risk of ignoring correlations between input parameters during uncertainty propagation in LCA. We demonstrate that: (1) we can predict whether including correlations among input parameters in uncertainty propagation will increase or decrease output variance; (2) we can quantify the risk of ignoring correlations on the output variance and the global sensitivity indices. Moreover, this procedure requires only little data. - Highlights: • Ignoring correlation leads to under- or overestimation of the output variance. • We demonstrated that the risk of ignoring correlation can be quantified. • The procedure proposed is generally applicable in life cycle assessment. • In some cases, ignoring correlation has a minimal effect on decision-making tools.
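For a linear model the analytical approach reduces to propagating a covariance matrix, which makes the risk of ignoring correlations easy to see. The toy sketch below (invented coefficients, standard deviations and correlation, not the paper's electricity case study) compares the analytic variance with and without the correlation term against a sampling check:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy inventory model: output y = c1*x1 + c2*x2 (e.g., two emission terms).
c = np.array([2.0, 3.0])
mu = np.array([1.0, 4.0])
sd = np.array([0.2, 0.5])
rho = 0.8                                   # correlation between inputs

cov = np.array([[sd[0]**2, rho * sd[0] * sd[1]],
                [rho * sd[0] * sd[1], sd[1]**2]])

# Analytical propagation: Var(y) = c' Cov c.
var_corr = c @ cov @ c
var_ignore = c @ np.diag(sd**2) @ c         # correlation set to zero

# Sampling check with correlated draws.
x = rng.multivariate_normal(mu, cov, size=200_000)
print("analytic with rho:", var_corr,
      "| ignoring rho:", var_ignore,
      "| sampled:", (x @ c).var().round(3))
```

Here the correlation term raises the output variance well above the uncorrelated value, which is exactly the kind of underestimation the authors warn against.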
NASA Astrophysics Data System (ADS)
Prescott, Aaron M.; Abel, Steven M.
2016-12-01
The rational design of network behavior is a central goal of synthetic biology. Here, we combine in silico evolution with nonlinear dimensionality reduction to redesign the responses of fixed-topology signaling networks and to characterize sets of kinetic parameters that underlie various input-output relations. We first consider the earliest part of the T cell receptor (TCR) signaling network and demonstrate that it can produce a variety of input-output relations (quantified as the level of TCR phosphorylation as a function of the characteristic TCR binding time). We utilize an evolutionary algorithm (EA) to identify sets of kinetic parameters that give rise to: (i) sigmoidal responses with the activation threshold varied over 6 orders of magnitude, (ii) a graded response, and (iii) an inverted response in which short TCR binding times lead to activation. We also consider a network with both positive and negative feedback and use the EA to evolve oscillatory responses with different periods in response to a change in input. For each targeted input-output relation, we conduct many independent runs of the EA and use nonlinear dimensionality reduction to embed the resulting data for each network in two dimensions. We then partition the results into groups and characterize constraints placed on the parameters by the different targeted response curves. Our approach provides a way (i) to guide the design of kinetic parameters of fixed-topology networks to generate novel input-output relations and (ii) to constrain ranges of biological parameters using experimental data. In the cases considered, the network topologies exhibit significant flexibility in generating alternative responses, with distinct patterns of kinetic rates emerging for different targeted responses.
Identification of modal parameters including unmeasured forces and transient effects
NASA Astrophysics Data System (ADS)
Cauberghe, B.; Guillaume, P.; Verboven, P.; Parloo, E.
2003-08-01
In this paper, a frequency-domain method to estimate modal parameters from short data records with known input (measured) forces and unknown input forces is presented. The method can be used for an experimental modal analysis, an operational modal analysis (output-only data) and the combination of both. A traditional experimental and operational modal analysis in the frequency domain starts respectively, from frequency response functions and spectral density functions. To estimate these functions accurately sufficient data have to be available. The technique developed in this paper estimates the modal parameters directly from the Fourier spectra of the outputs and the known input. Instead of using Hanning windows on these short data records the transient effects are estimated simultaneously with the modal parameters. The method is illustrated, tested and validated by Monte Carlo simulations and experiments. The presented method to process short data sequences leads to unbiased estimates with a small variance in comparison to the more traditional approaches.
Modern control concepts in hydrology
NASA Technical Reports Server (NTRS)
Duong, N.; Johnson, G. R.; Winn, C. B.
1974-01-01
Two approaches to an identification problem in hydrology are presented based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part in the inputs, and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time invariant or time dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and conform with results from two previous studies; the first using numerical integration of the model equation along with a trial-and-error procedure, and the second, by using a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are imbedded in noise.
Modeling of the dispersion of depleted uranium aerosol.
Mitsakou, C; Eleftheriadis, K; Housiadas, C; Lazaridis, M
2003-04-01
Depleted uranium is a low-cost radioactive material that, in addition to other applications, is used by the military in kinetic energy weapons against armored vehicles. During the Gulf and Balkan conflicts concern has been raised about the potential health hazards arising from the toxic and radioactive material released. The aerosol produced during impact and combustion of depleted uranium munitions can potentially contaminate wide areas around the impact sites or can be inhaled by civilians and military personnel. Attempts to estimate the extent and magnitude of the dispersion were until now performed by complex modeling tools employing unclear assumptions and input parameters of high uncertainty. An analytical puff model accommodating diffusion with simultaneous deposition is developed, which can provide a reasonable estimation of the dispersion of the released depleted uranium aerosol. Furthermore, the period of the exposure for a given point downwind from the release can be estimated (as opposed to when using a plume model). The main result is that the depleted uranium mass is deposited very close to the release point. The deposition flux at a couple of kilometers from the release point is more than one order of magnitude lower than the one a few meters near the release point. The effects due to uncertainties in the key input variables are addressed. The most influential parameters are found to be atmospheric stability, height of release, and wind speed, whereas aerosol size distribution is less significant. The output from the analytical model developed was tested against the numerical model RPM-AERO. Results display satisfactory agreement between the two models.
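An analytical puff model of this kind fits in a dozen lines. The sketch below implements a single Gaussian puff with exponential source depletion as a crude stand-in for deposition; the dispersion coefficients, deposition velocity and release height are invented, and the printed numbers are relative only. Consistent with the abstract, the time-integrated exposure falls by well over an order of magnitude between the near field and a couple of kilometres downwind.

```python
import numpy as np

def puff_concentration(x_m, t_s, Q0=1.0, u=3.0, vd=0.01, h=2.0,
                       sy_coef=0.08, sz_coef=0.06):
    """Ground-level centreline concentration of one Gaussian puff, with
    first-order source depletion standing in for dry deposition.
    All coefficients are illustrative assumptions, not the paper's values.
    x_m: receptor distance (m); t_s: time since release (s); u: wind (m/s);
    vd: deposition velocity (m/s); h: release height (m). Along-wind and
    crosswind spreads are taken equal (sigma_x = sigma_y) for brevity."""
    xc = u * t_s                          # puff centre position
    sy = sy_coef * max(xc, 1.0)           # spread grows with travel distance
    sz = sz_coef * max(xc, 1.0)
    Q = Q0 * np.exp(-vd * t_s / max(sz, 1e-6))   # mass left after deposition
    gauss = np.exp(-0.5 * ((x_m - xc) / sy) ** 2) * np.exp(-0.5 * (h / sz) ** 2)
    return Q * gauss / ((2.0 * np.pi) ** 1.5 * sy * sy * sz)

# Time-integrated (dose-like) exposure near and far from the release point.
for x in (100.0, 2000.0):
    dose = sum(puff_concentration(x, t) for t in np.arange(1.0, 3000.0, 1.0))
    print(f"relative time-integrated exposure at {x:6.0f} m: {dose:.3e}")
```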
NASA Astrophysics Data System (ADS)
Wentworth, Mami Tonoe
Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through direct numerical evaluation of Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes' formula. We also perform a similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ the energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs.
We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models. To accommodate the nonlinear input to output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of those techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affect the model response, and to reduce the dimension of the input space. The major difference between active subspace methods and parameter selection techniques is that parameter selection identifies influential parameters whereas subspace selection identifies a linear combination of parameters that impacts the model responses significantly. We employ active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.
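To make the sampling machinery concrete, below is a minimal sketch of a Haario-style adaptive Metropolis sampler, the basic scheme that DRAM extends with delayed rejection. The Gaussian log-posterior is a synthetic stand-in for the HIV and heat model posteriors, and all settings are illustrative assumptions rather than the dissertation's implementation.

```python
import numpy as np

def log_post(theta):
    # Toy Gaussian log-posterior; a stand-in for the HIV or heat model posteriors.
    return -0.5 * np.sum(theta**2)

def adaptive_metropolis(log_post, theta0, n_iter=5000, adapt_start=500, eps=1e-8):
    d = len(theta0)
    chain = np.zeros((n_iter, d))
    chain[0] = theta0
    cov = 0.1 * np.eye(d)              # initial proposal covariance (assumption)
    sd = 2.38**2 / d                   # Haario et al. scaling factor
    lp = log_post(theta0)
    for i in range(1, n_iter):
        if i > adapt_start:            # adapt proposal to the empirical chain covariance
            cov = sd * (np.cov(chain[:i].T) + eps * np.eye(d))
        prop = np.random.multivariate_normal(chain[i - 1], cov)
        lp_prop = log_post(prop)
        if np.log(np.random.rand()) < lp_prop - lp:   # Metropolis accept/reject
            chain[i], lp = prop, lp_prop
        else:
            chain[i] = chain[i - 1]
    return chain

chain = adaptive_metropolis(log_post, np.zeros(3))
print(chain[1000:].mean(axis=0), chain[1000:].std(axis=0))   # ~0 and ~1 per coordinate
```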
NASA Astrophysics Data System (ADS)
Yadav, Vinod; Singh, Arbind Kumar; Dixit, Uday Shanker
2017-08-01
Flat rolling is one of the most widely used metal forming processes. For proper control and optimization of the process, modelling of the process is essential, and such modelling requires input data about material properties and friction. In batch-production rolling with newer materials, it may be difficult to determine these input parameters offline. In view of this, the present work experimentally verifies a methodology to determine the parameters online from measurements of exit temperature and slip. It is observed that the inverse prediction of the input parameters can be performed with reasonable accuracy. It was also found experimentally that there is a correlation between micro-hardness and flow stress of the material; however, the correlation between surface roughness and reduction is less obvious.
Using global sensitivity analysis of demographic models for ecological impact assessment.
Aiello-Lammens, Matthew E; Akçakaya, H Resit
2017-02-01
Population viability analysis (PVA) is widely used to assess population-level impacts of environmental changes on species. When combined with sensitivity analysis, PVA yields insights into the effects of parameter and model structure uncertainty. This helps researchers prioritize efforts for further data collection so that model improvements are efficient and helps managers prioritize conservation and management actions. Usually, sensitivity is analyzed by varying one input parameter at a time and observing the influence that variation has over model outcomes. This approach does not account for interactions among parameters. Global sensitivity analysis (GSA) overcomes this limitation by varying several model inputs simultaneously. Then, regression techniques allow measuring the importance of input-parameter uncertainties. In many conservation applications, the goal of demographic modeling is to assess how different scenarios of impact or management cause changes in a population. This is challenging because the uncertainty of input-parameter values can be confounded with the effect of impacts and management actions. We developed a GSA method that separates model outcome uncertainty resulting from parameter uncertainty from that resulting from projected ecological impacts or simulated management actions, effectively separating the 2 main questions that sensitivity analysis asks. We applied this method to assess the effects of predicted sea-level rise on Snowy Plover (Charadrius nivosus). A relatively small number of replicate models (approximately 100) resulted in consistent measures of variable importance when not trying to separate the effects of ecological impacts from parameter uncertainty. However, many more replicate models (approximately 500) were required to separate these effects. These differences are important to consider when using demographic models to estimate ecological impacts of management actions. © 2016 Society for Conservation Biology.
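As a schematic of the regression step in such a GSA, the sketch below varies three hypothetical demographic inputs simultaneously and ranks them by standardized regression coefficients; the toy output function is an assumption standing in for a real PVA model, not the Snowy Plover analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Hypothetical demographic inputs, all varied simultaneously (ranges are assumptions).
survival  = rng.uniform(0.70, 0.90, n)
fecundity = rng.uniform(1.0, 3.0, n)
capacity  = rng.uniform(500, 1500, n)

# Toy stand-in for a PVA outcome such as final abundance; not a real demographic model.
growth = survival * fecundity
out = capacity * growth / (1 + growth) + rng.normal(0, 5, n)

# Standardized regression coefficients (SRCs) measure each input's importance.
X = np.column_stack([survival, fecundity, capacity])
Xs = (X - X.mean(0)) / X.std(0)
ys = (out - out.mean()) / out.std()
coef, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
for name, c in zip(["survival", "fecundity", "capacity"], coef):
    print(f"{name}: SRC = {c:+.3f}")
```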
NASA Astrophysics Data System (ADS)
Daneji, A.; Ali, M.; Pervaiz, S.
2018-04-01
Friction stir welding (FSW) is a solid state welding process for joining metals, alloys, and selected composites. Over the years, FSW development has provided an improved way of producing welded joints and has consequently been adopted in numerous industries such as aerospace, automotive, rail and marine. In FSW, the base metal properties control the material's plastic flow under the influence of a rotating tool, whereas the process and tool parameters play a vital role in weld quality. In the current investigation, an array of square butt joints of 6061 aluminum alloy was welded under varying FSW process and tool geometry related parameters, after which the resulting welds were evaluated for the corresponding mechanical properties and welding defects. The study incorporates FSW process and tool parameters such as welding speed, pin height and pin thread pitch as input parameters, while the weld quality related defects and mechanical properties are treated as output parameters. The experiments pave the way to investigate the correlation between the inputs and the outputs. This correlation was used as a tool to predict the optimized FSW process and tool parameters for a desired weld output of the base metals under investigation. The study also reflects on the effect of the said parameters on welding defects such as wormholes.
The effect of welding parameters on high-strength SMAW all-weld-metal. Part 1: AWS E11018-M
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vercesi, J.; Surian, E.
Three AWS A5.5-81 all-weld-metal test assemblies were welded with an E11018-M electrode from a standard production batch, varying the welding parameters in such a way as to obtain three energy inputs: high heat input and high interpass temperature (hot), medium heat input and medium interpass temperature (medium), and low heat input and low interpass temperature (cold). Mechanical properties and metallographic studies were performed in the as-welded condition, and it was found that only the tensile properties obtained with the test specimen made with the intermediate energy input satisfied the AWS E11018-M requirements. With the cold specimen, the maximum yield strength was exceeded, and with the hot one, neither the minimum yield nor the minimum tensile strength was achieved. The elongation and impact properties were high enough to fulfill the minimum requirements, but the best Charpy V-notch values were obtained with the intermediate energy input. Metallographic studies showed that as the energy input increased, the percentage of the columnar zones decreased, the grain size became larger, and in the as-welded zone there was a slight increase of both acicular ferrite and ferrite with second phase, with a consequent decrease of primary ferrite. These results show that this type of alloy is very sensitive to the welding parameters and that very precise instructions must be given to secure the desired tensile properties in the all-weld-metal test specimens and under actual working conditions.
NASA Technical Reports Server (NTRS)
Cross, P. L.
1994-01-01
Birefringent filters are often used as line-narrowing components in solid state lasers. The Birefringent Filter Model program generates a stand-alone model of a birefringent filter for use in designing and analyzing birefringent filters. It was originally developed to aid in the design of solid state lasers to be used on aircraft or spacecraft to perform remote sensing of the atmosphere. The model is general enough to allow the user to address problems such as temperature stability requirements, manufacturing tolerances, and alignment tolerances. The input parameters for the program are divided into 7 groups: 1) general parameters which refer to all elements of the filter; 2) wavelength related parameters; 3) filter, coating and orientation parameters; 4) input ray parameters; 5) output device specifications; 6) component related parameters; and 7) transmission profile parameters. The program can analyze a birefringent filter with up to 12 different components, and can calculate the transmission and summary parameters for multiple passes as well as a single pass through the filter. The Jones matrix, which is calculated from the input parameters of Groups 1 through 4, is used to calculate the transmission. Output files containing the calculated transmission or the calculated Jones matrix as a function of wavelength can be created. These output files can then be used as inputs for user-written programs, for example to plot the transmission or to calculate the eigen-transmittances and the corresponding eigen-polarizations of the Jones matrix. The Birefringent Filter Model is written in Microsoft FORTRAN 2.0. The program format is interactive. It was developed on an IBM PC XT equipped with an 8087 math coprocessor, and has a central memory requirement of approximately 154K. Since Microsoft FORTRAN 2.0 does not support complex arithmetic, matrix routines for addition, subtraction, and multiplication of complex, double precision variables are included. The Birefringent Filter Model was written in 1987.
NASA Astrophysics Data System (ADS)
Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.
2018-05-01
Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
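For orientation, the optimal estimator of an unclosed term Y given inputs X is the conditional mean E[Y|X], and the irreducible error is the residual variance about it. The following is a minimal synthetic sketch of the histogram technique for a single input parameter; the data and bin count are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.uniform(-1, 1, n)                      # model input parameter
y = np.sin(np.pi * x) + rng.normal(0, 0.1, n)  # unclosed term; true irreducible error 0.01

# Histogram technique: the optimal estimator E[Y|X] as per-bin conditional means.
nbins = 50
bins = np.linspace(-1, 1, nbins + 1)
idx = np.clip(np.digitize(x, bins) - 1, 0, nbins - 1)
cond_mean = np.bincount(idx, weights=y, minlength=nbins) / np.bincount(idx, minlength=nbins)

irreducible = np.mean((y - cond_mean[idx])**2)
print(f"histogram estimate: {irreducible:.4f}  (true value 0.0100)")
# With d inputs the binning needs nbins**d cells, so sparsely populated bins add the
# spurious contribution to the irreducible error that the abstract describes.
```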
Analysis of Artificial Neural Network in Erosion Modeling: A Case Study of Serang Watershed
NASA Astrophysics Data System (ADS)
Arif, N.; Danoedoro, P.; Hartono
2017-12-01
Erosion modeling is an important measuring tool for both land users and decision makers to evaluate land cultivation, and thus it is necessary to have a model that represents the actual reality. Erosion models are complex because of uncertain data from different sources and processing procedures. Artificial neural networks can be relied on for complex and non-linear data processing such as erosion data. The main difficulty in artificial neural network training is the determination of the values of the network input parameters, i.e., the number of hidden layers, the learning rate, the momentum, and the RMS. This study tested the capability of an artificial neural network application in the prediction of erosion risk with several input parameters through multiple simulations to get good classification results. The model was implemented in the Serang Watershed, Kulonprogo, Yogyakarta, which is one of the critical potential watersheds in Indonesia. The simulation results showed that the number of iterations had a significant effect on the accuracy compared to other parameters. A small number of iterations can produce good accuracy if the combination of other parameters is right. In this case, one hidden layer was sufficient to produce good accuracy. The highest training accuracy achieved in this study was 99.32%, which occurred in the ANN 14 simulation with a combination of network input parameters of 1 HL; LR 0.01; M 0.5; RMS 0.0001, and 15,000 iterations. The ANN training accuracy was not influenced by the number of channels, namely the input dataset (erosion factors) or the data dimensions; rather, it was determined by changes in the network parameters.
Chasin, Marshall; Russo, Frank A
2004-01-01
Historically, the primary concern for hearing aid design and fitting has been optimization for speech inputs. However, other types of inputs are increasingly being investigated, and this is certainly the case for music. Whether the hearing aid wearer is a musician or merely someone who likes to listen to music, the electronic and electro-acoustic parameters described can be optimized for music as well as for speech. That is, a hearing aid optimally set for music can be optimally set for speech, even though the converse is not necessarily true. Similarities and differences between speech and music as inputs to a hearing aid are described. Many of these lead to the specification of a set of optimal electro-acoustic characteristics. Parameters such as the peak input-limiting level, compression issues (both compression ratio and knee-points) and the number of channels can all deleteriously affect music perception through hearing aids. In other cases, it is not clear how to set other parameters such as noise reduction and feedback control mechanisms. Regardless of the existence of a "music program", unless the various electro-acoustic parameters are available in a hearing aid, music fidelity will almost always be less than optimal. There are many unanswered questions and hypotheses in this area. Future research by engineers, researchers, clinicians, and musicians will aid in the clarification of these questions and their ultimate solutions.
Zeng, Xiaozheng; McGough, Robert J.
2009-01-01
The angular spectrum approach is evaluated for the simulation of focused ultrasound fields produced by large thermal therapy arrays. For an input pressure or normal particle velocity distribution in a plane, the angular spectrum approach rapidly computes the output pressure field in a three dimensional volume. To determine the optimal combination of simulation parameters for angular spectrum calculations, the effect of the size, location, and the numerical accuracy of the input plane on the computed output pressure is evaluated. Simulation results demonstrate that angular spectrum calculations performed with an input pressure plane are more accurate than calculations with an input velocity plane. Results also indicate that when the input pressure plane is slightly larger than the array aperture and is located approximately one wavelength from the array, angular spectrum simulations have very small numerical errors for two dimensional planar arrays. Furthermore, the root mean squared error from angular spectrum simulations asymptotically approaches a nonzero lower limit as the error in the input plane decreases. Overall, the angular spectrum approach is an accurate and robust method for thermal therapy simulations of large ultrasound phased arrays when the input pressure plane is computed with the fast nearfield method and an optimal combination of input parameters. PMID:19425640
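A minimal sketch of the angular spectrum calculation itself (not the paper's implementation, which pairs it with the fast nearfield method for computing the input plane): the sampled input pressure plane is Fourier transformed, each plane-wave component is multiplied by its propagator, and the field is transformed back. Grid size, wavelength and piston radius are illustrative assumptions.

```python
import numpy as np

def angular_spectrum(p0, dx, wavelength, z):
    """Propagate a sampled 2D pressure plane p0 a distance z (SI units)."""
    k = 2 * np.pi / wavelength
    n = p0.shape[0]
    kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    KX, KY = np.meshgrid(kx, kx)
    kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))  # evanescent parts decay
    return np.fft.ifft2(np.fft.fft2(p0) * np.exp(1j * kz * z))

# Toy input plane: a circular piston sampled at lambda/2 (all values illustrative).
wavelength = 1.5e-3                         # ~1 MHz in water
n, dx = 256, wavelength / 2
coords = np.arange(n) * dx - n * dx / 2
xx, yy = np.meshgrid(coords, coords)
p0 = (xx**2 + yy**2 < (5e-3)**2).astype(complex)
p = angular_spectrum(p0, dx, wavelength, z=wavelength)   # plane one wavelength away
print(np.abs(p).max())
```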
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil
Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) where random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the Probability Density Function (PDF) method for deriving a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
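For comparison with such closed-form PDE solutions, a Monte Carlo baseline for an SODE driven by time-correlated (Ornstein-Uhlenbeck) noise can be sketched as follows; the first-order generator model and all coefficients are illustrative assumptions, not the report's actual system.

```python
import numpy as np

rng = np.random.default_rng(42)
M, D = 10.0, 1.0             # inertia and damping (illustrative values)
tau, sigma = 5.0, 0.2        # correlation time and strength of power fluctuations
dt, n_steps, n_paths = 0.01, 5000, 2000

omega = np.zeros(n_paths)    # frequency deviation of the generator
P = np.zeros(n_paths)        # time-correlated power input: an OU process, not white noise
for _ in range(n_steps):
    # Euler-Maruyama step for the coupled system.
    P += -P / tau * dt + sigma * np.sqrt(2 * dt / tau) * rng.standard_normal(n_paths)
    omega += (P - D * omega) / M * dt

# The Monte Carlo histogram approximates the stationary PDF that the PDF method
# obtains from its closed-form deterministic PDE.
hist, edges = np.histogram(omega, bins=60, density=True)
print("stationary std of omega ~", omega.std())
```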
Explicit least squares system parameter identification for exact differential input/output models
NASA Technical Reports Server (NTRS)
Pearson, A. E.
1993-01-01
The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.
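The core idea, integrating the equation error so that derivatives of noisy data never have to be formed, can be illustrated on a first-order model y' = a*y + b*u. This sketch uses trapezoidal quadrature over sliding windows and is a deliberately simplified instance of the model class treated in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
dt = 0.01
t = np.arange(0, 10, dt)
u = np.sin(t)                                 # known input signal
a_true, b_true = -0.8, 2.0
y = np.zeros_like(t)                          # simulate y' = a*y + b*u
for i in range(1, len(t)):
    y[i] = y[i - 1] + dt * (a_true * y[i - 1] + b_true * u[i - 1])
y += rng.normal(0, 0.01, len(t))              # measurement noise

def trap(v):                                  # trapezoidal quadrature, uniform spacing dt
    return dt * (v.sum() - 0.5 * (v[0] + v[-1]))

# Integrated equation error over sliding windows:
#   y(t+w) - y(t) = a * int(y) + b * int(u),  so no noisy derivatives are needed.
w = 100
rows, rhs = [], []
for i in range(0, len(t) - w, w):
    rows.append([trap(y[i:i + w + 1]), trap(u[i:i + w + 1])])
    rhs.append(y[i + w] - y[i])
(a_hat, b_hat), *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
print(f"a = {a_hat:.3f} (true {a_true}), b = {b_hat:.3f} (true {b_true})")
```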
Femtosecond soliton source with fast and broad spectral tunability.
Masip, Martin E; Rieznik, A A; König, Pablo G; Grosz, Diego F; Bragas, Andrea V; Martinez, Oscar E
2009-03-15
We present a complete set of measurements and numerical simulations of a femtosecond soliton source with fast and broad spectral tunability and nearly constant pulse width and average power. Solitons generated in a photonic crystal fiber, in the low-power coupling regime, can be tuned over a broad range of wavelengths, from 850 to 1200 nm, using the input power as the control parameter. These solitons maintain almost constant duration (approximately 40 fs) and spectral width (approximately 20 nm) over the entire measured spectrum regardless of input power. Our numerical simulations agree well with the measurements and predict a wide working wavelength range and robustness to input parameters.
Uncertainty analyses of CO2 plume expansion subsequent to wellbore CO2 leakage into aquifers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Zhangshuan; Bacon, Diana H.; Engel, David W.
2014-08-01
In this study, we apply an uncertainty quantification (UQ) framework to CO2 sequestration problems. In one scenario, we look at the risk of wellbore leakage of CO2 into a shallow unconfined aquifer in an urban area; in another scenario, we study the effects of reservoir heterogeneity on CO2 migration. We combine various sampling approaches (quasi-Monte Carlo, probabilistic collocation, and adaptive sampling) in order to reduce the number of forward calculations while trying to fully explore the input parameter space and quantify the input uncertainty. The CO2 migration is simulated using the PNNL-developed simulator STOMP-CO2e (the water-salt-CO2 module). For computationally demanding simulations with 3D heterogeneity fields, we combined the framework with a scalable version module, eSTOMP, as the forward modeling simulator. We built response curves and response surfaces of model outputs with respect to input parameters, to look at the individual and combined effects, and to identify and rank the significance of the input parameters.
Jafari, Ramin; Chhabra, Shalini; Prince, Martin R; Wang, Yi; Spincemaille, Pascal
2018-04-01
To propose an efficient algorithm to perform dual input compartment modeling for generating perfusion maps in the liver. We implemented whole field-of-view linear least squares (LLS) to fit a delay-compensated dual-input single-compartment model to very high temporal resolution (four frames per second) contrast-enhanced 3D liver data, to calculate kinetic parameter maps. Using simulated data and experimental data in healthy subjects and patients, whole-field LLS was compared with the conventional voxel-wise nonlinear least-squares (NLLS) approach in terms of accuracy, performance, and computation time. Simulations showed good agreement between LLS and NLLS for a range of kinetic parameters. The whole-field LLS method allowed generating liver perfusion maps approximately 160-fold faster than voxel-wise NLLS, while obtaining similar perfusion parameters. Delay-compensated dual-input liver perfusion analysis using whole-field LLS allows generating perfusion maps with a considerable speedup compared with conventional voxel-wise NLLS fitting. Magn Reson Med 79:2415-2421, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
FAST: Fitting and Assessment of Synthetic Templates
NASA Astrophysics Data System (ADS)
Kriek, Mariska; van Dokkum, Pieter G.; Labbé, Ivo; Franx, Marijn; Illingworth, Garth D.; Marchesini, Danilo; Quadri, Ryan F.; Aird, James; Coil, Alison L.; Georgakakis, Antonis
2018-03-01
FAST (Fitting and Assessment of Synthetic Templates) fits stellar population synthesis templates to broadband photometry and/or spectra. FAST is compatible with the photometric redshift code EAzY (ascl:1010.052) when fitting broadband photometry; it uses the photometric redshifts derived by EAzY, and the input files (for example, photometric catalog and master filter file) are the same. FAST fits spectra in combination with broadband photometric data points or simultaneously fits two components, allowing for an AGN contribution in addition to the host galaxy light. Depending on the input parameters, FAST outputs the best-fit redshift, age, dust content, star formation timescale, metallicity, stellar mass, star formation rate (SFR), and their confidence intervals. Though some of FAST's functions overlap with those of HYPERZ (ascl:1108.010), it differs by fitting fluxes instead of magnitudes, allowing the user to completely define the grid of input stellar population parameters and easily input photometric redshifts and their confidence intervals, and calculating calibrated confidence intervals for all parameters. Note that FAST is not a photometric redshift code, though it can be used as one.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jung, Yoojin; Doughty, Christine
Input and output files used for fault characterization through numerical simulation using iTOUGH2. The synthetic data for the push period are generated by running a forward simulation (input parameters are provided in iTOUGH2 Brady GF6 Input Parameters.txt [InvExt6i.txt]). In general, the permeability of the fault gouge, damage zone, and matrix are assumed to be unknown. The input and output files are for the inversion scenario where only pressure transients are available at the monitoring well located 200 m above the injection well and only the fault gouge permeability is estimated. The input files are named InvExt6i, INPUT.tpl, FOFT.ins, and CO2TAB, and the output files are InvExt6i.out, pest.fof, and pest.sav. The table graphic in the accompanying data files summarizes the inversion results, and indicates the fault gouge permeability can be estimated even if imperfect guesses are used for matrix and damage zone permeabilities, and permeability anisotropy is not taken into account.
Universal Responses of Cyclic-Oxidation Models Studied
NASA Technical Reports Server (NTRS)
Smialek, James L.
2003-01-01
Oxidation is an important degradation process for materials operating in the high-temperature air or oxygen environments typical of jet turbine or rocket engines. Reaction of the combustion gases with the component material forms surface layer scales during these oxidative exposures. Typically, the instantaneous rate of reaction is inversely proportional to the existing scale thickness, giving rise to parabolic kinetics. However, more realistic applications entail periodic startup and shutdown. Some scale spallation may occur upon cooling, resulting in loss of the protective diffusion barrier provided by a fully intact scale. Upon reheating, the component will experience accelerated oxidation due to this spallation. Cyclic-oxidation testing has, therefore, been a mainstay of characterization and performance ranking for high-temperature materials. Models simulate this process by calculating how a scale spalls upon cooling and regrows upon heating (refs. 1 to 3). Recently released NASA software (COSP for Windows) allows researchers to specify a uniform layer or discrete segments of spallation (ref. 4). Families of model curves exhibit consistent regularity and trends with input parameters, and characteristic features have been empirically described in terms of these parameters. Although much insight has been gained from experimental and model curves, no equation has been derived that can describe this behavior explicitly as functions of the key oxidation parameters.
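The mechanism is straightforward to sketch: scale mass grows parabolically during each hot dwell and a fixed fraction spalls on each cooldown, producing the characteristic rise-then-fall net-weight-change curve. The toy model below uses a uniform-layer spall assumption and illustrative constants; it is not the released COSP code.

```python
import numpy as np

# Simplified COSP-like cyclic oxidation model: parabolic scale growth during each
# hot dwell, a fixed area fraction spalled on each cooldown (constants illustrative).
kp, Qs, t_hot, n_cycles = 0.01, 0.02, 1.0, 500
f_O = 0.47                         # oxygen mass fraction of the oxide (e.g. Al2O3)

W, Ws = 0.0, 0.0                   # retained and cumulative spalled scale (mg/cm^2)
net = np.zeros(n_cycles)
for c in range(n_cycles):
    W = np.sqrt(W**2 + kp * t_hot) # parabolic regrowth from the retained thickness
    spall = Qs * W                 # uniform-layer spallation on cooling
    W -= spall
    Ws += spall
    # Specimen weight change: oxygen gained by all oxide ever formed,
    # minus the full mass of oxide lost by spallation.
    net[c] = f_O * (W + Ws) - Ws

print("peak weight gain at cycle", int(net.argmax()), "; final change", round(net[-1], 3))
```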
Evaluation of trade influence on economic growth rate by computational intelligence approach
NASA Astrophysics Data System (ADS)
Sokolov-Mladenović, Svetlana; Milovančević, Milos; Mladenović, Igor
2017-01-01
In this study, the influence of trade parameters on economic growth forecasting accuracy was analyzed. A computational intelligence method was used for the analysis, since such methods can handle highly nonlinear data. It is known that economic growth can be modeled on the basis of different trade parameters. In this study five input parameters were considered: trade in services, exports of goods and services, imports of goods and services, trade, and merchandise trade. All of these parameters were expressed as percentages of gross domestic product (GDP). The main goal was to determine which parameters have the greatest impact on economic growth forecasting accuracy. GDP was used as the economic growth indicator. The results show that imports of goods and services have the highest influence on economic growth forecasting accuracy.
NASA Astrophysics Data System (ADS)
Krenn, Julia; Zangerl, Christian; Mergili, Martin
2017-04-01
r.randomwalk is a GIS-based, multi-functional, conceptual open source model application for forward and backward analyses of the propagation of mass flows. It relies on a set of empirically derived, uncertain input parameters. In contrast to many other tools, r.randomwalk accepts input parameter ranges (or, in the case of two or more parameters, spaces) in order to directly account for these uncertainties. Parameter spaces offer a way to move away from discrete input values, which in most cases are likely to be off target. r.randomwalk automatically performs multiple calculations with various parameter combinations in a given parameter space, resulting in the impact indicator index (III), which denotes the fraction of parameter value combinations predicting an impact on a given pixel. Still, there is a need to constrain the parameter space used for a certain process type or magnitude prior to performing forward calculations. This can be done by optimizing the parameter space so as to bring the model results in line with well-documented past events. As most existing parameter optimization algorithms are designed for discrete values rather than for ranges or spaces, a new and innovative technique is required. The present study aims at developing such a technique and applying it to derive guiding parameter spaces for the forward calculation of rock avalanches through back-calculation of multiple events. In order to automate the workflow, we have designed r.ranger, an optimization and sensitivity analysis tool for parameter spaces which can be directly coupled to r.randomwalk. With r.ranger we apply a nested approach in which the total value range of each parameter is divided into various levels of subranges. All possible combinations of subranges of all parameters are tested for the performance of the associated pattern of III. Performance indicators are the area under the ROC curve (AUROC) and the factor of conservativeness (FoC). This strategy is most easily demonstrated for two input parameters but can be extended arbitrarily (see the sketch below). We use a set of small rock avalanches from western Austria, and some larger ones from Canada and New Zealand, to optimize the basal friction coefficient and the mass-to-drag ratio of the two-parameter friction model implemented in r.randomwalk. We repeat the optimization procedure with conservative and non-conservative assumptions for a set of complementary parameters and with different raster cell sizes. Our preliminary results indicate that the model performance in terms of AUROC achieved with broad parameter spaces is hardly surpassed by that achieved with narrow parameter spaces. However, broad spaces may result in very conservative or very non-conservative predictions. Therefore, guiding parameter spaces have to be (i) broad enough to avoid the risk of being off target and (ii) narrow enough to ensure a reasonable level of conservativeness of the results. The next steps will consist of (i) extending the study to other types of mass flow processes in order to support forward calculations using r.randomwalk and (ii) applying the same strategy to the more complex, dynamic model r.avaflow.
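The nested subrange scan referenced above might be sketched as follows, with a synthetic impact mask and a hypothetical run_model standing in for documented events and r.randomwalk runs, and AUROC computed from the rank formula.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(7)
observed = rng.random(5000) < 0.3          # synthetic impact mask of a documented event

def auroc(scores, labels):
    # Rank-based AUROC (Mann-Whitney form), avoiding external dependencies.
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n1, n0 = labels.sum(), (~labels).sum()
    return (ranks[labels].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

def run_model(mu_lo, mu_hi, md_lo, md_hi):
    # Hypothetical III field standing in for an r.randomwalk run over the subrange;
    # it peaks when the subrange brackets assumed "true" values mu=0.2, md=50.
    fit = np.exp(-((mu_lo + mu_hi) / 2 - 0.2)**2 / 0.005
                 - ((md_lo + md_hi) / 2 - 50)**2 / 500)
    return np.clip(observed * fit + rng.normal(0.2, 0.15, observed.size), 0, 1)

# Nested subranges for basal friction (mu) and mass-to-drag ratio (md).
mu_edges = np.linspace(0.05, 0.35, 4)
md_edges = np.linspace(20, 80, 4)
best = max((auroc(run_model(a, b, c, d), observed), (a, b, c, d))
           for (a, b), (c, d) in product(zip(mu_edges[:-1], mu_edges[1:]),
                                         zip(md_edges[:-1], md_edges[1:])))
print("best AUROC %.3f for subranges" % best[0], best[1])
```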
ERIC Educational Resources Information Center
Bahrani, Taher; Sim, Tam Shu
2012-01-01
In today's audiovisually driven world, various audiovisual programs can be incorporated as authentic sources of potential language input for second language acquisition. In line with this view, the present research aimed at discovering the effectiveness of exposure to news, cartoons, and films as three different types of authentic audiovisual…
Age and Input in the Acquisition of Grammatical Gender in Dutch
ERIC Educational Resources Information Center
Unsworth, Sharon
2008-01-01
This article investigates the effect of age of first exposure and the quantity and quality of input to which non-native acquirers (L2ers) are exposed in their acquisition of grammatical gender in Dutch. Data from 103 English-speaking children, preteens and adults were analyzed for gender agreement on definite determiners. It was observed that…
EPA Exposure Research and the ExpoCast Project: New Methods and New Data (NIEHS Exposome webinar)
Estimates of human and ecological exposures are required as critical input to risk-based prioritization and screening of thousands of chemicals. In a 2009 commentary in Environmental Health Perspectives, Shelden and Hubal proposed that “Novel statistical and informatic approaches...
Li, Tingting; Cheng, Zhengguo; Zhang, Le
2017-01-01
Since they can provide a natural and flexible description of the nonlinear dynamic behavior of complex systems, agent-based models (ABM) have been commonly used for immune system simulation. However, it is crucial for an ABM to obtain appropriate estimates of the key parameters of the model by incorporating experimental data. In this paper, a systematic procedure for immune system simulation by integrating the ABM and regression method under the framework of history matching is developed. A novel parameter estimation method incorporating the experimental data for the simulator ABM during the procedure is proposed. First, we employ the ABM as a simulator to simulate the immune system. Then, the dimension-reduced type generalized additive model (GAM) is employed to train a statistical regression model using the input and output data of the ABM, and it plays the role of an emulator during history matching. Next, we reduce the input space of parameters by introducing an implausibility measure to discard implausible input values. Finally, the estimation of the model parameters is obtained using the particle swarm optimization algorithm (PSO) by fitting the experimental data among the non-implausible input values. A real Influenza A Virus (IAV) data set is employed to demonstrate the performance of our proposed method, and the results show that the proposed method not only has good fitting and predicting accuracy, but also favorable computational efficiency. PMID:29194393
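The implausibility step of this history-matching procedure can be sketched as below; a plain linear regression stands in for the GAM emulator, the observation and cutoff are illustrative, and the retained candidates are what would be handed to the PSO fitting stage.

```python
import numpy as np

rng = np.random.default_rng(11)

def simulator(theta):
    # Toy stand-in for the ABM: a scalar summary of the simulated immune response.
    return theta[0] * np.exp(-theta[1]) + rng.normal(0, 0.02)

# Train a cheap emulator; plain linear regression stands in for the GAM here.
train_x = rng.uniform([0, 0], [5, 2], size=(200, 2))
train_y = np.array([simulator(th) for th in train_x])
A = np.column_stack([np.ones(len(train_x)), train_x])
beta, *_ = np.linalg.lstsq(A, train_y, rcond=None)
emul_var = np.var(train_y - A @ beta)          # crude emulator variance estimate

z, obs_var = 1.1, 0.01**2                      # "experimental" datum and its variance

# Implausibility measure: discard inputs whose emulated output sits too far from z.
cand = rng.uniform([0, 0], [5, 2], size=(10_000, 2))
pred = np.column_stack([np.ones(len(cand)), cand]) @ beta
I = np.abs(z - pred) / np.sqrt(obs_var + emul_var)
non_implausible = cand[I < 3.0]                # conventional cutoff of 3
print(f"{len(non_implausible)} of {len(cand)} candidates retained for the PSO stage")
```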
Calculating the sensitivity of wind turbine loads to wind inputs using response surfaces
NASA Astrophysics Data System (ADS)
Rinker, Jennifer M.
2016-09-01
This paper presents a methodology to calculate wind turbine load sensitivities to turbulence parameters through the use of response surfaces. A response surface is a high-dimensional polynomial surface that can be calibrated to any set of input/output data and then used to generate synthetic data at a low computational cost. Sobol sensitivity indices (SIs) can then be calculated with relative ease using the calibrated response surface. The proposed methodology is demonstrated by calculating the total sensitivity of the maximum blade root bending moment of the WindPACT 5 MW reference model to four turbulence input parameters: a reference mean wind speed, a reference turbulence intensity, the Kaimal length scale, and a novel parameter reflecting the nonstationarity present in the inflow turbulence. The input/output data used to calibrate the response surface were generated for a previous project. The fit of the calibrated response surface is evaluated in terms of error between the model and the training data and in terms of convergence. The Sobol SIs are calculated using the calibrated response surface, and their convergence is examined. The Sobol SIs reveal that, of the four turbulence parameters examined in this paper, the variance caused by the Kaimal length scale and the nonstationarity parameter is negligible. Thus, the findings in this paper represent the first systematic evidence that stochastic wind turbine load response statistics can be modeled purely by mean wind speed and turbulence intensity.
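A compact sketch of that workflow, with a quadratic response surface in two inputs and synthetic training data standing in for the WindPACT load simulations (the input ranges and load function are assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic training data standing in for turbine-load simulations (assumptions).
n = 400
U = rng.uniform(4, 24, n)          # reference mean wind speed
Ti = rng.uniform(0.05, 0.25, n)    # reference turbulence intensity
load = 50 + 8 * U + 300 * Ti + 0.6 * U**2 * Ti + rng.normal(0, 5, n)

# Quadratic response surface calibrated by least squares.
def design(u, ti):
    return np.column_stack([np.ones_like(u), u, ti, u * ti, u**2, ti**2])
coef, *_ = np.linalg.lstsq(design(U, Ti), load, rcond=None)
surrogate = lambda u, ti: design(u, ti) @ coef

# First-order Sobol indices on the cheap surrogate: Var(E[Y|X_i]) / Var(Y),
# estimated by binning fresh Monte Carlo samples on each input in turn.
m = 200_000
u, ti = rng.uniform(4, 24, m), rng.uniform(0.05, 0.25, m)
y = surrogate(u, ti)

def first_order(x):
    edges = np.quantile(x, np.linspace(0, 1, 51))
    idx = np.clip(np.digitize(x, edges) - 1, 0, 49)
    cm = np.bincount(idx, weights=y, minlength=50) / np.bincount(idx, minlength=50)
    return cm[idx].var() / y.var()

print(f"S_wind = {first_order(u):.2f}, S_TI = {first_order(ti):.2f}")
```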
An Inverse Analysis Approach to the Characterization of Chemical Transport in Paints
Willis, Matthew P.; Stevenson, Shawn M.; Pearl, Thomas P.; Mantooth, Brent A.
2014-01-01
The ability to directly characterize chemical transport and interactions that occur within a material (i.e., subsurface dynamics) is a vital component in understanding contaminant mass transport and the ability to decontaminate materials. If a material is contaminated, over time, the transport of highly toxic chemicals (such as chemical warfare agent species) out of the material can result in vapor exposure or transfer to the skin, which can result in percutaneous exposure to personnel who interact with the material. Due to the high toxicity of chemical warfare agents, the release of trace chemical quantities is of significant concern. Mapping subsurface concentration distribution and transport characteristics of absorbed agents enables exposure hazards to be assessed in untested conditions. Furthermore, these tools can be used to characterize subsurface reaction dynamics to ultimately design improved decontaminants or decontamination procedures. To achieve this goal, an inverse analysis mass transport modeling approach was developed that utilizes time-resolved mass spectrometry measurements of vapor emission from contaminated paint coatings as the input parameter for calculation of subsurface concentration profiles. Details are provided on sample preparation, including contaminant and material handling, the application of mass spectrometry for the measurement of emitted contaminant vapor, and the implementation of inverse analysis using a physics-based diffusion model to determine transport properties of live chemical warfare agents including distilled mustard (HD) and the nerve agent VX. PMID:25226346
Assessing and enhancing the utility of low-cost activity and location sensors for exposure studies.
Asimina, Stamatelopoulou; Chapizanis, D; Karakitsios, S; Kontoroupis, P; Asimakopoulos, D N; Maggos, T; Sarigiannis, D
2018-02-20
Nowadays, the advancement of mobile technology, in conjunction with the introduction of the exposome concept, has given new momentum to exposure studies. Since addressing health outcomes related to environmental stressors is crucial, improving exposure assessment methodology is of paramount importance. Towards this aim, a pilot study was carried out in the two major cities of Greece (Athens, Thessaloniki), investigating the applicability of commercially available fitness monitors and the Moves App for tracking people's location and activities, as well as for predicting the type of the encountered location using advanced modeling techniques. Within the framework of the study, 21 individuals used the Fitbit Flex activity tracker, a temperature logger, and the Moves App on their smartphones. For validation of the above equipment, participants also carried an Actigraph (activity sensor) and a GPS device. The data collected from the Fitbit Flex, the temperature logger, and the GPS (speed) were used as input parameters in an Artificial Neural Network (ANN) model for predicting the type of location. Analysis of the data showed that the Moves App tends to underestimate daily step counts in comparison with the Fitbit Flex and the Actigraph, while the Moves App predicted the movement trajectory of an individual with reasonable accuracy compared to a dedicated GPS. Finally, the encountered location was successfully predicted by the ANN in most cases.
Son, Yeongkwon; Osornio-Vargas, Álvaro R; O'Neill, Marie S; Hystad, Perry; Texcalac-Sangrador, José L; Ohman-Strickland, Pamela; Meng, Qingyu; Schwander, Stephan
2018-05-17
The Mexico City Metropolitan Area (MCMA) is one of the largest and most populated urban environments in the world and experiences high air pollution levels. To develop models that estimate pollutant concentrations at fine spatiotemporal scales and provide improved air pollution exposure assessments for health studies in Mexico City. We developed finer spatiotemporal land use regression (LUR) models for PM2.5, PM10, O3, NO2, CO and SO2 using mixed effect models with the Least Absolute Shrinkage and Selection Operator (LASSO). Hourly traffic density was included as a temporal variable besides meteorological and holiday variables. Models of hourly, daily, monthly, 6-monthly and annual averages were developed and evaluated using traditional and novel indices. The developed spatiotemporal LUR models yielded predicted concentrations with good spatial and temporal agreement with measured pollutant levels, except for hourly PM2.5, PM10 and SO2. Most of the LUR models met performance goals based on the standardized indices. LUR models with temporal scales greater than one hour were successfully developed using mixed effect models with LASSO and showed superior model performance compared to earlier LUR models, especially for time scales of a day or longer. The newly developed LUR models will be further refined with ongoing Mexico City air pollution sampling campaigns to improve personal exposure assessments. Copyright © 2018. Published by Elsevier B.V.
Flight Test of Orthogonal Square Wave Inputs for Hybrid-Wing-Body Parameter Estimation
NASA Technical Reports Server (NTRS)
Taylor, Brian R.; Ratnayake, Nalin A.
2011-01-01
As part of an effort to improve emissions, noise, and performance of next generation aircraft, it is expected that future aircraft will use distributed, multi-objective control effectors in a closed-loop flight control system. Correlation challenges associated with parameter estimation will arise with this expected aircraft configuration. The research presented in this paper focuses on addressing the correlation problem with an appropriate input design technique in order to determine individual control surface effectiveness. This technique was validated through flight-testing an 8.5-percent-scale hybrid-wing-body aircraft demonstrator at the NASA Dryden Flight Research Center (Edwards, California). An input design technique that uses mutually orthogonal square wave inputs for de-correlation of control surfaces is proposed. Flight-test results are compared with prior flight-test results for a different maneuver style.
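Mutually orthogonal square waves of this kind can be generated as Walsh functions. The sketch below builds them from a Hadamard matrix and verifies orthogonality; the sizes and sequency ordering are illustrative, not the flight-test input design.

```python
import numpy as np

def walsh(n_funcs, n_samples):
    """First n_funcs Walsh functions: mutually orthogonal +/-1 square waves."""
    n = 1
    while n < max(n_funcs, n_samples):
        n *= 2
    H = np.array([[1]])
    while H.shape[0] < n:                    # Sylvester/Hadamard construction
        H = np.block([[H, H], [H, -H]])
    # Order rows by number of sign changes ("sequency") for a Walsh ordering.
    changes = (np.diff(H, axis=1) != 0).sum(axis=1)
    return H[np.argsort(changes)][:n_funcs, :n_samples]

W = walsh(4, 64)            # e.g. one waveform per control surface
print(W @ W.T)              # diagonal matrix: the inputs are mutually orthogonal
```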
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Wei; Chen, Gaoqiang; Chen, Jian
Reduced-activation ferritic/martensitic (RAFM) steels are an important class of structural materials for fusion reactor internals developed in recent years because of their improved irradiation resistance. However, they can suffer from welding-induced property degradation. In this paper, a solid phase joining technology, friction stir welding (FSW), was adopted to join the RAFM steel Eurofer 97, and different FSW parameters/heat inputs were chosen to produce welds. FSW response parameters, joint microstructures and microhardness were investigated to reveal relationships among welding heat input, weld structure characterization and mechanical properties. In general, FSW heat input results in high hardness inside the stir zone, mostly due to a martensitic transformation. It is possible to produce friction stir welds similar to, but not with exactly the same, base metal hardness when using low power input because of other hardening mechanisms. Further, post weld heat treatment (PWHT) is a very effective way to reduce FSW stir zone hardness values.
NASA Astrophysics Data System (ADS)
Haller, Julian; Wilkens, Volker
2012-11-01
For power levels up to 200 W and sonication times up to 60 s, the electrical power, the voltage and the electrical impedance (more exactly: the ratio of RMS voltage and RMS current) have been measured for a piezocomposite high intensity therapeutic ultrasound (HITU) transducer with integrated matching network, two piezoceramic HITU transducers with external matching networks and for a passive dummy 50 Ω load. The electrical power and the voltage were measured during high power application with an inline power meter and an RMS voltage meter, respectively, and the complex electrical impedance was indirectly measured with a current probe, a 100:1 voltage probe and a digital scope. The results clearly show that the input RMS voltage and the input RMS power change unequally during the application. Hence, the indication of only the electrical input power or only the voltage as the input parameter may not be sufficient for reliable characterizations of ultrasound transducers for high power applications in some cases.
WRF/CMAQ AQMEII3 Simulations of US Regional-Scale ...
Chemical boundary conditions are a key input to regional-scale photochemical models. In this study, performed during the third phase of the Air Quality Model Evaluation International Initiative (AQMEII3), we perform annual simulations over North America with chemical boundary conditions prepared from four different global models. Results indicate that the impacts of different boundary conditions are significant for ozone throughout the year and most pronounced outside the summer season. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas. CED uses modeling-based approaches to characterize exposures, evaluate fate and transport, and support environmental diagnostics/forensics with input from multiple data sources. It also develops media- and receptor-specific models, process models, and decision support tools for use both within and outside of EPA.
Wong, Stephanie K.; Sawit, Simonette T.; Calcagno, Claudia; Maceda, Cynara; Ramachandran, Sarayu; Fayad, Zahi A.; Moline, Jacqueline; McLaughlin, Mary Ann
2013-01-01
In this pilot study, we hypothesize that dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) has the potential to evaluate differences in atherosclerosis profiles in patients subjected to high (initial dust cloud) and low (after 13 September 2001) particulate matter (PM) exposure. Exposure to PM may be associated with adverse health effects leading to increased morbidity. Law enforcement workers were exposed to high levels of particulate pollution after working at “Ground Zero” and may exhibit accelerated atherosclerosis. 31 subjects (28 male) with high (n = 19) or low (n = 12) exposure to PM underwent DCE-MRI. Demographics (age, gender, family history, hypertension, diabetes, BMI, and smoking status), biomarkers (lipid profiles, hs-CRP, BP) and ankle-brachial index (ABI) measures (left and right) were obtained from all subjects. Differences between the high and low exposures were compared using independent samples t test. Using linear forward stepwise regression with information criteria model, independent predictors of increased area under curve (AUC) from DCE-MRI were determined using all variables as input. Confidence interval of 95 % was used and variables with p > 0.1 were eliminated. p < 0.05 was considered significant. Subjects with high exposure (HE) had significantly higher DCE-MRI AUC uptake (increased neovascularization) compared to subjects with lower exposure (LE). (AUC: 2.65 ± 0.63 HE vs. 1.88 ± 0.69 LE, p = 0.016). Except for right leg ABI, none of the other parameters were significantly different between the two groups. Regression model indicated that only HE to PM, CRP > 3.0 and total cholesterol were independently associated with increased neovascularization (in decreasing order of importance, all p < 0.026). HE to PM may increase plaque neovascularization, and thereby potentially indicate worsening atherogenic profile of “Ground Zero” workers. PMID:23179748
Macroscopic singlet oxygen model incorporating photobleaching as an input parameter
NASA Astrophysics Data System (ADS)
Kim, Michele M.; Finlay, Jarod C.; Zhu, Timothy C.
2015-03-01
A macroscopic singlet oxygen model for photodynamic therapy (PDT) has been used extensively to calculate the reacted singlet oxygen concentration for various photosensitizers. The four photophysical parameters (ξ, σ, β, δ) and threshold singlet oxygen dose ([1O2]r,sh) can be found for various drugs and drug-light intervals using a fitting algorithm. The input parameters for this model include the fluence, photosensitizer concentration, optical properties, and necrosis radius. An additional input variable of photobleaching was implemented in this study to optimize the results. Photobleaching was measured by using the pre-PDT and post-PDT sensitizer concentrations. Using the RIF model of murine fibrosarcoma, mice were treated with a linear source with fluence rates from 12 - 150 mW/cm and total fluences from 24 - 135 J/cm. The two main drugs investigated were benzoporphyrin derivative monoacid ring A (BPD) and 2-[1-hexyloxyethyl]-2-devinyl pyropheophorbide-a (HPPH). Previously published photophysical parameters were fine-tuned and verified using photobleaching as the additional fitting parameter. Furthermore, photobleaching can be used as an indicator of the robustness of the model for the particular mouse experiment by comparing the experimental and model-calculated photobleaching ratio.
Gaussian beam profile shaping apparatus, method therefor and evaluation thereof
Dickey, Fred M.; Holswade, Scott C.; Romero, Louis A.
1999-01-01
A method and apparatus maps a Gaussian beam into a beam with a uniform irradiance profile by exploiting the Fourier transform properties of lenses. A phase element imparts a design phase onto an input beam and the output optical field from a lens is then the Fourier transform of the input beam and the phase function from the phase element. The phase element is selected in accordance with a dimensionless parameter which is dependent upon the radius of the incoming beam, the desired spot shape, the focal length of the lens and the wavelength of the input beam. This dimensionless parameter can also be used to evaluate the quality of a system. In order to control the radius of the incoming beam, optics such as a telescope can be employed. The size of the target spot and the focal length can be altered by exchanging the transform lens, but the dimensionless parameter will remain the same. The quality of the system, and hence the value of the dimensionless parameter, can be altered by exchanging the phase element. The dimensionless parameter provides design guidance, system evaluation, and indication as to how to improve a given system.
NASA Astrophysics Data System (ADS)
Naik, Deepak kumar; Maity, K. P.
2018-03-01
Plasma arc cutting (PAC) is a high temperature thermal cutting process employed for cutting extremely high strength materials that are difficult to cut by any other manufacturing process. The process uses a highly energized plasma arc to cut any conducting material with better dimensional accuracy in less time. This work presents the effect of the process parameters on the dimensional accuracy of the PAC process. The input process parameters were arc voltage, standoff distance and cutting speed. A rectangular plate of 304L stainless steel of 10 mm thickness was taken as the workpiece; stainless steel is a very extensively used material in the manufacturing industries. Linear dimensions were measured following Taguchi's L16 orthogonal array design approach, with three levels selected for each process parameter. In all experiments, a clockwise cut direction was followed. The measurements were then analyzed: analysis of variance (ANOVA) and analysis of means (ANOM) were performed to evaluate the effect of each process parameter. The ANOVA reveals the effect of each input process parameter on the linear dimension along the X axis, and the results give the optimal process parameter settings for this dimension. The investigation clearly shows that specific ranges of the input process parameters achieve improved machinability.
Ramdani, Sofiane; Bonnet, Vincent; Tallon, Guillaume; Lagarde, Julien; Bernard, Pierre Louis; Blain, Hubert
2016-08-01
Entropy measures are often used to quantify the regularity of postural sway time series. Recent methodological developments have provided both multivariate and multiscale approaches allowing the extraction of complexity features from physiological signals; see "Dynamical complexity of human responses: A multivariate data-adaptive framework," Bulletin of the Polish Academy of Sciences: Technical Sciences, vol. 60, p. 433, 2012. The resulting entropy measures are good candidates for the analysis of bivariate postural sway signals exhibiting nonstationarity and multiscale properties. These methods depend on several input parameters, such as the embedding parameters. Using two data sets collected from institutionalized frail older adults, we numerically investigate the behavior of a recent multivariate and multiscale entropy estimator; see "Multivariate multiscale entropy: A tool for complexity analysis of multichannel data," Physical Review E, vol. 84, p. 061918, 2011. We propose criteria for the selection of the input parameters. Using these optimal parameters, we statistically compare the multivariate and multiscale entropy values of postural sway data of non-faller subjects to those of fallers. These two groups are discriminated by the resulting measures over multiple time scales. We also demonstrate that the typical parameter settings proposed in the literature lead to entropy measures that do not distinguish the two groups. This last result confirms the importance of selecting appropriate input parameters.
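For orientation, the univariate building blocks, coarse-graining followed by sample entropy, are sketched below; the cited estimator is the multivariate extension of this scheme, and the embedding dimension m and tolerance r are exactly the input parameters whose selection the study addresses. The sway trace is synthetic.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Univariate sample entropy with tolerance r as a fraction of the series SD."""
    x = np.asarray(x, float)
    r = r_factor * x.std()
    def matches(mm):
        templ = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.abs(templ[:, None, :] - templ[None, :, :]).max(axis=-1)  # Chebyshev
        return (d <= r).sum() - len(templ)   # exclude self-matches
    return -np.log(matches(m + 1) / matches(m))

def coarse_grain(x, scale):
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

rng = np.random.default_rng(2)
sway = np.cumsum(rng.normal(size=1200)) * 0.01   # toy one-dimensional sway trace
for s in (1, 2, 4, 8):
    print(f"scale {s}: SampEn = {sample_entropy(coarse_grain(sway, s)):.3f}")
```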
NASA Astrophysics Data System (ADS)
Soja, G.; Soja, A.-M.
This study tested the usefulness of extremely simple meteorological models for the prediction of ozone indices. The models were developed with the input parameters of daily maximum temperature and sunshine duration and are based on a data collection period of three years. For a rural environment in eastern Austria, the meteorological and ozone data of three summer periods have been used to develop functions to describe three ozone exposure indices (daily maximum, 7 h mean 9.00-16.00 h, accumulated ozone dose AOT40). Data sets for other years or stations not included in the development of the models were used as test data to validate the performance of the models. Generally, optimized regression models performed better than simplest linear models, especially in the case of AOT40. For the description of the summer period from May to September, the mean absolute daily differences between observed and calculated indices were 8±6 ppb for the maximum half hour mean value, 6±5 ppb for the 7 h mean and 41±40 ppb h for the AOT40. When the parameters were further optimized to describe individual months separately, the mean absolute residuals decreased by ⩽10%. Neural network models did not always perform better than the regression models. This is attributed to the low number of inputs in this comparison and to the simple architecture of these models (2-2-1). Further factorial analyses of those days when the residuals were higher than the mean plus one standard deviation should reveal possible reasons why the models did not perform well on certain days. It was observed that overestimations by the models mainly occurred on days with partly overcast, hazy or very windy conditions. Underestimations more frequently occurred on weekdays than on weekends. It is suggested that the application of this kind of meteorological model will be more successful in topographically homogeneous regions and in rural environments with relatively constant rates of emission and long-range transport of ozone precursors. Under conditions too demanding for advanced physico/chemical models, the presented models may offer useful alternatives to derive ecologically relevant ozone indices directly from meteorological parameters.
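The simplest model class described, a linear regression from daily maximum temperature and sunshine duration to an ozone index, fits in a few lines; the data below are synthetic stand-ins for the Austrian station records.

```python
import numpy as np

# Toy daily records: maximum temperature (deg C), sunshine duration (h) and the
# observed daily ozone maximum (ppb); synthetic stand-ins for the station data.
rng = np.random.default_rng(4)
tmax = rng.uniform(15, 35, 300)
sun = rng.uniform(0, 14, 300)
o3max = 10 + 1.8 * tmax + 1.2 * sun + rng.normal(0, 8, 300)

# Simplest linear model of the kind described, fitted by least squares.
X = np.column_stack([np.ones_like(tmax), tmax, sun])
beta, *_ = np.linalg.lstsq(X, o3max, rcond=None)
pred = X @ beta
print("mean absolute residual: %.1f ppb" % np.abs(o3max - pred).mean())
```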
Acute effects of Dry Immersion on kinematic characteristics of postural corrective responses
NASA Astrophysics Data System (ADS)
Sayenko, D. G.; Miller, T. F.; Melnik, K. A.; Netreba, A. I.; Khusnutdinova, D. R.; Kitov, V. V.; Tomilovskaya, E. S.; Reschke, M. F.; Gerasimenko, Y. P.; Kozlovskaya, I. B.
2016-04-01
Impairments in balance control are inevitable following exposure to microgravity. However, the roles of particular sensory systems in postural disorders at different stages of exposure to microgravity remain unknown. We used a method called Dry Immersion (DI), a ground-based model of microgravity, to elucidate the effects of 6 h of reduced load-related afferent input on the kinematic characteristics of postural corrective responses evoked by pushes to the chest of different intensities during upright standing. The structure of postural corrective responses was altered following exposure to DI, manifested by: (1) an increase of ankle and knee flexion during perturbations of medium intensity, and (2) a lack of compensatory hip extension, as well as diminished knee and ankle flexion, when the perturbation intensity was further increased to the submaximal level. We suggest that the lack of weight-bearing increases the reactivity of the balance control system, whereas the ability to scale the responses proportionally to the perturbation intensity decreases. Disrupted neuromuscular coordination of postural corrective responses following DI can be attributed to adaptive neural modifications at the spinal and cortical levels. The present study provides evidence that even a short-term lack of load-related afferent input alters kinematic patterns of postural corrective responses and can result in decreased balance control. Because vestibular input is not primarily affected during DI exposure, our results indicate that the activity and state of load-related afferents play critical roles in balance control following real or simulated microgravity.
Evaluation of extra virgin olive oil stability by artificial neural network.
Silva, Simone Faria; Anjos, Carlos Alberto Rodrigues; Cavalcanti, Rodrigo Nunes; Celeghini, Renata Maria dos Santos
2015-07-15
The stability of extra virgin olive oil in polyethylene terephthalate bottles and tinplate cans stored for 6 months under dark and light conditions was evaluated. The following analyses were carried out: free fatty acids, peroxide value, specific extinction at 232 and 270 nm, chlorophyll, L*C*h color, total phenolic compounds, tocopherols and squalene. The physicochemical changes were evaluated by artificial neural network (ANN) modeling with respect to light exposure conditions and packaging material. The optimized ANN structure consists of 11 input neurons, 18 hidden neurons and 5 output neurons, using hyperbolic tangent and softmax activation functions in the hidden and output layers, respectively. The five output neurons correspond to five possible classifications according to packaging material (PET amber, PET transparent and tinplate can) and light exposure (dark and light storage). The predicted physicochemical changes agreed very well with the experimental data, showing high classification accuracy for the test (>90%) and training (>85%) sets. Sensitivity analysis showed that free fatty acid content, peroxide value, L*C*h color parameters, tocopherol and chlorophyll contents were the physicochemical attributes with the most discriminative power. Copyright © 2015 Elsevier Ltd. All rights reserved.
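A minimal sketch of the reported 11-18-5 architecture, assuming scikit-learn as the toolkit (the paper does not name one) and synthetic data in place of the olive oil measurements; scikit-learn's MLPClassifier applies a softmax output for multiclass problems, matching the described output layer.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in: 11 physicochemical attributes, 5 packaging/light classes.
rng = np.random.default_rng(2)
centers = rng.normal(scale=2.0, size=(5, 11))   # class means, illustrative only
y = rng.integers(0, 5, size=600)
X = centers[y] + rng.normal(size=(600, 11))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
# 11 inputs -> 18 tanh hidden units -> 5 softmax outputs, as in the abstract.
clf = MLPClassifier(hidden_layer_sizes=(18,), activation="tanh",
                    max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("training accuracy: %.2f" % clf.score(X_tr, y_tr))
print("test accuracy:     %.2f" % clf.score(X_te, y_te))
```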
DiMES PMI research at DIII-D in support of ITER and beyond
Rudakov, Dimitry L.; Abrams, Tyler; Ding, Rui; ...
2017-03-27
An overview of recent Plasma-Material Interactions (PMI) research at the DIII-D tokamak using the Divertor Material Evaluation System (DiMES) is presented. The DiMES manipulator allows for exposure of material samples in the lower divertor of DIII-D under well-diagnosed ITER-relevant plasma conditions. Plasma parameters during the exposures are characterized by an extensive diagnostic suite including a number of spectroscopic diagnostics, Langmuir probes, IR imaging, and Divertor Thomson Scattering. Post-mortem measurements of net erosion/deposition on the samples are done by Ion Beam Analysis, and results are modelled by the ERO and REDEP/WBC codes with the plasma background reproduced by OEDGE/DIVIMP modelling based on experimental inputs. This article highlights experiments studying sputtering erosion, re-deposition and migration of high-Z elements, mostly tungsten and molybdenum, as well as some alternative materials. Results are generally encouraging for use of high-Z PFCs in ITER and beyond, showing high redeposition and reduced net sputter erosion. Two methods of high-Z PFC surface erosion control, with (i) external electrical biasing and (ii) local gas injection, are also discussed. Furthermore, these techniques may find applications in future devices.
Modeling Input Errors to Improve Uncertainty Estimates for Sediment Transport Model Predictions
NASA Astrophysics Data System (ADS)
Jung, J. Y.; Niemann, J. D.; Greimann, B. P.
2016-12-01
Bayesian methods using Markov chain Monte Carlo algorithms have recently been applied to sediment transport models to assess the uncertainty in the model predictions due to the parameter values. Unfortunately, the existing approaches can only attribute overall uncertainty to the parameters. This limitation is critical because no model can produce accurate forecasts if forced with inaccurate input data, even if the model is well founded in physical theory. In this research, an existing Bayesian method is modified to consider the potential errors in input data during the uncertainty evaluation process. The input error is modeled using Gaussian distributions, and the means and standard deviations are treated as uncertain parameters. The proposed approach is tested by coupling it to the Sedimentation and River Hydraulics - One Dimension (SRH-1D) model and simulating a 23-km reach of the Tachia River in Taiwan. The Wu equation in SRH-1D is used for computing the transport capacity for a bed material load of non-cohesive material. Three types of input data are considered uncertain: (1) the input flowrate at the upstream boundary, (2) the water surface elevation at the downstream boundary, and (3) the water surface elevation at a hydraulic structure in the middle of the reach. The benefits of modeling the input errors in the uncertainty analysis are evaluated by comparing the accuracy of the most likely forecast and the coverage of the observed data by the credible intervals to those of the existing method. The results indicate that the internal boundary condition has the largest uncertainty among those considered. Overall, the uncertainty estimates from the new method are notably different from those of the existing method for both the calibration and forecast periods.
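The following toy sketch illustrates the core idea described above: the mean and standard deviation of a Gaussian input error are treated as uncertain parameters and sampled jointly with a model parameter by random-walk Metropolis. It replaces SRH-1D with a one-line linear model so the per-observation input error can be marginalized analytically; all names and values are invented stand-ins for the study's setup.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "transport model": predicted load = k * (observed inflow + input error).
inflow_obs = rng.uniform(50, 150, 40)                    # reported inflow (with unknown error)
true_err = rng.normal(5.0, 3.0, inflow_obs.size)         # hidden input error
load_obs = 0.8 * (inflow_obs + true_err) + rng.normal(0, 2.0, inflow_obs.size)

def log_post(theta):
    k, mu, sigma = theta                                 # model parameter + input-error mean/SD
    if sigma <= 0 or not (0 < k < 5):
        return -np.inf                                   # flat priors with simple bounds
    pred = k * (inflow_obs + mu)                         # mean prediction
    var = (k * sigma) ** 2 + 2.0 ** 2                    # propagated input error + output noise
    return -0.5 * np.sum((load_obs - pred) ** 2 / var + np.log(var))

theta = np.array([1.0, 0.0, 1.0])
samples, lp = [], log_post(theta)
for _ in range(20000):                                   # random-walk Metropolis
    prop = theta + rng.normal(0, [0.02, 0.5, 0.2])
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
samples = np.array(samples[5000:])                       # discard burn-in
print("posterior means (k, mu, sigma):", samples.mean(axis=0).round(2))
```

The same structure extends to several input series, e.g. the three boundary conditions in the study, by giving each its own (mean, SD) pair in theta.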
Pouplin, Samuel; Roche, Nicolas; Antoine, Jean-Yves; Vaugier, Isabelle; Pottier, Sandra; Figere, Marjorie; Bensmail, Djamel
2017-06-01
To determine whether activation of the frequency of use and automatic learning parameters of word prediction software has an impact on text input speed. Forty-five participants with cervical spinal cord injury between C4 and C8 (ASIA A or B) agreed to participate in this study. Participants were separated into two groups: a high lesion group (lesion level at or above C5, ASIA AIS A or B) and a low lesion group (lesion level between C6 and C8, ASIA AIS A or B). A single evaluation session was carried out for each participant. Text input speed was evaluated during three copying tasks:
• without word prediction software (WITHOUT condition)
• with automatic learning of words and frequency of use deactivated (NOT_ACTIV condition)
• with automatic learning of words and frequency of use activated (ACTIV condition)
Results: Text input speed was significantly higher in the WITHOUT condition than in the NOT_ACTIV (p < 0.001) or ACTIV (p = 0.02) conditions for participants with low lesions. Text input speed was significantly higher in the ACTIV condition than in the NOT_ACTIV (p = 0.002) or WITHOUT (p < 0.001) conditions for participants with high lesions. Use of word prediction software with frequency of use and automatic learning activated increased text input speed in participants with high-level tetraplegia. For participants with low-level tetraplegia, the use of word prediction software with frequency of use and automatic learning activated only decreased the number of errors. Implications for rehabilitation: Access to technology can be difficult for persons with disabilities such as cervical spinal cord injury (SCI). Several methods have been developed to increase text input speed, such as word prediction software. This study shows that a parameter of word prediction software (frequency of use) affected text input speed in persons with cervical SCI and that the effect differed according to the level of the lesion.
• For persons with a high-level lesion, our results suggest that this parameter must be activated so that text input speed is increased.
• For persons in the low lesion group, this parameter must be activated so that the number of errors is decreased.
• In all cases, activation of the frequency of use parameter is essential in order to improve the efficiency of the word prediction software.
• Health-related professionals should use these results in their clinical practice for better results and therefore better patient satisfaction.
Verloock, Leen; Joseph, Wout; Gati, Azeddine; Varsier, Nadège; Flach, Björn; Wiart, Joe; Martens, Luc
2013-06-01
An experimental validation of a low-cost method for extrapolation and estimation of the maximal electromagnetic-field exposure from long-term evolution (LTE) radio base station installations is presented. No knowledge of downlink band occupation or service characteristics is required for the low-cost method, which is applicable in situ. It only requires a basic spectrum analyser with appropriate field probes, without the need for expensive dedicated LTE decoders. The method is validated both in the laboratory and in situ, for a single-input single-output antenna LTE system and a 2×2 multiple-input multiple-output system, with low deviations in comparison with signals measured using dedicated LTE decoders.
Real-time Ensemble Forecasting of Coronal Mass Ejections using the WSA-ENLIL+Cone Model
NASA Astrophysics Data System (ADS)
Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; MacNeice, P. J.; Rastaetter, L.; Kuznetsova, M. M.; Odstrcil, D.
2013-12-01
Ensemble forecasting of coronal mass ejections (CMEs) provides an estimate of the spread or uncertainty in CME arrival time predictions due to uncertainties in determining CME input parameters. Ensemble modeling of CME propagation in the heliosphere is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL cone model available at the Community Coordinated Modeling Center (CCMC). SWRC is an in-house research-based operations team at the CCMC which provides interplanetary space weather forecasting for NASA's robotic missions and performs real-time model validation. A distribution of n (routinely n=48) CME input parameters is generated using the CCMC Stereo CME Analysis Tool (StereoCAT), which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations yielding an ensemble of solar wind parameters at various locations of interest (satellites or planets), including a probability distribution of CME shock arrival times (for hits) and geomagnetic storm strength (for Earth-directed hits). Ensemble simulations have been performed experimentally in real-time at the CCMC since January 2013. We present the results of ensemble simulations for a total of 15 CME events, 10 of which were performed in real-time. The observed CME arrival was within the range of ensemble arrival time predictions for 5 out of the 12 ensemble runs containing hits. The average arrival time prediction was computed for each of the twelve ensembles predicting hits; comparison with the actual arrival times gave an average absolute error of 8.20 hours across all twelve ensembles, which is comparable to current forecasting errors. Some considerations for the accuracy of ensemble CME arrival time predictions include the importance of the initial distribution of CME input parameters, particularly the mean and spread. When the observed arrivals are not within the predicted range, this still allows prediction errors caused by the tested CME input parameters to be ruled out. Prediction errors can also arise from ambient model parameters, such as the accuracy of the solar wind background, and other limitations. Additionally, the ensemble modeling setup was used to complete a parametric event case study of the sensitivity of the CME arrival time prediction to the free parameters of the ambient solar wind model and the CME.
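A small sketch of the ensemble bookkeeping described above (spread, range check, and absolute error of the mean prediction), with hypothetical arrival times in place of WSA-ENLIL+Cone output.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical ensemble of n=48 arrival-time predictions (hours from CME onset),
# standing in for runs with perturbed CME input parameters (speed, width, direction).
arrivals = rng.normal(loc=52.0, scale=6.0, size=48)
observed = 58.5                                   # invented observed arrival, hours

spread = arrivals.max() - arrivals.min()
within = arrivals.min() <= observed <= arrivals.max()
abs_err = abs(arrivals.mean() - observed)

print(f"ensemble mean arrival: {arrivals.mean():.1f} h, spread: {spread:.1f} h")
print(f"observed within predicted range: {within}")
print(f"absolute error of mean prediction: {abs_err:.2f} h")
```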
Toward Scientific Numerical Modeling
NASA Technical Reports Server (NTRS)
Kleb, Bil
2007-01-01
Ultimately, scientific numerical models need quantified output uncertainties so that modeling can evolve to better match reality. Documenting model input uncertainties and verifying that numerical models are translated into code correctly, however, are necessary first steps toward that goal. Without known input parameter uncertainties, model sensitivities are all one can determine, and without code verification, output uncertainties are simply not reliable. To address these two shortcomings, two proposals are offered: (1) an unobtrusive mechanism to document input parameter uncertainties in situ and (2) an adaptation of the Scientific Method to numerical model development and deployment. Because these two steps require changes in the computational simulation community to bear fruit, they are presented in terms of the Beckhard-Harris-Gleicher change model.
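One possible unobtrusive realization of the first proposal is to record each input parameter's uncertainty and provenance at the point of definition, so the information travels with the model. The class and field names below are illustrative, not the paper's mechanism.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UncertainInput:
    """A model input documented in situ with its uncertainty and provenance."""
    name: str
    value: float
    uncertainty: float   # one standard deviation, same units as value
    units: str
    source: str          # where the estimate and its spread come from

    def __str__(self):
        return f"{self.name} = {self.value} ± {self.uncertainty} {self.units} [{self.source}]"

# Inputs documented where they are defined rather than in a separate report.
wall_temp = UncertainInput("wall temperature", 300.0, 15.0, "K", "assumed; vendor spec sheet")
emissivity = UncertainInput("surface emissivity", 0.85, 0.05, "-", "assumed; handbook range")
for p in (wall_temp, emissivity):
    print(p)
```

With uncertainties attached to every input this way, downstream sensitivity results can be promoted to genuine output uncertainties, which is the progression the paper argues for.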
Reservoir computing with a single time-delay autonomous Boolean node
NASA Astrophysics Data System (ADS)
Haynes, Nicholas D.; Soriano, Miguel C.; Rosin, David P.; Fischer, Ingo; Gauthier, Daniel J.
2015-02-01
We demonstrate reservoir computing with a physical system using a single autonomous Boolean logic element with time-delay feedback. The system generates a chaotic transient with a window of consistency lasting between 30 and 300 ns, which we show is sufficient for reservoir computing. We then characterize the dependence of computational performance on system parameters to find the best operating point of the reservoir. When the best parameters are chosen, the reservoir is able to classify short input patterns with performance that decreases over time. In particular, we show that four distinct input patterns can be classified for 70 ns, even though the inputs are only provided to the reservoir for 7.5 ns.
Evaluation of Clear Sky Models for Satellite-Based Irradiance Estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, Manajit; Gotseff, Peter
2013-12-01
This report describes an intercomparison of three popular broadband clear sky solar irradiance model results with measured data, as well as satellite-based model clear sky results compared to measured clear sky data. The authors conclude that one of the popular clear sky models (the Bird clear sky model developed by Richard Bird and Roland Hulstrom) could serve as a more accurate replacement for current satellite-model clear sky estimations. Additionally, the analysis of the model results with respect to model input parameters indicates that rather than climatological, annual, or monthly mean input data, higher-time-resolution input parameters improve the general clear sky model performance.
Computer program for single input-output, single-loop feedback systems
NASA Technical Reports Server (NTRS)
1976-01-01
Additional work is reported on a completely automatic computer program for the design of single input/output, single loop feedback systems with parameter uncertainty, to satisfy time domain bounds on the system response to step commands and disturbances. The inputs to the program are basically the specified time-domain response bounds, the form of the constrained plant transfer function and the ranges of the uncertain parameters of the plant. The program output consists of the transfer functions of the two free compensation networks, in the form of the coefficients of the numerator and denominator polynomials, and the data on the prescribed bounds and the extremes actually obtained for the system response to commands and disturbances.
A Predictor Analysis Framework for Surface Radiation Budget Reprocessing Using Design of Experiments
NASA Astrophysics Data System (ADS)
Quigley, Patricia Allison
Earth's Radiation Budget (ERB) is an accounting of all incoming energy from the sun and outgoing energy reflected and radiated to space by earth's surface and atmosphere. The National Aeronautics and Space Administration (NASA)/Global Energy and Water Cycle Experiment (GEWEX) Surface Radiation Budget (SRB) project produces and archives long-term datasets representative of this energy exchange system on a global scale. The data comprise the longwave and shortwave radiative components of the system and are algorithmically derived from satellite and atmospheric assimilation products and acquired atmospheric data. They are stored as 3-hourly, daily, monthly/3-hourly, and monthly averages of 1° x 1° grid cells. Input parameters used by the algorithms are a key source of variability in the resulting output data sets. Sensitivity studies have been conducted to estimate the effects this variability has on the output data sets using linear techniques. This entails varying one input parameter at a time while keeping all others constant, or increasing all input parameters by equal random percentages, in effect changing input values for every cell for every three hour period and for every day in each month. This equates to almost 11 million independent changes without ever taking into consideration the interactions or dependencies among the input parameters. A more comprehensive method is proposed here for evaluating the shortwave algorithm, to identify both the input parameters and the parameter interactions that most significantly affect the output data. This research utilized designed experiments that systematically and simultaneously varied all of the input parameters of the shortwave algorithm. A D-Optimal design of experiments (DOE) was chosen to accommodate the 14 types of atmospheric properties computed by the algorithm and to reduce the number of trials required by a full factorial study from millions to 128. A modified version of the algorithm was made available for testing such that global calculations of the algorithm were tuned to accept information for a single temporal and spatial point and for one month of averaged data. The points were from each of four atmospherically distinct regions: the Amazon Rainforest, the Sahara Desert, the Indian Ocean and Mt. Everest. The same design was used for all of the regions. Least squares multiple regression analysis of the results of the modified algorithm identified those parameters and parameter interactions that most significantly affected the output products. It was found that cosine solar zenith angle had the strongest individual influence on the output data in all four regions. The interaction of cosine solar zenith angle and cloud fraction had the strongest influence on the output data in the Amazon, Sahara Desert and Mt. Everest regions, while the interaction of cloud fraction and cloudy shortwave radiance most significantly affected output data in the Indian Ocean region. Second-order response models were built using the resulting regression coefficients. A Monte Carlo simulation of each model extended the probability distribution beyond the initial design trials to quantify variability in the modeled output data.
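A compact sketch of the analysis chain described above: fit a second-order response surface by least squares to simultaneously varied factors, then run Monte Carlo on the fitted model. Two invented factors stand in for the 14 atmospheric properties.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two illustrative factors (stand-ins for cosine solar zenith angle and cloud
# fraction) varied simultaneously, as in a designed experiment of 128 trials.
n = 128
x1 = rng.uniform(0.1, 1.0, n)     # cosine solar zenith angle
x2 = rng.uniform(0.0, 1.0, n)     # cloud fraction
flux = 1000 * x1 * (1 - 0.6 * x2) + rng.normal(0, 10, n)   # assumed response, W/m^2

# Second-order response surface: main effects, interaction, quadratic terms.
X = np.column_stack([np.ones(n), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(X, flux, rcond=None)

# Monte Carlo on the fitted model to extend the distribution beyond the design trials.
m1 = rng.uniform(0.1, 1.0, 100_000)
m2 = rng.uniform(0.0, 1.0, 100_000)
M = np.column_stack([np.ones(m1.size), m1, m2, m1 * m2, m1**2, m2**2])
out = M @ coef
print("interaction coefficient (x1*x2): %.1f" % coef[3])
print("modeled output mean %.0f, SD %.0f" % (out.mean(), out.std()))
```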
ERIC Educational Resources Information Center
Campfield, Dorota E.; Murphy, Victoria A.
2017-01-01
This paper reports on an intervention study with young Polish beginners (mean age: 8 years, 3 months) learning English at school. It seeks to identify whether exposure to rhythmic input improves knowledge of word order and function words. The "prosodic bootstrapping hypothesis", relevant in developmental psycholinguistics, provided the…
Input Processing of Chinese by "ab initio" Learners
ERIC Educational Resources Information Center
Han, ZhaoHong; Liu, Zehua
2013-01-01
We report on a study of first-exposure learners with different first languages (L1s: English, Japanese) to examine their ability to process input for form and meaning. We used a rich set of tasks to tap respectively into processing, comprehension, imitation, and working memory. We show that there are advantages to having a first language (L1) that…
A simulation study to quantify the impacts of exposure ...
The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas. CED uses modeling-based approaches to characterize exposures, evaluate fate and transport, and support environmental diagnostics/forensics with input from multiple data sources. It also develops media- and receptor-specific models, process models, and decision support tools for use both within and outside of EPA.
Functional Division of Hippocampal Area CA1 Via Modulatory Gating of Entorhinal Cortical Inputs
Ito, Hiroshi T.; Schuman, Erin M.
2013-01-01
The hippocampus receives two streams of information, spatial and nonspatial, via major afferent inputs from the medial (MEC) and lateral entorhinal cortexes (LEC). The MEC and LEC projections in the temporoammonic pathway are topographically organized along the transverse-axis of area CA1. The potential for functional segregation of area CA1, however, remains relatively unexplored. Here, we demonstrated differential novelty-induced c-Fos expression along the transverse-axis of area CA1 corresponding to topographic projections of MEC and LEC inputs. We found that, while novel place exposure induced a uniform c-Fos expression along the transverse-axis of area CA1, novel object exposure primarily activated the distal half of CA1 neurons. In hippocampal slices, we observed distinct presynaptic properties between LEC and MEC terminals, and application of either DA or NE produced a largely selective influence on one set of inputs (LEC). Finally, we demonstrated that differential c-Fos expression along the transverse axis of area CA1 was largely abolished by an antagonist of neuromodulatory receptors, clozapine. Our results suggest that neuromodulators can control topographic TA projections allowing the hippocampus to differentially encode new information along the transverse axis of area CA1. PMID:21240920
Support vector machines-based modelling of seismic liquefaction potential
NASA Astrophysics Data System (ADS)
Pal, Mahesh
2006-08-01
This paper investigates the potential of a support vector machines (SVM)-based classification approach to assess liquefaction potential from actual standard penetration test (SPT) and cone penetration test (CPT) field data. SVMs are based on statistical learning theory and have been found to work well in comparison to neural networks in several other applications. Both CPT and SPT field data sets are used with SVMs for predicting the occurrence and non-occurrence of liquefaction based on different input parameter combinations. With the SPT and CPT test data sets, highest accuracies of 96% and 97%, respectively, were achieved with SVMs. This suggests that SVMs can effectively be used to model the complex relationship between different soil parameters and the liquefaction potential. Several other combinations of input variables were used to assess the influence of different input parameters on liquefaction potential. The proposed approach suggests that neither the normalized cone resistance value with CPT data nor the calculation of the standardized SPT value is required. Further, SVMs require few user-defined parameters and provide better performance in comparison to the neural network approach.
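A minimal sketch of the classification setup, assuming scikit-learn's SVC with an RBF kernel; the SPT-style records, feature set and labeling rule below are invented for illustration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for SPT-style records: (SPT blow count, fines content %,
# peak ground acceleration g); label 1 = liquefaction observed.
rng = np.random.default_rng(6)
n = 200
spt = rng.uniform(2, 40, n)
fines = rng.uniform(0, 50, n)
pga = rng.uniform(0.05, 0.6, n)
y = (pga * 60 - spt + rng.normal(0, 4, n) > 0).astype(int)   # assumed rule
X = np.column_stack([spt, fines, pga])

# Standardize features, then fit an RBF-kernel SVM; only C and gamma are user-set.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
print("cross-validated accuracy: %.2f" % cross_val_score(clf, X, y, cv=5).mean())
```

The small number of user-defined parameters (here just C and the kernel width) is exactly the practical advantage over neural networks that the abstract notes.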
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xuesong; Liang, Faming; Yu, Beibei
2011-11-09
Estimating uncertainty of hydrologic forecasting is valuable to water resources and other relevant decision making processes. Recently, Bayesian Neural Networks (BNNs) have proven to be powerful tools for quantifying uncertainty of streamflow forecasting. In this study, we propose a Markov Chain Monte Carlo (MCMC) framework to incorporate the uncertainties associated with input, model structure, and parameters into BNNs. This framework allows the structure of the neural networks to change by removing or adding connections between neurons and enables scaling of input data by using rainfall multipliers. The results show that the new BNNs outperform BNNs that only consider uncertainties associated with parameters and model structure. Critical evaluation of the posterior distributions of neural network weights, number of effective connections, rainfall multipliers, and hyper-parameters shows that the assumptions held in our BNNs are not well supported. Further understanding of the characteristics of different uncertainty sources and including output error into the MCMC framework are expected to enhance the application of neural networks for uncertainty analysis of hydrologic forecasting.
Generative Representations for Evolving Families of Designs
NASA Technical Reports Server (NTRS)
Hornby, Gregory S.
2003-01-01
Since typical evolutionary design systems encode only a single artifact with each individual, each time the objective changes a new set of individuals must be evolved. When this objective varies in a way that can be parameterized, a more general method is to use a representation in which a single individual encodes an entire class of artifacts. In addition to saving time by preventing the need for multiple evolutionary runs, the evolution of parameter-controlled designs can create families of artifacts with the same style and a reuse of parts between members of the family. In this paper an evolutionary design system is described which uses a generative representation to encode families of designs. Because a generative representation is an algorithmic encoding of a design, its input parameters are a way to control aspects of the design it generates. By evaluating individuals multiple times with different input parameters the evolutionary design system creates individuals in which the input parameter controls specific aspects of a design. This system is demonstrated on two design substrates: neural-networks which solve the 3/5/7-parity problem and three-dimensional tables of varying heights.
A physiological pharmacokinetic (PBPK) modeling framework has been established to assess cumulative risk of dose and injury of infants and children to organophosphorus (OP) insecticides from aggregate sources and routes. Exposure inputs were drawn from all reasonable sources, pr...
In recent years, the risk analysis community has broadened its use of complex aggregate and cumulative residential exposure models (e.g., to meet the requirements of the 1996 Food Quality Protection Act). The value of these models is their ability to incorporate a range of input...
There is an urgent need to characterize potential risk to human health and the environment that arises from the manufacture and use of tens of thousands of chemicals. Computational tools and approaches for characterizing and prioritizing exposure are required: to provide input f...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pei, Zongrui; Stocks, George Malcolm
The sensitivity in predicting glide behaviour of dislocations has been a long-standing problem in the framework of the Peierls-Nabarro model. The predictions of both the model itself and the analytic formulas based on it are too sensitive to the input parameters. In order to reveal the origin of this important problem in materials science, a new empirical-parameter-free formulation is proposed in the same framework. Unlike previous formulations, it includes only a limited small set of parameters, all of which can be determined by convergence tests. Under special conditions the new formulation is reduced to its classic counterpart. In the light of this formulation, new relationships between Peierls stresses and the input parameters are identified, where the sensitivity is greatly reduced or even removed.
System and Method for Providing Model-Based Alerting of Spatial Disorientation to a Pilot
NASA Technical Reports Server (NTRS)
Johnson, Steve (Inventor); Conner, Kevin J (Inventor); Mathan, Santosh (Inventor)
2015-01-01
A system and method monitor aircraft state parameters, for example, aircraft movement and flight parameters, apply those inputs to a spatial disorientation model, and make a prediction of when a pilot may become spatially disoriented. Once the system predicts a potentially disoriented pilot, the sensitivity for alerting the pilot to conditions exceeding a threshold can be increased, allowing an earlier alert to mitigate the possibility of an incorrect control input.
Particle parameter analyzing system. [x-y plotter circuits and display
NASA Technical Reports Server (NTRS)
Hansen, D. O.; Roy, N. L. (Inventor)
1969-01-01
An X-Y plotter circuit apparatus is described which displays an input pulse representing particle parameter information, which would ordinarily appear on the screen of an oscilloscope as a rectangular pulse, as a single dot positioned on the screen where the upper right-hand corner of the input pulse would have appeared. If another event occurs and it is desired to display it, circuitry is provided to replace the dot with a short horizontal line.
Chasin, Marshall; Russo, Frank A.
2004-01-01
Historically, the primary concern for hearing aid design and fitting is optimization for speech inputs. However, increasingly other types of inputs are being investigated and this is certainly the case for music. Whether the hearing aid wearer is a musician or merely someone who likes to listen to music, the electronic and electro-acoustic parameters described can be optimized for music as well as for speech. That is, a hearing aid optimally set for music can be optimally set for speech, even though the converse is not necessarily true. Similarities and differences between speech and music as inputs to a hearing aid are described. Many of these lead to the specification of a set of optimal electro-acoustic characteristics. Parameters such as the peak input-limiting level, compression issues—both compression ratio and knee-points—and number of channels all can deleteriously affect music perception through hearing aids. In other cases, it is not clear how to set other parameters such as noise reduction and feedback control mechanisms. Regardless of the existence of a “music program,” unless the various electro-acoustic parameters are available in a hearing aid, music fidelity will almost always be less than optimal. There are many unanswered questions and hypotheses in this area. Future research by engineers, researchers, clinicians, and musicians will aid in the clarification of these questions and their ultimate solutions. PMID:15497032
NASA Astrophysics Data System (ADS)
Yoon, Sangpil; Wang, Yingxiao; Shung, K. K.
2016-03-01
An acoustic-transfection technique has been developed for the first time by integrating a high-frequency ultrasonic transducer and a fluorescence microscope. High-frequency ultrasound with a center frequency over 150 MHz can focus an acoustic field into a confined area with a diameter of 10 μm or less. This focusing capability was used to perturb the lipid bilayer of the cell membrane to induce intracellular delivery of macromolecules. Single-cell imaging was performed to investigate the behavior of a targeted single cell after acoustic-transfection. A FRET-based Ca2+ biosensor was used to monitor the intracellular concentration of Ca2+ after acoustic-transfection, and the fluorescence intensity of propidium iodide (PI) was used to observe the influx of PI molecules. We changed peak-to-peak voltages and pulse duration to optimize the input parameters of an acoustic pulse. Input parameters that can induce strong perturbations of the cell membrane were found, and size-dependent intracellular delivery of macromolecules was explored. To increase the amount of delivered molecules by acoustic-transfection, we applied several acoustic pulses, and the intensity of PI fluorescence increased stepwise. Finally, the optimized input parameters of the acoustic-transfection system were used to deliver pMax-E2F1 plasmid, and GFP expression was confirmed in HeLa cells 24 hours after intracellular delivery.
NASA Astrophysics Data System (ADS)
Foulser-Piggott, R.; Saito, K.; Spence, R.
2012-04-01
Loss estimates produced by catastrophe models are dependent on the quality of the input data, including both the hazard and exposure data. Currently, some of the exposure data input into a catastrophe model is aggregated over an area, and therefore an estimate of the risk in this area may have a low level of accuracy. In order to obtain a more detailed and accurate loss estimate, it is necessary to have higher resolution exposure data. However, high resolution exposure data is not commonly available worldwide, and therefore methods to infer building distribution and characteristics at higher resolution from existing information must be developed. This study is focussed on the development of disaggregation methodologies for exposure data which, if implemented in current catastrophe models, would lead to improved loss estimates. The new methodologies developed for disaggregating exposure data make use of GIS, remote sensing and statistical techniques. The main focus of this study is on earthquake risk; however, the methods developed are modular so that they may be applied to different hazards. A number of different methods are proposed in order to be applicable to different regions of the world which have different amounts of data available. The new methods give estimates of both the number of buildings in a study area and a distribution of building typologies, as well as a measure of the vulnerability of the building stock to hazard. For each method, a way to assess and quantify the uncertainties in the methods and results is proposed, with particular focus on developing an index to enable input data quality to be compared. The applicability of the methods is demonstrated through testing for two study areas, one in Japan and the second in Turkey, selected because of the occurrence of recent and damaging earthquake events. The testing procedure is to use the proposed methods to estimate the number of buildings damaged at different levels following a scenario earthquake event. This enables the results of the models to be compared with real data and the relative performance of the different methodologies to be evaluated. A sensitivity analysis is also conducted for two main reasons: firstly, to determine the key input variables in the methodology that have the most significant impact on the resulting loss estimate; and secondly, to enable the uncertainty in the different approaches to be quantified and therefore provide a range of uncertainty in the loss estimates.
Atmospheric Science Data Center
2017-01-13
... grid. Model inputs of cloud amounts and other atmospheric state parameters are also available in some of the data sets. Primary inputs to ... Analysis (SMOBA), an assimilation product from NOAA's Climate Prediction Center. SRB products are reformatted for the use of ...
Non-intrusive parameter identification procedure user's guide
NASA Technical Reports Server (NTRS)
Hanson, G. D.; Jewell, W. F.
1983-01-01
Written in standard FORTRAN, NAS is capable of identifying linear as well as nonlinear relations between input and output parameters; the only restriction is that the input/output relation be linear with respect to the unknown coefficients of the estimation equations. The output of the identification algorithm can be specified to be in either the time domain (i.e., the estimation equation coefficients) or in the frequency domain (i.e., a frequency response of the estimation equation). The frame length ("window") over which the identification procedure is to take place can be specified to be any portion of the input time history, thereby allowing the freedom to start and stop the identification procedure within a time history. There also is an option which allows a sliding window, which gives a moving average over the time history. The NAS software also includes the ability to identify several assumed solutions simultaneously for the same or different input data.
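The restriction that the input/output relation be linear in the unknown coefficients means ordinary least squares suffices within each identification window. The sketch below illustrates that idea, including the sliding-window (moving-average) option, on an invented system; it is a Python illustration of the concept, not the FORTRAN program itself.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy system linear in its unknown coefficients: y = a*u + b*u**3 + noise.
t = np.arange(2000) * 0.01
u = np.sin(2 * np.pi * 0.5 * t)
a_true = 2.0 + 0.5 * (t > 10)                 # coefficient drifts mid-record
y = a_true * u + 0.3 * u**3 + rng.normal(0, 0.02, t.size)

def identify(u_win, y_win):
    """Least-squares estimate of [a, b]; the relation is nonlinear in u but
    linear in the unknown coefficients, which is the only restriction."""
    X = np.column_stack([u_win, u_win**3])
    coef, *_ = np.linalg.lstsq(X, y_win, rcond=None)
    return coef

# A sliding window over the time history gives a moving-average estimate.
for start in range(0, t.size - 400, 400):
    sl = slice(start, start + 400)
    a_hat, b_hat = identify(u[sl], y[sl])
    print(f"t = {t[start]:5.1f}-{t[start + 399]:5.1f} s: a = {a_hat:.2f}, b = {b_hat:.2f}")
```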
Lutchen, K R
1990-08-01
A sensitivity analysis based on weighted least-squares regression is presented to evaluate alternative methods for fitting lumped-parameter models to respiratory impedance data. The goal is to maintain parameter accuracy simultaneously with practical experiment design. The analysis focuses on predicting parameter uncertainties using a linearized approximation for joint confidence regions. Applications are with four-element parallel and viscoelastic models for 0.125- to 4-Hz data and a six-element model with separate tissue and airway properties for input and transfer impedance data from 2-64 Hz. The criterion function form was evaluated by comparing parameter uncertainties when data are fit as magnitude and phase, dynamic resistance and compliance, or real and imaginary parts of input impedance. The proper choice of weighting can make all three criterion variables comparable. For the six-element model, parameter uncertainties were predicted when both input impedance and transfer impedance are acquired and fit simultaneously. A fit to both data sets from 4 to 64 Hz could reduce parameter estimate uncertainties considerably from those achievable by fitting either alone. For the four-element models, use of an independent, but noisy, measure of static compliance was assessed as a constraint on model parameters. This may allow acceptable parameter uncertainties for a minimum frequency of 0.275-0.375 Hz rather than 0.125 Hz, reducing data-acquisition requirements from a 16-s breath-holding period to a 5.33- to 8-s period. These results are approximations, and the impact of using the linearized approximation for the confidence regions is discussed.
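The linearized approximation for joint confidence regions referred to above is typically the parameter covariance (J^T W J)^-1 scaled by the measurement variance, where J is the model Jacobian and W the weighting matrix. A sketch with a simple two-parameter impedance stand-in (not the four- or six-element respiratory models):

```python
import numpy as np

def parameter_covariance(jacobian, weights, meas_sd):
    """Linearized covariance of weighted-least-squares estimates: sd^2 * (J^T W J)^-1."""
    JtWJ = jacobian.T @ np.diag(weights) @ jacobian
    return meas_sd**2 * np.linalg.inv(JtWJ)

# Toy model: impedance y = R + 1/(j*w*C), so Re(y) = R and Im(y) = -1/(w*C).
w = 2 * np.pi * np.linspace(0.125, 4.0, 30)      # rad/s over the 0.125-4 Hz band
R, C = 2.0, 0.05                                 # nominal parameter values (invented)

# Jacobian of [Re(y); Im(y)] with respect to (R, C) at the nominal parameters.
J = np.block([[np.ones_like(w)[:, None], np.zeros_like(w)[:, None]],
              [np.zeros_like(w)[:, None], (1 / (w * C**2))[:, None]]])
weights = np.ones(2 * w.size)                    # equal weighting of all data points

cov = parameter_covariance(J, weights, meas_sd=0.01)   # assumed measurement SD
print("SD(R) = %.4f, SD(C) = %.5f" % tuple(np.sqrt(np.diag(cov))))
```

Repeating this calculation for different minimum frequencies or weighting schemes reproduces, in miniature, the experiment-design trade-offs the abstract describes.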
Artificial neural network model for ozone concentration estimation and Monte Carlo analysis
NASA Astrophysics Data System (ADS)
Gao, Meng; Yin, Liting; Ning, Jicai
2018-07-01
Air pollution in the urban atmosphere directly affects public health; therefore, it is very important to predict air pollutant concentrations. Air quality is a complex function of emissions, meteorology and topography, and artificial neural networks (ANNs) provide a sound framework for relating these variables. In this study, we investigated the feasibility of using an ANN model with meteorological parameters as input variables to predict the ozone concentration in the urban area of Jinan, a metropolis in Northern China. We first found that the architecture of the network of neurons had little effect on the predicting capability of the ANN model. A parsimonious ANN model with 6 routinely monitored meteorological parameters and one temporal covariate (the category of day, i.e. working day, legal holiday and regular weekend) as input variables was identified, where the 7 input variables were selected following a forward selection procedure. Compared with the benchmarking ANN model with 9 meteorological and photochemical parameters as input variables, the predicting capability of the parsimonious ANN model was acceptable. Its predicting capability was also verified in terms of the warning success ratio during pollution episodes. Finally, uncertainty and sensitivity analyses were performed based on Monte Carlo simulations (MCS). It was concluded that the ANN could properly predict the ambient ozone level. Maximum temperature, atmospheric pressure, sunshine duration and maximum wind speed were identified as the predominant input variables significantly influencing the prediction of ambient ozone concentrations.
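A sketch of the forward selection procedure on synthetic predictors. The study selected inputs for an ANN; a linear model with cross-validation stands in here to keep the illustration short, and the variable names and data are invented.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)

# Synthetic candidate predictors (stand-ins for the monitored meteorology).
names = ["t_max", "pressure", "sunshine", "wind_max", "humidity", "rain", "day_type"]
X_all = rng.normal(size=(500, len(names)))
ozone = 3.0 * X_all[:, 0] - 1.5 * X_all[:, 2] + 1.0 * X_all[:, 3] + rng.normal(0, 1, 500)

selected, remaining, best_score = [], list(range(len(names))), -np.inf
while remaining:                         # greedy forward selection
    scores = {j: cross_val_score(LinearRegression(),
                                 X_all[:, selected + [j]], ozone, cv=5).mean()
              for j in remaining}
    j_best = max(scores, key=scores.get)
    if scores[j_best] <= best_score + 1e-4:
        break                            # no meaningful improvement: stop adding inputs
    best_score = scores[j_best]
    selected.append(j_best)
    remaining.remove(j_best)
    print("added %-9s CV R^2 = %.3f" % (names[j_best], best_score))
```

The loop stops as soon as an added input no longer improves cross-validated skill, which is how a parsimonious 7-variable model can emerge from a larger candidate pool.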
Crotta, M; Limon, G; Blake, D P; Guitian, J
2017-11-16
Toxoplasma gondii is recognized as a widely prevalent zoonotic parasite worldwide. Although several studies clearly identified meat products as an important source of T. gondii infections in humans, quantitative understanding of the risk posed to humans through the food chain is surprisingly scant. While probabilistic risk assessments for pathogens such as Campylobacter jejuni, Listeria monocytogenes or Escherichia coli have been well established, attempts to quantify the probability of human exposure to T. gondii through consumption of food products of animal origin are at early stages. The biological complexity of the life cycle of T. gondii and limited understanding of several fundamental aspects of the host/parasite interaction require the adoption of numerous critical assumptions and significant simplifications. In this study, we present a hypothetical quantitative model for the assessment of human exposure to T. gondii through meat products. The model has been conceptualized to capture the dynamics leading to the presence of the parasite in meat and, for illustrative purposes, used to estimate the probability of at least one viable cyst occurring in 100 g of fresh pork meat in England. Available data, including the results of a serological survey in pigs raised in England, were used as a starting point to implement a probabilistic model and assess the fate of the parasite along the food chain. Uncertainty distributions were included to describe and account for the lack of knowledge where necessary. To quantify the impact of the key model inputs, sensitivity and scenario analyses were performed. The overall probability of 100 g of a hypothetical edible tissue containing at least 1 cyst was 5.54%. Sensitivity analysis indicated that the variables exerting the greatest effect on the output mean were the number of cysts and the number of bradyzoites per cyst. Under the best and the worst scenarios, the probability of a single portion of fresh pork meat containing at least 1 viable cyst was 1.14% and 9.97%, respectively, indicating that the uncertainty and lack of data surrounding key input parameters of the model preclude accurate estimation of T. gondii exposure through consumption of meat products. The hypothetical model conceptualized here is coherent with current knowledge of the biology of the parasite. Simulation outputs clearly identify the key gaps in our knowledge of the host-parasite interaction that, when filled, will support quantitative assessments and much needed accurate estimates of the risk of human exposure. Copyright © 2017 Elsevier B.V. All rights reserved.
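A schematic Monte Carlo of the headline quantity, the probability that a 100 g portion contains at least one viable cyst. Every distribution below is an illustrative assumption, not the study's input, so the printed probability will differ from the 5.54% reported above.

```python
import numpy as np

rng = np.random.default_rng(9)
n_iter = 100_000

# Illustrative uncertainty distributions (not the study's values).
prev = rng.beta(8, 200, n_iter)                 # pig-level seroprevalence (~4%)
infected = rng.random(n_iter) < prev            # is the source pig infected?
cysts_per_kg = rng.lognormal(mean=1.0, sigma=1.0, size=n_iter)  # cyst density in tissue
portion_kg = 0.1                                # a 100 g portion

# Cysts in the portion: Poisson thinning of the carcass-level density. The
# density applies to infected pigs; ANDing with `infected` conditions on that.
n_cysts = rng.poisson(cysts_per_kg * portion_kg)
p_exposed = np.mean(infected & (n_cysts >= 1))
print("P(>=1 cyst in a 100 g portion) = %.2f%%" % (100 * p_exposed))
```

Swapping in optimistic or pessimistic distributions for prevalence and cyst density reproduces the best/worst scenario spread that the sensitivity analysis reports.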
Power accounting of plasma discharges in the linear device Proto-MPEX
NASA Astrophysics Data System (ADS)
Showers, M.; Piotrowicz, P. A.; Beers, C. J.; Biewer, T. M.; Caneses, J.; Canik, J.; Caughman, J. B. O.; Donovan, D. C.; Goulding, R. H.; Lumsdaine, A.; Kafle, N.; Owen, L. W.; Rapp, J.; Ray, H.
2018-06-01
Plasma material interaction (PMI) studies are crucial to the successful development of future fusion reactors. The Prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a prototype design for MPEX, a steady-state linear device being developed to study PMI. The primary purpose of Proto-MPEX is developing the plasma heating source concepts for MPEX. A power accounting study of Proto-MPEX aims to identify machine operating parameters that could improve its performance, thereby increasing its PMI research capabilities and potentially impacting the MPEX design concept. To build a comprehensive power balance, an analysis of the helicon region has been performed using a diagnostic suite and software modeling to identify the mechanisms and locations of heat loss from the main plasma. Of the 106.3 kW of input power, up to 90.5% has been accounted for in the helicon region. When the analysis was extended to encompass the device to its end plates, 49.2% of the input power was accounted for and verified diagnostically. Areas requiring further diagnostic analysis are identified, and the required improvements will be implemented in future work. The data acquisition and analysis processes will be streamlined to form a working model for future power balance studies of Proto-MPEX.
NASA Astrophysics Data System (ADS)
Han, Feng; Zheng, Yi
2018-06-01
Significant input uncertainty is a major source of error in watershed water quality (WWQ) modeling. It remains challenging to address the input uncertainty in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived considering a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments were performed based on a synthetic nitrate pollution case and on a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead modeling-based management decisions if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach to the joint inference of parameters and inputs.
Assessing contributory risk using economic input-output life-cycle analysis.
Miller, Ian; Shelly, Michael; Jonmaire, Paul; Lee, Richard V; Harbison, Raymond D
2005-04-01
The contribution of consumer purchases of non-essential products to environmental pollution is characterized. Purchase decisions by consumers induce a complex sequence of economy-wide production interactions that influence the production and consumption of chemicals and subsequent exposure and possible public health risks. An economic input-output life-cycle analysis (EIO-LCA) was used to link resource consumption and production by manufacturers to corresponding environmental impacts. Using the US Department of Commerce's input-output tables together with the US Environmental Protection Agency's Toxics Release Inventory and AIRData databases, the economy-wide air discharges resulting from purchases of household appliances, motor homes, and games and toys were quantified. The economic and environmental impacts generated from a hypothetical 10,000 US dollar purchase for selected consumer items were estimated. The analysis shows how purchases of seemingly benign consumer products increase the output of air pollutants along the supply chain and contribute to the potential risks associated with environmental chemical exposures to both consumers and non-consumers alike.
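The core EIO-LCA computation is the Leontief inverse: total sector output x = (I - A)^-1 y for a final-demand vector y, with economy-wide emissions obtained by applying sector emission intensities to x. A toy three-sector sketch with invented coefficients (not the Commerce Department or TRI data used in the study):

```python
import numpy as np

# Toy 3-sector economy. A[i, j] is the input from sector i needed per dollar
# of sector j output (illustrative values only).
A = np.array([[0.10, 0.05, 0.02],
              [0.20, 0.10, 0.10],
              [0.05, 0.15, 0.05]])
emissions_per_dollar = np.array([0.5, 2.0, 0.8])   # kg air pollutant per $ of output

# A $10,000 final-demand purchase from sector 2 (e.g., household appliances).
y = np.array([0.0, 10_000.0, 0.0])

# Leontief inverse gives total (direct + indirect) output along the supply chain.
x = np.linalg.solve(np.eye(3) - A, y)
total_emissions = emissions_per_dollar @ x
print("sector outputs ($):", x.round(0))
print("economy-wide emissions (kg): %.1f" % total_emissions)
```

The gap between emissions_per_dollar[1] * 10_000 (the direct emissions) and the printed total is exactly the supply-chain contribution the abstract highlights.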
Evaluation of FEM engineering parameters from insitu tests
DOT National Transportation Integrated Search
2001-12-01
The study looked critically at insitu test methods (SPT, CPT, DMT, and PMT) as a means for developing finite element constitutive model input parameters. The first phase of the study examined insitu test derived parameters with laboratory triaxial te...
A robust momentum management and attitude control system for the space station
NASA Technical Reports Server (NTRS)
Speyer, J. L.; Rhee, Ihnseok
1991-01-01
A game theoretic controller is synthesized for momentum management and attitude control of the Space Station in the presence of uncertainties in the moments of inertia. Full state information is assumed since attitude rates are assumed to be very accurately measured. By an input-output decomposition of the uncertainty in the system matrices, the parameter uncertainties in the dynamic system are represented as an unknown gain associated with an internal feedback loop (IFL). The input and output matrices associated with the IFL form directions through which the uncertain parameters affect system response. If the quadratic form of the IFL output augments the cost criterion, then enhanced parameter robustness is anticipated. By considering the input and the input disturbance from the IFL as two noncooperative players, a linear-quadratic differential game is constructed. The solution in the form of a linear controller is used for synthesis. Inclusion of the external disturbance torques results in a dynamic feedback controller which consists of conventional PID (proportional integral derivative) control and cyclic disturbance rejection filters. It is shown that the game theoretic design allows large variations in the inertias in directions of importance.
Enhancement of CFD validation exercise along the roof profile of a low-rise building
NASA Astrophysics Data System (ADS)
Deraman, S. N. C.; Majid, T. A.; Zaini, S. S.; Yahya, W. N. W.; Abdullah, J.; Ismail, M. A.
2018-04-01
The aim of this study is to enhance the validation of a CFD exercise along the roof profile of a low-rise building. An isolated gabled-roof house with a 26.6° roof pitch was simulated to obtain the pressure coefficients around the house. Validation of CFD analysis against experimental data requires many input parameters. This study performed CFD simulation based on the data from a previous study; where the input parameters were not clearly stated, new input parameters were established from the open literature. The numerical simulations were performed in FLUENT 14.0 by applying the Computational Fluid Dynamics (CFD) approach based on the steady RANS equations together with the RNG k-ɛ model. The CFD results were then analysed using quantitative tests (statistical analysis) and compared with the CFD results of the previous study. The statistical analysis results from the ANOVA test and error measures showed that the CFD results of the current study were in good agreement with, and exhibited the smallest errors relative to, the previous study. All the input data used in this study can be extended to other types of CFD simulation involving wind flow over an isolated single-storey house.
NASA Astrophysics Data System (ADS)
Korelin, Ivan A.; Porshnev, Sergey V.
2018-05-01
A model of a non-stationary queuing system (NQS) is described. The input of this model receives a flow of requests with input rate λ(t) = λdet(t) + λrnd(t), where λdet(t) is a deterministic function of time and λrnd(t) is a random function. The parameters of λdet(t) and λrnd(t) were identified from statistical information on visitor flows collected at various Russian football stadiums. Statistical modeling of the NQS is carried out, and average time dependences are obtained for the length of the queue of requests waiting for service, the average waiting time, and the number of visitors admitted to the stadium. It is shown that these dependences can be characterized by the following parameters: the number of visitors who have entered by the start of the match; the time required to serve all incoming visitors; the maximum value; and the time at which the studied dependence reaches its maximum. The dependences of these parameters on the energy ratio of the deterministic and random components of the input rate are investigated.
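A discrete-time sketch of such an NQS, with assumed functional forms for λdet(t) and λrnd(t) and a fixed service rate; the parameter values are invented, not the stadium data.

```python
import numpy as np

rng = np.random.default_rng(10)

# Discrete-time simulation of a non-stationary queue at a stadium gate.
T = 180                                   # minutes of simulated entry period
t = np.arange(T)
lam_det = 40 * np.exp(-0.5 * ((t - 120) / 30) ** 2)   # deterministic rate, peaks at t=120
lam_rnd = rng.normal(0, 5, T).clip(min=0)             # random component (assumed form)
lam = lam_det + lam_rnd                                # arrivals per minute

service_rate = 35                          # visitors the gates can admit per minute
queue, served, qlen = 0, 0, []
for k in range(T):
    queue += rng.poisson(lam[k])           # new arrivals this minute
    admitted = min(queue, service_rate)
    queue -= admitted
    served += admitted
    qlen.append(queue)

print("visitors admitted by kickoff:", served)
print("maximum queue length:", max(qlen), "at t =", int(np.argmax(qlen)), "min")
```

Varying the relative magnitudes of lam_det and lam_rnd corresponds to the "energy ratio" of the deterministic and random components studied in the abstract.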
On the fusion of tuning parameters of fuzzy rules and neural network
NASA Astrophysics Data System (ADS)
Mamuda, Mamman; Sathasivam, Saratha
2017-08-01
Learning a fuzzy rule-based system with a neural network can lead to a precise and valuable understanding of several problems. Fuzzy logic offers a simple way to reach a definite conclusion based upon vague, ambiguous, imprecise, noisy or missing input information. The conventional learning algorithm for tuning the parameters of fuzzy rules from training input-output data usually ends in a weak firing state, which weakens the fuzzy rule and makes it unreliable for a multiple-input fuzzy system. In this paper, we introduce a new learning algorithm for tuning the parameters of the fuzzy rules alongside a radial basis function neural network (RBFNN) on training input-output data, based on the gradient descent method. The new learning algorithm addresses the weak-firing problem of the conventional method. We illustrate the efficiency of our new learning algorithm by means of numerical examples; MATLAB R2014a was used to simulate the results. The results show that the new learning method has the advantage of training the fuzzy rules without tampering with the fuzzy rule table, which allows a membership function of a rule to be used more than once in the fuzzy rule base.
NASA Technical Reports Server (NTRS)
Napolitano, Marcello R.
1996-01-01
This progress report presents the results of an investigation focused on parameter identification for the NASA F/A-18 HARV. This aircraft was used in the high alpha research program at the NASA Dryden Flight Research Center. In this study the longitudinal and lateral-directional stability derivatives are estimated from flight data using the Maximum Likelihood method coupled with a Newton-Raphson minimization technique. The objective is to estimate an aerodynamic model describing the aircraft dynamics over a range of angle of attack from 5 deg to 60 deg. The mathematical model is built using the traditional static and dynamic derivative buildup. Flight data used in this analysis were from a variety of maneuvers. The longitudinal maneuvers included large amplitude multiple doublets, optimal inputs, frequency sweeps, and pilot pitch stick inputs. The lateral-directional maneuvers consisted of large amplitude multiple doublets, optimal inputs and pilot stick and rudder inputs. The parameter estimation code pEst, developed at NASA Dryden, was used in this investigation. Results of the estimation process from alpha = 5 deg to alpha = 60 deg are presented and discussed.
NASA Astrophysics Data System (ADS)
Hussain, Kamal; Pratap Singh, Satya; Kumar Datta, Prasanta
2013-11-01
A numerical investigation is presented to show the dependence of patterning effect (PE) of an amplified signal in a bulk semiconductor optical amplifier (SOA) and an optical bandpass filter based amplifier on various input signal and filter parameters considering both the cases of including and excluding intraband effects in the SOA model. The simulation shows that the variation of PE with input energy has a characteristic nature which is similar for both the cases. However the variation of PE with pulse width is quite different for the two cases, PE being independent of the pulse width when intraband effects are neglected in the model. We find a simple relationship between the PE and the signal pulse width. Using a simple treatment we study the effect of the amplified spontaneous emission (ASE) on PE and find that the ASE has almost no effect on the PE in the range of energy considered here. The optimum filter parameters are determined to obtain an acceptable extinction ratio greater than 10 dB and a PE less than 1 dB for the amplified signal over a wide range of input signal energy and bit-rate.
Robust momentum management and attitude control system for the Space Station
NASA Technical Reports Server (NTRS)
Rhee, Ihnseok; Speyer, Jason L.
1992-01-01
A game theoretic controller is synthesized for momentum management and attitude control of the Space Station in the presence of uncertainties in the moments of inertia. Full state information is assumed since attitude rates are assumed to be very accurately measured. By an input-output decomposition of the uncertainty in the system matrices, the parameter uncertainties in the dynamic system are represented as an unknown gain associated with an internal feedback loop (IFL). The input and output matrices associated with the IFL form directions through which the uncertain parameters affect system response. If the quadratic form of the IFL output augments the cost criterion, then enhanced parameter robustness is anticipated. By considering the input and the input disturbance from the IFL as two noncooperative players, a linear-quadratic differential game is constructed. The solution in the form of a linear controller is used for synthesis. Inclusion of the external disturbance torques results in a dynamic feedback controller which consists of conventional PID (proportional integral derivative) control and cyclic disturbance rejection filters. It is shown that the game theoretic design allows large variations in the inertias in directions of importance.
NASA Astrophysics Data System (ADS)
Handley, Heather K.; Turner, Simon; Afonso, Juan C.; Dosseto, Anthony; Cohen, Tim
2013-02-01
Quantifying the rates of landscape evolution in response to climate change is inhibited by the difficulty of dating the formation of continental detrital sediments. We present uranium isotope data for Cooper Creek palaeochannel sediments from the Lake Eyre Basin in semi-arid South Australia in an attempt to determine the formation ages, and hence residence times, of the sediments. To calculate the amount of recoil loss of 234U, a key input parameter used in the comminution approach, we use two suggested methods (weighted geometric and surface area measurement with an incorporated fractal correction) and typical assumed input parameter values found in the literature. The calculated recoil loss factors and comminution ages are highly dependent on the method of recoil loss factor determination used and the chosen assumptions. To appraise the ramifications of the assumptions inherent in the comminution age approach and determine the individual and combined comminution age uncertainties associated with each variable, Monte Carlo simulations were conducted for a synthetic sediment sample. Using a reasonable associated uncertainty for each input factor and including variations in the source rock and measured (234U/238U) ratios, the total combined uncertainty on comminution age in our simulation (for both methods of recoil loss factor estimation) can amount to ±220-280 ka. The modelling shows that small changes in assumed input values translate into large effects on absolute comminution age. To improve the accuracy of the technique and provide meaningful absolute comminution ages, much tighter constraints are required on the assumptions for input factors such as the fraction of 234Th lost by α-recoil and the initial (234U/238U) ratio of the source material. In order to be able to directly compare calculated comminution ages produced by different research groups, the standardisation of pre-treatment procedures, recoil loss factor estimation and assumed input parameter values is required. We suggest a set of input parameter values for such a purpose. Additional considerations for calculating comminution ages of sediments deposited within large, semi-arid drainage basins are discussed.
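A Monte Carlo propagation of this kind can be sketched directly from the standard comminution-age relation A(t) = (1 - f_α) + [A₀ - (1 - f_α)]e^(-λ₂₃₄t). The Python fragment below uses illustrative uncertainty ranges, not the values adopted for the Cooper Creek samples.

```python
import numpy as np

LAMBDA_234 = np.log(2) / 245_250.0          # 234U decay constant, 1/yr

def comminution_age(A_meas, A_src, f_alpha):
    """Invert A(t) = (1-f) + (A_src-(1-f))*exp(-lam*t) for t in years."""
    with np.errstate(invalid="ignore", divide="ignore"):
        return -np.log((A_meas - (1 - f_alpha)) /
                       (A_src - (1 - f_alpha))) / LAMBDA_234

rng = np.random.default_rng(1)
n = 100_000
A_meas = rng.normal(0.940, 0.003, n)         # measured (234U/238U)
A_src  = rng.normal(1.000, 0.010, n)         # source rock (234U/238U)
f_a    = rng.normal(0.085, 0.015, n)         # recoil loss factor

t = comminution_age(A_meas, A_src, f_a)
t = t[np.isfinite(t) & (t > 0)]              # drop unphysical draws
lo, mid, hi = np.percentile(t, [16, 50, 84]) / 1e3
print(f"age {mid:.0f} ka, 68% interval {lo:.0f}-{hi:.0f} ka "
      f"(spread ~ +/-{(hi - lo) / 2:.0f} ka)")
```

Even the modest input uncertainties assumed here produce an age spread of order a hundred ka, which is the qualitative point of the paper's simulations.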
Relationship Between Landscape Character, UV Exposure, and Amphibian Decline
NASA Astrophysics Data System (ADS)
O'Reilly, C. M.; Brooks, P. D.; Corn, P. S.; Muths, E.; Campbell, D. H.; Diamond, S.; Tonnessen, K.
2001-12-01
Widespread reports of amphibian declines have been considered a warning of large-scale environmental degradation, yet the reasons for these declines remain unclear. This study suggests that exposure to ultraviolet radiation may act as an environmental stressor that affects population breeding success or susceptibility to disease. Ultraviolet radiation is attenuated by dissolved and particulate compounds in water, which may be of either terrestrial or aquatic origin. UV attenuation by dissolved organic carbon (DOC) is primarily due to compounds in the fulvic acid fraction, which originate in soil environments. These terrestrially derived fulvic acids are transported to surface waters during hydrologic flushing events such as snowmelt and episodic precipitation and play an important role in controlling UV exposure in surface waters. As part of a previously published project, amphibian surveys were conducted at seventeen sites in Rocky Mountain National Park both during, and subsequent to, a three-year drought (1988 - 1990). During this period, ten sites lost one amphibian species, while only one site gained a previously unreported species. One possible explanation for these localized species losses is increased exposure to UV radiation, mediated by reduced terrestrial DOC inputs during dry periods. Several subsequent years of water chemistry data showed that the sites with documented species losses were characterized by a range of DOC concentrations, but tended to have a greater proportion of terrestrial DOC than sites that did not undergo species loss. This suggests that terrestrial inputs exert a strong control on DOC concentrations that may influence species success. We used physical environmental factors to develop a classification scheme for these sites. There are many physical factors that can influence terrestrial DOC inputs, including landscape position, geomorphology, soil type, and watershed vegetation. In addition, we considered the possible effects of internal aquatic inputs, such as nutrient status, food web composition, and aquatic vegetation. Finally, we examined other sites in Rocky Mountain National Park to determine their susceptibility to species loss.
Testing the robustness of management decisions to uncertainty: Everglades restoration scenarios.
Fuller, Michael M; Gross, Louis J; Duke-Sylvester, Scott M; Palmer, Mark
2008-04-01
To effectively manage large natural reserves, resource managers must prepare for future contingencies while balancing the often conflicting priorities of different stakeholders. To deal with these issues, managers routinely employ models to project the response of ecosystems to different scenarios that represent alternative management plans or environmental forecasts. Scenario analysis is often used to rank such alternatives to aid the decision making process. However, model projections are subject to uncertainty in assumptions about model structure, parameter values, environmental inputs, and subcomponent interactions. We introduce an approach for testing the robustness of model-based management decisions to the uncertainty inherent in complex ecological models and their inputs. We use relative assessment to quantify the relative impacts of uncertainty on scenario ranking. To illustrate our approach we consider uncertainty in parameter values and uncertainty in input data, with specific examples drawn from the Florida Everglades restoration project. Our examples focus on two alternative 30-year hydrologic management plans that were ranked according to their overall impacts on wildlife habitat potential. We tested the assumption that varying the parameter settings and inputs of habitat index models does not change the rank order of the hydrologic plans. We compared the average projected index of habitat potential for four endemic species and two wading-bird guilds to rank the plans, accounting for variations in parameter settings and water level inputs associated with hypothetical future climates. Indices of habitat potential were based on projections from spatially explicit models that are closely tied to hydrology. For the American alligator, the rank order of the hydrologic plans was unaffected by substantial variation in model parameters. By contrast, simulated major shifts in water levels led to reversals in the ranks of the hydrologic plans in 24.1-30.6% of the projections for the wading bird guilds and several individual species. By exposing the differential effects of uncertainty, relative assessment can help resource managers assess the robustness of scenario choice in model-based policy decisions.
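The rank-reversal bookkeeping at the heart of relative assessment is simple to sketch. The Python fragment below uses a toy habitat-potential index and hypothetical parameter and water-level distributions; the real analysis used spatially explicit species models tied to hydrology.

```python
import numpy as np

def habitat_index(water_level, a, b):
    """Toy habitat-potential index; a stand-in for the spatially
    explicit species models used in the Everglades example."""
    return np.exp(-a * (water_level - b) ** 2)

rng = np.random.default_rng(2)
n = 10_000
a = rng.uniform(0.5, 2.0, n)               # model-parameter uncertainty
b = rng.normal(1.0, 0.2, n)
wl_planA = rng.normal(1.05, 0.15, n)       # water-level inputs under each plan
wl_planB = rng.normal(0.90, 0.15, n)

scoreA = habitat_index(wl_planA, a, b)
scoreB = habitat_index(wl_planB, a, b)
reversals = np.mean(scoreB > scoreA)       # how often the ranking flips
print(f"Plan B outranks Plan A in {100 * reversals:.1f}% of projections")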
Schmid, Gernot; Bolz, Thomas; Uberbacher, Richard; Escorihuela-Navarro, Ana; Bahr, Achim; Dorn, Hans; Sauter, Cornelia; Eggert, Torsten; Danker-Hopfe, Heidi
2012-10-01
A new head exposure system for double-blind provocation studies investigating possible effects of terrestrial trunked radio (TETRA)-like exposure (385 MHz) on central nervous processes was developed and dosimetrically analyzed. The exposure system allows localized exposure of the temporal brain, similar to the case of operating a TETRA handset at the ear. The system and antenna concept enable exposure during wake and sleep states while an electroencephalogram (EEG) is recorded. The dosimetric assessment and uncertainty analysis yield a high efficiency of 14 W/kg per watt of accepted antenna input power, due to an optimized antenna worn directly on the subject's head. Besides sham exposure, high and low exposures at 6 and 1.5 W/kg (in terms of maxSAR10g in the head) were implemented. Double-blind control and monitoring of exposure are enabled by easy-to-use control software. Exposure uncertainty was rigorously evaluated using finite-difference time-domain (FDTD)-based computations, taking into account anatomical differences of the head, the physiological range of the dielectric tissue properties including effects of sweating on the antenna, possible influences of the EEG electrodes and cables, variations in antenna input reflection coefficients, and effects on the specific absorption rate (SAR) distribution due to unavoidable small variations in the antenna position. This analysis yielded a reasonable uncertainty of <±45% (max-to-min ratio of 4.2 dB) in terms of maxSAR10g in the head and a variability of <±60% (max-to-min ratio of 6 dB) in terms of mass-averaged SAR in different brain regions, as demonstrated by a brain-region-specific absorption analysis. Copyright © 2012 Wiley Periodicals, Inc.
The impact of input quality on early sign development in native and non-native language learners.
Lu, Jenny; Jones, Anna; Morgan, Gary
2016-05-01
There is debate about how input variation influences child language. Most deaf children are exposed to a sign language from their non-fluent hearing parents and experience a delay in exposure to accessible language. A small number of children receive language input from their deaf parents who are fluent signers. Thus it is possible to document the impact of quality of input on early sign acquisition. The current study explores the outcomes of differential input in two groups of children aged two to five years: deaf children of hearing parents (DCHP) and deaf children of deaf parents (DCDP). Analysis of child sign language revealed DCDP had a more developed vocabulary and more phonological handshape types compared with DCHP. In naturalistic conversations deaf parents used more sign tokens and more phonological types than hearing parents. Results are discussed in terms of the effects of early input on subsequent language abilities.
NASA Astrophysics Data System (ADS)
Vijaya Ramnath, B.; Sharavanan, S.; Jeykrishnan, J.
2017-03-01
Nowadays, quality plays a vital role in all products. Hence, developments in manufacturing processes focus on fabricating composites with high dimensional accuracy at low manufacturing cost. In this work, an investigation of machining parameters has been performed on a jute-flax hybrid composite. Two important response characteristics, surface roughness and material removal rate, are optimized by employing three machining input parameters: drill bit diameter, spindle speed and feed rate. Machining is done on a CNC vertical drilling machine at different levels of the drilling parameters. Taguchi’s L16 orthogonal array is used for optimizing individual tool parameters, and Analysis of Variance (ANOVA) is used to find the significance of individual parameters. Simultaneous optimization of the process parameters is done by grey relational analysis. The results of this investigation show that spindle speed and drill bit diameter have the greatest effect on material removal rate and surface roughness, followed by feed rate.
An in-premise model for Legionella exposure during showering events.
Schoen, Mary E; Ashbolt, Nicholas J
2011-11-15
An exposure model was constructed to predict the critical Legionella densities in an engineered water system that result in infection from inhalation of aerosols containing the pathogen while showering. The model predicted the Legionella densities in the shower air, water and in-premise plumbing biofilm that might result in a deposited dose of Legionella in the alveolar region of the lungs associated with infection for a routine showering event. Processes modeled included the detachment of biofilm-associated Legionella from the in-premise plumbing biofilm during a showering event, the partitioning of the pathogen from the shower water to the air, and the inhalation and deposition of particles in the lungs. The range of predicted critical Legionella densities in the air and water was compared to the available literature. The predictions were generally within the limited set of observations for air and water, with the exception of Legionella density within in-premise plumbing biofilms, for which there remains a lack of observations for comparison. Sensitivity analysis of the predicted results to possible changes in the uncertain input parameters identified the target deposited dose associated with infections, the pathogen air-water partitioning coefficient, and the quantity of detached biofilm from in-premise plumbing surfaces as important parameters for additional data collection. In addition, the critical density of free-living protozoan hosts in the biofilm required to propagate the infectious Legionella was estimated. Together, this evidence can help to identify critical conditions that might lead to infection derived from pathogens within the biofilms of any plumbing system from which humans may be exposed to aerosols. Published by Elsevier Ltd.
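Because the exposure chain is multiplicative, the critical water density can be obtained by inverting a linear dose model. The following Python sketch uses entirely illustrative parameter values (partition coefficient, breathing rate, deposition fraction, target dose), not the paper's calibrated inputs.

```python
# Minimal sketch of the exposure chain: water density -> shower air
# -> inhaled -> alveolar deposition. All numbers are illustrative.
PARTITION = 1e-5        # air-water partitioning coefficient (assumed)
INHALATION = 0.013      # m3/min, light activity (assumed)
DURATION = 8.0          # min per showering event (assumed)
DEPOSITION = 0.25       # fraction of inhaled aerosol reaching alveoli (assumed)
TARGET_DOSE = 1.0       # CFU deposited, assumed infectious target

def deposited_dose(c_water):
    """Deposited alveolar dose for a shower-water density in CFU/m3."""
    c_air = c_water * PARTITION
    return c_air * INHALATION * DURATION * DEPOSITION

# Invert the (linear) chain for the critical water density.
critical = TARGET_DOSE / deposited_dose(1.0)
print(f"critical water density ~ {critical:.2e} CFU/m3")
```

In this linear chain each factor scales the dose directly, which is why the paper's sensitivity analysis singles out the target dose, the partition coefficient and the detached biofilm quantity as the leverage points for further data collection.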
Engine control techniques to account for fuel effects
Kumar, Shankar; Frazier, Timothy R.; Stanton, Donald W.; Xu, Yi; Bunting, Bruce G.; Wolf, Leslie R.
2014-08-26
A technique for engine control to account for fuel effects including providing an internal combustion engine and a controller to regulate operation thereof, the engine being operable to combust a fuel to produce an exhaust gas; establishing a plurality of fuel property inputs; establishing a plurality of engine performance inputs; generating engine control information as a function of the fuel property inputs and the engine performance inputs; and accessing the engine control information with the controller to regulate at least one engine operating parameter.
Automated Structural Optimization System (ASTROS). Volume 1. Theoretical Manual
1988-12-01
corresponding frequency list are given by Equation C-9. The second set of parameters is the frequency list used in solving Equation C-3 to obtain the response vector u(ω). This frequency list is: ω = 2πf_0, 2πf_1, 2πf_2, ..., 2πf_n (C-20). The frequency lists ω̂ and ω are not necessarily equal. While setting ... alternative methods are used to input the frequency list ω. For the first method, the frequency list ω is input via two parameters: Δf (C-21) ...
Miklos, David B; Hartl, Rebecca; Michel, Philipp; Linden, Karl G; Drewes, Jörg E; Hübner, Uwe
2018-06-01
This study investigated the removal of 15 trace organic chemicals (TOrCs) occurring at ambient concentrations from municipal wastewater treatment plant effluent by advanced oxidation using UV/H₂O₂ at pilot scale. Pseudo-first-order rate constants (k_obs) for photolytic as well as combined oxidative and photolytic degradation observed at pilot scale were validated with results from a bench-scale collimated beam device. No significant difference was determined between pilot- and lab-scale performance. During continuous pilot-scale operation at a constant UV fluence of 800 mJ/cm² and an H₂O₂ dosage of 10 mg/L, the removal of various TOrCs was investigated. The average observed removal for photo-susceptible (k_UV > 10⁻³ cm²/mJ; like diclofenac, iopromide and sulfamethoxazole), moderately photo-susceptible (10⁻⁴
Input Processing at First Exposure to a Sign Language
ERIC Educational Resources Information Center
Ortega, Gerardo; Morgan, Gary
2015-01-01
There is growing interest in learners' cognitive capacities to process a second language (L2) at first exposure to the target language. Evidence suggests that L2 learners are capable of processing novel words by exploiting phonological information from their first language (L1). Hearing adult learners of a sign language, however, cannot fall back…
Vegetative leaf area is a critical input to models that simulate human and ecosystem exposure to atmospheric pollutants. Leaf area index (LAI) can be measured in the field or numerically simulated, but all contain some inherent uncertainty that is passed to the exposure assessmen...
NASA Technical Reports Server (NTRS)
Morelli, E. A.
1996-01-01
Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for closed-loop parameter identification purposes, specifically for lateral linear model parameter estimation at 30, 45, and 60 degrees angle of attack, using the Actuated Nose Strakes for Enhanced Rolling (ANSER) control law in Strake (S) mode and Strake/Thrust Vectoring (STV) mode. Each maneuver is to be realized by applying square wave inputs to specific pilot station controls using the On-Board Excitation System (OBES). Maneuver descriptions and complete specification of the time/amplitude points defining each input are included, along with plots of the input time histories.
Saouter, Erwan; Aschberger, Karin; Fantke, Peter; Hauschild, Michael Z; Kienzler, Aude; Paini, Alicia; Pant, Rana; Radovnikovic, Anita; Secchi, Michela; Sala, Serenella
2017-12-01
The scientific consensus model USEtox® has been developed since 2003 under the auspices of the United Nations Environment Programme-Society of Environmental Toxicology and Chemistry Life Cycle Initiative as a harmonized approach for characterizing human and freshwater toxicity in life cycle assessment and other comparative assessment frameworks. Using physicochemical substance properties, USEtox quantifies potential human toxicity and freshwater ecotoxicity impacts by combining environmental fate, exposure, and toxicity effects information, considering multimedia fate and multipathway exposure processes. The main source to obtain substance properties for USEtox 1.01 and 2.0 is the Estimation Program Interface (EPI Suite™) from the US Environmental Protection Agency. However, since the development of the original USEtox substance databases, new chemical regulations have been enforced in Europe, such as the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) and the Plant Protection Products regulations. These regulations require that a chemical risk assessment for humans and the environment is performed before a chemical is placed on the European market. Consequently, additional physicochemical property data and new toxicological endpoints are now available for thousands of chemical substances. The aim of the present study was to explore the extent to which the newly available data can be used as input for USEtox, especially for application in environmental footprint studies, and to discuss how this would influence the quantification of fate and exposure factors. Initial results show that the choice of data source and the parameters selected can greatly influence fate and exposure factors, leading to potentially different rankings and relative contributions of substances to overall human toxicity and ecotoxicity impacts. Moreover, it is crucial to discuss the relevance of the exposure factor for freshwater ecotoxicity impacts, particularly for persistent highly adsorbing and bioaccumulating substances. Environ Toxicol Chem 2017;36:3463-3470. © 2017 The Authors. Environmental Toxicology and Chemistry Published by Wiley Periodicals, Inc. on behalf of SETAC.
DOT National Transportation Integrated Search
2009-02-01
The resilient modulus (MR) input parameters in the Mechanistic-Empirical Pavement Design Guide (MEPDG) program have a significant effect on the projected pavement performance. The MEPDG program uses three different levels of inputs depending on the d...
McNamara, C; Naddy, B; Rohan, D; Sexton, J
2003-10-01
The Monte Carlo computational system for stochastic modelling of dietary exposure to food chemicals and nutrients is presented. This system was developed through a European Commission-funded research project. It is accessible as a Web-based application service. The system supports very significant complexity in the data sets used as model input, but provides a simple, general-purpose, linear kernel for model evaluation. Specific features of the system include the ability to enter (arbitrarily) complex mathematical or probabilistic expressions at each and every input data field, automatic bootstrapping on subjects and on subject food intake diaries, and custom kernels to apply brand information such as market share and loyalty to the calculation of food and chemical intake.
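The linear kernel with bootstrapping on subjects can be sketched in a few lines. In this Python fragment the survey data, concentration distributions and food categories are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical survey: daily intake (g) of 3 foods for 200 subjects.
intakes = rng.gamma(shape=2.0, scale=[50, 20, 5], size=(200, 3))

# Chemical concentration distributions per food (mg/g); in the real
# system these are entered as probabilistic expressions per input field.
def sample_conc(n):
    return rng.lognormal(mean=[-6.0, -5.0, -4.0], sigma=0.5, size=(n, 3))

def one_bootstrap():
    subj = rng.integers(0, len(intakes), len(intakes))  # bootstrap subjects
    expo = (intakes[subj] * sample_conc(len(subj))).sum(axis=1)  # linear kernel
    return np.percentile(expo, 97.5)                    # high-percentile intake

p975 = np.array([one_bootstrap() for _ in range(500)])
print(f"P97.5 exposure: {p975.mean():.3f} mg/day "
      f"(bootstrap 95% CI {np.percentile(p975, 2.5):.3f}-"
      f"{np.percentile(p975, 97.5):.3f})")
```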
Forecasting air quality time series using deep learning.
Freeman, Brian S; Taylor, Graham; Gharabaghi, Bahram; Thé, Jesse
2018-04-13
This paper presents one of the first applications of deep learning (DL) techniques to the prediction of air pollution time series. Air quality management relies extensively on time series data captured at air monitoring stations as the basis for identifying population exposure to airborne pollutants and determining compliance with local ambient air standards. In this paper, 8-hr averaged surface ozone (O₃) concentrations were predicted using deep learning consisting of a recurrent neural network (RNN) with long short-term memory (LSTM). Hourly air quality and meteorological data were used to train and forecast values up to 72 hours ahead with low error rates. The LSTM was also able to forecast the duration of continuous O₃ exceedances. Prior to training the network, the dataset was reviewed for missing data and outliers. Missing data were imputed using a novel technique that averaged gaps of less than eight time steps with incremental steps based on first-order differences of neighboring time periods. Data were then used to train decision trees to evaluate input feature importance over different time prediction horizons. The number of features used to train the LSTM model was reduced from 25 to 5, resulting in improved accuracy as measured by mean absolute error (MAE). Parameter sensitivity analysis identified the look-back nodes associated with the RNN as a significant source of error when not aligned with the prediction horizon. Overall, MAEs of less than 2 were calculated for predictions out to 72 hours. Novel deep learning techniques were used to train an 8-hour averaged ozone forecast model. Missing data and outliers within the captured data set were replaced using a new imputation method that generated values closer to the expected value based on the time and season. Decision trees were used to identify the input variables with the greatest importance. The methods presented in this paper allow air managers to forecast long-range air pollution concentrations while monitoring only key parameters and without transforming the data set in its entirety, thus allowing real-time inputs and continuous prediction.
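One plausible reading of the gap-imputation rule (fill gaps shorter than eight steps by stepping from the last valid value using first-order differences of the neighbouring periods) is sketched below in Python; the exact procedure in the paper may differ.

```python
import numpy as np

def impute_short_gaps(x, max_gap=8):
    """Fill gaps shorter than max_gap by stepping from the last valid
    value with increments derived from the first-order difference across
    the gap (one plausible reading of the paper's method)."""
    x = np.asarray(x, float).copy()
    isnan = np.isnan(x)
    i = 0
    while i < len(x):
        if isnan[i]:
            j = i
            while j < len(x) and isnan[j]:
                j += 1                              # find end of the gap
            gap = j - i
            if gap < max_gap and i > 0 and j < len(x):
                step = (x[j] - x[i - 1]) / (gap + 1)  # incremental step
                x[i:j] = x[i - 1] + step * np.arange(1, gap + 1)
            i = j
        else:
            i += 1
    return x

series = np.array([30., 32., np.nan, np.nan, 38., 40.])
print(impute_short_gaps(series))   # -> [30. 32. 34. 36. 38. 40.]
```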
Origin of the sensitivity in modeling the glide behaviour of dislocations
Pei, Zongrui; Stocks, George Malcolm
2018-03-26
The sensitivity in predicting glide behaviour of dislocations has been a long-standing problem in the framework of the Peierls-Nabarro model. The predictions of both the model itself and the analytic formulas based on it are too sensitive to the input parameters. In order to reveal the origin of this important problem in materials science, a new empirical-parameter-free formulation is proposed in the same framework. Unlike previous formulations, it includes only a limited small set of parameters all of which can be determined by convergence tests. Under special conditions the new formulation is reduced to its classic counterpart. In the light of this formulation, new relationships between Peierls stresses and the input parameters are identified, where the sensitivity is greatly reduced or even removed.
Effect of Burnishing Parameters on Surface Finish
NASA Astrophysics Data System (ADS)
Shirsat, Uddhav; Ahuja, Basant; Dhuttargaon, Mukund
2017-08-01
Burnishing is a cold working process in which hard balls are pressed against a surface, resulting in improved surface finish. The surface is compressed and then plasticized. It is a highly effective finishing process that is becoming more popular, since the surface quality of a product improves its aesthetic appearance. Products made of aluminum were subjected to the burnishing process with kerosene used as a lubricant. In this study, factors affecting the burnishing process, such as burnishing force, speed, feed, workpiece diameter and ball diameter, are considered as input parameters, while surface finish is considered as the output parameter. Experiments are designed using a 2⁵ factorial design in order to analyze the relationship between input and output parameters. The ANOVA technique and F-test are used for further analysis.
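Reading the design as a 2⁵ two-level full factorial in the five listed factors, the main effects can be estimated directly from the coded design matrix, as below; the response values are simulated for illustration.

```python
import itertools
import numpy as np

factors = ["force", "speed", "feed", "work_dia", "ball_dia"]
X = np.array(list(itertools.product([-1, 1], repeat=5)))   # 2^5 coded runs

rng = np.random.default_rng(4)
# Hypothetical surface-roughness response: force and feed dominate.
Ra = 2.0 - 0.4 * X[:, 0] + 0.25 * X[:, 2] + 0.05 * rng.standard_normal(len(X))

# Main effect of each factor = mean(high level) - mean(low level).
for name, col in zip(factors, X.T):
    effect = Ra[col == 1].mean() - Ra[col == -1].mean()
    print(f"{name:9s} effect = {effect:+.3f}")
```

The F-test in the paper then compares each effect's mean square against the residual mean square to judge significance.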
Lança, L; Silva, A; Alves, E; Serranheira, F; Correia, M
2008-01-01
The typical distribution of exposure parameters in plain radiography is unknown in Portugal. This study aims to identify the exposure parameters being used in plain radiography in the Lisbon area and to compare the collected data with European references [Commission of European Communities (CEC) guidelines]. The results show that in four examinations (skull, chest, lumbar spine and pelvis), there is a strong tendency to use exposure times above the European recommendation. The X-ray tube potential values (in kV) are below the values recommended in the CEC guidelines. This study shows that at a local level (Lisbon region), radiographic practice does not comply with the CEC guidelines concerning exposure techniques. Further national/local studies are recommended with the objective of improving exposure optimisation and technical procedures in plain radiography. This study also suggests the need to establish national/local diagnostic reference levels and to proceed to effective measurements for exposure optimisation.
Expert System for ASIC Imaging
NASA Astrophysics Data System (ADS)
Gupta, Shri N.; Arshak, Khalil I.; McDonnell, Pearse; Boyce, Conor; Duggan, Andrew
1989-07-01
With the developments in artificial intelligence techniques over the last few years, building advisory, scheduling and similar classes of applications has become very convenient using tools such as PROLOG. In this paper an expert system is described which helps lithographers and process engineers in several ways. The methodology used is to model each work station according to its input, output and control parameters, combine these work stations in a logical sequence based on past experience, and work out a process schedule for a job. In addition, all the requirements vis-a-vis a particular job's parameters are converted into decision rules. For example, the exposure time and develop time would differ for wafers with different feature sizes. This expert system has been written in Turbo Prolog. By building up a large number of rules, one can tune the program to any facility and use it for applications as diverse as advisory help and trouble shooting. Leitner (1) has described an advisory expert system that is being used at National Semiconductor. That system is quite different from the one reported in the present paper: for one, the approach differs; for another, the stress here is on job flow and process.
Calibration under uncertainty for finite element models of masonry monuments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Atamturktur, Sezer,; Hemez, Francois,; Unal, Cetin
2010-02-01
Historical unreinforced masonry buildings often include features such as load-bearing unreinforced masonry vaults and their supporting framework of piers, fill, buttresses, and walls. The masonry vaults of such buildings are among the most vulnerable structural components and certainly among the most challenging to analyze. The versatility of finite element (FE) analyses in incorporating various constitutive laws, as well as practically all geometric configurations, has resulted in the widespread use of the FE method for the analysis of complex unreinforced masonry structures over the last three decades. However, an FE model is only as accurate as its input parameters, and there are two fundamental challenges while defining FE model input parameters: (1) material properties and (2) support conditions. The difficulties in defining these two aspects of the FE model arise from the lack of knowledge in the common engineering understanding of masonry behavior. As a result, engineers are unable to define these FE model input parameters with certainty, and, inevitably, uncertainties are introduced to the FE model.
NASA Astrophysics Data System (ADS)
Yan, Zilin; Kim, Yongtae; Hara, Shotaro; Shikazono, Naoki
2017-04-01
The Potts kinetic Monte Carlo (KMC) model, proven to be a robust tool for studying all stages of the sintering process, is well suited to analyzing the microstructure evolution of electrodes in solid oxide fuel cells (SOFCs). Due to the nature of this model, the input parameters of KMC simulations, such as simulation temperatures and attempt frequencies, are difficult to identify. We propose a rigorous and efficient approach to facilitate the input parameter calibration process using artificial neural networks (ANNs). The trained ANN drastically reduces the number of trial-and-error KMC simulations. The KMC simulation using the calibrated input parameters predicts the microstructures of a La0.6Sr0.4Co0.2Fe0.8O3 cathode material during sintering, showing both qualitative and quantitative congruence with real 3D microstructures obtained by focused ion beam scanning electron microscopy (FIB-SEM) reconstruction.
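The calibration loop, training a cheap ANN surrogate on a modest number of KMC runs and then inverting it against the measured microstructure descriptor, can be sketched as follows. The forward map, parameter ranges and target value in this Python fragment are hypothetical stand-ins for the KMC model and the FIB-SEM data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(12)

# Hypothetical forward map standing in for expensive KMC runs:
# inputs (simulation temperature T, attempt frequency nu) -> grain size.
def kmc_stand_in(T, nu):
    return 2.0 + 0.8 * np.log(nu) - 1.5 / T

T = rng.uniform(0.5, 2.0, 200)
nu = rng.uniform(1.0, 100.0, 200)
y = kmc_stand_in(T, nu) + 0.02 * rng.standard_normal(200)
X = np.column_stack([T, nu])

ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 32),
                                 max_iter=5000, random_state=0)).fit(X, y)

# Invert the surrogate: pick the inputs whose prediction matches the
# descriptor measured from the FIB-SEM reconstruction (assumed value).
target = 4.0
grid = np.array([[a, b] for a in np.linspace(0.5, 2.0, 60)
                        for b in np.linspace(1.0, 100.0, 60)])
best = grid[np.argmin((ann.predict(grid) - target) ** 2)]
print(f"calibrated inputs: T ~ {best[0]:.2f}, nu ~ {best[1]:.1f}")
```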
Real-Time Stability and Control Derivative Extraction From F-15 Flight Data
NASA Technical Reports Server (NTRS)
Smith, Mark S.; Moes, Timothy R.; Morelli, Eugene A.
2003-01-01
A real-time, frequency-domain, equation-error parameter identification (PID) technique was used to estimate stability and control derivatives from flight data. This technique is being studied to support adaptive control system concepts currently being developed by NASA (National Aeronautics and Space Administration), academia, and industry. This report describes the basic real-time algorithm used for this study and implementation issues for onboard use as part of an indirect-adaptive control system. A confidence-measure system for automated evaluation of PID results is discussed. Results calculated using flight data from a modified F-15 aircraft are presented. Test maneuvers included pilot input doublets and automated inputs at several flight conditions. Estimated derivatives are compared to aerodynamic model predictions. Data indicate that the real-time PID used for this study performs well enough to be used for onboard parameter estimation. For suitable test inputs, the parameter estimates converged rapidly to sufficient levels of accuracy. The confidence measures devised were moderately successful.
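A minimal frequency-domain equation-error estimator can be sketched for a one-degree-of-freedom roll model, pdot = Lp*p + Lda*da. The signals and derivative values below are simulated, not F-15 data, and the recursive DFT of the real-time algorithm is written as a batch product for brevity.

```python
import numpy as np

dt = 0.02
t = np.arange(0, 20, dt)
rng = np.random.default_rng(5)
da = 0.1 * np.sign(np.sin(2 * np.pi * 0.5 * t))            # doublet-like input
p = np.zeros_like(t)
for k in range(len(t) - 1):                                 # simulated "flight data"
    p[k + 1] = p[k] + dt * (-2.0 * p[k] + 8.0 * da[k])
p += 0.01 * rng.standard_normal(len(t))

# Discrete Fourier transform at selected analysis frequencies (done
# recursively, sample by sample, in the onboard implementation).
freqs = np.arange(0.1, 1.6, 0.1)                            # Hz
w = 2 * np.pi * freqs
E = np.exp(-1j * np.outer(w, t)) * dt                       # DFT kernels
P, DA = E @ p, E @ da                                       # transformed signals

# Equation error in the frequency domain: jw*P = Lp*P + Lda*DA.
A = np.column_stack([P, DA])
theta, *_ = np.linalg.lstsq(A, 1j * w * P, rcond=None)
print(f"Lp ~ {theta[0].real:.2f}, Lda ~ {theta[1].real:.2f}")  # ~ -2, 8
```

Because the regression is linear in the derivatives, the estimates update cheaply at each sample, which is what makes the method attractive for onboard, indirect-adaptive use.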
Astrobiological complexity with probabilistic cellular automata.
Vukotić, Branislav; Ćirković, Milan M
2012-08-01
The search for extraterrestrial life and intelligence constitutes one of the major endeavors in science, yet it has so far been quantitatively modeled only rarely, and then in a cursory and superficial fashion. We argue that probabilistic cellular automata (PCA) represent the best quantitative framework for modeling the astrobiological history of the Milky Way and its Galactic Habitable Zone. The relevant astrobiological parameters are to be modeled as the elements of the input probability matrix for the PCA kernel. With the underlying simplicity of the cellular automata constructs, this approach enables a quick analysis of the large and ambiguous space of input parameters. We perform a simple clustering analysis of typical astrobiological histories with a "Copernican" choice of input parameters and discuss the relevant boundary conditions of practical importance for planning and guiding empirical astrobiological and SETI projects. In addition to showing how the present framework is adaptable to more complex situations and updated observational databases from current and near-future space missions, we demonstrate how numerical results could offer a cautious rationale for the continuation of practical SETI searches.
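A toy PCA of this kind takes only a few lines. In the Python sketch below the states, transition probabilities and colonization rule are invented for illustration and are far simpler than the input probability matrices discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(13)

# Probabilistic cellular automaton on a 2D grid of Galactic sites.
# States: 0 = no life, 1 = simple life, 2 = complex life (illustrative).
P_EMERGE, P_ADVANCE, P_RESET = 0.002, 0.01, 0.001
P_COLONIZE = 0.05           # per complex neighbour, per step (assumed)

grid = np.zeros((100, 100), dtype=int)
for _ in range(500):        # time steps of the astrobiological history
    complex_nbrs = sum(np.roll(grid == 2, s, axis=a)
                       for a in (0, 1) for s in (-1, 1))
    r = rng.random(grid.shape)
    new = grid.copy()
    new[(grid == 0) & (r < P_EMERGE + P_COLONIZE * complex_nbrs)] = 1
    new[(grid == 1) & (r < P_ADVANCE)] = 2
    new[r < P_RESET] = 0    # sterilizing events reset a site
    grid = new

print("fraction of sites with complex life:", (grid == 2).mean())
```

Repeating such runs over a grid of probability matrices and clustering the resulting histories is the kind of quick parameter-space sweep the PCA construction is meant to enable.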
Du, Zhengjian; Mo, Jinhan; Zhang, Yinping
2014-12-01
Over the past three decades, China has experienced rapid urbanization. The risks to its urban population posed by inhalation exposure to hazardous air pollutants (HAPs) have not been well characterized. Here, we summarize recent measurements of 16 highly prevalent HAPs in urban China and compile their distribution inputs. Based on activity patterns of urban Chinese working adults, we derive personal exposures. Using a probabilistic risk assessment method, we determine cancer and non-cancer risks for working females and males. We also assess the uncertainty associated with risk estimates using Monte Carlo simulation, accounting for variations in HAP concentrations, cancer potency factors (CPFs) and inhalation rates. Average total lifetime cancer risks attributable to HAPs are 2.27×10⁻⁴ (2.27 additional cases per 10,000 people exposed) and 2.93×10⁻⁴ for Chinese urban working females and males, respectively. Formaldehyde, 1,4-dichlorobenzene, benzene and 1,3-butadiene are the major risk contributors, yielding the highest median cancer risk estimates (>1×10⁻⁵). About 70% of the risk is due to exposures occurring in homes. Outdoor sources contribute most to the risk of benzene, ethylbenzene and carbon tetrachloride, while indoor sources dominate for all other compounds. Chronic exposure limits are not exceeded for non-carcinogenic effects, except for formaldehyde. Risks are overestimated if variation is not accounted for. Sensitivity analyses demonstrate that the major contributors to total variance are the range of inhalation rates, the CPFs of formaldehyde, 1,4-dichlorobenzene, benzene and 1,3-butadiene, and indoor home concentrations of formaldehyde and benzene. Despite uncertainty, risks exceeding the acceptable benchmark of 1×10⁻⁶ suggest actions to reduce exposures. Future efforts should be directed toward large-scale measurements of air pollutant concentrations, refinement of CPFs and investigation of population exposure parameters. The present study is a first effort to estimate the carcinogenic and non-carcinogenic risks of inhalation exposure to HAPs for the large working populations of Chinese cities. Copyright © 2014 Elsevier Ltd. All rights reserved.
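For a single compound, the probabilistic risk calculation reduces to sampling the exposure factors and multiplying through to a lifetime average daily dose. The distributions in this Python sketch are illustrative, not the measured urban-China inputs.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000

# Illustrative lifetime-average inputs for one compound (not measured data).
conc = rng.lognormal(np.log(12.0), 0.6, n)      # ug/m3, personal exposure
inh_rate = rng.normal(15.0, 2.0, n)             # m3/day, inhalation rate
cpf = rng.lognormal(np.log(1.3e-5), 0.4, n)     # (ug/kg-day)^-1, cancer potency
bw, ef = 60.0, 1.0                              # kg body weight; exposure fraction

# Lifetime average daily dose and incremental cancer risk.
ladd = conc * inh_rate * ef / bw                # ug/kg-day
risk = ladd * cpf
print(f"median risk {np.median(risk):.1e}; "
      f"P95 {np.percentile(risk, 95):.1e}; "
      f"P(risk > 1e-6) = {np.mean(risk > 1e-6):.2f}")
```

Decomposing the variance of `risk` across the sampled inputs is what identifies inhalation rates, CPFs and indoor concentrations as the dominant contributors in the paper's sensitivity analysis.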
Mimicking natural systems: Changes in behavior as a result of dynamic exposure to naproxen.
Neal, Alexandra E; Moore, Paul A
2017-01-01
Animals living in aquatic habitats regularly encounter anthropogenic chemical pollution. Typically, the toxicity of a chemical toxicant is determined by the median lethal concentration (LC₅₀) through a static exposure test. However, LC₅₀ values and static tests do not provide an accurate representation of exposure to pollutants within natural stream systems. In their native habitats, animals experience exposure as a fluctuating concentration due to turbulent mixing, temporal variations of contamination (seasonal inputs), and contaminant input type (point vs. non-point). Research has shown that turbulent environments produce exposures with a high degree of fluctuation in frequency, duration, and intensity. In order to more effectively evaluate the effects of pollutants, we created a dynamic exposure paradigm, utilizing both flow and substrate within a small mesocosm. A commonly used pharmaceutical, naproxen, was used as the toxicant and female crayfish (Orconectes virilis) as the target organism to investigate changes in fighting behavior as a result of dynamic exposure. Crayfish underwent either a 23-h static or dynamic exposure to naproxen. Following exposure, the target crayfish and an unexposed, size-matched opponent underwent a 15-min fight trial. These fight trials were recorded and later analyzed using a standard ethogram. Results indicate that exposure to sublethal concentrations of naproxen, in both static and flowing conditions, negatively impacts aggressive behavior. Results also indicate that a dynamic exposure paradigm has a greater negative impact on behavior than a static exposure. Turbulence and habitat structure play important roles in shaping chemical exposure. Future research should incorporate features of dynamic chemical exposure in order to form a more comprehensive image of chemical exposure and predict the resulting sublethal effects. Possible techniques for assessment include utilizing flow-through experimental set-ups in tandem with behavioral or physiological endpoints as opposed to acute toxicity. Other possibilities include utilizing fine-scale chemical measurements of pollutants to determine the actual concentrations animals encounter during an exposure event. Copyright © 2016 Elsevier Inc. All rights reserved.
Uncertainty in predictions of oil spill trajectories in a coastal zone
NASA Astrophysics Data System (ADS)
Sebastião, P.; Guedes Soares, C.
2006-12-01
A method is introduced to determine the uncertainties in the predictions of oil spill trajectories using a classic oil spill model. The method considers the output of the oil spill model as a function of random variables, which are the input parameters, and calculates the standard deviation of the output results which provides a measure of the uncertainty of the model as a result of the uncertainties of the input parameters. In addition to a single trajectory that is calculated by the oil spill model using the mean values of the parameters, a band of trajectories can be defined when various simulations are done taking into account the uncertainties of the input parameters. This band of trajectories defines envelopes of the trajectories that are likely to be followed by the spill given the uncertainties of the input. The method was applied to an oil spill that occurred in 1989 near Sines in the southwestern coast of Portugal. This model represented well the distinction between a wind driven part that remained offshore, and a tide driven part that went ashore. For both parts, the method defined two trajectory envelopes, one calculated exclusively with the wind fields, and the other using wind and tidal currents. In both cases reasonable approximation to the observed results was obtained. The envelope of likely trajectories that is obtained with the uncertainty modelling proved to give a better interpretation of the trajectories that were simulated by the oil spill model.
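Treating the model inputs as random variables and re-running the trajectory model gives the envelope directly. The Python fragment below sketches this for a wind-only drift model with a hypothetical ~3% wind-drift factor; the real study also included tidal currents.

```python
import numpy as np

rng = np.random.default_rng(7)

def drift_trajectory(wind_u, wind_v, wind_factor, hours=48, dt=1.0):
    """Surface-slick drift as a fraction of wind speed (tidal currents
    omitted in this sketch); positions in km."""
    x = np.zeros(hours + 1); y = np.zeros(hours + 1)
    for k in range(hours):
        x[k + 1] = x[k] + 3.6 * wind_factor * wind_u * dt
        y[k + 1] = y[k] + 3.6 * wind_factor * wind_v * dt
    return x, y

# Treat uncertain inputs as random variables and build a trajectory band.
end_points = []
for _ in range(500):
    wf = rng.normal(0.03, 0.005)            # classic ~3% wind-drift factor
    u = rng.normal(5.0, 1.0)                # wind components, m/s
    v = rng.normal(-2.0, 1.0)
    x, y = drift_trajectory(u, v, wf)
    end_points.append((x[-1], y[-1]))

end = np.array(end_points)
print("48-h position spread (km):",
      f"x {end[:, 0].mean():.0f} +/- {end[:, 0].std():.0f},",
      f"y {end[:, 1].mean():.0f} +/- {end[:, 1].std():.0f}")
```

The standard deviation of the simulated endpoints is the uncertainty measure, and the cloud of trajectories traces the envelope the spill is likely to stay within.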
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2012-01-01
The development of benchmark examples for quasi-static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for Abaqus/Standard. The example is based on a finite element model of a Double-Cantilever Beam specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, a quasi-static benchmark example was created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall the results are encouraging, but further assessment for mixed-mode delamination is required.
Development of Benchmark Examples for Static Delamination Propagation and Fatigue Growth Predictions
NASA Technical Reports Server (NTRS)
Kruger, Ronald
2011-01-01
The development of benchmark examples for static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of an End-Notched Flexure (ENF) specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, static benchmark examples were created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during stable delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall, the results are encouraging but further assessment for mixed-mode delamination is required.
Modeling and Analysis of CNC Milling Process Parameters on Al3030 based Composite
NASA Astrophysics Data System (ADS)
Gupta, Anand; Soni, P. K.; Krishna, C. M.
2018-04-01
The machining of Al3030-based composites on Computer Numerical Control (CNC) high-speed milling machines has assumed importance because of their wide application in the aerospace, marine and automotive industries. Industries mainly focus on surface irregularities, material removal rate (MRR) and tool wear rate (TWR), which usually depend on the input process parameters, namely cutting speed, feed in mm/min, depth of cut and step-over ratio. Many researchers have carried out work in this area, but very few have also taken step-over ratio (radial depth of cut) as one of the input variables. In this research work, the machining characteristics of Al3030 are studied on a high-speed CNC milling machine over the speed range of 3000 to 5000 rpm. Step-over ratio, depth of cut and feed rate are the other input variables taken into consideration. A total of nine experiments are conducted according to a Taguchi L9 orthogonal array. The machining is carried out on a high-speed CNC milling machine using a flat end mill of diameter 10 mm. Flatness, MRR and TWR are taken as output parameters. Flatness has been measured using a portable Coordinate Measuring Machine (CMM). Linear regression models have been developed using Minitab 18 software and the results are validated by conducting a selected additional set of experiments. Selection of input process parameters in order to obtain the best machining outputs is the key contribution of this research work.
Dosimetry of a set-up for the exposure of newborn mice to 2.45-GHZ WiFi frequencies.
Pinto, R; Lopresto, V; Galloni, P; Marino, C; Mancini, S; Lodato, R; Pioli, C; Lovisolo, G A
2010-08-01
This work describes the dosimetry of a two-waveguide cell system designed to expose newborn mice to electromagnetic fields associated with wireless fidelity signals in the frequency band of 2.45 GHz. The dosimetric characterisation of the exposure system was performed both numerically and experimentally. Specific measures were adopted with regard to the increase in both weight and size of the biological target during the exposure period. The specific absorption rate (SAR, W kg⁻¹) for 1 W of input power vs. weight curve was assessed. The curve evidenced an SAR pattern varying from <1 W kg⁻¹ to >6 W kg⁻¹ during the first 5 weeks of the life of the mice, with a peak resonance phenomenon at a weight of around 5 g. This curve was used to set the appropriate level of input power during experimental sessions to expose the growing mice to a defined and constant dose.
Ma, Tengfei; Chen, Ran; Dunlap, Susan; Chen, Baoguo
2016-01-01
This paper presents the results of an experiment that investigated the effects of the number and presentation order of high-constraint sentences on semantic processing of unknown second language (L2) words (pseudowords) through reading. All participants were Chinese native speakers who learned English as a foreign language. In the experiment, sentence constraint and the order of sentences of different constraint were manipulated in English sentences, as well as the L2 proficiency level of the participants. We found that the number of high-constraint sentences supported L2 word learning except in the condition in which the high-constraint exposure was presented first. Moreover, when the number of high-constraint sentences was the same, learning was significantly better when the first exposure was a high-constraint exposure. No proficiency-level effects were found. Our results provide direct evidence that L2 word learning benefits from high-quality language input and from first presentations of high-quality language input.
Huang, Yize; Jivraj, Jamil; Zhou, Jiaqi; Ramjist, Joel; Wong, Ronnie; Gu, Xijia; Yang, Victor X D
2016-07-25
A surgical laser soft-tissue ablation system based on an adjustable 1942 nm single-mode all-fiber Tm-doped fiber laser operating in pulsed or CW mode with nitrogen assistance is demonstrated. Ex vivo ablation of soft tissue targets such as muscle (chicken breast) and spinal cord (porcine) with intact dura is performed at different ablation conditions to examine the relationship between the system parameters and ablation outcomes. The maximum laser average power is 14.4 W, and its maximum peak power is 133.1 W with 21.3 μJ pulse energy. The maximum CW power density is 2.33 × 10⁶ W/cm² and the maximum pulsed peak power density is 2.16 × 10⁷ W/cm². The system parameters examined include the average laser power in CW or pulsed operation mode, gain-switching frequency, total ablation exposure time, and the input gas flow rate. The ablation effects were measured by microscopy and optical coherence tomography (OCT) to evaluate the ablation depth, superficial heat-affected zone diameter (HAZD) and charring diameter (CD). Our results show that the system parameters can be tailored to meet different clinical requirements such as ablation for soft-tissue cutting or thermal coagulation for future applications in hemostasis.
Removing Visual Bias in Filament Identification: A New Goodness-of-fit Measure
NASA Astrophysics Data System (ADS)
Green, C.-E.; Cunningham, M. R.; Dawson, J. R.; Jones, P. A.; Novak, G.; Fissel, L. M.
2017-05-01
Different combinations of input parameters to filament identification algorithms, such as DisPerSE and FilFinder, produce numerous different output skeletons. The skeletons are a one-pixel-wide representation of the filamentary structure in the original input image. However, these output skeletons may not necessarily be a good representation of that structure. Furthermore, a given skeleton may not be as good a representation as another. Previously, there has been no mathematical “goodness-of-fit” measure to compare output skeletons to the input image. Thus far this has been assessed visually, introducing visual bias. We propose the application of the mean structural similarity index (MSSIM) as a mathematical goodness-of-fit measure. We describe the use of the MSSIM to find the output skeletons that are the most mathematically similar to the original input image (the optimum, or “best,” skeletons) for a given algorithm, and independently of the algorithm. This measure makes possible systematic parameter studies, aimed at finding the subset of input parameter values returning optimum skeletons. It can also be applied to the output of non-skeleton-based filament identification algorithms, such as the Hessian matrix method. The MSSIM removes the need to visually examine thousands of output skeletons, and eliminates the visual bias, subjectivity, and limited reproducibility inherent in that process, representing a major improvement upon existing techniques. Importantly, it also allows further automation in the post-processing of output skeletons, which is crucial in this era of “big data.”
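Using the MSSIM this way requires nothing beyond an off-the-shelf SSIM implementation, e.g. scikit-image's structural_similarity. In the Python sketch below the "image" and the two candidate skeletons are synthetic; the better-aligned skeleton should receive the higher score.

```python
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(8)

# Toy "input image": a diagonal filament with noise.
img = np.zeros((64, 64))
for i in range(64):
    img[i, max(0, i - 1):i + 2] = 1.0
img += 0.1 * rng.standard_normal(img.shape)

# Two candidate one-pixel-wide skeletons from different parameter sets.
good = np.zeros_like(img)
good[np.arange(64), np.arange(64)] = 1.0                      # on the filament
bad = np.zeros_like(img)
bad[np.arange(64), np.clip(np.arange(64) + 5, 0, 63)] = 1.0   # offset by 5 px

for name, skel in [("good", good), ("bad", bad)]:
    score = structural_similarity(img, skel,
                                  data_range=img.max() - img.min())
    print(f"{name} skeleton MSSIM = {score:.3f}")
```

Ranking thousands of output skeletons by this score, rather than by eye, is exactly the automation the paper argues for.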
NASA Technical Reports Server (NTRS)
Long, S. A. T.
1975-01-01
The effects of various experimental parameters on the displacement errors in the triangulation solution of an elongated object in space due to pointing uncertainties in the lines of sight have been determined. These parameters were the number and location of observation stations, the object's location in latitude and longitude, and the spacing of the input data points on the azimuth-elevation image traces. The displacement errors due to uncertainties in the coordinates of a moving station have been determined as functions of the number and location of the stations. The effects of incorporating the input data from additional cameras at one of the stations were also investigated.
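The effect of pointing uncertainties can be reproduced with a small Monte Carlo over two-station triangulation, here implemented as the midpoint of the common perpendicular between the two perturbed lines of sight. The geometry and the 0.05-deg pointing sigma in this Python sketch are assumed values.

```python
import numpy as np

rng = np.random.default_rng(9)

def los(az, el):
    """Unit line-of-sight vector from azimuth/elevation (radians)."""
    return np.array([np.cos(el) * np.sin(az),
                     np.cos(el) * np.cos(az),
                     np.sin(el)])

def triangulate(p1, d1, p2, d2):
    """Midpoint of the common perpendicular of two skew lines."""
    w = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    s = (b * e - c * d) / (a * c - b * b)
    t = (a * e - b * d) / (a * c - b * b)
    return 0.5 * (p1 + s * d1 + p2 + t * d2)

# Two ground stations 100 km apart sighting an object at 50 km altitude.
p1, p2 = np.array([0., 0., 0.]), np.array([100e3, 0., 0.])
target = np.array([40e3, 60e3, 50e3])

def angles(p):
    v = target - p
    return np.arctan2(v[0], v[1]), np.arctan2(v[2], np.hypot(v[0], v[1]))

sigma = np.deg2rad(0.05)          # assumed pointing uncertainty per axis
errs = []
for _ in range(2000):
    (az1, el1), (az2, el2) = angles(p1), angles(p2)
    d1 = los(az1 + sigma * rng.standard_normal(),
             el1 + sigma * rng.standard_normal())
    d2 = los(az2 + sigma * rng.standard_normal(),
             el2 + sigma * rng.standard_normal())
    errs.append(np.linalg.norm(triangulate(p1, d1, p2, d2) - target))
print(f"RMS displacement error: {np.sqrt(np.mean(np.square(errs))):.0f} m")
```

Re-running the loop for different station counts and placements is the numerical experiment behind the parameter study described in the abstract.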
Robust input design for nonlinear dynamic modeling of AUV.
Nouri, Nowrouz Mohammad; Valadi, Mehrdad
2017-09-01
Input design has a dominant role in developing the dynamic model of autonomous underwater vehicles (AUVs) through system identification. Optimal input design is the process of generating informative inputs that can be used to generate a good-quality dynamic model of AUVs. In an optimal input design problem, the desired input signal depends on the unknown system that is to be identified. In this paper, an input design approach which is robust to uncertainties in the model parameters is used. The Bayesian robust design strategy is applied to design input signals for dynamic modeling of AUVs. The employed approach can design multiple inputs and apply constraints on an AUV system's inputs and outputs. Particle swarm optimization (PSO) is employed to solve the constrained robust optimization problem. The presented algorithm is used for designing the input signals for an AUV, and the estimate obtained by robust input design is compared with that of the optimal input design. According to the results, the proposed input design can satisfy both robustness of constraints and optimality. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Extrapolation of sonic boom pressure signatures by the waveform parameter method
NASA Technical Reports Server (NTRS)
Thomas, C. L.
1972-01-01
The waveform parameter method of sonic boom extrapolation is derived and shown to be equivalent to the F-function method. A computer program based on the waveform parameter method is presented and discussed, with a sample case demonstrating program input and output.
Poças, Maria F; Oliveira, Jorge C; Brandsch, Rainer; Hogg, Timothy
2010-07-01
The use of probabilistic approaches in exposure assessments of contaminants migrating from food packages is of increasing interest but the lack of concentration or migration data is often referred to as a limitation. Data accounting for the variability and uncertainty that can be expected in migration, for example, due to heterogeneity in the packaging system, variation of the temperature along the distribution chain, and different time of consumption of each individual package, are required for probabilistic analysis. The objective of this work was to characterize quantitatively the uncertainty and variability in estimates of migration. A Monte Carlo simulation was applied to a typical solution of Fick's law with given variability in the input parameters. The analysis was performed based on experimental data of a model system (migration of Irgafos 168 from polyethylene into isooctane) and illustrates how important sources of variability and uncertainty can be identified in order to refine analyses. For long migration times and controlled conditions of temperature the affinity of the migrant to the food can be the major factor determining the variability in the migration values (more than 70% of variance). In situations where both the time of consumption and temperature can vary, these factors can be responsible, respectively, for more than 60% and 20% of the variance in the migration estimates. The approach presented can be used with databases from consumption surveys to yield a true probabilistic estimate of exposure.
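The structure of such a simulation can be sketched with the short-time solution of Fick's law, M_t/A ≈ 2·c_p,0·sqrt(Dt/π), capped by the partition-limited equilibrium from a simple mass balance. All distributions and constants in this Python fragment are illustrative, not the Irgafos 168 data.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 50_000

# Illustrative inputs for an additive migrating from a polyolefin film.
C_p0 = rng.normal(500.0, 50.0, n)               # initial conc. in polymer, mg/kg
rho_p, L, d_f = 0.92, 50e-4, 1.0                # g/cm3; film and food "thickness", cm
T = rng.uniform(278.0, 308.0, n)                # storage temperature, K
t = rng.triangular(30, 120, 360, n) * 86400.0   # time of consumption, s
D = 1e-11 * np.exp(-80e3 / 8.314 * (1 / T - 1 / 298.0))  # cm2/s, Arrhenius-like
K = rng.lognormal(np.log(50.0), 0.8, n)         # polymer/food partition coeff.

C = C_p0 * rho_p * 1e-3                         # polymer conc., mg/cm3
m_short = 2 * C * np.sqrt(D * t / np.pi)        # short-time Fickian migration, mg/cm2
m_eq = C * L / (1 + K * L / d_f)                # partition-limited equilibrium
m = np.minimum(m_short, m_eq)                   # crude cap of the approximation
print(f"median {1e3 * np.median(m):.2f} ug/cm2, "
      f"P95 {1e3 * np.percentile(m, 95):.2f} ug/cm2")
```

Consistent with the paper's finding, for long times most draws sit at the partition-limited cap, so the variance of the estimate is dominated by the polymer/food affinity rather than by the diffusion kinetics.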
Prediction and assimilation of surf-zone processes using a Bayesian network: Part II: Inverse models
Plant, Nathaniel G.; Holland, K. Todd
2011-01-01
A Bayesian network model has been developed to simulate a relatively simple problem of wave propagation in the surf zone (detailed in Part I). Here, we demonstrate that this Bayesian model can provide both inverse modeling and data-assimilation solutions for predicting offshore wave heights and depth estimates given limited wave-height and depth information from an onshore location. The inverse method is extended to allow data assimilation using observational inputs that are not compatible with deterministic solutions of the problem. These inputs include sand bar positions (instead of bathymetry) and estimates of the intensity of wave breaking (instead of wave-height observations). Our results indicate that wave breaking information is essential to reduce prediction errors. In many practical situations, this information could be provided from a shore-based observer or from remote-sensing systems. We show that various combinations of the assimilated inputs significantly reduce the uncertainty in the estimates of water depths and wave heights in the model domain. Application of the Bayesian network model to new field data demonstrated significant predictive skill (R² = 0.7) for the inverse estimate of a month-long time series of offshore wave heights. The Bayesian inverse results include uncertainty estimates that were shown to be most accurate when given uncertainty in the inputs (e.g., depth and tuning parameters). Furthermore, the inverse modeling was extended to directly estimate tuning parameters associated with the underlying wave-process model. The inverse estimates of the model parameters not only showed an offshore wave height dependence consistent with results of previous studies but the uncertainty estimates of the tuning parameters also explain previously reported variations in the model parameters.
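The inverse step can be illustrated with a one-variable discrete Bayesian inversion: place a prior on offshore wave height, push candidates through a toy surf-zone transformation, and weight by the likelihood of the onshore observation. The transformation, noise level and grids below are assumptions, far simpler than the Bayesian network of the paper.

```python
import numpy as np

# Discrete Bayesian inversion: infer offshore wave height H_off from an
# onshore observation, given a forward conditional p(H_on | H_off).
H_off = np.linspace(0.5, 4.0, 36)            # candidate offshore heights, m
prior = np.ones_like(H_off) / len(H_off)     # uniform prior

def forward(h_off):
    """Toy surf-zone transformation: breaking caps the onshore height."""
    return np.minimum(0.8 * h_off, 1.2)

def likelihood(h_on_obs, sigma=0.15):
    return np.exp(-0.5 * ((h_on_obs - forward(H_off)) / sigma) ** 2)

post = prior * likelihood(0.6)               # observe 0.6 m onshore
post /= post.sum()
mean = (H_off * post).sum()
sd = np.sqrt(((H_off - mean) ** 2 * post).sum())
print(f"offshore height ~ {mean:.2f} +/- {sd:.2f} m")
```

Re-running with an observation at the breaking cap (e.g. 1.2 m) spreads the posterior over all heights above the cap, a one-line analogue of the paper's finding that wave-breaking information is essential for reducing prediction errors.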
Inferring Nonlinear Neuronal Computation Based on Physiologically Plausible Inputs
McFarland, James M.; Cui, Yuwei; Butts, Daniel A.
2013-01-01
The computation represented by a sensory neuron's response to stimuli is constructed from an array of physiological processes both belonging to that neuron and inherited from its inputs. Although many of these physiological processes are known to be nonlinear, linear approximations are commonly used to describe the stimulus selectivity of sensory neurons (i.e., linear receptive fields). Here we present an approach for modeling sensory processing, termed the Nonlinear Input Model (NIM), which is based on the hypothesis that the dominant nonlinearities imposed by physiological mechanisms arise from rectification of a neuron's inputs. Incorporating such ‘upstream nonlinearities’ within the standard linear-nonlinear (LN) cascade modeling structure implicitly allows for the identification of multiple stimulus features driving a neuron's response, which become directly interpretable as either excitatory or inhibitory. Because its form is analogous to an integrate-and-fire neuron receiving excitatory and inhibitory inputs, model fitting can be guided by prior knowledge about the inputs to a given neuron, and elements of the resulting model can often result in specific physiological predictions. Furthermore, by providing an explicit probabilistic model with a relatively simple nonlinear structure, its parameters can be efficiently optimized and appropriately regularized. Parameter estimation is robust and efficient even with large numbers of model components and in the context of high-dimensional stimuli with complex statistical structure (e.g. natural stimuli). We describe detailed methods for estimating the model parameters, and illustrate the advantages of the NIM using a range of example sensory neurons in the visual and auditory systems. We thus present a modeling framework that can capture a broad range of nonlinear response functions while providing physiologically interpretable descriptions of neural computation. PMID:23874185
Marvuglia, Antonino; Kanevski, Mikhail; Benetto, Enrico
2015-10-01
Toxicity characterization of chemical emissions in Life Cycle Assessment (LCA) is a complex task which usually proceeds via multimedia (fate, exposure and effect) models attached to models of dose-response relationships to assess the effects on target. Different models and approaches do exist, but all require a vast amount of data on the properties of the chemical compounds being assessed, which are hard to collect or hardly publicly available (especially for thousands of less common or newly developed chemicals), in practice hampering the assessment in LCA. An example is USEtox, a consensual model for the characterization of human toxicity and freshwater ecotoxicity. This paper places itself in a line of research aiming at providing a methodology to reduce the number of input parameters necessary to run multimedia fate models, focusing in particular on the application of the USEtox toxicity model. By focusing on USEtox, in this paper two main goals are pursued: 1) performing an extensive exploratory analysis (using dimensionality reduction techniques) of the input space constituted by the substance-specific properties, with the aim of detecting particular patterns in the data manifold and estimating the dimension of the subspace in which the data manifold actually lies; and 2) exploring the application of a set of linear models, based on partial least squares (PLS) regression, as well as a nonlinear model (general regression neural network--GRNN), in search of an automatic selection strategy for the most informative variables according to the modelled output (USEtox factor). After extensive analysis, the intrinsic dimension of the input manifold has been identified as between three and four. The variables selected as most informative may vary according to the output modelled and the model used, but for the toxicity factors modelled in this paper the input variables selected as most informative are coherent with prior expectations based on scientific knowledge of toxicity factor modelling. Thus the outcomes of the analysis are promising for the future application of the approach to other portions of the model affected by important data gaps, e.g., the calculation of human health effect factors. Copyright © 2015. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Sánchez-Martín, L.; Bermejo-Bermejo, V.; García-Torres, L.; Alonso, R.; de la Cruz, A.; Calvete-Sogo, H.; Vallejo, A.
2017-09-01
Increasing tropospheric ozone (O3) and atmospheric nitrogen (N) deposition alter the structure and composition of pastures. These changes could affect N and C compounds in the soil, which in turn can influence soil microbial activity and the processes involved in the emission of N oxides, methane (CH4) and carbon dioxide (CO2), but these effects have been scarcely studied. Through an open top chamber (OTC) field experiment, the combined effects of both pollutants on soil gas emissions from an annual experimental Mediterranean community were assessed. Four O3 treatments and three different N input levels were considered. Fluxes of nitric (NO) and nitrous (N2O) oxide, CH4 and CO2 were analysed, as well as soil mineral N and dissolved organic carbon. Belowground plant parameters such as root biomass and root C and N content were also sampled. Ozone strongly increased soil N2O emissions, doubling the cumulative emission over the growing cycle in the highest O3 treatment, while N inputs enhanced NO more modestly; CH4 and CO2 were not affected. Both N-gases had a clear seasonality, peaking at the start and at the end of the season when pasture physiological activity is minimal; thus, higher microorganism activity occurred when the pasture had a low nutrient demand. The O3-induced peak of N2O under low N availability at the end of the growing season was counterbalanced by the high N inputs. These effects were related to the significant O3 × N interaction found for the root-N content in the grass and the enhanced senescence of the community. Results indicate the importance of belowground processes, where competition between plants and microorganisms for the available soil N is a key factor, for understanding ecosystem responses to O3 and N.
Comparison of screening-level and Monte Carlo approaches for wildlife food web exposure modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pastorok, R.; Butcher, M.; LaTier, A.
1995-12-31
The implications of using quantitative uncertainty analysis (e.g., Monte Carlo) and site-specific tissue residue data for wildlife exposure modeling were examined with data on trace elements at the Clark Fork River Superfund Site. Exposure of white-tailed deer, red fox, and American kestrel was evaluated using three approaches. First, a screening-level exposure model was based on conservative estimates of exposure parameters, including estimates of dietary residues derived from bioconcentration factors (BCFs) and soil chemistry. A second model without Monte Carlo was based on site-specific data for tissue residues of trace elements (As, Cd, Cu, Pb, Zn) in key dietary species and plausible assumptions for habitat spatial segmentation and other exposure parameters. Dietary species sampled included dominant grasses (tufted hairgrass and redtop), willows, alfalfa, barley, invertebrates (grasshoppers, spiders, and beetles), and deer mice. Third, the Monte Carlo analysis was based on the site-specific residue data and assumed or estimated distributions for exposure parameters. Substantial uncertainties are associated with several exposure parameters, especially BCFs, such that exposure and risk may be greatly overestimated in screening-level approaches. The results of the three approaches are compared with respect to realism, practicality, and data gaps. Collection of site-specific data on trace element concentrations in plants and animals eaten by the target wildlife receptors is a cost-effective way to obtain realistic estimates of exposure. Implications of the results for exposure and risk estimates are discussed relative to use of wildlife exposure modeling and evaluation of remedial actions at Superfund sites.
Mitchell, Donald E
2008-01-01
To review work on animal models of deprivation amblyopia that points to a special role for binocular visual input in the development of spatial vision and as a component of occlusion (patching) therapy for amblyopia. The studies reviewed employ behavioural methods to measure the effects of various early experiential manipulations on the development of the visual acuity of the two eyes. Short periods of concordant binocular input, if continuous, can offset much longer daily periods of monocular deprivation to allow the development of normal visual acuity in both eyes. It appears that the visual system does not weigh all visual input equally in terms of its ability to impact on the development of vision but instead places greater weight on concordant binocular exposure. Experimental models of patching therapy for amblyopia imposed on animals in which amblyopia had been induced by a prior period of early monocular deprivation, indicate that the benefits of patching therapy may be only temporary and decline rapidly after patching is discontinued. However, when combined with critical amounts of binocular visual input each day, the benefits of patching can be both heightened and made permanent. Taken together with demonstrations of retained binocular connections in the visual cortex of monocularly deprived animals, a strong argument is made for inclusion of specific training of stereoscopic vision for part of the daily periods of binocular exposure that should be incorporated as part of any patching protocol for amblyopia.
DOT National Transportation Integrated Search
2012-01-01
Overview of presentation: evaluation parameters; EPA's sensitivity analysis; comparison to baseline case; MOVES sensitivity run specification; MOVES sensitivity input parameters; results; uses of study.
Multiple Input Design for Real-Time Parameter Estimation in the Frequency Domain
NASA Technical Reports Server (NTRS)
Morelli, Eugene
2003-01-01
A method for designing multiple inputs for real-time dynamic system identification in the frequency domain was developed and demonstrated. The designed inputs are mutually orthogonal in both the time and frequency domains, with reduced peak factors to provide good information content for relatively small amplitude excursions. The inputs are designed for selected frequency ranges, and therefore do not require a priori models. The experiment design approach was applied to identify linear dynamic models for the F-15 ACTIVE aircraft, which has multiple control effectors.
ERIC Educational Resources Information Center
Ruiz-Felter, Roxanna; Cooperson, Solaman J.; Bedore, Lisa M.; Peña, Elizabeth D.
2016-01-01
Background: Although some investigations of phonological development have found that segmental accuracy is comparable in monolingual children and their bilingual peers, there is evidence that language use affects segmental accuracy in both languages. Aims: To investigate the influence of age of first exposure to English and the amount of current…
Dilda, Valentina; Morris, Tiffany R; Yungher, Don A; MacDougall, Hamish G; Moore, Steven T
2014-01-01
Healthy subjects (N = 10) were exposed to 10-min cumulative pseudorandom bilateral bipolar Galvanic vestibular stimulation (GVS) on a weekly basis for 12 weeks (120 min total exposure). During each trial subjects performed computerized dynamic posturography and eye movements were measured using digital video-oculography. Follow up tests were conducted 6 weeks and 6 months after the 12-week adaptation period. Postural performance was significantly impaired during GVS at first exposure, but recovered to baseline over a period of 7-8 weeks (70-80 min GVS exposure). This postural recovery was maintained 6 months after adaptation. In contrast, the roll vestibulo-ocular reflex response to GVS was not attenuated by repeated exposure. This suggests that GVS adaptation did not occur at the vestibular end-organs or involve changes in low-level (brainstem-mediated) vestibulo-ocular or vestibulo-spinal reflexes. Faced with unreliable vestibular input, the cerebellum reweighted sensory input to emphasize veridical extra-vestibular information, such as somatosensation, vision and visceral stretch receptors, to regain postural function. After a period of recovery subjects exhibited dual adaption and the ability to rapidly switch between the perturbed (GVS) and natural vestibular state for up to 6 months.
NASA Technical Reports Server (NTRS)
Harm, D. L.; Taylor, L. C.; Bloomberg, J. J.
2007-01-01
Virtual environments offer unique training opportunities, particularly for training astronauts and preadapting them to the novel sensory conditions of microgravity. Two unresolved human factors issues in virtual reality (VR) systems are: 1) potential "cybersickness", and 2) maladaptive sensorimotor performance following exposure to VR systems. Interestingly, these aftereffects are often quite similar to adaptive sensorimotor responses observed in astronauts during and/or following space flight. Initial interpretation of novel sensory information may be inappropriate and result in perceptual errors. Active exploratory behavior in a new environment, with resulting feedback and the formation of new associations between sensory inputs and response outputs, promotes appropriate perception and motor control in the new environment. Thus, people adapt to consistent, sustained alterations of sensory input such as those produced by microgravity, unilateral labyrinthectomy and experimentally produced stimulus rearrangements. The purpose of this research was to compare disturbances in sensorimotor coordination produced by dome and head-mounted virtual environment displays and to examine the effects of exposure duration, and repeated exposures to VR systems. The first study examined disturbances in balance control, and the second study examined disturbances in eye-head-hand (EHH) and eye-head coordination.
New generation of hydraulic pedotransfer functions for Europe
Tóth, B; Weynants, M; Nemes, A; Makó, A; Bilas, G; Tóth, G
2015-01-01
A range of continental-scale soil datasets exists in Europe with different spatial representation and based on different principles. We developed comprehensive pedotransfer functions (PTFs) for applications principally on spatial datasets with continental coverage. The PTF development included the prediction of soil water retention at various matric potentials and prediction of parameters to characterize soil moisture retention and the hydraulic conductivity curve (MRC and HCC) of European soils. We developed PTFs with a hierarchical approach, determined by the input requirements. The PTFs were derived by using three statistical methods: (i) linear regression where there were quantitative input variables, (ii) a regression tree for qualitative, quantitative and mixed types of information and (iii) mean statistics of developer-defined soil groups (class PTF) when only qualitative input parameters were available. Data of the recently established European Hydropedological Data Inventory (EU-HYDI), which holds the most comprehensive geographical and thematic coverage of hydro-pedological data in Europe, were used to train and test the PTFs. The applied modelling techniques and the EU-HYDI allowed the development of hydraulic PTFs that are more reliable and applicable for a greater variety of input parameters than those previously available for Europe. Therefore the new set of PTFs offers tailored advanced tools for a wide range of applications in the continent. PMID:25866465
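A compact sketch of the hierarchical idea, using synthetic data and hypothetical predictors (sand, clay and a coarse texture class stand in for the EU-HYDI variables):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 200
sand = rng.uniform(5, 90, n)               # quantitative predictor (%)
clay = rng.uniform(5, 60, n)               # quantitative predictor (%)
texture_class = (clay > 35).astype(int)    # qualitative predictor (toy classes)
theta_fc = 0.45 - 0.002 * sand + 0.001 * clay + rng.normal(0, 0.01, n)  # water retention (toy)

# (i) linear regression when quantitative inputs are available
lin = LinearRegression().fit(np.c_[sand, clay], theta_fc)
# (ii) regression tree for mixed qualitative/quantitative inputs
tree = DecisionTreeRegressor(max_depth=3).fit(np.c_[sand, clay, texture_class], theta_fc)
# (iii) class PTF: group means when only qualitative inputs are available
class_ptf = {c: theta_fc[texture_class == c].mean() for c in (0, 1)}

print(lin.predict([[40.0, 20.0]]), class_ptf[0])
```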
Combining control input with flight path data to evaluate pilot performance in transport aircraft.
Ebbatson, Matt; Harris, Don; Huddlestone, John; Sears, Rodney
2008-11-01
When deriving an objective assessment of piloting performance from flight data records, it is common to employ metrics which purely evaluate errors in flight path parameters. The adequacy of pilot performance is evaluated from the flight path of the aircraft. However, in large jet transport aircraft these measures may be insensitive and require supplementing with frequency-based measures of control input parameters. Flight path and control input data were collected from pilots undertaking a jet transport aircraft conversion course during a series of symmetric and asymmetric approaches in a flight simulator. The flight path data were analyzed for deviations around the optimum flight path while flying an instrument landing approach. Manipulation of the flight controls was subject to analysis using a series of power spectral density measures. The flight path metrics showed no significant differences in performance between the symmetric and asymmetric approaches. However, control input frequency domain measures revealed that the pilots employed highly different control strategies in the pitch and yaw axes. The results demonstrate that to evaluate pilot performance fully in large aircraft, it is necessary to employ performance metrics targeted at both the outer control loop (flight path) and the inner control loop (flight control) parameters in parallel, evaluating both the product and process of a pilot's performance.
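A hedged sketch of the two families of metric, using synthetic signals and an assumed 10 Hz sampling rate; Welch's method stands in for the power spectral density measures described:

```python
import numpy as np
from scipy.signal import welch

fs = 10.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 300, 1 / fs)
rng = np.random.default_rng(0)
# synthetic control-column signal: low-frequency corrections plus remnant
stick = 0.5 * np.sin(2 * np.pi * 0.2 * t) + 0.1 * rng.standard_normal(t.size)
glideslope_dev = 0.3 * rng.standard_normal(t.size)   # outer-loop error proxy

rms_error = np.sqrt(np.mean(glideslope_dev ** 2))    # outer loop: flight-path deviation
f, pxx = welch(stick, fs=fs, nperseg=512)            # inner loop: control activity spectrum
mean_freq = np.sum(f * pxx) / np.sum(pxx)            # spectral centroid of control workload

print(f"path RMS = {rms_error:.2f}, control mean frequency = {mean_freq:.2f} Hz")
```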
Assessment of input uncertainty by seasonally categorized latent variables using SWAT
USDA-ARS?s Scientific Manuscript database
Watershed processes have been explored with sophisticated simulation models for the past few decades. It has been stated that uncertainty attributed to alternative sources such as model parameters, forcing inputs, and measured data should be incorporated during the simulation process. Among varyin...
Terrestrial Investigation Model, TIM, has several appendices to its user guide. This is the appendix that includes an example input file in its preserved format. Both parameters and comments defining them are included.
Organic and low input farming: Pros and cons for soil health
USDA-ARS?s Scientific Manuscript database
Organic and low input farming practices have both advantages and disadvantages in building soil health and maintaining productivity. Examining the effects of farming practices on soil health parameters can aid in developing whole system strategies that promote sustainability. Application of specific...
A reduced adaptive observer for multivariable systems. [using reduced dynamic ordering
NASA Technical Reports Server (NTRS)
Carroll, R. L.; Lindorff, D. P.
1973-01-01
An adaptive observer for multivariable systems is presented for which the dynamic order of the observer is reduced, subject to mild restrictions. The observer structure depends directly upon the multivariable structure of the system rather than a transformation to a single-output system. The number of adaptive gains is at most the sum of the order of the system and the number of input parameters being adapted. Moreover, for the relatively frequent specific cases for which the number of required adaptive gains is less than the sum of system order and input parameters, the number of these gains is easily determined by inspection of the system structure. This adaptive observer possesses all the properties ascribed to the single-input single-output adaptive observer. Like other adaptive observers, some restriction is required on the allowable system command input to guarantee convergence of the adaptive algorithm, but the restriction is more lenient than that required by the full-order multivariable observer. This reduced observer is not restricted to cyclic systems.
Characterization of uncertainty and sensitivity of model parameters is an essential and often overlooked facet of hydrological modeling. This paper introduces an algorithm called MOESHA that combines input parameter sensitivity analyses with a genetic algorithm calibration routin...
META II Complex Systems Design and Analysis (CODA)
2011-08-01
[Table-of-contents and figure-list fragments only; recoverable headings: 3.8.7 "Variables, Parameters and Constraints"; 3.8.8 "Objective..."; Figure 7 "Inputs, States, Outputs and Parameters of System Requirements Specifications"; Figure 35 "AEE Device Design Rules"; a design rule based on device parameters.]
Case studies in Bayesian microbial risk assessments.
Kennedy, Marc C; Clough, Helen E; Turner, Joanne
2009-12-21
The quantification of uncertainty and variability is a key component of quantitative risk analysis. Recent advances in Bayesian statistics make it ideal for integrating multiple sources of information, of different types and quality, and providing a realistic estimate of the combined uncertainty in the final risk estimates. We present two case studies related to foodborne microbial risks. In the first, we combine models to describe the sequence of events resulting in illness from consumption of milk contaminated with VTEC O157. We used Monte Carlo simulation to propagate uncertainty in some of the inputs to computer models describing the farm and pasteurisation process. Resulting simulated contamination levels were then assigned to consumption events from a dietary survey. Finally we accounted for uncertainty in the dose-response relationship and uncertainty due to limited incidence data to derive uncertainty about yearly incidences of illness in young children. Options for altering the risk were considered by running the model with different hypothetical policy-driven exposure scenarios. In the second case study we illustrate an efficient Bayesian sensitivity analysis for identifying the most important parameters of a complex computer code that simulated VTEC O157 prevalence within a managed dairy herd. This was carried out in 2 stages, first to screen out the unimportant inputs, then to perform a more detailed analysis on the remaining inputs. The method works by building a Bayesian statistical approximation to the computer code using a number of known code input/output pairs (training runs). We estimated that the expected total number of children aged 1.5-4.5 who become ill due to VTEC O157 in milk is 8.6 per year, with 95% uncertainty interval (0,11.5). The most extreme policy we considered was banning on-farm pasteurisation of milk, which reduced the estimate to 6.4 with 95% interval (0,11). In the second case study the effective number of inputs was reduced from 30 to 7 in the screening stage, and just 2 inputs were found to explain 82.8% of the output variance. A combined total of 500 runs of the computer code were used. These case studies illustrate the use of Bayesian statistics to perform detailed uncertainty and sensitivity analyses, integrating multiple information sources in a way that is both rigorous and efficient.
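A minimal sketch of the first case study's structure: Monte Carlo propagation through a toy contamination-dose-illness chain, followed by a crude rank-correlation screening of inputs (a much simpler device than the Bayesian emulator used in the second case study; all distributions are invented):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
N = 20_000
prev = rng.beta(2, 50, N)                      # uncertain on-farm prevalence
log_red = rng.normal(5, 1, N)                  # pasteurisation log10 reduction
serving = rng.lognormal(np.log(150), 0.3, N)   # consumed volume (mL)
r = rng.lognormal(np.log(1e-3), 0.5, N)        # dose-response parameter

conc = prev * 10.0 ** (2 - log_red)            # organisms/mL after treatment (toy farm model)
p_ill = 1 - np.exp(-r * conc * serving)        # exponential dose-response

# crude screening: rank correlation of each uncertain input with the output
for name, x in [("prevalence", prev), ("log reduction", log_red),
                ("serving", serving), ("dose-response r", r)]:
    print(f"{name:16s} rank corr = {spearmanr(x, p_ill).correlation:+.2f}")
```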
Program document for Energy Systems Optimization Program 2 (ESOP2). Volume 1: Engineering manual
NASA Technical Reports Server (NTRS)
Hamil, R. G.; Ferden, S. L.
1977-01-01
The Energy Systems Optimization Program, which is used to provide analyses of Modular Integrated Utility Systems (MIUS), is discussed. Modifications to the input format to allow modular inputs in specified blocks of data are described. An optimization feature which enables the program to search automatically for the minimum value of one parameter while varying the value of other parameters is reported. New program option flags for prime mover analyses and solar energy for space heating and domestic hot water are also covered.
Level density inputs in nuclear reaction codes and the role of the spin cutoff parameter
Voinov, A. V.; Grimes, S. M.; Brune, C. R.; ...
2014-09-03
Here, the proton spectrum from the 57Fe(α,p) reaction has been measured and analyzed with the Hauser-Feshbach model of nuclear reactions. Different input level density models have been tested. It was found that the best description is achieved with either Fermi-gas or constant temperature model functions obtained by fitting them to neutron resonance spacings and to discrete levels, and using a spin cutoff parameter with a much weaker excitation energy dependence than is predicted by the Fermi-gas model.
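For context, the standard textbook form in which the spin cutoff parameter enters a level density input is sketched below; this is the generic expression, not the specific parameterization fitted in the paper:

```latex
% Spin distribution of the level density, with spin cutoff parameter \sigma:
\rho(E_x, J) \;=\; \rho(E_x)\,
    \frac{2J+1}{2\sigma^{2}}\,
    \exp\!\left[-\frac{(J+1/2)^{2}}{2\sigma^{2}}\right]
% One common Fermi-gas estimate (rigid-body moment of inertia I_rigid,
% temperature T = \sqrt{U/a}, effective excitation energy U = E_x - \Delta):
\sigma^{2}(E_x) \;=\; \frac{I_{\mathrm{rigid}}}{\hbar^{2}}\,\sqrt{\frac{U}{a}}
```

The finding above is that a much weaker growth of σ² with excitation energy than this Fermi-gas estimate best reproduces the measured proton spectrum.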
Nonlinear ARMA models for the D(st) index and their physical interpretation
NASA Technical Reports Server (NTRS)
Vassiliadis, D.; Klimas, A. J.; Baker, D. N.
1996-01-01
Time series models successfully reproduce or predict geomagnetic activity indices from solar wind parameters. A method is presented that converts a type of nonlinear filter, the nonlinear Autoregressive Moving Average (ARMA) model to the nonlinear damped oscillator physical model. The oscillator parameters, the growth and decay, the oscillation frequencies and the coupling strength to the input are derived from the filter coefficients. Mathematical methods are derived to obtain unique and consistent filter coefficients while keeping the prediction error low. These methods are applied to an oscillator model for the Dst geomagnetic index driven by the solar wind input. A data set is examined in two ways: the model parameters are calculated as averages over short time intervals, and a nonlinear ARMA model is calculated and the model parameters are derived as a function of the phase space.
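A minimal sketch of the coefficient-to-parameter conversion, assuming a second-order AR model with a single input term and a simple finite-difference discretization (the ARMA formulation used in the paper is more general):

```latex
% Second-order AR model with one input term:
x_t = a_1 x_{t-1} + a_2 x_{t-2} + b\,u_{t-1}
% Damped-oscillator analogue:
\ddot{x} + 2\gamma\dot{x} + \omega_0^{2}\,x = c\,u(t)
% Using \ddot{x} \approx (x_t - 2x_{t-1} + x_{t-2})/\Delta t^{2} and
% \dot{x} \approx (x_{t-1} - x_{t-2})/\Delta t, the filter coefficients map to
\gamma = \frac{1 + a_2}{2\,\Delta t}, \qquad
\omega_0^{2} = \frac{1 - a_1 - a_2}{\Delta t^{2}}, \qquad
c = \frac{b}{\Delta t^{2}}
```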
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Mark A.; Bigelow, Matthew; Gilkey, Jeff C.
The Super Strypi SWIL is a six degree-of-freedom (6DOF) simulation for the Super Strypi Launch Vehicle that includes a subset of the Super Strypi NGC software (guidance, ACS and sequencer). Aerodynamic and propulsive forces, mass properties, ACS (attitude control system) parameters, guidance parameters and Monte-Carlo parameters are defined in input files. Output parameters are saved to a Matlab mat file.
Low Pathogenic Avian Influenza Exposure Risk Assessment in Australian Commercial Chicken Farms
Scott, Angela Bullanday; Toribio, Jenny-Ann; Singh, Mini; Groves, Peter; Barnes, Belinda; Glass, Kathryn; Moloney, Barbara; Black, Amanda; Hernandez-Jover, Marta
2018-01-01
This study investigated the pathways of exposure to low pathogenic avian influenza (LPAI) virus among Australian commercial chicken farms and estimated the likelihood of this exposure occurring using scenario trees and a stochastic modeling approach following the World Organization for Animal Health methodology for risk assessment. Input values for the models were sourced from scientific literature and an on-farm survey conducted during 2015 and 2016 among Australian commercial chicken farms located in New South Wales and Queensland. Outputs from the models revealed that the probability of a first LPAI virus exposure to a chicken in an Australian commercial chicken farms from one wild bird at any point in time is extremely low. A comparative assessment revealed that across the five farm types (non-free-range meat chicken, free-range meat chicken, cage layer, barn layer, and free range layer farms), free-range layer farms had the highest probability of exposure (7.5 × 10−4; 5% and 95%, 5.7 × 10−4—0.001). The results indicate that the presence of a large number of wild birds on farm is required for exposure to occur across all farm types. The median probability of direct exposure was highest in free-range farm types (5.6 × 10−4 and 1.6 × 10−4 for free-range layer and free-range meat chicken farms, respectively) and indirect exposure was highest in non-free-range farm types (2.7 × 10−4, 2.0 × 10−4, and 1.9 × 10−4 for non-free-range meat chicken, cage layer, and barn layer farms, respectively). The probability of exposure was found to be lowest in summer for all farm types. Sensitivity analysis revealed that the proportion of waterfowl among wild birds on the farm, the presence of waterfowl in the range and feed storage areas, and the prevalence of LPAI in wild birds are the most influential parameters for the probability of Australian commercial chicken farms being exposed to avian influenza (AI) virus. These results highlight the importance of ensuring good biosecurity on farms to minimize the risk of exposure to AI virus and the importance of continuous surveillance of LPAI prevalence including subtypes in wild bird populations. PMID:29755987
2014-01-01
Background: Human health risks from exposure to disinfection by-products (DBPs) in drinking and bathing water vary from country to country with life expectancy, body mass index, water consumption patterns and the concentrations of individual DBP components. Methods: The present study considered average direct water intake per person of 4 and 3 L/day for adult males and females, respectively, as reported in Indian literature on risk evaluation for other pollutant components. Other important factors, average life expectancy, body weight and body surface area, were taken as 64 and 67 years, 51.9 and 45.4 kg, and 1.54 and 1.38 m2 for males and females, respectively, as per Indian Council of Medical Research and WHO reports. The corresponding lifetime cancer risk of the formed THMs to human beings was estimated for the Indian population by the USEPA and IRIS methods. Results: The total cancer risk reached 8.99E-04 and 8.92E-04 for males and females, respectively; the highest risk from THMs appears to arise from the inhalation route, followed by ingestion and dermal contact. Conclusions: The multi-pathway evaluation of lifetime cancer risk from THM exposure through ingestion, dermal absorption and inhalation indicated a high degree of hazard. Results reveal that water containing THMs from the selected water treatment plant in the eastern part of India was unsafe in terms of risk through inhalation and ingestion, while the dermal route was very close to the USEPA permissible limit. Sensitivity analysis shows that each input parameter contributes to the total risk potential, with exposure duration playing an important role in the estimation of total risk. PMID:24872885
Transformation of Galilean satellite parameters to J2000
NASA Astrophysics Data System (ADS)
Lieske, J. H.
1998-09-01
The so-called galsat software has the capability of computing Earth-equatorial coordinates of Jupiter's Galilean satellites in an arbitrary reference frame, not just that of B1950. The 50 parameters which define the theory of motion of the Galilean satellites (Lieske 1977, Astron. Astrophys. 56, 333-352) could also be transformed in a manner such that the same galsat computer program can be employed to compute rectangular coordinates with their values being in the J2000 system. One of the input parameters (ε_27) is related to the obliquity of the ecliptic and its value is normally zero in the B1950 frame. If that parameter is changed from 0 to -0.0002771, and if other input parameters are changed in a prescribed manner, then the same galsat software can be employed to produce ephemerides in the J2000 system for any of the ephemerides which employ the galsat parameters, such as those of Arlot (1982), Vasundhara (1994) and Lieske. In this paper we present the parameters whose values must be altered in order for the software to produce coordinates directly in the J2000 system.
Loh, Miranda M; Houseman, E Andres; Levy, Jonathan I; Spengler, John D; Bennett, Deborah H
2009-11-01
Many people spend time in stores and restaurants, yet there has been little investigation of the influence of these microenvironments on personal exposure. Relative to the outdoors, transportation, and the home, these microenvironments have high concentrations of several volatile organic compounds (VOCs). We developed a stochastic model to examine the effect of VOC concentrations in these microenvironments on total personal exposure for (1) non-smoking adults working in offices who spend time in stores and restaurants or bars and (2) non-smoking adults who work in these establishments. We also compared the effect of working in a smoking versus non-smoking restaurant or bar. Input concentrations for each microenvironment were developed from the literature whereas time activity inputs were taken from the National Human Activity Patterns Survey. Time-averaged exposures were simulated for 5000 individuals over a weeklong period for each analysis. Mean contributions to personal exposure from non-working time spent in stores and restaurants or bars range from <5% to 20%, depending on the VOC and time-activity patterns. At the 95th percentile of the distribution of the proportion of personal exposure attributable to time spent in stores and restaurants or bars, these microenvironments can be responsible for over half of a person's total exposure to certain VOCs. People working in restaurants or bars where smoking is allowed had the highest fraction of exposure attributable to their workplace. At the median, people who worked in stores or restaurants tended to have 20-60% of their total exposures from time spent at work. These results indicate that stores and restaurants can be large contributors to personal exposure to VOCs for both workers in those establishments and for a subset of people who visit these places, and that incorporation of these non-residential microenvironments can improve models of personal exposure distributions.
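A minimal sketch of the time-weighted exposure calculation at the core of such a model; the lognormal concentrations and time-activity draws below are invented stand-ins for the literature and NHAPS inputs:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 5000
week_h = 168.0

# assumed lognormal concentration distributions (ug/m^3) per microenvironment
conc = {
    "home": rng.lognormal(np.log(5.0), 0.6, N),
    "office": rng.lognormal(np.log(3.0), 0.5, N),
    "store_restaurant": rng.lognormal(np.log(15.0), 0.8, N),
    "outdoors_transit": rng.lognormal(np.log(4.0), 0.5, N),
}
# assumed weekly time-activity draws (hours)
hours = {"office": rng.uniform(35, 45, N),
         "store_restaurant": rng.uniform(1, 10, N),
         "outdoors_transit": rng.uniform(5, 15, N)}
hours["home"] = week_h - sum(hours.values())   # remainder of the week at home

exposure = sum(conc[k] * hours[k] for k in conc) / week_h   # time-weighted average
frac_store = conc["store_restaurant"] * hours["store_restaurant"] / (exposure * week_h)
print(f"median weekly exposure {np.median(exposure):.1f} ug/m^3; "
      f"store/restaurant share: median {100 * np.median(frac_store):.0f}%, "
      f"95th pct {100 * np.percentile(frac_store, 95):.0f}%")
```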
Sensitivity Analysis of Cf-252 (sf) Neutron and Gamma Observables in CGMF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carter, Austin Lewis; Talou, Patrick; Stetcu, Ionel
CGMF is a Monte Carlo code that simulates the decay of primary fission fragments by emission of neutrons and gamma rays, according to the Hauser-Feshbach equations. As the CGMF code was recently integrated into the MCNP6.2 transport code, great emphasis has been placed on providing optimal parameters to CGMF such that many different observables are accurately represented. Of these observables, the prompt neutron spectrum, prompt neutron multiplicity, prompt gamma spectrum, and prompt gamma multiplicity are crucial for accurate transport simulations of criticality and nonproliferation applications. This contribution to the ongoing efforts to improve CGMF presents a study of the sensitivity of various neutron and gamma observables to several input parameters for Californium-252 spontaneous fission. Among the most influential parameters are those that affect the input yield distributions in fragment mass and total kinetic energy (TKE). A new scheme for representing Y(A,TKE) was implemented in CGMF using three fission modes, S1, S2 and SL. The sensitivity profiles were calculated for 17 total parameters, which show that the neutron multiplicity distribution is strongly affected by the TKE distribution of the fragments. The total excitation energy (TXE) of the fragments is shared according to a parameter RT, which is defined as the ratio of the light to heavy initial temperatures. The sensitivity profile of the neutron multiplicity shows a second-order effect of RT on the mean neutron multiplicity. A final sensitivity profile was produced for the parameter alpha, which affects the spin of the fragments. Higher values of alpha lead to higher fragment spins, which inhibit the emission of neutrons. Understanding the sensitivity of the prompt neutron and gamma observables to the many CGMF input parameters provides a platform for the optimization of these parameters.
The Microbiota, Immunoregulation, and Mental Health: Implications for Public Health
Lowry, Christopher A.; Smith, David G.; Siebler, Philip H.; Schmidt, Dominic; Stamper, Christopher E.; Hassell, James E.; Yamashita, Paula S.; Fox, James H.; Reber, Stefan O.; Brenner, Lisa A.; Hoisington, Andrew J.; Postolache, Teodor T.; Kinney, Kerry A.; Marciani, Dante; Hernandez, Mark; Hemmings, Sian M.J.; Malan-Muller, Stefanie; Wright, Kenneth P.; Knight, Rob; Raison, Charles L.; Rook, Graham A.W.
2016-01-01
The hygiene or “Old Friends” hypothesis proposes that the epidemic of inflammatory disease in modern urban societies stems at least in part from reduced exposure to microbes that normally prime mammalian immunoregulatory circuits and suppress inappropriate inflammation. Such diseases include but are not limited to allergies and asthma; we and others have proposed that the markedly reduced exposure to these old friends in modern urban societies may also increase vulnerability to neurodevelopmental disorders and stress-related psychiatric disorders, such as anxiety and affective disorders, where data are emerging in support of inflammation as a risk factor. Here we review recent advances in our understanding of the potential for old friends, including environmental microbial inputs, to modify risk for inflammatory disease, with a focus on neurodevelopmental and psychiatric conditions. We highlight potential mechanisms, involving bacterially-derived metabolites, bacterial antigens, and helminthic antigens, through which these inputs promote immunoregulation. Though findings are encouraging, significant human subjects research is required to evaluate the potential impact of old friends, including environmental microbial inputs, on biological signatures and clinically meaningful mental health prevention and intervention outcomes. PMID:27436048
DOE Office of Scientific and Technical Information (OSTI.GOV)
Monti, Henri; Butt, Ali R; Vazhkudai, Sudharshan S
2010-04-01
Innovative scientific applications and emerging dense data sources are creating a data deluge for high-end computing systems. Processing such large input data typically involves copying (or staging) it onto the supercomputer's specialized high-speed storage, scratch space, for sustained high I/O throughput. The current practice of conservatively staging data as early as possible makes the data vulnerable to storage failures, which may entail re-staging and consequently reduced job throughput. To address this, we present a timely staging framework that uses a combination of job startup time predictions, user-specified intermediate nodes, and decentralized data delivery to make input data staging coincide with job start-up. By delaying staging until it is necessary, the exposure to failures and its effects can be reduced. Evaluation using both PlanetLab and simulations based on three years of Jaguar (No. 1 in Top500) job logs shows as much as an 85.9% reduction in staging times compared to direct transfers, a 75.2% reduction in wait time on scratch, and a 2.4% reduction in usage/hour.
Thermomechanical conditions and stresses on the friction stir welding tool
NASA Astrophysics Data System (ADS)
Atthipalli, Gowtam
Friction stir welding has been commercially used as a joining process for aluminum and other soft materials. However, the use of this process in joining of hard alloys is still developing, primarily because of the lack of cost-effective, long-lasting tools. Here I have developed numerical models to understand the thermomechanical conditions experienced by the FSW tool and to improve its reusability. A heat transfer and visco-plastic flow model is used to calculate the torque and traverse force on the tool during FSW. The computed values of torque and traverse force are validated using the experimental results for FSW of AA7075, AA2524, AA6061 and Ti-6Al-4V alloys. The computed torque components are used to determine the optimum tool shoulder diameter based on the maximum use of torque and maximum grip of the tool on the plasticized workpiece material. The estimation of the optimum tool shoulder diameter for FSW of AA6061 and AA7075 was verified with experimental results. The computed values of traverse force and torque are used to calculate the maximum shear stress on the tool pin to determine the load bearing ability of the tool pin. The load bearing ability calculations are used to explain the failure of an H13 steel tool during welding of AA7075 and of commercially pure tungsten during welding of L80 steel. Artificial neural network (ANN) models are developed to predict the important FSW output parameters as functions of selected input parameters. These ANNs consider tool shoulder radius, pin radius, pin length, welding velocity, tool rotational speed and axial pressure as input parameters. The total torque, sliding torque, sticking torque, peak temperature, traverse force, maximum shear stress and bending stress are considered as the outputs of the ANN models. These output parameters are selected since they define the thermomechanical conditions around the tool during FSW. The developed ANN models are used to understand the effect of various input parameters on the total torque and traverse force during FSW of AA7075 and 1018 mild steel. The ANN models are also used to determine the tool safety factor for a wide range of input parameters. A numerical model is developed to calculate the strain and strain rates along the streamlines during FSW. The strain and strain rate values are calculated for FSW of AA2524. Three simplified models are also developed for quick estimation of output parameters such as the material velocity field, torque and peak temperature. The material velocity fields are computed by adopting an analytical method of calculating velocities for the flow of an incompressible fluid between two discs where one is rotating and the other is stationary. The peak temperature is estimated based on a non-dimensional correlation with dimensionless heat input. The dimensionless heat input is computed using known welding parameters and material properties. The torque is computed using an analytical function based on the shear strength of the workpiece material. These simplified models are shown to be able to predict these output parameters successfully.
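A hedged sketch of the ANN surrogate idea using scikit-learn on synthetic data (the models described above were trained on outputs of the heat transfer and visco-plastic flow model; the torque relation below is invented):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 500
# inputs: shoulder radius, pin radius, pin length (mm); weld speed (mm/s);
# rotational speed (rpm); axial pressure (MPa)
X = rng.uniform([5, 2, 2, 0.5, 200, 10], [15, 5, 8, 3.0, 1500, 100], (n, 6))
torque = 50 * X[:, 0] / np.sqrt(X[:, 4]) + rng.normal(0, 0.5, n)  # invented relation

ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
ann.fit(X, torque)
print(ann.predict(X[:2]))   # surrogate predictions for two parameter sets
```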
Environmental and Occupational Pesticide Exposure and Human Sperm Parameters: A Systematic Review
Martenies, Sheena E.; Perry, Melissa J.
2013-01-01
Of continuing concern are the associations between environmental or occupational exposures to pesticides and semen quality parameters. Prior research has indicated that there may be associations between exposure to pesticides of a variety of classes and decreased sperm health. The intent of this review was to summarize the most recent evidence related to pesticide exposures and commonly used semen quality parameters, including concentration, motility and morphology. The recent literature was searched for studies published between January, 2007 and August, 2012 that focused on environmental or occupational pesticide exposures. Included in the review are 17 studies, 15 of which reported significant associations between exposure to pesticides and semen quality indicators. Two studies also investigated the roles genetic polymorphisms may play in the strength or directions of these associations. Specific pesticides targeted for study included dichlorodiphenyltrichloroethane (DDT), hexachlorocyclohexane (HCH), and abamectin. Pyrethroids and organophosphates were analyzed as classes of pesticides rather than as individual compounds, primarily due to the limitations of exposure assessment techniques. Overall, a majority of the studies reported significant associations between pesticide exposure and sperm parameters. A decrease in sperm concentration was the most commonly reported finding among all of the pesticide classes investigated. Decreased motility was also associated with exposures to each of the pesticide classes, although these findings were less frequent across studies. An association between pesticide exposure and sperm morphology was less clear, with only two studies reporting an association. The evidence presented in this review continues to support the hypothesis that exposures to pesticides at environmentally or occupationally relevant levels may be associated with decreased sperm health. Future work in this area should focus on associations between specific pesticides or metabolic products and sperm quality parameters. Analysis of effects of varying genetic characteristics, especially in genes related to pesticide metabolism, also needs further attention. PMID:23438386
Simulated lumped-parameter system reduced-order adaptive control studies
NASA Technical Reports Server (NTRS)
Johnson, C. R., Jr.; Lawrence, D. A.; Taylor, T.; Malakooti, M. V.
1981-01-01
Two methods of interpreting the misbehavior of reduced order adaptive controllers are discussed. The first method is based on system input-output description and the second is based on state variable description. The implementation of the single input, single output, autoregressive, moving average system is considered.
Processing Oscillatory Signals by Incoherent Feedforward Loops
Zhang, Carolyn; You, Lingchong
2016-01-01
From the timing of amoeba development to the maintenance of stem cell pluripotency, many biological signaling pathways exhibit the ability to differentiate between pulsatile and sustained signals in the regulation of downstream gene expression. While the networks underlying this signal decoding are diverse, many are built around a common motif, the incoherent feedforward loop (IFFL), where an input simultaneously activates an output and an inhibitor of the output. With appropriate parameters, this motif can exhibit temporal adaptation, where the system is desensitized to a sustained input. This property serves as the foundation for distinguishing input signals with varying temporal profiles. Here, we use quantitative modeling to examine another property of IFFLs—the ability to process oscillatory signals. Our results indicate that the system’s ability to translate pulsatile dynamics is limited by two constraints. The kinetics of the IFFL components dictate the input range for which the network is able to decode pulsatile dynamics. In addition, a match between the network parameters and input signal characteristics is required for optimal “counting”. We elucidate one potential mechanism by which information processing occurs in natural networks, and our work has implications in the design of synthetic gene circuits for this purpose. PMID:27623175
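A minimal sketch of an IFFL driven by pulsatile versus sustained inputs, with invented kinetics (Hill-type repression of the output by the inhibitor):

```python
import numpy as np
from scipy.integrate import solve_ivp

def make_input(period, duty):
    # rectangular pulse train; duty=1.0 gives a sustained input
    return lambda t: 1.0 if (t % period) < duty * period else 0.0

def iffl(t, y, u, k_x=1.0, k_i=0.5, d=0.5, K=0.2):
    x, i = y                                       # x: output, i: inhibitor
    dx = k_x * u(t) / (1 + (i / K) ** 2) - d * x   # input activates x; i represses x
    di = k_i * u(t) - d * i                        # input also activates the inhibitor
    return [dx, di]

for label, duty in [("pulsatile", 0.2), ("sustained", 1.0)]:
    sol = solve_ivp(iffl, (0, 60), [0.0, 0.0], args=(make_input(10.0, duty),),
                    max_step=0.05)   # small max_step so pulses are not skipped
    print(f"{label}: mean output over last half = {sol.y[0][sol.t > 30].mean():.2f}")
```

Under the sustained input the accumulating inhibitor shuts the output down (temporal adaptation), whereas short pulses arrive before the inhibitor builds up.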
Replacing Fortran Namelists with JSON
NASA Astrophysics Data System (ADS)
Robinson, T. E., Jr.
2017-12-01
Maintaining a log of input parameters for a climate model is very important to understanding potential causes for answer changes during the development stages. Additionally, since modern Fortran is now interoperable with C, a more modern approach to software infrastructure to include code written in C is necessary. Merging these two separate facets of climate modeling requires a quality control for monitoring changes to input parameters and model defaults that can work with both Fortran and C. JSON will soon replace namelists as the preferred key/value pair input in the GFDL model. By adding a JSON parser written in C into the model, the input can be used by all functions and subroutines in the model, errors can be handled by the model instead of by the internal namelist parser, and the values can be output into a single file that is easily parsable by readily available tools. Input JSON files can handle all of the functionality of a namelist while being portable between C and Fortran. Fortran wrappers using unlimited polymorphism are crucial to allow for simple and compact code which avoids the need for many subroutines contained in an interface. Errors can be handled with more detail by providing information about location of syntax errors or typos. The output JSON provides a ground truth for values that the model actually uses by providing not only the values loaded through the input JSON, but also any default values that were not included. This kind of quality control on model input is crucial for maintaining reproducibility and understanding any answer changes resulting from changes in the input.
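The quality-control behavior described here is straightforward to sketch outside the model itself; a minimal Python analogue with hypothetical parameter names (not actual GFDL namelist keys):

```python
import json

DEFAULTS = {"dt_atmos": 1800, "use_new_physics": False, "n_tracers": 4}

def load_model_input(path):
    with open(path) as f:
        user = json.load(f)            # syntax errors surface here with line/column info
    unknown = set(user) - set(DEFAULTS)
    if unknown:
        raise ValueError(f"unrecognized parameters: {sorted(unknown)}")
    merged = {**DEFAULTS, **user}      # defaults filled in for any omitted keys
    with open("input_as_used.json", "w") as f:
        json.dump(merged, f, indent=2)  # record what the model actually ran with
    return merged

params = load_model_input("model_input.json")
```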
Ecophysiological parameters for Pacific Northwest trees.
Amy E. Hessl; Cristina Milesi; Michael A. White; David L. Peterson; Robert E. Keane
2004-01-01
We developed a species- and location-specific database of published ecophysiological variables typically used as input parameters for biogeochemical models of coniferous and deciduous forested ecosystems in the Western United States. Parameters are based on the requirements of Biome-BGC, a widely used biogeochemical model that was originally parameterized for the...
The input variables for a numerical model of reactive solute transport in groundwater include both transport parameters, such as hydraulic conductivity and infiltration, and reaction parameters that describe the important chemical and biological processes in the system. These pa...
Triantafyllidou, Simoni; Le, Trung; Gallagher, Daniel; Edwards, Marc
2014-01-01
The risk of students to develop elevated blood lead from drinking water consumption at schools was assessed, which is a different approach from predictions of geometric mean blood lead levels. Measured water lead levels (WLLs) from 63 elementary schools in Seattle and 601 elementary schools in Los Angeles were acquired before and after voluntary remediation of water lead contamination problems. Combined exposures to measured school WLLs (first-draw and flushed, 50% of water consumption) and home WLLs (50% of water consumption) were used as inputs to the Integrated Exposure Uptake Biokinetic (IEUBK) model for each school. In Seattle an average 11.2% of students were predicted to exceed a blood lead threshold of 5 μg/dL across 63 schools pre-remediation, but predicted risks at individual schools varied (7% risk of exceedance at a "low exposure school", 11% risk at a "typical exposure school", and 31% risk at a "high exposure school"). Addition of water filters and removal of lead plumbing lowered school WLL inputs to the model, and reduced the predicted risk output to 4.8% on average for Seattle elementary students across all 63 schools. The remnant post-remediation risk was attributable to other assumed background lead sources in the model (air, soil, dust, diet and home WLLs), with school WLLs practically eliminated as a health threat. Los Angeles schools instead instituted a flushing program which was assumed to eliminate first-draw WLLs as inputs to the model. With assumed benefits of remedial flushing, the predicted average risk of students to exceed a BLL threshold of 5 μg/dL dropped from 8.6% to 6.0% across 601 schools. In an era with increasingly stringent public health goals (e.g., reduction of blood lead safety threshold from 10 to 5 μg/dL), quantifiable health benefits to students were predicted after water lead remediation at two large US school systems. © 2013.
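A minimal sketch of the water-concentration input implied by the exposure assumptions above; the concentrations and the 50/50 first-draw/flushed split within school consumption are placeholders:

```python
# placeholder water lead levels (ppb)
first_draw, flushed, home = 12.0, 3.0, 2.0
school = 0.5 * first_draw + 0.5 * flushed   # assumed within-school split
water_input = 0.5 * school + 0.5 * home     # 50% school / 50% home consumption
print(f"IEUBK drinking-water input: {water_input:.1f} ppb")
```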
Nguyen, T B; Cron, G O; Perdrizet, K; Bezzina, K; Torres, C H; Chakraborty, S; Woulfe, J; Jansen, G H; Sinclair, J; Thornhill, R E; Foottit, C; Zanette, B; Cameron, I G
2015-11-01
Dynamic contrast-enhanced MR imaging parameters can be biased by poor measurement of the vascular input function. We have compared the diagnostic accuracy of dynamic contrast-enhanced MR imaging by using a phase-derived vascular input function and "bookend" T1 measurements with DSC MR imaging for preoperative grading of astrocytomas. This prospective study included 48 patients with a new pathologic diagnosis of an astrocytoma. Preoperative MR imaging was performed at 3T, which included 2 injections of 5-mL gadobutrol for dynamic contrast-enhanced and DSC MR imaging. During dynamic contrast-enhanced MR imaging, both magnitude and phase images were acquired to estimate plasma volume obtained from phase-derived vascular input function (Vp_Φ) and volume transfer constant obtained from phase-derived vascular input function (K(trans)_Φ), as well as plasma volume obtained from magnitude-derived vascular input function (Vp_SI) and volume transfer constant obtained from magnitude-derived vascular input function (K(trans)_SI). From DSC MR imaging, corrected relative CBV was computed. Four ROIs were placed over the solid part of the tumor, and the highest value among the ROIs was recorded. A Mann-Whitney U test was used to test for differences between grades. Diagnostic accuracy was assessed by using receiver operating characteristic analysis. Vp_Φ and K(trans)_Φ values were lower for grade II compared with grade III astrocytomas (P < .05). Vp_SI and K(trans)_SI were not significantly different between grade II and grade III astrocytomas (P = .08-.15). Relative CBV and dynamic contrast-enhanced MR imaging parameters except for K(trans)_SI were lower for grade III compared with grade IV (P ≤ .05). In differentiating low- and high-grade astrocytomas, we found no statistically significant difference in diagnostic accuracy between relative CBV and dynamic contrast-enhanced MR imaging parameters. In the preoperative grading of astrocytomas, the diagnostic accuracy of dynamic contrast-enhanced MR imaging parameters is similar to that of relative CBV. © 2015 by American Journal of Neuroradiology.
Gedamke, Jason; Gales, Nick; Frydman, Sascha
2011-01-01
The potential for seismic airgun "shots" to cause acoustic trauma in marine mammals is poorly understood. There are just two empirical measurements of temporary threshold shift (TTS) onset levels from airgun-like sounds in odontocetes. Considering these limited data, a model was developed examining the impact of individual variability and uncertainty on risk assessment of baleen whale TTS from seismic surveys. In each of 100 simulations, 10000 "whales" are assigned TTS onset levels accounting for inter-individual variation, uncertainty over the population's mean, and uncertainty over the weighting of odontocete data to obtain baleen whale onset levels. Randomly distributed whales are exposed to one seismic survey passage and the cumulative exposure level is calculated. In the base scenario, 29% of whales (5th/95th percentiles of 10%/62%) that approached to within 1-1.2 km range were exposed to levels sufficient for TTS onset. By comparison, no whales are at risk beyond 0.6 km when uncertainty and variability are not considered. Potentially "exposure altering" parameters (movement, avoidance, surfacing, and effective quiet) were also simulated. Until more research refines model inputs, the results suggest a reasonable likelihood that whales at a kilometer or more from seismic surveys could potentially be susceptible to TTS, and demonstrate the large impact uncertainty and variability can have on risk assessment.
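A hedged sketch of the simulation structure described (100 simulations of 10000 "whales"), with invented onset levels, spreads and a toy propagation term in place of the study's exposure calculations:

```python
import numpy as np

rng = np.random.default_rng(3)
n_sims, n_whales = 100, 10_000
frac_onset = np.empty(n_sims)

for s in range(n_sims):
    pop_mean = rng.normal(183.0, 3.0)             # uncertainty over the population mean (dB SEL)
    onset = rng.normal(pop_mean, 4.0, n_whales)   # inter-individual variation in TTS onset
    r_km = rng.uniform(0.2, 10.0, n_whales)       # range of each whale from the survey line
    sel = 225.0 - 15.0 * np.log10(r_km * 1000.0)  # toy cumulative SEL from one passage
    frac_onset[s] = np.mean(sel >= onset)

print(f"median fraction at or above onset {np.median(frac_onset):.1%}, "
      f"5th/95th pct {np.percentile(frac_onset, 5):.1%}/{np.percentile(frac_onset, 95):.1%}")
```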
Mercury biogeochemical cycling in the ocean and policy implications.
Mason, Robert P; Choi, Anna L; Fitzgerald, William F; Hammerschmidt, Chad R; Lamborg, Carl H; Soerensen, Anne L; Sunderland, Elsie M
2012-11-01
Anthropogenic activities have enriched mercury in the biosphere by at least a factor of three, leading to increases in total mercury (Hg) in the surface ocean. However, the impacts on ocean fish and associated trends in human exposure as a result of such changes are less clear. Here we review our understanding of global mass budgets for both inorganic and methylated Hg species in ocean seawater. We consider external inputs from atmospheric deposition and rivers as well as internal production of monomethylmercury (CH₃Hg) and dimethylmercury ((CH₃)₂Hg). Impacts of large-scale ocean circulation and vertical transport processes on Hg distribution throughout the water column and how this influences bioaccumulation into ocean food chains are also discussed. Our analysis suggests that while atmospheric deposition is the main source of inorganic Hg to open ocean systems, most of the CH₃Hg accumulating in ocean fish is derived from in situ production within the upper waters (<1000 m). An analysis of the available data suggests that concentrations in the various ocean basins are changing at different rates due to differences in atmospheric loading and that the deeper waters of the oceans are responding slowly to changes in atmospheric Hg inputs. Most biological exposures occur in the upper ocean and therefore should respond over years to decades to changes in atmospheric mercury inputs achieved by regulatory control strategies. Migratory pelagic fish such as tuna and swordfish are an important component of CH₃Hg exposure for many human populations and therefore any reduction in anthropogenic releases of Hg and associated deposition to the ocean will result in a decline in human exposure and risk. Copyright © 2012 Elsevier Inc. All rights reserved.
TAIR- TRANSONIC AIRFOIL ANALYSIS COMPUTER CODE
NASA Technical Reports Server (NTRS)
Dougherty, F. C.
1994-01-01
The Transonic Airfoil analysis computer code, TAIR, was developed to employ a fast, fully implicit algorithm to solve the conservative full-potential equation for the steady transonic flow field about an arbitrary airfoil immersed in a subsonic free stream. The full-potential formulation is considered exact under the assumptions of irrotational, isentropic, and inviscid flow. These assumptions are valid for a wide range of practical transonic flows typical of modern aircraft cruise conditions. The primary features of TAIR include: a new fully implicit iteration scheme which is typically many times faster than classical successive line overrelaxation algorithms; a new, reliable artificial density spatial differencing scheme treating the conservative form of the full-potential equation; and a numerical mapping procedure capable of generating curvilinear, body-fitted finite-difference grids about arbitrary airfoil geometries. Three aspects emphasized during the development of the TAIR code were reliability, simplicity, and speed. The reliability of TAIR comes from two sources: the new algorithm employed and the implementation of effective convergence monitoring logic. TAIR achieves ease of use by employing a "default mode" that greatly simplifies code operation, especially by inexperienced users, and many useful options including: several airfoil-geometry input options, flexible user controls over program output, and a multiple solution capability. The speed of the TAIR code is attributed to the new algorithm and the manner in which it has been implemented. Input to the TAIR program consists of airfoil coordinates, aerodynamic and flow-field convergence parameters, and geometric and grid convergence parameters. The airfoil coordinates for many airfoil shapes can be generated in TAIR from just a few input parameters. Most of the other input parameters have default values which allow the user to run an analysis in the default mode by specifying only a few input parameters. Output from TAIR may include aerodynamic coefficients, the airfoil surface solution, convergence histories, and printer plots of Mach number and density contour maps. The TAIR program is written in FORTRAN IV for batch execution and has been implemented on a CDC 7600 computer with a central memory requirement of approximately 155K (octal) of 60-bit words. The TAIR program was developed in 1981.
NASA Astrophysics Data System (ADS)
Bulthuis, Kevin; Arnst, Maarten; Pattyn, Frank; Favier, Lionel
2017-04-01
Uncertainties in sea-level rise projections are mostly due to uncertainties in Antarctic ice-sheet predictions (IPCC AR5 report, 2013), because key parameters related to the current state of the Antarctic ice sheet (e.g. sub-ice-shelf melting) and future climate forcing are poorly constrained. Here, we propose to improve the predictions of Antarctic ice-sheet behaviour using new uncertainty quantification methods. As opposed to ensemble modelling (Bindschadler et al., 2013), which provides a rather limited view on input and output dispersion, new stochastic methods (Le Maître and Knio, 2010) can provide deeper insight into the impact of uncertainties on complex system behaviour. Such stochastic methods usually begin with deducing a probabilistic description of input parameter uncertainties from the available data. Then, the impact of these input parameter uncertainties on output quantities is assessed by estimating the probability distribution of the outputs by means of uncertainty propagation methods such as Monte Carlo methods or stochastic expansion methods. The use of such uncertainty propagation methods in glaciology may be computationally costly because of the high computational complexity of ice-sheet models. This challenge emphasises the importance of developing reliable and computationally efficient ice-sheet models such as the f.ETISh ice-sheet model (Pattyn, 2015), a fast, thermomechanically coupled ice-sheet/ice-shelf model capable of handling complex and critical processes such as the marine ice-sheet instability mechanism. Here, we apply these methods to investigate the role of uncertainties in sub-ice-shelf melting, calving rates and climate projections in assessing the Antarctic contribution to sea-level rise over the next centuries using the f.ETISh model. We detail the methods and show results that provide nominal values and uncertainty bounds for future sea-level rise as a reflection of the impact of the input parameter uncertainties under consideration, as well as a ranking of the input parameter uncertainties in order of the significance of their contribution to uncertainty in future sea-level rise. In addition, we discuss how the limitations posed by the available information (poorly constrained data) create challenges that motivate our current research.
NASA Astrophysics Data System (ADS)
Přibil, Jiří; Přibilová, Anna; Ďuračková, Daniela
2014-01-01
The paper describes our experiment with using Gaussian mixture models (GMM) for classification of speech uttered by a person wearing orthodontic appliances. For the GMM classification, the input feature vectors comprise the basic and the complementary spectral properties as well as the supra-segmental parameters. The dependence of classification correctness on the number of parameters in the input feature vector and on the computational complexity is also evaluated. In addition, the influence of the initial setting of the parameters for the GMM training process was analyzed. The obtained recognition results are compared visually in the form of graphs as well as numerically in the form of tables and confusion matrices for the tested sentences uttered using three configurations of orthodontic appliances.
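As a rough illustration of the classification scheme described (one GMM per appliance configuration, maximum-likelihood decision), here is a sketch using scikit-learn; the feature extraction itself and the number of mixture components are left as assumptions.

    # Train one GMM per class on frame-level feature vectors, then classify an
    # utterance by the highest average log-likelihood across its frames.
    from sklearn.mixture import GaussianMixture

    def train_gmms(class_features, n_components=8):
        # class_features: {label: 2-D array of feature rows} (assumed prepared upstream)
        return {label: GaussianMixture(n_components=n_components,
                                       covariance_type="diag",
                                       max_iter=200, random_state=0).fit(X)
                for label, X in class_features.items()}

    def classify(models, X_utterance):
        scores = {label: gmm.score(X_utterance) for label, gmm in models.items()}
        return max(scores, key=scores.get)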
DC servomechanism parameter identification: a Closed Loop Input Error approach.
Garrido, Ruben; Miranda, Roger
2012-01-01
This paper presents a Closed Loop Input Error (CLIE) approach for on-line parametric estimation of a continuous-time model of a DC servomechanism functioning in closed loop. A standard Proportional Derivative (PD) position controller stabilizes the loop without requiring knowledge of the servomechanism parameters. The analysis of the identification algorithm takes into account the control law employed for closing the loop. The model contains four parameters that depend on the servo inertia, viscous and Coulomb friction, and a constant disturbance. Lyapunov stability theory permits assessing the boundedness of the signals associated with the identification algorithm. Experiments on a laboratory prototype allow an evaluation of the performance of the approach. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Liu, Liquan; Kager, René
2017-01-01
Language input is a key factor in bi-/multilingual research. It is rooted in the definition of bi-/multilingualism and influences infant cognitive development from, and even before, birth. The methods used to assess language exposure among bi-/multilingual infants vary across studies. This paper discusses the parental report patterns of the…
NASA Astrophysics Data System (ADS)
Savage, James Thomas Steven; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten
2016-11-01
Where high-resolution topographic data are available, modelers are faced with the decision of whether it is better to spend computational resources on resolving topography at finer resolutions or on running more simulations to account for various uncertain input factors (e.g., model parameters). In this paper we apply global sensitivity analysis to explore how influential the choice of spatial resolution is when compared to uncertainties in the Manning's friction coefficient parameters, the inflow hydrograph, and those stemming from the coarsening of topographic data used to produce Digital Elevation Models (DEMs). We apply the hydraulic model LISFLOOD-FP to produce several temporally and spatially variable model outputs that represent different aspects of flood inundation processes, including flood extent, water depth, and time of inundation. We find that the most influential input factor for flood extent predictions changes during the flood event, starting with the inflow hydrograph during the rising limb before switching to the channel friction parameter during peak flood inundation, and finally to the floodplain friction parameter during the drying phase of the flood event. Spatial resolution and the uncertainty introduced by resampling topographic data to coarser resolutions are much more important for water depth predictions, which are also sensitive to different input factors spatially and temporally. Our findings indicate that the sensitivity of LISFLOOD-FP predictions is more complex than previously thought. Consequently, the input factors that modelers should prioritize will differ depending on the model output assessed, and on where and when that output is most relevant.
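A variance-based global sensitivity analysis of this kind can be prototyped with the SALib package; the sketch below is generic, with the factor names, bounds, and the run_model wrapper around a LISFLOOD-FP-style simulation all hypothetical.

    # Saltelli sampling plus Sobol index estimation over three uncertain inputs.
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {
        "num_vars": 3,
        "names": ["n_channel", "n_floodplain", "inflow_scale"],  # assumed factors
        "bounds": [[0.02, 0.08], [0.03, 0.12], [0.8, 1.2]],
    }

    X = saltelli.sample(problem, 1024)        # sampling design for Sobol indices
    Y = np.array([run_model(x) for x in X])   # run_model: hypothetical model wrapper
    Si = sobol.analyze(problem, Y)
    print(Si["S1"], Si["ST"])                 # first-order and total-order indices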
USDA-ARS?s Scientific Manuscript database
Hydrologic and water quality models are very sensitive to input parameter values, especially precipitation input data. With several different sources of precipitation data now available, it is quite difficult to determine which source is most appropriate under various circumstances. We used several ...
Influence of Visual Prism Adaptation on Auditory Space Representation.
Pochopien, Klaudia; Fahle, Manfred
2017-01-01
Prisms shifting the visual input sideways produce a mismatch between the visual and the felt position of one's hand. Prism adaptation eliminates this mismatch, realigning hand proprioception with visual input. Whether this realignment concerns exclusively the visuo-(hand)motor system or generalizes to acoustic inputs is controversial. Here we show that there is indeed a slight influence of visual adaptation on the perceived direction of acoustic sources. However, this shift in perceived auditory direction can be fully explained by a subconscious head rotation during prism exposure and by changes in arm proprioception. Hence, prism adaptation generalizes only indirectly to auditory space perception.
Logarithmic and power law input-output relations in sensory systems with fold-change detection.
Adler, Miri; Mayo, Avi; Alon, Uri
2014-08-01
Two central biophysical laws describe sensory responses to input signals. One is a logarithmic relationship between input and output, and the other is a power-law relationship. These laws are sometimes called the Weber-Fechner law and the Stevens power law, respectively. The two laws are found in a wide variety of human sensory systems including hearing, vision, taste, and weight perception; they also occur in the responses of cells to stimuli. However, the mechanistic origin of these laws is not fully understood. To address this, we consider a class of biological circuits exhibiting a property called fold-change detection (FCD). In these circuits the response dynamics depend only on the relative change in input signal and not its absolute level, a property which applies to many physiological and cellular sensory systems. We show analytically that by changing a single parameter in the FCD circuits, both logarithmic and power-law relationships emerge; these laws are modified versions of the Weber-Fechner and Stevens laws. The parameter that determines which law is found is the steepness (effective Hill coefficient) of the effect of the internal variable on the output. This finding applies to major circuit architectures found in biological systems, including the incoherent feed-forward loop and nonlinear integral feedback loops. Therefore, if one measures the response to different fold changes in input signal and observes a logarithmic or power law, the present theory can be used to rule out certain FCD mechanisms, and to predict their cooperativity parameter. We demonstrate this approach using data from eukaryotic chemotaxis signaling.
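The circuit logic can be explored numerically. The sketch below simulates a fold-change-detecting incoherent feed-forward loop in which an internal variable y tracks the input x and the output responds to (x/y)^n; the functional forms and constants are illustrative stand-ins, not the paper's exact equations, with the Hill steepness n playing the role of the law-selecting parameter.

    import numpy as np
    from scipy.integrate import odeint

    def ffl(state, t, x_of_t, n):
        y, z = state
        x = x_of_t(t)
        dy = x - y                # internal variable slowly tracking the input
        dz = (x / y) ** n - z     # output driven by the fold change x/y
        return [dy, dz]

    t = np.linspace(0.0, 20.0, 2000)
    step = lambda t: 1.0 if t < 5.0 else 3.0      # 3-fold input step at t = 5
    for n in (1.0, 4.0):
        sol = odeint(ffl, [1.0, 1.0], t, args=(step, n))
        print(n, sol[:, 1].max())                 # peak output vs. steepness n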
NASA Astrophysics Data System (ADS)
Zhu, Yuchuan; Yang, Xulei; Wereley, Norman M.
2016-08-01
This paper develops a methodology for studying the static and dynamic characteristics of a giant magnetostrictive actuator (GMA), using the hysteresis loop as a tool, with a focus on the application-oriented giant magnetostrictive material (GMM)-based electro-hydrostatic actuator, which features an applied magnetic field at high frequency and high amplitude. The methodology considers the prestress effect on the GMM rod and the electrical input dynamics involving the power amplifier and the inductive coil. A GMA that can display the preforce on the GMM rod in real time is designed, and a magnetostrictive model dependent on the prestress on the GMM rod, instead of the existing quadratic domain rotation model, is proposed. Additionally, an electrical input dynamics model to excite the GMA is developed according to the simplified circuit diagram, and the corresponding parameters are identified from the experimental data. A dynamic magnetization model with the eddy current effect is deduced from the Jiles-Atherton model and the Maxwell equations. Next, all of the parameters, covering the electrical input characteristics, the dynamic magnetization and the mechanical structure of the GMA, are identified from the experimental data of the current response, magnetization response and displacement response, respectively. Finally, a comprehensive comparison between the model results and experimental data is performed, and the results show that the test data agree well with the presented model results. An analysis of the relation between the GMA displacement response and the parameters from the electrical input dynamics, magnetization dynamics and mechanical structural dynamics is also performed.
The low-metallicity starburst NGC346: massive-star population and feedback
NASA Astrophysics Data System (ADS)
Oskinova, Lida
2017-08-01
The Small Magellanic Cloud (SMC) is ideal to study young, massive stars at low metallicity. The compact cluster NGC346 contains about half of all O-type stars in the entire SMC. The massive-star population of this cluster powers N66, the brightest and largest HII region in the SMC. We propose to use HST-STIS to slice NGC346 with 20 long-slit exposures, in order to obtain the UV spectra of most of the massive early-type stars of this cluster. Archival data of 13 exposures that already cover a minor part of this cluster will be included in our analyses. Our aim is to quantitatively analyze virtually the whole massive-star population of NGC346. We have already secured the optical spectra of all massive stars in the field with the integral-field spectrograph MUSE at the ESO-VLT. However, for the determination of the stellar-wind parameters, i.e. the mass-loss rates and the wind velocities, ultraviolet spectra are indispensable. Our advanced Potsdam Wolf-Rayet (PoWR) code will be used for modeling the stellar and wind spectra in the course of the analysis. Finally, we will obtain: (a) the fundamental stellar and wind parameters of all stars brighter than spectral type B2V in the field, which, e.g., will constrain the initial mass function in this young low-metallicity starburst; (b) mass-loss rates of many more OB-type stars at SMC metallicity than hitherto known, allowing their metallicity dependence to be better constrained; and (c) the integrated feedback by ionizing radiation and stellar winds of the whole massive-star population of NGC346, which will be used as input to model the ecology of the giant HII region N66. These HST UV data will be of high legacy value.
Honti, Mark; Fenner, Kathrin
2015-05-19
The OECD guideline 308 describes a laboratory test method to assess aerobic and anaerobic transformation of organic chemicals in aquatic sediment systems and is an integral part of tiered testing strategies in different legislative frameworks for the environmental risk assessment of chemicals. The results from experiments carried out according to OECD 308 are generally used to derive persistence indicators for hazard assessment or half-lives for exposure assessment. We used Bayesian parameter estimation and system representations of various complexities to systematically assess opportunities and limitations for estimating these indicators from existing data generated according to OECD 308 for 23 pesticides and pharmaceuticals. We found that there is a disparity between the uncertainty and the conceptual robustness of persistence indicators. Disappearance half-lives are directly extractable with limited uncertainty, but they lump degradation and phase transfer information and are not robust against changes in system geometry. Transformation half-lives are less system-specific but require inverse modeling to extract, resulting in considerable uncertainty. Available data were thus insufficient to derive indicators that had both acceptable robustness and uncertainty, which further supports previously voiced concerns about the usability and efficiency of these costly experiments. Despite the limitations of existing data, we suggest the time until 50% of the parent compound has been transformed in the entire system (DegT(50,system)) could still be a useful indicator of persistence in the upper, partially aerobic sediment layer in the context of PBT assessment. This should, however, be accompanied by a mandatory reporting or full standardization of the geometry of the experimental system. We recommend transformation half-lives determined by inverse modeling to be used as input parameters into fate models for exposure assessment, if due consideration is given to their uncertainty.
NASA Technical Reports Server (NTRS)
Rhee, Ihnseok; Speyer, Jason L.
1990-01-01
A game theoretic controller is developed for a linear time-invariant system with parameter uncertainties in the system and input matrices. The input-output decomposition modeling for the plant uncertainty is adopted. The uncertain dynamic system is represented as an internal feedback loop in which the system is assumed to be forced by a fictitious disturbance caused by the parameter uncertainty. By considering the input and the fictitious disturbance as two noncooperative players, a differential game problem is constructed. It is shown that the resulting time-invariant controller stabilizes the uncertain system for a prescribed uncertainty bound. This game theoretic controller is applied to the momentum management and attitude control of the Space Station in the presence of uncertainties in the moments of inertia. Inclusion of the external disturbance torque in the design procedure results in a dynamical feedback controller which consists of conventional PID control and a cyclic disturbance rejection filter. It is shown that the game theoretic design, compared to the LQR design or pole placement design, improves the stability robustness with respect to inertia variations.
A system performance throughput model applicable to advanced manned telescience systems
NASA Technical Reports Server (NTRS)
Haines, Richard F.
1990-01-01
As automated space systems become more complex, autonomous, and opaque to the flight crew, it becomes increasingly difficult to determine whether the total system is performing as it should. Some of the complex and interrelated human performance measurement issues related to total system validation are addressed. An evaluative throughput model is presented which can be used to generate a human operator-related benchmark or figure of merit for a given system which involves humans at the input and output ends as well as other automated intelligent agents. The concept of sustained and accurate command/control data information transfer is introduced. The first two input parameters of the model involve nominal and off-nominal predicted events; the first of these calls for a detailed task analysis, while the second calls for a contingency event assessment. The last two required input parameters involve actual (measured) events, namely human performance and continuous semi-automated system performance. An expression combining these four parameters was found using digital simulations and identical, representative, random data to yield the smallest variance.
Prediction of Film Cooling Effectiveness on a Gas Turbine Blade Leading Edge Using ANN and CFD
NASA Astrophysics Data System (ADS)
Dávalos, J. O.; García, J. C.; Urquiza, G.; Huicochea, A.; De Santiago, O.
2018-05-01
In this work, the area-averaged film cooling effectiveness (AAFCE) on a gas turbine blade leading edge was predicted by employing an artificial neural network (ANN) with hole diameter, injection angle, blowing ratio, and hole and column pitches as input variables. The database used to train the network was built using computational fluid dynamics (CFD) based on a two-level full factorial design of experiments. The CFD numerical model was validated with an experimental rig, where a first stage blade of a gas turbine was represented by a cylindrical specimen. The ANN architecture was composed of three layers with four neurons in the hidden layer, and Levenberg-Marquardt was selected as the ANN optimization algorithm. The AAFCE was successfully predicted by the ANN, with a regression coefficient R² > 0.99 and a root mean square error RMSE = 0.0038. The ANN weight coefficients were used to estimate the relative importance of the input parameters. Blowing ratio was the most influential parameter, with a relative importance of 40.36%, followed by hole diameter. Additionally, by using the ANN model, the relationship between input parameters was analyzed.
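A comparable surrogate can be fitted with off-the-shelf tools, as sketched below; note the authors trained their ANN with Levenberg-Marquardt, which scikit-learn does not offer, so this is an analogue rather than a reproduction, and load_cfd_doe is a hypothetical loader for the CFD design-of-experiments table.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    # X columns: hole diameter, injection angle, blowing ratio, hole pitch, column pitch
    # y: area-averaged film cooling effectiveness from the CFD runs
    X, y = load_cfd_doe()                       # hypothetical data loader
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(4,), max_iter=5000,
                                       random_state=0))
    model.fit(X, y)
    print(model.score(X, y))                    # R^2 over the training design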
NASA Astrophysics Data System (ADS)
Nasr, M.; Anwar, S.; El-Tamimi, A.; Pervaiz, S.
2018-04-01
Titanium and its alloys, e.g. Ti6Al4V, have widespread applications in the aerospace, automotive and medical industries. At the same time, titanium and its alloys are regarded as difficult-to-machine materials due to their high strength and low thermal conductivity. Significant efforts have been expended to improve the accuracy of machining processes for Ti6Al4V. The current study presents the use of the rotary ultrasonic drilling (RUD) process for machining high-quality holes in Ti6Al4V. The study takes into account the effects of the main RUD input parameters, including spindle speed, ultrasonic power, feed rate and tool diameter, on the key output responses related to the accuracy of the drilled holes, namely cylindricity and overcut errors. Analysis of variance (ANOVA) was employed to study the influence of the input parameters on cylindricity and overcut error. Regression models were then developed to find the optimal set of input parameters that minimizes the cylindricity and overcut errors.
Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn; Lin, Guang, E-mail: guanglin@purdue.edu
2016-07-15
In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.
Nestly--a framework for running software with nested parameter choices and aggregating results.
McCoy, Connor O; Gallagher, Aaron; Hoffman, Noah G; Matsen, Frederick A
2013-02-01
The execution of a software application or pipeline using various combinations of parameters and inputs is a common task in bioinformatics. In the absence of a specialized tool to organize, streamline and formalize this process, scientists must frequently write complex scripts to perform these tasks. We present nestly, a Python package to facilitate running tools with nested combinations of parameters and inputs. nestly provides three components. First, a module to build nested directory structures corresponding to choices of parameters. Second, the nestrun script to run a given command using each set of parameter choices. Third, the nestagg script to aggregate results of the individual runs into a CSV file, as well as support for more complex aggregation. We also include a module for easily specifying nested dependencies for the SCons build tool, enabling incremental builds. Source, documentation and tutorial examples are available at http://github.com/fhcrc/nestly. nestly can be installed from the Python Package Index via pip; it is open source (MIT license).
DOE Office of Scientific and Technical Information (OSTI.GOV)
N.D. Francis
The objective of this calculation is to develop a time-dependent in-drift effective thermal conductivity parameter that will approximate heat conduction, thermal radiation, and natural convection heat transfer using a single mode of heat transfer (heat conduction). In order to reduce the physical and numerical complexity of the heat transfer processes that occur (and must be modeled) as a result of the emplacement of heat-generating wastes, a single parameter will be developed that approximates all forms of heat transfer from the waste package surface to the drift wall (or from one surface exchanging heat with another). Subsequently, with this single parameter, one heat transfer mechanism (e.g., conduction heat transfer) can be used in the models. The resulting parameter is to be used as input in the drift-scale process-level models applied in total system performance assessments for the site recommendation (TSPA-SR). The format of this parameter will be a time-dependent table for direct input into the thermal-hydrologic (TH) and the thermal-hydrologic-chemical (THC) models.
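The basic bookkeeping behind such a lumped parameter can be illustrated as follows: pick the conduction coefficient that reproduces the combined radiative plus convective heat flow for a given pair of surface temperatures. The concentric-cylinder idealization and all property values below are placeholder assumptions, not values from the calculation.

    import numpy as np

    SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4

    def k_effective(T_wp, T_wall, r_wp=0.8, r_wall=2.75, eps=0.8, h_conv=1.5):
        # radiative + convective heat flow per unit drift length (idealized)
        q_rad = SIGMA * eps * 2.0 * np.pi * r_wp * (T_wp**4 - T_wall**4)
        q_conv = h_conv * 2.0 * np.pi * r_wp * (T_wp - T_wall)
        q = q_rad + q_conv
        # conduction across a cylindrical annulus: q = 2*pi*k*dT / ln(r2/r1)
        return q * np.log(r_wall / r_wp) / (2.0 * np.pi * (T_wp - T_wall))

    print(k_effective(T_wp=380.0, T_wall=330.0))  # one entry of a time-dependent table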
NASA Technical Reports Server (NTRS)
Chapman, C. P.; Slusser, R. A.
1980-01-01
PARAMET, an interactive simulation program for parametric studies of electric vehicles, guides the user through a simulation via a menu and a series of prompts for input parameters. The program considers aerodynamic drag, rolling resistance, linear and rotational acceleration, and road gradient as forces acting on the vehicle.
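The force balance such a program evaluates can be written compactly; the sketch below uses generic coefficient values (not PARAMET's inputs) and, for brevity, omits the rotational-inertia term.

    import numpy as np

    def tractive_force(v, a, grade, m=1200.0, rho=1.225, cd=0.30, area=2.0,
                       crr=0.012, g=9.81):
        theta = np.arctan(grade)
        f_aero = 0.5 * rho * cd * area * v**2      # aerodynamic drag
        f_roll = crr * m * g * np.cos(theta)       # rolling resistance
        f_grade = m * g * np.sin(theta)            # road gradient
        f_accel = m * a                            # linear acceleration
        return f_aero + f_roll + f_grade + f_accel

    print(tractive_force(v=20.0, a=0.5, grade=0.03))  # N at 20 m/s on a 3% grade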
NASA Technical Reports Server (NTRS)
Winters, J. M.; Stark, L.
1984-01-01
Original results for a newly developed eighth-order nonlinear limb antagonistic muscle model of elbow flexion and extension are presented. A wide variety of sensitivity analysis techniques is used, and a systematic protocol is established that shows how the different methods can be used efficiently to complement one another for maximum insight into model sensitivity. It is explicitly shown how the sensitivity of output behaviors to model parameters is a function of the controller input sequence, i.e., of the movement task. When the task is changed (for instance, from an input sequence that results in the usual fast movement task to a slower movement that may also involve external loading, etc.), the set of parameters with high sensitivity will in general also change. Such task-specific use of sensitivity analysis techniques identifies the set of parameters most important for a given task, and even suggests task-specific model reduction possibilities.
NASA Astrophysics Data System (ADS)
Shrivastava, Akash; Mohanty, A. R.
2018-03-01
This paper proposes a model-based method to estimate single-plane unbalance parameters (amplitude and phase angle) in a rotor using a Kalman filter and a recursive least squares based input force estimation technique. The Kalman filter based input force estimation technique requires a state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that limited response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets including displacement, velocity, and rotational response. Effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented, and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.
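The filtering step at the core of such response-based estimation is the standard discrete-time Kalman recursion, sketched below; the matrices A, C and the noise covariances would come from the reduced-order (SEREP) rotor model, which is not reproduced here.

    import numpy as np

    def kalman_filter(A, C, Q, R, y_meas, x0, P0):
        x, P = x0, P0
        estimates = []
        for y in y_meas:
            # predict
            x = A @ x
            P = A @ P @ A.T + Q
            # update with the measured response y
            S = C @ P @ C.T + R
            K = P @ C.T @ np.linalg.inv(S)
            x = x + K @ (y - C @ x)
            P = (np.eye(len(x)) - K @ C) @ P
            estimates.append(x.copy())
        return np.array(estimates)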
NASA Astrophysics Data System (ADS)
Bag, S.; de, A.
2010-09-01
The transport phenomena based heat transfer and fluid flow calculations in a weld pool require a number of input parameters. Arc efficiency, effective thermal conductivity, and viscosity in the weld pool are some of these parameters, the values of which are rarely known and difficult to assign a priori based on scientific principles alone. The present work reports a bi-directional three-dimensional (3-D) heat transfer and fluid flow model, which is integrated with a real-number-based genetic algorithm. The bi-directional feature of the integrated model allows the identification of the values of a required set of uncertain model input parameters and, subsequently, the design of process parameters to achieve a target weld pool dimension. The computed values are validated with measured results in linear gas-tungsten-arc (GTA) weld samples. Furthermore, a novel methodology to estimate the overall reliability of the computed solutions is also presented.
Removing flicker based on sparse color correspondences in old film restoration
NASA Astrophysics Data System (ADS)
Huang, Xi; Ding, Youdong; Yu, Bing; Xia, Tianran
2018-04-01
Archived film is an indispensable part of the long history of human civilization, and digital restoration of damaged film is now a mainstream approach. In this paper, we propose a sparse-color-correspondence-based technique to remove fading flicker from old films. Our model, which combines multiple frames to establish a simple correction model, includes three key steps. First, we recover sparse color correspondences in the input frames to build a matrix with many missing entries. Second, we present a low-rank matrix factorization approach to estimate the unknown parameters of this model. Finally, we adopt a two-step strategy that divides the estimated parameters into reference-frame parameters for color recovery correction and other-frame parameters for color consistency correction to remove flicker. By combining multiple frames, our method takes the continuity of the input sequence into account, and the experimental results show that the method removes fading flicker efficiently.
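The completion step can be mimicked with a toy alternating-least-squares factorization of a matrix observed only at the sparse correspondences; the rank, regularization, and iteration count below are arbitrary choices, not the paper's.

    import numpy as np

    def als_complete(M, mask, rank=2, lam=1e-2, iters=50):
        # M: data matrix with entries valid only where mask is True
        m, n = M.shape
        rng = np.random.default_rng(0)
        U = rng.standard_normal((m, rank))
        V = rng.standard_normal((n, rank))
        reg = lam * np.eye(rank)
        for _ in range(iters):
            for i in range(m):                  # update row factors
                obs = mask[i]
                U[i] = np.linalg.solve(V[obs].T @ V[obs] + reg, V[obs].T @ M[i, obs])
            for k in range(n):                  # update column factors
                obs = mask[:, k]
                V[k] = np.linalg.solve(U[obs].T @ U[obs] + reg, U[obs].T @ M[obs, k])
        return U @ V.T                          # low-rank estimate of the full matrix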
An object-oriented software for fate and exposure assessments.
Scheil, S; Baumgarten, G; Reiter, B; Schwartz, S; Wagner, J O; Trapp, S; Matthies, M
1995-07-01
The model system CemoS (Chemical Exposure Model System) was developed for the exposure prediction of hazardous chemicals released to the environment. Eight different models were implemented, covering chemical fate simulation in air, water, soil and plants after continuous or single emissions from point and diffuse sources. Scenario studies are supported by a substance database and an environmental database. All input data are checked for plausibility. Substance and environmental process estimation functions facilitate generic model calculations. CemoS is implemented in a modular structure using object-oriented programming.
Exposure Parameters For Delayed Puberty And Mammary Gland Development In Long-Evans Rats Exposed In Utero To Atrazine
Rayner, Jennifer L.; Fenton, Suzanne E.
UNC-Chapel Hill, DESE, Chapel Hill, NC; RTD, USEPA, NHEERL/ORD, RTP, NC
Prenatal exposure ...
NASA Astrophysics Data System (ADS)
Cara, Javier
2016-05-01
Modal parameters comprise natural frequencies, damping ratios, modal vectors and modal masses. In a theoretic framework, these parameters are the basis for the solution of vibration problems using the theory of modal superposition. In practice, they can be computed from input-output vibration data: the usual procedure is to estimate a mathematical model from the data and then to compute the modal parameters from the estimated model. The most popular models for input-output data are based on the frequency response function, but in recent years the state space model in the time domain has become popular among researchers and practitioners of modal analysis with experimental data. In this work, the equations to compute the modal parameters from the state space model when input and output data are available (like in combined experimental-operational modal analysis) are derived in detail using invariants of the state space model: the equations needed to compute natural frequencies, damping ratios and modal vectors are well known in the operational modal analysis framework, but the equation needed to compute the modal masses has not generated much interest in technical literature. These equations are applied to both a numerical simulation and an experimental study in the last part of the work.
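The well-known part of those equations, recovering natural frequencies and damping ratios from the eigenvalues of the discrete-time state matrix A, reduces to a few lines (a sketch, assuming a sampling period dt):

    import numpy as np

    def modal_from_A(A, dt):
        mu = np.linalg.eigvals(A)      # discrete-time eigenvalues
        lam = np.log(mu) / dt          # continuous-time poles
        wn = np.abs(lam)               # natural frequencies (rad/s)
        zeta = -lam.real / wn          # damping ratios
        return wn / (2 * np.pi), zeta  # frequencies in Hz, damping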
Aerts, Sam; Deschrijver, Dirk; Verloock, Leen; Dhaene, Tom; Martens, Luc; Joseph, Wout
2013-10-01
In this study, a novel methodology is proposed to create heat maps that accurately pinpoint the outdoor locations with elevated exposure to radiofrequency electromagnetic fields (RF-EMF), or hotspots, in an extensive urban region, and that would allow local authorities and epidemiologists to efficiently assess the locations and spectral composition of these hotspots while at the same time developing a global picture of the exposure in the area. Moreover, no prior knowledge about the presence of radiofrequency radiation sources (e.g., base station parameters) is required. After building a surrogate model from the available data using kriging, the proposed method makes use of an iterative sampling strategy that selects new measurement locations at spots deemed to contain the most valuable information (inside hotspots or in search of them), based on the prediction uncertainty of the model. The method was tested and validated in an urban subarea of Ghent, Belgium with a size of approximately 1 km². In total, 600 input and 50 validation measurements were performed using a broadband probe. Five hotspots were discovered and assessed, with maximum total electric-field strengths ranging from 1.3 to 3.1 V/m, satisfying the reference levels issued by the International Commission on Non-Ionizing Radiation Protection for exposure of the general public to RF-EMF. Spectrum analyzer measurements in these hotspots revealed five radiofrequency signals with a relevant contribution to the exposure. The radiofrequency radiation emitted by 900 MHz Global System for Mobile Communications (GSM) base stations was always dominant, with contributions ranging from 45% to 100%. Finally, validation of the subsequent surrogate models shows high prediction accuracy, with the final model featuring an average relative error of less than 2 dB (a factor of 1.26 in electric-field strength), a correlation coefficient of 0.7, and a specificity of 0.96. Copyright © 2013 Elsevier Inc. All rights reserved.
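A conceptual sketch of the kriging-plus-iterative-sampling loop follows: fit a Gaussian process to the measurements so far, then measure next wherever the predicted field plus its uncertainty is largest. The kernel, bounds, acquisition rule, and the measure_at stand-in for a new broadband reading are all assumptions.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    X = np.array([[0.1, 0.2], [0.5, 0.9], [0.8, 0.3]])       # measured locations (km)
    y = np.array([0.6, 1.4, 0.9])                            # field strengths (V/m)
    grid = np.random.default_rng(0).uniform(0, 1, (500, 2))  # candidate locations

    for _ in range(20):
        gp = GaussianProcessRegressor(kernel=RBF(0.2) + WhiteKernel(0.01),
                                      normalize_y=True).fit(X, y)
        mu, sd = gp.predict(grid, return_std=True)
        nxt = grid[np.argmax(mu + 1.96 * sd)]    # hotspot-seeking acquisition rule
        X = np.vstack([X, nxt])
        y = np.append(y, measure_at(nxt))        # measure_at: hypothetical field reading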
Miri, Mohammad; Alahabadi, Ahmad; Ehrampoush, Mohammad Hassan; Ghaffari, Hamid Reza; Sakhvidi, Mohammad Javad Zare; Eskandari, Mahboube; Rad, Abolfazl; Lotfi, Mohammad Hassan; Sheikhha, Mohammad Hassan
2018-06-11
The aim of this study was to assess the potential health risk of exposure to polycyclic aromatic hydrocarbons (PAHs) at home and kindergarten for pre-school children. Urine samples were taken from 200 pre-school children aged 5-7 years and analyzed for 1-OHP as a biomarker of PAHs. Mixed effect models were applied to investigate the association between effective environmental parameters (mode of transport, distance to major roads, traffic density, greenness, tobacco exposure, home ventilation, and grilled foods) and urinary 1-OHP levels. A Monte-Carlo simulation technique was applied to calculate the risk of exposure to PAHs and to check the uncertainty of the input variables and the sensitivity of the estimated risk. The median and interquartile range (IQR) of 1-OHP were 257 (188.5) ng L⁻¹. There was a positive significant association between the distance from the kindergartens to green space with surface area ≥5000 m² and 1-OHP concentration (β = 0.844, 95% CI: 0.223, 1.46, P-value = 0.009). Urinary 1-OHP was also found to be inversely associated with the time the window was open at home (β = -12.56, 95% CI: -23.52, -1.596, P-value = 0.025) and with the normalized difference vegetation index (NDVI) in a 100 m buffer around the homes. The mean (9.76E-3) and 95th percentile (3.28E-2) of the hazard quotient (HQ) indicated that the concentration of urinary 1-OHP is at a safe level for the target population (HQ < 1). According to the sensitivity analysis results, the concentration of 1-OHP is the most influential variable in the estimated risk. Our findings indicate that the proximity of homes and kindergartens to green space areas and their remoteness from main streets and heavy traffic areas are associated with reduced exposure to PAHs. Copyright © 2018 Elsevier Ltd. All rights reserved.
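A stripped-down version of such a Monte-Carlo hazard-quotient calculation is sketched below; the distributions and the reference dose are placeholders chosen only to land in the reported order of magnitude, not the study's values.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    conc = rng.lognormal(np.log(257e-9), 0.5, n)       # 1-OHP, g/L (assumed lognormal)
    intake = rng.normal(1.0, 0.2, n).clip(min=0.0)     # urine volume proxy, L/day (assumed)
    bw = rng.normal(20.0, 3.0, n).clip(min=5.0)        # body weight, kg (assumed)
    rfd = 1e-6                                         # assumed reference dose, g/kg/day

    hq = conc * intake / (bw * rfd)
    print(hq.mean(), np.percentile(hq, 95))            # mean and 95th-percentile HQ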
Emulation for probabilistic weather forecasting
NASA Astrophysics Data System (ADS)
Cornford, Dan; Barillec, Remi
2010-05-01
Numerical weather prediction models are typically very expensive to run due to their complexity and resolution. Characterising the sensitivity of the model to its initial condition and/or to its parameters requires numerous runs of the model, which is impractical for all but the simplest models. To produce probabilistic forecasts requires knowledge of the distribution of the model outputs, given the distribution over the inputs, where the inputs include the initial conditions, boundary conditions and model parameters. Such uncertainty analysis for complex weather prediction models seems a long way off, given current computing power, with ensembles providing only a partial answer. One possible way forward that we develop in this work is the use of statistical emulators. Emulators provide an efficient statistical approximation to the model (or simulator) while quantifying the uncertainty introduced. In the emulator framework, a Gaussian process is fitted to the simulator response as a function of the simulator inputs using some training data. The emulator is essentially an interpolator of the simulator output, and the response in unobserved areas is dictated by the choice of covariance structure and parameters in the Gaussian process. Suitable parameters are inferred from the data in a maximum likelihood or Bayesian framework. Once trained, the emulator allows operations such as sensitivity analysis or uncertainty analysis to be performed at a much lower computational cost. The efficiency of emulators can be further improved by exploiting the redundancy in the simulator output through appropriate dimension reduction techniques. We demonstrate this using both Principal Component Analysis on the model output and a new reduced-rank emulator in which an optimal linear projection operator is estimated jointly with other parameters, in the context of simple low-order models such as the Lorenz 40D system. We present the application of emulators to probabilistic weather forecasting, where the construction of the emulator training set replaces the traditional ensemble model runs. Thus the actual forecast distributions are computed using the emulator conditioned on the 'ensemble runs', which are chosen to explore the plausible input space using relatively crude experimental design methods. One benefit here is that the ensemble does not need to be a sample from the true distribution of the input space; rather, it should cover that input space in some sense. The probabilistic forecasts are computed using Monte Carlo methods, sampling from the input distribution and using the emulator to produce the output distribution. Finally we discuss the limitations of this approach and briefly mention how we might use similar methods to learn the model error within a framework that incorporates a data-assimilation-like aspect, using emulators and learning complex model error representations. We suggest future directions for research in the area that will be necessary to apply the method to more realistic numerical weather prediction models.
USDA-ARS?s Scientific Manuscript database
The use of distributed parameter models to address water resource management problems has increased in recent years. Calibration is necessary to reduce the uncertainties associated with model input parameters. Manual calibration of a distributed parameter model is a very time consuming effort. There...
Ring rolling process simulation for geometry optimization
NASA Astrophysics Data System (ADS)
Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio
2017-10-01
Ring rolling is a complex hot forming process where different rolls are involved in the production of seamless rings. Since each roll must be independently controlled, different speed laws must be set; usually, in the industrial environment, a milling curve is introduced to monitor the shape of the workpiece during deformation in order to ensure correct ring production. In the present paper a ring rolling process has been studied and optimized in order to obtain annular components to be used in aerospace applications. In particular, the influence of process input parameters (feed rate of the mandrel and angular speed of the main roll) on geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model for HRR (Hot Ring Rolling) has been implemented in SFTC DEFORM V11. The FEM model has been used to formulate a proper optimization problem. The optimization procedure has been implemented in the commercial software DS Isight in order to find the combination of process parameters that minimizes the percentage error of each obtained dimension with respect to its nominal value. The software finds the relationship between input and output parameters by applying Response Surface Methodology (RSM), using the exact values of the output parameters at the control points of the design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for each combination of the input parameters. After the calculation of the response surfaces for the selected output parameters, an optimization procedure based on Genetic Algorithms has been applied. In the end, the error between each obtained dimension and its nominal value was minimized. The constraints imposed were the maximum values of the standard deviations of the dimensions obtained for the final ring.
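The surrogate-plus-evolutionary-search pattern described can be skeletonized outside the commercial toolchain; in the sketch below load_fem_doe is a hypothetical loader for the FEM design points, the nominal dimensions and bounds are invented, and a quadratic response surface with differential evolution stands in for Isight's RSM and genetic algorithm.

    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from scipy.optimize import differential_evolution

    X_doe, Y_dims = load_fem_doe()      # hypothetical: 2 inputs -> ring dimensions
    nominal = np.array([400.0, 300.0, 50.0])          # assumed nominal dimensions (mm)

    rsm = make_pipeline(PolynomialFeatures(2), LinearRegression()).fit(X_doe, Y_dims)

    def total_pct_error(x):
        dims = rsm.predict(x.reshape(1, -1))[0]
        return np.sum(np.abs(dims - nominal) / nominal)

    bounds = [(0.5, 2.0), (1.0, 6.0)]   # mandrel feed rate, main-roll speed (assumed)
    res = differential_evolution(total_pct_error, bounds, seed=0)
    print(res.x, res.fun)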
Cohen Hubal, E A; Sheldon, L S; Burke, J M; McCurdy, T R; Berry, M R; Rigas, M L; Zartarian, V G; Freeman, N C
2000-01-01
We review the factors influencing children's exposure to environmental contaminants and the data available to characterize and assess that exposure. Children's activity pattern data requirements are demonstrated in the context of the algorithms used to estimate exposure by inhalation, dermal contact, and ingestion. Currently, data on children's exposures and activities are insufficient to adequately assess multimedia exposures to environmental contaminants. As a result, regulators use a series of default assumptions and exposure factors when conducting exposure assessments. Data to reduce uncertainty in the assumptions and exposure estimates are needed to ensure chemicals are regulated appropriately to protect children's health. To improve the database, advancement in the following general areas of research is required: identification of appropriate age/developmental benchmarks for categorizing children in exposure assessment; development and improvement of methods for monitoring children's exposures and activities; collection of activity pattern data for children (especially young children) required to assess exposure by all routes; collection of data on concentrations of environmental contaminants, biomarkers, and transfer coefficients that can be used as inputs to aggregate exposure models. PMID:10856019
1987-01-16
The EE module presents menus, controls user and device access to the system, and manages the security features associated with menus, devices, and users, independent of the data in the files or the number of files in the system. The module's input processes consist of many menu options, and its output processes likewise comprise menu options that enable the user to obtain needed information from the module.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mehrez, Loujaine; Ghanem, Roger; McAuliffe, Colin
A multiscale framework to construct stochastic macroscopic constitutive material models is proposed. A spectral projection approach, specifically polynomial chaos expansion, has been used to construct explicit functional relationships between the homogenized properties and input parameters from finer scales. A homogenization engine embedded in Multiscale Designer, a software package for composite materials, has been used for the upscaling process. The framework is demonstrated using non-crimp fabric composite materials by constructing probabilistic models of the homogenized properties of a non-crimp fabric laminate in terms of the input parameters together with the homogenized properties from finer scales.
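For a one-input toy case, the non-intrusive projection behind a polynomial chaos expansion looks as follows; the stand-in response function replaces what would come from the homogenization engine, and everything numerical is illustrative.

    import math
    import numpy as np
    from numpy.polynomial.hermite_e import hermeval

    rng = np.random.default_rng(0)
    xi = rng.standard_normal(200_000)        # standard-normal germ
    y = 10.0 * np.exp(0.1 * xi)              # stand-in for a homogenized property

    order = 4
    coeffs = []
    for k in range(order + 1):
        basis = hermeval(xi, [0.0] * k + [1.0])    # probabilists' Hermite He_k
        coeffs.append(np.mean(y * basis) / math.factorial(k))  # E[He_k^2] = k!
    print(coeffs)                            # spectral coefficients of the PCE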
VizieR Online Data Catalog: Fundamental parameters of Kepler stars (Silva Aguirre+, 2015)
NASA Astrophysics Data System (ADS)
Silva Aguirre, V.; Davies, G. R.; Basu, S.; Christensen-Dalsgaard, J.; Creevey, O.; Metcalfe, T. S.; Bedding, T. R.; Casagrande, L.; Handberg, R.; Lund, M. N.; Nissen, P. E.; Chaplin, W. J.; Huber, D.; Serenelli, A. M.; Stello, D.; van Eylen, V.; Campante, T. L.; Elsworth, Y.; Gilliland, R. L.; Hekker, S.; Karoff, C.; Kawaler, S. D.; Kjeldsen, H.; Lundkvist, M. S.
2016-02-01
Our sample has been extracted from the 77 exoplanet host stars presented in Huber et al. (2013, Cat. J/ApJ/767/127). We have made use of the full time-base of observations from the Kepler satellite to uniformly determine precise fundamental stellar parameters, including ages, for a sample of exoplanet host stars where high-quality asteroseismic data were available. We devised a Bayesian procedure flexible in its input and applied it to different grids of models to study systematics from input physics and extract statistically robust properties for all stars. (4 data files).
10Be inventories in Alpine soils and their potentiality for dating land surfaces
NASA Astrophysics Data System (ADS)
Egli, Markus; Brandová, Dagmar; Böhlert, Ralph; Favilli, Filippo; Kubik, Peter W.
2010-05-01
To exploit natural archives and geomorphic objects it is necessary to date them first. The landscape evolution of Alpine areas is often strongly related to the activities of glaciers in the Pleistocene and Holocene. At sites where no organic matter for radiocarbon dating exists and where suitable boulders for surface exposure dating (using in situ produced cosmogenic nuclides) are absent, the dating of soils could give information about the timing of landscape evolution. We explored the applicability of soil dating using the inventory of meteoric Be-10 in Alpine soils. For this purpose, a set of 6 soil profiles in the Swiss and Italian Alps was investigated. The surface at these sites had already been dated (using the radiocarbon technique or surface exposure dating with in situ produced Be-10). Consequently, a direct comparison of the ages of the soils using meteoric Be-10 with other dating techniques was made possible. The estimation of Be-10 deposition rates is subject to severe limitations and strongly influences the obtained results. We tested three scenarios using a) meteoric Be-10 deposition rates as a function of the annual precipitation rate, b) a constant Be-10 input for the Central Alps, and c) as b) but assuming a pre-exposure of the parent material. The ages based on the Be-10 inventory in soils and on scenario a) for the Be-10 input agreed reasonably well with the expected ages (obtained from surface exposure or radiocarbon dating). Scenario b) mostly produced ages that were too old, whereas scenario c) seemed to yield better results than scenario b). Erosion calculations can, in theory, be performed using the Be-10 inventory and Be-10 deposition rates. An erosion estimation was possible using scenarios a) and c), but not using b). The estimated erosion rates are in a reasonable range. The dating of soils using Be-10 has several potential error sources. Analytical errors as well as errors from other parameters such as bulk soil density and soil skeleton content have to be taken into account. The errors ranged from 8 to 21%. Furthermore, uncertainties in estimating Be-10 deposition rates substantially influence the calculated ages. Relative age estimates and, under optimal conditions, numerical dating can be carried out. Age determination of Alpine soils using Be-10 offers an additional possibility for dating surfaces when other methods fail or are not applicable. It is, however, not straightforward, quite laborious, and may consequently have some distinct limitations.
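The core bookkeeping behind such scenario-based dating is simple: an apparent age follows from the measured soil inventory, an assumed deposition rate, and radioactive decay. A sketch, with illustrative numbers only:

    import numpy as np

    LAMBDA = np.log(2.0) / 1.387e6    # 10Be decay constant (1/yr), t1/2 = 1.387 Myr

    def apparent_age(inventory, dep_rate):
        # inventory: meteoric 10Be accumulated in the profile (atoms/cm^2)
        # dep_rate: deposition rate (atoms/cm^2/yr), e.g. scaled with precipitation
        return -np.log(1.0 - LAMBDA * inventory / dep_rate) / LAMBDA

    print(apparent_age(inventory=1.2e10, dep_rate=1.2e6) / 1e3, "kyr")  # ~10 kyr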
Lentic small water bodies: Variability of pesticide transport and transformation patterns.
Ulrich, Uta; Hörmann, Georg; Unger, Malte; Pfannerstill, Matthias; Steinmann, Frank; Fohrer, Nicola
2018-03-15
Lentic small water bodies have a high ecological potential, as they fulfill several ecosystem services such as the retention of water and pollutants, and they serve as hot spots of biodiversity. Due to their location in or adjacent to agricultural fields, they can be influenced by inputs of pesticides and their transformation products. Since small water bodies have rarely been part of monitoring campaigns up to now, their current exposure and the processes guiding pesticide input are not yet understood. This study presents results of a sampling campaign of 10 lentic small water bodies from 2015 to 2016. They were sampled once after the spring application for a pesticide target screening, before the autumn application, and three times after rainfall events following the application. The autumn sampling focused on the herbicides metazachlor and flufenacet and their transformation products, the corresponding oxalic acid and sulfonic acid derivatives, as representatives of common pesticides in the study region. The concentrations were associated with rainfall before and after application, characteristics of the site and the water bodies, physicochemical parameters, and the applied amount of pesticides. The key results of the pesticide screening in spring indicate positive detections of pesticides that had not been applied to the individual fields for years. The autumn sampling showed frequent occurrences of the transformation products, which are formed in soil, in 39% to 94% of all samples (n=71). Discharge patterns were observed for metazachlor, with the highest concentrations in the first sample after application and decreasing thereafter, but not for flufenacet. The concentrations of the transformation products increased over time and revealed the highest values mainly in the last sample. Besides rainfall patterns right after application, the spatial and temporal dissemination of the pesticides to the water bodies seems to play a major role in understanding the exposure of lentic small water bodies. Copyright © 2017 Elsevier B.V. All rights reserved.
User's Guide for the Agricultural Non-Point Source (AGNPS) Pollution Model Data Generator
Finn, Michael P.; Scheidt, Douglas J.; Jaromack, Gregory M.
2003-01-01
BACKGROUND: Throughout this user guide, we refer to datasets that we used in conjunction with the development of this software for supporting cartographic research and producing the datasets needed to conduct that research. However, this software can be used with these datasets or with more 'generic' versions of data of the appropriate type. For example, throughout the guide, we refer to national land cover data (NLCD) and digital elevation model (DEM) data from the U.S. Geological Survey (USGS) at a 30-m resolution, but any digital terrain model or land cover data at any appropriate resolution will produce results. Another key point to keep in mind is to use a consistent data resolution for all the datasets in each model run. The U.S. Department of Agriculture (USDA) developed the Agricultural Nonpoint Source (AGNPS) pollution model of watershed hydrology in response to the complex problem of managing nonpoint sources of pollution. AGNPS simulates the behavior of runoff, sediment, and nutrient transport from watersheds that have agriculture as their prime use. The model operates on a cell basis and is a distributed parameter, event-based model. The model requires 22 input parameters. Output parameters are grouped primarily by hydrology, sediment, and chemical output (Young and others, 1995). Elevation, land cover, and soil are the base data from which to extract the 22 input parameters required by AGNPS. For automatic parameter extraction, follow the general process described in this guide of extraction from the geospatial data through the AGNPS Data Generator to generate the input parameters required by the pollution model (Finn and others, 2002).
A statistical survey of heat input parameters into the cusp thermosphere
NASA Astrophysics Data System (ADS)
Moen, J. I.; Skjaeveland, A.; Carlson, H. C.
2017-12-01
Based on three winters of observational data, we present the ionosphere parameters deemed most critical to realistic space weather ionosphere and thermosphere representation and prediction in regions impacted by variability in the cusp. The CHAMP spacecraft revealed large variability in cusp thermosphere densities, measuring frequent satellite drag enhancements of up to a factor of two. The community recognizes a clear need for more realistic representation of plasma flows and electron densities near the cusp. Existing average-value models produce order-of-magnitude errors in these parameters, resulting in large underestimations of predicted drag. We fill this knowledge gap with statistics-based specification of these key parameters over their range of observed values. The EISCAT Svalbard Radar (ESR) tracks plasma flow Vi, electron density Ne, and electron and ion temperatures Te and Ti, with consecutive 2-3 minute windshield-wiper scans of 1000 x 500 km areas. This allows mapping the maximum Ti of a large area within or near the cusp with high temporal resolution. In magnetic field-aligned mode the radar can measure high-resolution profiles of these plasma parameters. By deriving statistics for Ne and Ti, we enable derivation of thermosphere heating deposition under background and frictional-drag-dominated magnetic reconnection conditions. We separate our Ne and Ti profiles into quiescent and enhanced states, which are not closely correlated due to the spatial structure of the reconnection foot point. Use of our data-based parameter inputs can make order-of-magnitude corrections to the input data driving thermosphere models, enabling removal of the previous twofold drag errors.
Functional differences between statistical learning with and without explicit training
Reber, Paul J.; Paller, Ken A.
2015-01-01
Humans are capable of rapidly extracting regularities from environmental input, a process known as statistical learning. This type of learning typically occurs automatically, through passive exposure to environmental input. The presumed function of statistical learning is to optimize processing, allowing the brain to more accurately predict and prepare for incoming input. In this study, we ask whether the function of statistical learning may be enhanced through supplementary explicit training, in which underlying regularities are explicitly taught rather than simply abstracted through exposure. Learners were randomly assigned either to an explicit group or an implicit group. All learners were exposed to a continuous stream of repeating nonsense words. Prior to this implicit training, learners in the explicit group received supplementary explicit training on the nonsense words. Statistical learning was assessed through a speeded reaction-time (RT) task, which measured the extent to which learners used acquired statistical knowledge to optimize online processing. Both RTs and brain potentials revealed significant differences in online processing as a function of training condition. RTs showed a crossover interaction; responses in the explicit group were faster to predictable targets and marginally slower to less predictable targets relative to responses in the implicit group. P300 potentials to predictable targets were larger in the explicit group than in the implicit group, suggesting greater recruitment of controlled, effortful processes. Taken together, these results suggest that information abstracted through passive exposure during statistical learning may be processed more automatically and with less effort than information that is acquired explicitly. PMID:26472644
Wrapping Python around MODFLOW/MT3DMS based groundwater models
NASA Astrophysics Data System (ADS)
Post, V.
2008-12-01
Numerical models that simulate groundwater flow and solute transport require a great amount of input data that is often organized into different files. A large proportion of the input data consists of spatially distributed model parameters. The model output consists of a variety of data such as heads, fluxes, and concentrations. Typically, all of these files have different formats. Consequently, preparing input and managing output is a complex and error-prone task. Proprietary software tools are available that facilitate the preparation of input files and the analysis of model outcomes. The use of such software may be limited if it does not support all the features of the groundwater model or when the costs of such tools are prohibitive. Therefore, a Python library was developed that contains routines to generate input files and process output files of MODFLOW/MT3DMS-based models. The library is freely available and has an open structure so that the routines can be customized and linked into other scripts and libraries. The current set of functions supports the generation of input files for MODFLOW and MT3DMS, including the capability to read spatially distributed input parameters (e.g. hydraulic conductivity) from PNG files. Both ASCII and binary output files can be read efficiently, allowing for visualization of, for example, solute concentration patterns in contour plots with superimposed flow vectors using matplotlib. Series of contour plots are then easily saved as an animation. The subroutines can also be used within scripts to calculate derived quantities such as the mass of a solute within a particular region of the model domain. Using Python as a wrapper around groundwater models provides an efficient and flexible way of processing input and output data that is not constrained by the limitations of third-party products.
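The wrapper idea can be illustrated with a short script. The sketch below assumes, for simplicity, that the model writes a plain headerless float64 grid dump; real MODFLOW binary head files carry per-record headers (time step, stress period, array label, dimensions), so this illustrates the approach rather than the library's actual reader, and the filename and grid shape are hypothetical.

    # Read a gridded binary output and contour it with matplotlib.
    import numpy as np
    import matplotlib.pyplot as plt

    NROW, NCOL = 100, 150  # assumed grid dimensions

    def read_heads(path, nrow=NROW, ncol=NCOL):
        # Assumes a headerless float64 dump; real MODFLOW output files
        # require their record headers to be parsed first.
        return np.fromfile(path, dtype=np.float64).reshape(nrow, ncol)

    heads = read_heads("heads.bin")  # hypothetical output file
    cs = plt.contour(heads)
    plt.clabel(cs, inline=True, fontsize=8)
    plt.title("Simulated heads")
    plt.savefig("head_contours.png")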
Advanced Integrated Display System V/STOL Program Performance Specification. Volume I.
1980-06-01
[Fragmentary excerpt: the specification defines the number of sensor inputs required before a sensor can be designated acceptable, and a reactivation count for each sensor parameter that satisfies its verification; listed sections cover AIDS Configuration Parameters (3.5.2), AIDS Throughput Requirements (3.5.3), and Quality Assurance; the adaptation parameters of the AIDS software include its throughput and memory requirements (Section 3.2, System).]
Fetal programming and environmental exposures ...
Fetal programming is an enormously complex process that relies on numerous environmental inputs from uterine tissue, the placenta, the maternal blood supply, and other sources. Recent evidence has made clear that the process is not based entirely on genetics, but rather on a delicate series of interactions between genes and the environment. It is likely that epigenetic ("above the genome") changes are responsible for modifying gene expression in the developing fetus, and these modifications can have long-lasting health impacts. Determining which epigenetic regulators are most vital in embryonic development will improve pregnancy outcomes and our ability to treat and prevent disorders that emerge later in life. "Fetal Programming and Environmental Exposures: Implications for Prenatal Care and Preterm Birth" began with a keynote address by Frederick vom Saal, who explained that low-level exposure to endocrine disrupting chemicals (EDCs) perturbs hormone systems in utero and can have negative effects on fetal development. vom Saal presented data on the EDC bisphenol A (BPA), an estrogen-mimicking compound found in many plastics. He suggested that low-dose exposure to EDCs can alter the development process and increase the chances of acquiring adult diseases such as breast cancer and diabetes, and even developmental disorders such as attention deficit hyperactivity disorder (ADHD).
Alternatives for jet engine control
NASA Technical Reports Server (NTRS)
Sain, M. K.
1983-01-01
Technical progress in research on alternatives for jet engine control is reported. The principal new activities were the initial testing of an input design method for choosing the inputs to a nonlinear system to aid the approximation of its tensor parameters, and the beginning of order-reduction studies designed to remove unnecessary monomials from tensor models.
USDA-ARS's Scientific Manuscript database
The temptation to include model parameters and high-resolution input data, together with the availability of powerful optimization and uncertainty analysis algorithms, has significantly enhanced the complexity of hydrologic and water quality modeling. However, the ability to take advantage of sophist...
Biological monitoring results for cadmium exposed workers.
McDiarmid, M A; Freeman, C S; Grossman, E A; Martonik, J
1996-11-01
As part of a settlement agreement with the Occupational Safety and Health Administration (OSHA) involving exposure to cadmium (Cd), a battery production facility provided medical surveillance data to OSHA for review. Measurements of cadmium in blood, cadmium in urine, and beta 2-microglobulin in urine were obtained for more than 100 workers over an 18-month period. Some airborne Cd exposure data were also made available. Two subpopulations of this cohort were of primary interest in evaluating compliance with the medical surveillance provisions of the Cadmium Standard. These were a group of 16 workers medically removed from cadmium exposure due to elevations in some biological parameter, and a group of platemakers. Platemaking had presented a particularly high exposure opportunity and had recently undergone engineering interventions to minimize exposure. The effect on three biological monitoring parameters of medical removal protection in the first group and engineering controls in platemakers is reported. Results reveal that both medical removal from cadmium exposures and exposure abatement through the use of engineering and work practice controls generally result in declines in biological monitoring parameters of exposed workers. Implications for the success of interventions are discussed.
A dimension-wise analysis method for the structural-acoustic system with interval parameters
NASA Astrophysics Data System (ADS)
Xu, Menghui; Du, Jianke; Wang, Chong; Li, Yunlong
2017-04-01
Interval structural-acoustic analysis is mainly accomplished by interval and subinterval perturbation methods. Potential limitations of these intrusive methods include overestimation or the interval translation effect for the former and prohibitive computational cost for the latter. In this paper, a dimension-wise analysis method is proposed to overcome these limitations. In this method, a sectional curve of the system response surface along each input dimension is first extracted, and its minimal and maximal points are identified based on its Legendre polynomial approximation. Two input vectors, i.e. the minimal and maximal input vectors, are then assembled dimension-wise from the minimal and maximal points of all sectional curves. Finally, the lower and upper bounds of the system response are computed by deterministic finite element analysis at these two input vectors. Two numerical examples are studied to demonstrate the effectiveness of the proposed method; they show that, compared with the interval and subinterval perturbation methods, the proposed method achieves better accuracy without much compromise in efficiency, especially for nonlinear problems with large interval parameters.
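A minimal numerical sketch of the dimension-wise idea follows, with a cheap analytic function standing in for the deterministic finite element solver; the sampling density, polynomial degree, and test function are illustrative assumptions, not values from the paper.

    # Dimension-wise bound estimation: along each input dimension, sample
    # the response at the interval midpoint of the other dimensions, fit a
    # Legendre polynomial, locate its extrema, and assemble the minimal and
    # maximal input vectors; f stands in for the FE solver.
    import numpy as np
    from numpy.polynomial import Legendre

    def dimension_wise_bounds(f, lo, hi, deg=6, nsamp=25):
        lo, hi = np.asarray(lo, float), np.asarray(hi, float)
        mid = 0.5 * (lo + hi)
        x_min, x_max = mid.copy(), mid.copy()
        for k in range(len(lo)):
            t = np.linspace(lo[k], hi[k], nsamp)
            pts = np.tile(mid, (nsamp, 1))
            pts[:, k] = t
            y = np.array([f(p) for p in pts])
            poly = Legendre.fit(t, y, deg)
            cand = poly.deriv().roots()
            cand = np.real(cand[np.isreal(cand)])
            cand = np.concatenate(
                [cand[(cand >= lo[k]) & (cand <= hi[k])], [lo[k], hi[k]]])
            vals = poly(cand)
            x_min[k] = cand[np.argmin(vals)]
            x_max[k] = cand[np.argmax(vals)]
        return f(x_min), f(x_max)  # two deterministic evaluations

    # Example: a nonlinear response over interval parameters.
    f = lambda p: np.sin(p[0]) + p[1] ** 2
    print(dimension_wise_bounds(f, lo=[0.5, -1.0], hi=[2.5, 1.0]))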
Design and Implementation of RF Energy Harvesting System for Low-Power Electronic Devices
NASA Astrophysics Data System (ADS)
Uzun, Yunus
2016-08-01
Radio frequency (RF) energy harvester systems are a good alternative for powering low-power electronic devices. In this work, an RF energy harvester is presented to obtain energy from Global System for Mobile Communications (GSM) 900 MHz signals. The energy harvester, consisting of a two-stage Dickson voltage multiplier circuit and L-type impedance matching circuits, was designed, simulated, fabricated, and tested experimentally in terms of its performance. Simulation and experimental work was carried out for various input power levels, load resistances, and input frequencies within this band. An efficiency of 45% is obtained from the system at a 0 dBm input power level using the impedance matching circuit. This corresponds to an output power of 450 μW, which is sufficient for many low-power devices. The most important parameters affecting the efficiency of the RF energy harvester are the input power level, frequency band, impedance matching and voltage multiplier circuits, load resistance, and the choice of diodes. RF energy harvester designs should be optimized in terms of these parameters.
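The headline numbers are mutually consistent, as a quick conversion from dBm shows (this is arithmetic on the stated values only):

    \[ P_{\mathrm{in}} = 10^{0/10}\,\mathrm{mW} = 1\,\mathrm{mW}, \qquad
       P_{\mathrm{out}} = \eta\,P_{\mathrm{in}} = 0.45 \times 1\,\mathrm{mW} = 450\,\mu\mathrm{W}. \]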
NASA Astrophysics Data System (ADS)
Belleri, Basayya K.; Kerur, Shravankumar B.
2018-04-01
A computer-oriented procedure for solving the dynamic force analysis problem for general planar mechanisms is presented. This paper provides position, velocity, acceleration, and force analyses of a six-bar mechanism using a variable topology approach. The six-bar mechanism is constructed by joining two simple four-bar mechanisms. Initially, the position, velocity, and acceleration of the first four-bar mechanism are determined from the input parameters. The outputs (angular displacement, velocity, and acceleration of the rocker) of the first four-bar mechanism are then used as input parameters for the second four-bar mechanism, whose position, velocity, acceleration, and forces are analyzed. With the output parameters of the second four-bar mechanism, the force analysis of the first four-bar mechanism is carried out.
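A minimal sketch of the cascaded position analysis follows, solving each four-bar loop closure as a circle-circle intersection. The link lengths, crank angle, and branch choice are illustrative assumptions, and feeding the first rocker angle directly in as the second crank angle is a simplification of the variable topology coupling described in the paper.

    # Position analysis of a four-bar linkage by circle-circle intersection:
    # crank pin A moves on a circle about the ground pivot; coupler joint B
    # lies at distance b from A and at distance c from the rocker pivot O4.
    import numpy as np

    def fourbar_position(theta2, a, b, c, d, branch=+1):
        """Return rocker angle theta4 for crank angle theta2 (radians).
        a = crank, b = coupler, c = rocker, d = ground link length."""
        A = np.array([a * np.cos(theta2), a * np.sin(theta2)])  # crank pin
        O4 = np.array([d, 0.0])                                 # rocker pivot
        v = O4 - A
        dist = np.linalg.norm(v)
        # Intersection of circles (A, radius b) and (O4, radius c):
        m = (dist**2 + b**2 - c**2) / (2 * dist)
        h = np.sqrt(max(b**2 - m**2, 0.0))  # 0 if the circles barely touch
        u = v / dist
        B = A + m * u + branch * h * np.array([-u[1], u[0]])
        return np.arctan2(B[1] - O4[1], B[0] - O4[0])

    # Cascade: the first four-bar's rocker angle drives the second loop.
    theta4 = fourbar_position(np.radians(40), a=1.0, b=3.0, c=2.5, d=3.5)
    theta6 = fourbar_position(theta4, a=2.5, b=3.0, c=2.0, d=3.0)
    print(np.degrees(theta4), np.degrees(theta6))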
Monte Carlo Solution to Find Input Parameters in Systems Design Problems
NASA Astrophysics Data System (ADS)
Arsham, Hossein
2013-06-01
Most engineering system designs, such as product, process, and service designs, involve a framework for arriving at a target value for a set of experiments. This paper considers a stochastic approximation algorithm for estimating the controllable input parameter within a desired accuracy, given a target value for the performance function. Two problem types, what-if and goal-seeking, are defined using an auxiliary simulation model that represents a local response surface model in terms of a polynomial. A method of constructing this polynomial with a single simulation run is explained, and an algorithm is given to select the design parameter for the local response surface model. Finally, the mean time to failure (MTTF) of a reliability subsystem is computed and compared with its known analytical MTTF value for validation purposes.
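The goal-seeking problem (drive a noisy performance measure to a target by adjusting a controllable input) can be illustrated with a textbook Robbins-Monro recursion; this generic sketch, with an assumed linear noisy response, is not the paper's single-run polynomial method.

    # Goal seeking by Robbins-Monro stochastic approximation: adjust the
    # controllable input x so that the noisy performance J(x) approaches
    # the target value.
    import random

    def simulate(x):
        # Stand-in stochastic simulation: true response 2*x plus noise.
        return 2.0 * x + random.gauss(0.0, 0.5)

    def goal_seek(target, x0=0.0, iters=2000, gain=1.0):
        x = x0
        for n in range(1, iters + 1):
            step = gain / n  # diminishing step sizes ensure convergence
            x += step * (target - simulate(x))  # assumes J increases with x
        return x

    random.seed(1)
    print(goal_seek(target=10.0))  # should approach x = 5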
2010-09-01
[Fragmentary excerpt: the review notes that the documentation differentiated between source codes and input/output files; the text makes references to a REMChlor-GoldSim model, states that instructions should be as accurate and precise as possible, and that documentation should differentiate between describing what is actually implemented; the model was run under the Windows XP operating system, with input parameters identical to those utilized and reported by CDM (see Table 1).]